problem_id (string, 18 to 22 chars) | source (string, 1 value) | task_type (string, 1 value) | in_source_id (string, 13 to 58 chars) | prompt (string, 1.1k to 10.2k chars) | golden_diff (string, 151 to 4.94k chars) | verification_info (string, 582 to 21k chars) | num_tokens (int64, 271 to 2.05k) | num_tokens_diff (int64, 47 to 1.02k)
---|---|---|---|---|---|---|---|---|
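The rows below are a flattened dump of the table, with each cell on its own line, so they are easier to inspect programmatically than by reading this page. A minimal loading sketch using the Hugging Face `datasets` library follows; the dataset ID `rasdani/github-patches` is taken from the `source` column of the rows, and the `train` split name is an assumption, so adjust both if the published repository differs.

```python
# Minimal sketch for loading and inspecting this dataset.
# Assumptions: the data is published on the Hugging Face Hub under the ID
# "rasdani/github-patches" (the value of the `source` column below) and has a
# "train" split; adjust both if the actual repository differs.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"])      # e.g. "gh_patches_debug_19195"
print(row["in_source_id"])    # upstream repo/issue identifier
print(row["prompt"][:300])    # issue text plus the relevant file segments
print(row["golden_diff"])     # reference patch in `git diff` format

# `verification_info` is a JSON string holding the golden diff together with
# the before/after file contents used to check a candidate patch.
info = json.loads(row["verification_info"])
print(sorted(info.keys()))    # ['after_files', 'before_files', 'golden_diff', 'issue']
```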
gh_patches_debug_19195
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-modules-extras-1570
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Verbs change in Elasticsearch 2.0.0
Since Elasticsearch 2.0.0 `plugin` dropped the double-dash for verbs. So what used to be `--install` now became `install`. Hence, [this](https://github.com/ansible/ansible-modules-extras/blob/devel/packaging/elasticsearch_plugin.py#L114-L117) doesn't work with new versions.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packaging/elasticsearch_plugin.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 import os
5
6 """
7 Ansible module to manage elasticsearch plugins
8 (c) 2015, Mathew Davies <[email protected]>
9
10 This file is part of Ansible
11
12 Ansible is free software: you can redistribute it and/or modify
13 it under the terms of the GNU General Public License as published by
14 the Free Software Foundation, either version 3 of the License, or
15 (at your option) any later version.
16
17 Ansible is distributed in the hope that it will be useful,
18 but WITHOUT ANY WARRANTY; without even the implied warranty of
19 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 GNU General Public License for more details.
21 You should have received a copy of the GNU General Public License
22 along with Ansible. If not, see <http://www.gnu.org/licenses/>.
23 """
24
25 DOCUMENTATION = '''
26 ---
27 module: elasticsearch_plugin
28 short_description: Manage Elasticsearch plugins
29 description:
30 - Manages Elasticsearch plugins.
31 version_added: "2.0"
32 author: Mathew Davies (@ThePixelDeveloper)
33 options:
34 name:
35 description:
36 - Name of the plugin to install
37 required: True
38 state:
39 description:
40 - Desired state of a plugin.
41 required: False
42 choices: [present, absent]
43 default: present
44 url:
45 description:
46 - Set exact URL to download the plugin from
47 required: False
48 default: None
49 timeout:
50 description:
51 - "Timeout setting: 30s, 1m, 1h..."
52 required: False
53 default: 1m
54 plugin_bin:
55 description:
56 - Location of the plugin binary
57 required: False
58 default: /usr/share/elasticsearch/bin/plugin
59 plugin_dir:
60 description:
61 - Your configured plugin directory specified in Elasticsearch
62 required: False
63 default: /usr/share/elasticsearch/plugins/
64 version:
65 description:
66 - Version of the plugin to be installed.
67 If plugin exists with previous version, it will NOT be updated
68 required: False
69 default: None
70 '''
71
72 EXAMPLES = '''
73 # Install Elasticsearch head plugin
74 - elasticsearch_plugin: state=present name="mobz/elasticsearch-head"
75
76 # Install specific version of a plugin
77 - elasticsearch_plugin: state=present name="com.github.kzwang/elasticsearch-image" version="1.2.0"
78
79 # Uninstall Elasticsearch head plugin
80 - elasticsearch_plugin: state=absent name="mobz/elasticsearch-head"
81 '''
82
83
84 def parse_plugin_repo(string):
85 elements = string.split("/")
86
87 # We first consider the simplest form: pluginname
88 repo = elements[0]
89
90 # We consider the form: username/pluginname
91 if len(elements) > 1:
92 repo = elements[1]
93
94 # remove elasticsearch- prefix
95 # remove es- prefix
96 for string in ("elasticsearch-", "es-"):
97 if repo.startswith(string):
98 return repo[len(string):]
99
100 return repo
101
102
103 def is_plugin_present(plugin_dir, working_dir):
104 return os.path.isdir(os.path.join(working_dir, plugin_dir))
105
106
107 def parse_error(string):
108 reason = "reason: "
109 return string[string.index(reason) + len(reason):].strip()
110
111
112 def main():
113
114 package_state_map = dict(
115 present="--install",
116 absent="--remove"
117 )
118
119 module = AnsibleModule(
120 argument_spec=dict(
121 name=dict(required=True),
122 state=dict(default="present", choices=package_state_map.keys()),
123 url=dict(default=None),
124 timeout=dict(default="1m"),
125 plugin_bin=dict(default="/usr/share/elasticsearch/bin/plugin"),
126 plugin_dir=dict(default="/usr/share/elasticsearch/plugins/"),
127 version=dict(default=None)
128 )
129 )
130
131 plugin_bin = module.params["plugin_bin"]
132 plugin_dir = module.params["plugin_dir"]
133 name = module.params["name"]
134 state = module.params["state"]
135 url = module.params["url"]
136 timeout = module.params["timeout"]
137 version = module.params["version"]
138
139 present = is_plugin_present(parse_plugin_repo(name), plugin_dir)
140
141 # skip if the state is correct
142 if (present and state == "present") or (state == "absent" and not present):
143 module.exit_json(changed=False, name=name)
144
145 if (version):
146 name = name + '/' + version
147
148 cmd_args = [plugin_bin, package_state_map[state], name]
149
150 if url:
151 cmd_args.append("--url %s" % url)
152
153 if timeout:
154 cmd_args.append("--timeout %s" % timeout)
155
156 cmd = " ".join(cmd_args)
157
158 rc, out, err = module.run_command(cmd)
159
160 if rc != 0:
161 reason = parse_error(out)
162 module.fail_json(msg=reason)
163
164 module.exit_json(changed=True, cmd=cmd, name=name, state=state, url=url, timeout=timeout, stdout=out, stderr=err)
165
166 from ansible.module_utils.basic import *
167
168 main()
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/packaging/elasticsearch_plugin.py b/packaging/elasticsearch_plugin.py
--- a/packaging/elasticsearch_plugin.py
+++ b/packaging/elasticsearch_plugin.py
@@ -33,7 +33,7 @@
options:
name:
description:
- - Name of the plugin to install
+ - Name of the plugin to install. In ES 2.x, the name can be an url or file location
required: True
state:
description:
@@ -43,7 +43,7 @@
default: present
url:
description:
- - Set exact URL to download the plugin from
+ - Set exact URL to download the plugin from (Only works for ES 1.x)
required: False
default: None
timeout:
@@ -112,8 +112,8 @@
def main():
package_state_map = dict(
- present="--install",
- absent="--remove"
+ present="install",
+ absent="remove"
)
module = AnsibleModule(
|
{"golden_diff": "diff --git a/packaging/elasticsearch_plugin.py b/packaging/elasticsearch_plugin.py\n--- a/packaging/elasticsearch_plugin.py\n+++ b/packaging/elasticsearch_plugin.py\n@@ -33,7 +33,7 @@\n options:\n name:\n description:\n- - Name of the plugin to install\n+ - Name of the plugin to install. In ES 2.x, the name can be an url or file location\n required: True\n state:\n description:\n@@ -43,7 +43,7 @@\n default: present\n url:\n description:\n- - Set exact URL to download the plugin from\n+ - Set exact URL to download the plugin from (Only works for ES 1.x)\n required: False\n default: None\n timeout:\n@@ -112,8 +112,8 @@\n def main():\n \n package_state_map = dict(\n- present=\"--install\",\n- absent=\"--remove\"\n+ present=\"install\",\n+ absent=\"remove\"\n )\n \n module = AnsibleModule(\n", "issue": "Verbs change in Elasticsearch 2.0.0\nSince Elasticsearch 2.0.0 `plugin` dropped the double-dash for verbs. So what used to be `--install` now became `install`. Hence, [this](https://github.com/ansible/ansible-modules-extras/blob/devel/packaging/elasticsearch_plugin.py#L114-L117) doesn't work with new versions.\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\nimport os\n\n\"\"\"\nAnsible module to manage elasticsearch plugins\n(c) 2015, Mathew Davies <[email protected]>\n\nThis file is part of Ansible\n\nAnsible is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nAnsible is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\nYou should have received a copy of the GNU General Public License\nalong with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nDOCUMENTATION = '''\n---\nmodule: elasticsearch_plugin\nshort_description: Manage Elasticsearch plugins\ndescription:\n - Manages Elasticsearch plugins.\nversion_added: \"2.0\"\nauthor: Mathew Davies (@ThePixelDeveloper)\noptions:\n name:\n description:\n - Name of the plugin to install\n required: True\n state:\n description:\n - Desired state of a plugin.\n required: False\n choices: [present, absent]\n default: present\n url:\n description:\n - Set exact URL to download the plugin from\n required: False\n default: None\n timeout:\n description:\n - \"Timeout setting: 30s, 1m, 1h...\"\n required: False\n default: 1m\n plugin_bin:\n description:\n - Location of the plugin binary\n required: False\n default: /usr/share/elasticsearch/bin/plugin\n plugin_dir:\n description:\n - Your configured plugin directory specified in Elasticsearch\n required: False\n default: /usr/share/elasticsearch/plugins/\n version:\n description:\n - Version of the plugin to be installed.\n If plugin exists with previous version, it will NOT be updated\n required: False\n default: None\n'''\n\nEXAMPLES = '''\n# Install Elasticsearch head plugin\n- elasticsearch_plugin: state=present name=\"mobz/elasticsearch-head\"\n\n# Install specific version of a plugin\n- elasticsearch_plugin: state=present name=\"com.github.kzwang/elasticsearch-image\" version=\"1.2.0\"\n\n# Uninstall Elasticsearch head plugin\n- elasticsearch_plugin: state=absent name=\"mobz/elasticsearch-head\"\n'''\n\n\ndef parse_plugin_repo(string):\n elements = string.split(\"/\")\n\n # We first consider the simplest form: pluginname\n repo = elements[0]\n\n # We consider the form: username/pluginname\n if len(elements) > 1:\n repo = elements[1]\n\n # remove elasticsearch- prefix\n # remove es- prefix\n for string in (\"elasticsearch-\", \"es-\"):\n if repo.startswith(string):\n return repo[len(string):]\n\n return repo\n\n\ndef is_plugin_present(plugin_dir, working_dir):\n return os.path.isdir(os.path.join(working_dir, plugin_dir))\n\n\ndef parse_error(string):\n reason = \"reason: \"\n return string[string.index(reason) + len(reason):].strip()\n\n\ndef main():\n\n package_state_map = dict(\n present=\"--install\",\n absent=\"--remove\"\n )\n\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(required=True),\n state=dict(default=\"present\", choices=package_state_map.keys()),\n url=dict(default=None),\n timeout=dict(default=\"1m\"),\n plugin_bin=dict(default=\"/usr/share/elasticsearch/bin/plugin\"),\n plugin_dir=dict(default=\"/usr/share/elasticsearch/plugins/\"),\n version=dict(default=None)\n )\n )\n\n plugin_bin = module.params[\"plugin_bin\"]\n plugin_dir = module.params[\"plugin_dir\"]\n name = module.params[\"name\"]\n state = module.params[\"state\"]\n url = module.params[\"url\"]\n timeout = module.params[\"timeout\"]\n version = module.params[\"version\"]\n\n present = is_plugin_present(parse_plugin_repo(name), plugin_dir)\n\n # skip if the state is correct\n if (present and state == \"present\") or (state == \"absent\" and not present):\n module.exit_json(changed=False, name=name)\n\n if (version):\n name = name + '/' + version\n\n cmd_args = [plugin_bin, package_state_map[state], name]\n\n if url:\n cmd_args.append(\"--url %s\" % url)\n\n if timeout:\n cmd_args.append(\"--timeout %s\" % timeout)\n\n cmd = \" \".join(cmd_args)\n\n rc, out, err = module.run_command(cmd)\n\n if rc != 0:\n reason = parse_error(out)\n module.fail_json(msg=reason)\n\n module.exit_json(changed=True, 
cmd=cmd, name=name, state=state, url=url, timeout=timeout, stdout=out, stderr=err)\n\nfrom ansible.module_utils.basic import *\n\nmain()\n", "path": "packaging/elasticsearch_plugin.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\nimport os\n\n\"\"\"\nAnsible module to manage elasticsearch plugins\n(c) 2015, Mathew Davies <[email protected]>\n\nThis file is part of Ansible\n\nAnsible is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nAnsible is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\nYou should have received a copy of the GNU General Public License\nalong with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"\n\nDOCUMENTATION = '''\n---\nmodule: elasticsearch_plugin\nshort_description: Manage Elasticsearch plugins\ndescription:\n - Manages Elasticsearch plugins.\nversion_added: \"2.0\"\nauthor: Mathew Davies (@ThePixelDeveloper)\noptions:\n name:\n description:\n - Name of the plugin to install. In ES 2.x, the name can be an url or file location\n required: True\n state:\n description:\n - Desired state of a plugin.\n required: False\n choices: [present, absent]\n default: present\n url:\n description:\n - Set exact URL to download the plugin from (Only works for ES 1.x)\n required: False\n default: None\n timeout:\n description:\n - \"Timeout setting: 30s, 1m, 1h...\"\n required: False\n default: 1m\n plugin_bin:\n description:\n - Location of the plugin binary\n required: False\n default: /usr/share/elasticsearch/bin/plugin\n plugin_dir:\n description:\n - Your configured plugin directory specified in Elasticsearch\n required: False\n default: /usr/share/elasticsearch/plugins/\n version:\n description:\n - Version of the plugin to be installed.\n If plugin exists with previous version, it will NOT be updated\n required: False\n default: None\n'''\n\nEXAMPLES = '''\n# Install Elasticsearch head plugin\n- elasticsearch_plugin: state=present name=\"mobz/elasticsearch-head\"\n\n# Install specific version of a plugin\n- elasticsearch_plugin: state=present name=\"com.github.kzwang/elasticsearch-image\" version=\"1.2.0\"\n\n# Uninstall Elasticsearch head plugin\n- elasticsearch_plugin: state=absent name=\"mobz/elasticsearch-head\"\n'''\n\n\ndef parse_plugin_repo(string):\n elements = string.split(\"/\")\n\n # We first consider the simplest form: pluginname\n repo = elements[0]\n\n # We consider the form: username/pluginname\n if len(elements) > 1:\n repo = elements[1]\n\n # remove elasticsearch- prefix\n # remove es- prefix\n for string in (\"elasticsearch-\", \"es-\"):\n if repo.startswith(string):\n return repo[len(string):]\n\n return repo\n\n\ndef is_plugin_present(plugin_dir, working_dir):\n return os.path.isdir(os.path.join(working_dir, plugin_dir))\n\n\ndef parse_error(string):\n reason = \"reason: \"\n return string[string.index(reason) + len(reason):].strip()\n\n\ndef main():\n\n package_state_map = dict(\n present=\"install\",\n absent=\"remove\"\n )\n\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(required=True),\n state=dict(default=\"present\", choices=package_state_map.keys()),\n url=dict(default=None),\n timeout=dict(default=\"1m\"),\n 
plugin_bin=dict(default=\"/usr/share/elasticsearch/bin/plugin\"),\n plugin_dir=dict(default=\"/usr/share/elasticsearch/plugins/\"),\n version=dict(default=None)\n )\n )\n\n plugin_bin = module.params[\"plugin_bin\"]\n plugin_dir = module.params[\"plugin_dir\"]\n name = module.params[\"name\"]\n state = module.params[\"state\"]\n url = module.params[\"url\"]\n timeout = module.params[\"timeout\"]\n version = module.params[\"version\"]\n\n present = is_plugin_present(parse_plugin_repo(name), plugin_dir)\n\n # skip if the state is correct\n if (present and state == \"present\") or (state == \"absent\" and not present):\n module.exit_json(changed=False, name=name)\n\n if (version):\n name = name + '/' + version\n\n cmd_args = [plugin_bin, package_state_map[state], name]\n\n if url:\n cmd_args.append(\"--url %s\" % url)\n\n if timeout:\n cmd_args.append(\"--timeout %s\" % timeout)\n\n cmd = \" \".join(cmd_args)\n\n rc, out, err = module.run_command(cmd)\n\n if rc != 0:\n reason = parse_error(out)\n module.fail_json(msg=reason)\n\n module.exit_json(changed=True, cmd=cmd, name=name, state=state, url=url, timeout=timeout, stdout=out, stderr=err)\n\nfrom ansible.module_utils.basic import *\n\nmain()\n", "path": "packaging/elasticsearch_plugin.py"}]}
| 1,841 | 234 |
gh_patches_debug_35368
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2262
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Adding IAM parameters to redshift adapter
### Describe the feature
I would the arguments to get_cluster_credentials to be added to the dbt profile configuration. In particular DbGroups to allow the temporary user to be added to a group and AutoCreate to allow auto creation of users that do not exist.
### Describe alternatives you've considered
Since these are IAM specific configurations the only other alternative is to not use the temporary credentials.
### Additional context
This is a feature specifically for redshift users.
### Who will this benefit?
This feature will be useful for dbt users who want to use temporary and dynamic credentials with redshift.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/redshift/dbt/adapters/redshift/connections.py`
Content:
```
1 from multiprocessing import Lock
2 from contextlib import contextmanager
3 from typing import NewType
4
5 from dbt.adapters.postgres import PostgresConnectionManager
6 from dbt.adapters.postgres import PostgresCredentials
7 from dbt.logger import GLOBAL_LOGGER as logger # noqa
8 import dbt.exceptions
9 import dbt.flags
10
11 import boto3
12
13 from hologram import FieldEncoder, JsonSchemaMixin
14 from hologram.helpers import StrEnum
15
16 from dataclasses import dataclass, field
17 from typing import Optional
18
19 drop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()
20
21
22 IAMDuration = NewType('IAMDuration', int)
23
24
25 class IAMDurationEncoder(FieldEncoder):
26 @property
27 def json_schema(self):
28 return {'type': 'integer', 'minimum': 0, 'maximum': 65535}
29
30
31 JsonSchemaMixin.register_field_encoders({IAMDuration: IAMDurationEncoder()})
32
33
34 class RedshiftConnectionMethod(StrEnum):
35 DATABASE = 'database'
36 IAM = 'iam'
37
38
39 @dataclass
40 class RedshiftCredentials(PostgresCredentials):
41 method: RedshiftConnectionMethod = RedshiftConnectionMethod.DATABASE
42 password: Optional[str] = None
43 cluster_id: Optional[str] = field(
44 default=None,
45 metadata={'description': 'If using IAM auth, the name of the cluster'},
46 )
47 iam_duration_seconds: int = 900
48 search_path: Optional[str] = None
49 keepalives_idle: int = 240
50
51 @property
52 def type(self):
53 return 'redshift'
54
55 def _connection_keys(self):
56 keys = super()._connection_keys()
57 return keys + ('method', 'cluster_id', 'iam_duration_seconds')
58
59
60 class RedshiftConnectionManager(PostgresConnectionManager):
61 TYPE = 'redshift'
62
63 @contextmanager
64 def fresh_transaction(self, name=None):
65 """On entrance to this context manager, hold an exclusive lock and
66 create a fresh transaction for redshift, then commit and begin a new
67 one before releasing the lock on exit.
68
69 See drop_relation in RedshiftAdapter for more information.
70
71 :param Optional[str] name: The name of the connection to use, or None
72 to use the default.
73 """
74 with drop_lock:
75 connection = self.get_thread_connection()
76
77 if connection.transaction_open:
78 self.commit()
79
80 self.begin()
81 yield
82
83 self.commit()
84 self.begin()
85
86 @classmethod
87 def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,
88 duration_s):
89 """Fetches temporary login credentials from AWS. The specified user
90 must already exist in the database, or else an error will occur"""
91 boto_client = boto3.client('redshift')
92
93 try:
94 return boto_client.get_cluster_credentials(
95 DbUser=db_user,
96 DbName=db_name,
97 ClusterIdentifier=cluster_id,
98 DurationSeconds=duration_s,
99 AutoCreate=False)
100
101 except boto_client.exceptions.ClientError as e:
102 raise dbt.exceptions.FailedToConnectException(
103 "Unable to get temporary Redshift cluster credentials: {}"
104 .format(e))
105
106 @classmethod
107 def get_tmp_iam_cluster_credentials(cls, credentials):
108 cluster_id = credentials.cluster_id
109
110 # default via:
111 # boto3.readthedocs.io/en/latest/reference/services/redshift.html
112 iam_duration_s = credentials.iam_duration_seconds
113
114 if not cluster_id:
115 raise dbt.exceptions.FailedToConnectException(
116 "'cluster_id' must be provided in profile if IAM "
117 "authentication method selected")
118
119 cluster_creds = cls.fetch_cluster_credentials(
120 credentials.user,
121 credentials.database,
122 credentials.cluster_id,
123 iam_duration_s,
124 )
125
126 # replace username and password with temporary redshift credentials
127 return credentials.replace(user=cluster_creds.get('DbUser'),
128 password=cluster_creds.get('DbPassword'))
129
130 @classmethod
131 def get_credentials(cls, credentials):
132 method = credentials.method
133
134 # Support missing 'method' for backwards compatibility
135 if method == 'database' or method is None:
136 logger.debug("Connecting to Redshift using 'database' credentials")
137 # this requirement is really annoying to encode into json schema,
138 # so validate it here
139 if credentials.password is None:
140 raise dbt.exceptions.FailedToConnectException(
141 "'password' field is required for 'database' credentials"
142 )
143 return credentials
144
145 elif method == 'iam':
146 logger.debug("Connecting to Redshift using 'IAM' credentials")
147 return cls.get_tmp_iam_cluster_credentials(credentials)
148
149 else:
150 raise dbt.exceptions.FailedToConnectException(
151 "Invalid 'method' in profile: '{}'".format(method))
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/redshift/dbt/adapters/redshift/connections.py b/plugins/redshift/dbt/adapters/redshift/connections.py
--- a/plugins/redshift/dbt/adapters/redshift/connections.py
+++ b/plugins/redshift/dbt/adapters/redshift/connections.py
@@ -14,7 +14,7 @@
from hologram.helpers import StrEnum
from dataclasses import dataclass, field
-from typing import Optional
+from typing import Optional, List
drop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()
@@ -47,6 +47,8 @@
iam_duration_seconds: int = 900
search_path: Optional[str] = None
keepalives_idle: int = 240
+ autocreate: bool = False
+ db_groups: List[str] = field(default_factory=list)
@property
def type(self):
@@ -85,7 +87,7 @@
@classmethod
def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,
- duration_s):
+ duration_s, autocreate, db_groups):
"""Fetches temporary login credentials from AWS. The specified user
must already exist in the database, or else an error will occur"""
boto_client = boto3.client('redshift')
@@ -96,7 +98,8 @@
DbName=db_name,
ClusterIdentifier=cluster_id,
DurationSeconds=duration_s,
- AutoCreate=False)
+ AutoCreate=autocreate,
+ DbGroups=db_groups,)
except boto_client.exceptions.ClientError as e:
raise dbt.exceptions.FailedToConnectException(
@@ -121,6 +124,8 @@
credentials.database,
credentials.cluster_id,
iam_duration_s,
+ credentials.autocreate,
+ credentials.db_groups,
)
# replace username and password with temporary redshift credentials
|
{"golden_diff": "diff --git a/plugins/redshift/dbt/adapters/redshift/connections.py b/plugins/redshift/dbt/adapters/redshift/connections.py\n--- a/plugins/redshift/dbt/adapters/redshift/connections.py\n+++ b/plugins/redshift/dbt/adapters/redshift/connections.py\n@@ -14,7 +14,7 @@\n from hologram.helpers import StrEnum\n \n from dataclasses import dataclass, field\n-from typing import Optional\n+from typing import Optional, List\n \n drop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()\n \n@@ -47,6 +47,8 @@\n iam_duration_seconds: int = 900\n search_path: Optional[str] = None\n keepalives_idle: int = 240\n+ autocreate: bool = False\n+ db_groups: List[str] = field(default_factory=list)\n \n @property\n def type(self):\n@@ -85,7 +87,7 @@\n \n @classmethod\n def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,\n- duration_s):\n+ duration_s, autocreate, db_groups):\n \"\"\"Fetches temporary login credentials from AWS. The specified user\n must already exist in the database, or else an error will occur\"\"\"\n boto_client = boto3.client('redshift')\n@@ -96,7 +98,8 @@\n DbName=db_name,\n ClusterIdentifier=cluster_id,\n DurationSeconds=duration_s,\n- AutoCreate=False)\n+ AutoCreate=autocreate,\n+ DbGroups=db_groups,)\n \n except boto_client.exceptions.ClientError as e:\n raise dbt.exceptions.FailedToConnectException(\n@@ -121,6 +124,8 @@\n credentials.database,\n credentials.cluster_id,\n iam_duration_s,\n+ credentials.autocreate,\n+ credentials.db_groups,\n )\n \n # replace username and password with temporary redshift credentials\n", "issue": "Adding IAM parameters to redshift adapter\n### Describe the feature\r\nI would the arguments to get_cluster_credentials to be added to the dbt profile configuration. In particular DbGroups to allow the temporary user to be added to a group and AutoCreate to allow auto creation of users that do not exist. \r\n\r\n### Describe alternatives you've considered\r\nSince these are IAM specific configurations the only other alternative is to not use the temporary credentials.\r\n\r\n### Additional context\r\nThis is a feature specifically for redshift users.\r\n\r\n### Who will this benefit?\r\nThis feature will be useful for dbt users who want to use temporary and dynamic credentials with redshift. 
\r\n\n", "before_files": [{"content": "from multiprocessing import Lock\nfrom contextlib import contextmanager\nfrom typing import NewType\n\nfrom dbt.adapters.postgres import PostgresConnectionManager\nfrom dbt.adapters.postgres import PostgresCredentials\nfrom dbt.logger import GLOBAL_LOGGER as logger # noqa\nimport dbt.exceptions\nimport dbt.flags\n\nimport boto3\n\nfrom hologram import FieldEncoder, JsonSchemaMixin\nfrom hologram.helpers import StrEnum\n\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\ndrop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()\n\n\nIAMDuration = NewType('IAMDuration', int)\n\n\nclass IAMDurationEncoder(FieldEncoder):\n @property\n def json_schema(self):\n return {'type': 'integer', 'minimum': 0, 'maximum': 65535}\n\n\nJsonSchemaMixin.register_field_encoders({IAMDuration: IAMDurationEncoder()})\n\n\nclass RedshiftConnectionMethod(StrEnum):\n DATABASE = 'database'\n IAM = 'iam'\n\n\n@dataclass\nclass RedshiftCredentials(PostgresCredentials):\n method: RedshiftConnectionMethod = RedshiftConnectionMethod.DATABASE\n password: Optional[str] = None\n cluster_id: Optional[str] = field(\n default=None,\n metadata={'description': 'If using IAM auth, the name of the cluster'},\n )\n iam_duration_seconds: int = 900\n search_path: Optional[str] = None\n keepalives_idle: int = 240\n\n @property\n def type(self):\n return 'redshift'\n\n def _connection_keys(self):\n keys = super()._connection_keys()\n return keys + ('method', 'cluster_id', 'iam_duration_seconds')\n\n\nclass RedshiftConnectionManager(PostgresConnectionManager):\n TYPE = 'redshift'\n\n @contextmanager\n def fresh_transaction(self, name=None):\n \"\"\"On entrance to this context manager, hold an exclusive lock and\n create a fresh transaction for redshift, then commit and begin a new\n one before releasing the lock on exit.\n\n See drop_relation in RedshiftAdapter for more information.\n\n :param Optional[str] name: The name of the connection to use, or None\n to use the default.\n \"\"\"\n with drop_lock:\n connection = self.get_thread_connection()\n\n if connection.transaction_open:\n self.commit()\n\n self.begin()\n yield\n\n self.commit()\n self.begin()\n\n @classmethod\n def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,\n duration_s):\n \"\"\"Fetches temporary login credentials from AWS. 
The specified user\n must already exist in the database, or else an error will occur\"\"\"\n boto_client = boto3.client('redshift')\n\n try:\n return boto_client.get_cluster_credentials(\n DbUser=db_user,\n DbName=db_name,\n ClusterIdentifier=cluster_id,\n DurationSeconds=duration_s,\n AutoCreate=False)\n\n except boto_client.exceptions.ClientError as e:\n raise dbt.exceptions.FailedToConnectException(\n \"Unable to get temporary Redshift cluster credentials: {}\"\n .format(e))\n\n @classmethod\n def get_tmp_iam_cluster_credentials(cls, credentials):\n cluster_id = credentials.cluster_id\n\n # default via:\n # boto3.readthedocs.io/en/latest/reference/services/redshift.html\n iam_duration_s = credentials.iam_duration_seconds\n\n if not cluster_id:\n raise dbt.exceptions.FailedToConnectException(\n \"'cluster_id' must be provided in profile if IAM \"\n \"authentication method selected\")\n\n cluster_creds = cls.fetch_cluster_credentials(\n credentials.user,\n credentials.database,\n credentials.cluster_id,\n iam_duration_s,\n )\n\n # replace username and password with temporary redshift credentials\n return credentials.replace(user=cluster_creds.get('DbUser'),\n password=cluster_creds.get('DbPassword'))\n\n @classmethod\n def get_credentials(cls, credentials):\n method = credentials.method\n\n # Support missing 'method' for backwards compatibility\n if method == 'database' or method is None:\n logger.debug(\"Connecting to Redshift using 'database' credentials\")\n # this requirement is really annoying to encode into json schema,\n # so validate it here\n if credentials.password is None:\n raise dbt.exceptions.FailedToConnectException(\n \"'password' field is required for 'database' credentials\"\n )\n return credentials\n\n elif method == 'iam':\n logger.debug(\"Connecting to Redshift using 'IAM' credentials\")\n return cls.get_tmp_iam_cluster_credentials(credentials)\n\n else:\n raise dbt.exceptions.FailedToConnectException(\n \"Invalid 'method' in profile: '{}'\".format(method))\n", "path": "plugins/redshift/dbt/adapters/redshift/connections.py"}], "after_files": [{"content": "from multiprocessing import Lock\nfrom contextlib import contextmanager\nfrom typing import NewType\n\nfrom dbt.adapters.postgres import PostgresConnectionManager\nfrom dbt.adapters.postgres import PostgresCredentials\nfrom dbt.logger import GLOBAL_LOGGER as logger # noqa\nimport dbt.exceptions\nimport dbt.flags\n\nimport boto3\n\nfrom hologram import FieldEncoder, JsonSchemaMixin\nfrom hologram.helpers import StrEnum\n\nfrom dataclasses import dataclass, field\nfrom typing import Optional, List\n\ndrop_lock: Lock = dbt.flags.MP_CONTEXT.Lock()\n\n\nIAMDuration = NewType('IAMDuration', int)\n\n\nclass IAMDurationEncoder(FieldEncoder):\n @property\n def json_schema(self):\n return {'type': 'integer', 'minimum': 0, 'maximum': 65535}\n\n\nJsonSchemaMixin.register_field_encoders({IAMDuration: IAMDurationEncoder()})\n\n\nclass RedshiftConnectionMethod(StrEnum):\n DATABASE = 'database'\n IAM = 'iam'\n\n\n@dataclass\nclass RedshiftCredentials(PostgresCredentials):\n method: RedshiftConnectionMethod = RedshiftConnectionMethod.DATABASE\n password: Optional[str] = None\n cluster_id: Optional[str] = field(\n default=None,\n metadata={'description': 'If using IAM auth, the name of the cluster'},\n )\n iam_duration_seconds: int = 900\n search_path: Optional[str] = None\n keepalives_idle: int = 240\n autocreate: bool = False\n db_groups: List[str] = field(default_factory=list)\n\n @property\n def type(self):\n return 'redshift'\n\n 
def _connection_keys(self):\n keys = super()._connection_keys()\n return keys + ('method', 'cluster_id', 'iam_duration_seconds')\n\n\nclass RedshiftConnectionManager(PostgresConnectionManager):\n TYPE = 'redshift'\n\n @contextmanager\n def fresh_transaction(self, name=None):\n \"\"\"On entrance to this context manager, hold an exclusive lock and\n create a fresh transaction for redshift, then commit and begin a new\n one before releasing the lock on exit.\n\n See drop_relation in RedshiftAdapter for more information.\n\n :param Optional[str] name: The name of the connection to use, or None\n to use the default.\n \"\"\"\n with drop_lock:\n connection = self.get_thread_connection()\n\n if connection.transaction_open:\n self.commit()\n\n self.begin()\n yield\n\n self.commit()\n self.begin()\n\n @classmethod\n def fetch_cluster_credentials(cls, db_user, db_name, cluster_id,\n duration_s, autocreate, db_groups):\n \"\"\"Fetches temporary login credentials from AWS. The specified user\n must already exist in the database, or else an error will occur\"\"\"\n boto_client = boto3.client('redshift')\n\n try:\n return boto_client.get_cluster_credentials(\n DbUser=db_user,\n DbName=db_name,\n ClusterIdentifier=cluster_id,\n DurationSeconds=duration_s,\n AutoCreate=autocreate,\n DbGroups=db_groups,)\n\n except boto_client.exceptions.ClientError as e:\n raise dbt.exceptions.FailedToConnectException(\n \"Unable to get temporary Redshift cluster credentials: {}\"\n .format(e))\n\n @classmethod\n def get_tmp_iam_cluster_credentials(cls, credentials):\n cluster_id = credentials.cluster_id\n\n # default via:\n # boto3.readthedocs.io/en/latest/reference/services/redshift.html\n iam_duration_s = credentials.iam_duration_seconds\n\n if not cluster_id:\n raise dbt.exceptions.FailedToConnectException(\n \"'cluster_id' must be provided in profile if IAM \"\n \"authentication method selected\")\n\n cluster_creds = cls.fetch_cluster_credentials(\n credentials.user,\n credentials.database,\n credentials.cluster_id,\n iam_duration_s,\n credentials.autocreate,\n credentials.db_groups,\n )\n\n # replace username and password with temporary redshift credentials\n return credentials.replace(user=cluster_creds.get('DbUser'),\n password=cluster_creds.get('DbPassword'))\n\n @classmethod\n def get_credentials(cls, credentials):\n method = credentials.method\n\n # Support missing 'method' for backwards compatibility\n if method == 'database' or method is None:\n logger.debug(\"Connecting to Redshift using 'database' credentials\")\n # this requirement is really annoying to encode into json schema,\n # so validate it here\n if credentials.password is None:\n raise dbt.exceptions.FailedToConnectException(\n \"'password' field is required for 'database' credentials\"\n )\n return credentials\n\n elif method == 'iam':\n logger.debug(\"Connecting to Redshift using 'IAM' credentials\")\n return cls.get_tmp_iam_cluster_credentials(credentials)\n\n else:\n raise dbt.exceptions.FailedToConnectException(\n \"Invalid 'method' in profile: '{}'\".format(method))\n", "path": "plugins/redshift/dbt/adapters/redshift/connections.py"}]}
| 1,756 | 417 |
gh_patches_debug_27588
|
rasdani/github-patches
|
git_diff
|
engnadeau__pybotics-412
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Flaky `test_fk()` results
- Fails occasionally due to a small difference in the x-component of the transform matrix
- https://ci.appveyor.com/project/nnadeau/pybotics/build/1.0.732/job/c9mmdfvctt9jasie
- https://ci.appveyor.com/project/nnadeau/pybotics/build/1.0.732/job/qs645jqgd49iwa3s
- This corresponds to the first row of the UR10 resources
```
-45.0,147.0,-39.0,96.0,49.0,67.0,-0.10936549564013165,0.9937209495290638,0.02361912001841469,341.25528339185024,-0.30766985716369466,-0.011247226994026319,-0.9514266965341137,-658.5887448195482,-0.9451869906829335,-0.11132014351410438,0.30696804115695675,-625.3245786240964,0.0,0.0,0.0,1.0
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pybotics/predefined_models.py`
Content:
```
1 """Predefined robot models."""
2 from typing import Any
3
4 import numpy as np # type: ignore
5
6 from pybotics import Robot
7 from pybotics.kinematic_chain import MDHKinematicChain
8
9
10 class KukaLBRiiwa7(Robot):
11 """KUKA LBR iiwa 7 R800 collaborative robot."""
12
13 # TODO: add manufacturer's joint limits
14 kinematic_chain = MDHKinematicChain(
15 np.array([
16 0, 0, 0, 340,
17 -np.pi / 2, 0, 0, 0,
18 np.pi / 2, 0, 0, 400,
19 np.pi / 2, 0, 0, 0,
20 -np.pi / 2, 0, 0, 400,
21 -np.pi / 2, 0, 0, 0,
22 np.pi / 2, 0, 0, 126
23 ])
24 )
25
26 def __init__(self, **kwargs: Any) -> None:
27 """Init robot."""
28 super().__init__(self.kinematic_chain, **kwargs)
29
30
31 class MecademicMeca500(Robot):
32 """Mecademic Meca500 small robot."""
33
34 # TODO: add manufacturer's joint limits
35 kinematic_chain = MDHKinematicChain(
36 np.array([
37 0, 0, 0, 135,
38 -np.pi / 2, 0, -np.pi / 2, 0,
39 0, 135, 0, 0,
40 -np.pi / 2, 38, 0, 120,
41 np.pi / 2, 0, 0, 0,
42 -np.pi / 2, 0, np.pi, 72
43 ])
44 )
45
46 def __init__(self, **kwargs: Any) -> None:
47 """Init robot."""
48 super().__init__(self.kinematic_chain, **kwargs)
49
50
51 class PUMA560(Robot):
52 """PUMA 560 robot."""
53
54 # TODO: add manufacturer's joint limits
55 kinematic_chain = MDHKinematicChain(
56 np.array([
57 0, 0, 0, 0,
58 -np.pi / 2, 0, 0, 0,
59 0, 612.7, 0, 0,
60 0, 571.6, 0, 163.9,
61 -np.pi / 2, 0, 0, 115.7,
62 np.pi / 2, 0, np.pi, 92.2
63 ])
64 )
65
66 def __init__(self, **kwargs: Any) -> None:
67 """Init robot."""
68 super().__init__(self.kinematic_chain, **kwargs)
69
70
71 class UR10(Robot):
72 """Universal Robots UR10 collaborative robot."""
73
74 # TODO: add manufacturer's joint limits
75 kinematic_chain = MDHKinematicChain(
76 np.array([
77 0, 0, 0, 118,
78 np.pi / 2, 0, np.pi, 0,
79 0, 612.7, 0, 0,
80 0, 571.6, 0, 163.9,
81 -np.pi / 2, 0, 0, 115.7,
82 np.pi / 2, 0, np.pi, 92.2
83 ])
84 )
85
86 def __init__(self, **kwargs: Any) -> None:
87 """Init robot."""
88 super().__init__(self.kinematic_chain, **kwargs)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pybotics/predefined_models.py b/pybotics/predefined_models.py
--- a/pybotics/predefined_models.py
+++ b/pybotics/predefined_models.py
@@ -1,4 +1,5 @@
"""Predefined robot models."""
+from copy import deepcopy
from typing import Any
import numpy as np # type: ignore
@@ -25,7 +26,7 @@
def __init__(self, **kwargs: Any) -> None:
"""Init robot."""
- super().__init__(self.kinematic_chain, **kwargs)
+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)
class MecademicMeca500(Robot):
@@ -45,7 +46,7 @@
def __init__(self, **kwargs: Any) -> None:
"""Init robot."""
- super().__init__(self.kinematic_chain, **kwargs)
+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)
class PUMA560(Robot):
@@ -65,7 +66,7 @@
def __init__(self, **kwargs: Any) -> None:
"""Init robot."""
- super().__init__(self.kinematic_chain, **kwargs)
+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)
class UR10(Robot):
@@ -85,4 +86,4 @@
def __init__(self, **kwargs: Any) -> None:
"""Init robot."""
- super().__init__(self.kinematic_chain, **kwargs)
+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)
|
{"golden_diff": "diff --git a/pybotics/predefined_models.py b/pybotics/predefined_models.py\n--- a/pybotics/predefined_models.py\n+++ b/pybotics/predefined_models.py\n@@ -1,4 +1,5 @@\n \"\"\"Predefined robot models.\"\"\"\n+from copy import deepcopy\n from typing import Any\n \n import numpy as np # type: ignore\n@@ -25,7 +26,7 @@\n \n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n- super().__init__(self.kinematic_chain, **kwargs)\n+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n \n \n class MecademicMeca500(Robot):\n@@ -45,7 +46,7 @@\n \n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n- super().__init__(self.kinematic_chain, **kwargs)\n+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n \n \n class PUMA560(Robot):\n@@ -65,7 +66,7 @@\n \n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n- super().__init__(self.kinematic_chain, **kwargs)\n+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n \n \n class UR10(Robot):\n@@ -85,4 +86,4 @@\n \n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n- super().__init__(self.kinematic_chain, **kwargs)\n+ super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n", "issue": "Flaky `test_fk()` results\n- Fails occasionally due to a small difference in the x-component of the transform matrix\r\n - https://ci.appveyor.com/project/nnadeau/pybotics/build/1.0.732/job/c9mmdfvctt9jasie\r\n - https://ci.appveyor.com/project/nnadeau/pybotics/build/1.0.732/job/qs645jqgd49iwa3s\r\n- This corresponds to the first row of the UR10 resources\r\n```\r\n-45.0,147.0,-39.0,96.0,49.0,67.0,-0.10936549564013165,0.9937209495290638,0.02361912001841469,341.25528339185024,-0.30766985716369466,-0.011247226994026319,-0.9514266965341137,-658.5887448195482,-0. 
9451869906829335,-0.11132014351410438,0.30696804115695675,-625.3245786240964,0.0,0.0,0.0,1.0\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Predefined robot models.\"\"\"\nfrom typing import Any\n\nimport numpy as np # type: ignore\n\nfrom pybotics import Robot\nfrom pybotics.kinematic_chain import MDHKinematicChain\n\n\nclass KukaLBRiiwa7(Robot):\n \"\"\"KUKA LBR iiwa 7 R800 collaborative robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 340,\n -np.pi / 2, 0, 0, 0,\n np.pi / 2, 0, 0, 400,\n np.pi / 2, 0, 0, 0,\n -np.pi / 2, 0, 0, 400,\n -np.pi / 2, 0, 0, 0,\n np.pi / 2, 0, 0, 126\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(self.kinematic_chain, **kwargs)\n\n\nclass MecademicMeca500(Robot):\n \"\"\"Mecademic Meca500 small robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 135,\n -np.pi / 2, 0, -np.pi / 2, 0,\n 0, 135, 0, 0,\n -np.pi / 2, 38, 0, 120,\n np.pi / 2, 0, 0, 0,\n -np.pi / 2, 0, np.pi, 72\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(self.kinematic_chain, **kwargs)\n\n\nclass PUMA560(Robot):\n \"\"\"PUMA 560 robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 0,\n -np.pi / 2, 0, 0, 0,\n 0, 612.7, 0, 0,\n 0, 571.6, 0, 163.9,\n -np.pi / 2, 0, 0, 115.7,\n np.pi / 2, 0, np.pi, 92.2\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(self.kinematic_chain, **kwargs)\n\n\nclass UR10(Robot):\n \"\"\"Universal Robots UR10 collaborative robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 118,\n np.pi / 2, 0, np.pi, 0,\n 0, 612.7, 0, 0,\n 0, 571.6, 0, 163.9,\n -np.pi / 2, 0, 0, 115.7,\n np.pi / 2, 0, np.pi, 92.2\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(self.kinematic_chain, **kwargs)\n", "path": "pybotics/predefined_models.py"}], "after_files": [{"content": "\"\"\"Predefined robot models.\"\"\"\nfrom copy import deepcopy\nfrom typing import Any\n\nimport numpy as np # type: ignore\n\nfrom pybotics import Robot\nfrom pybotics.kinematic_chain import MDHKinematicChain\n\n\nclass KukaLBRiiwa7(Robot):\n \"\"\"KUKA LBR iiwa 7 R800 collaborative robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 340,\n -np.pi / 2, 0, 0, 0,\n np.pi / 2, 0, 0, 400,\n np.pi / 2, 0, 0, 0,\n -np.pi / 2, 0, 0, 400,\n -np.pi / 2, 0, 0, 0,\n np.pi / 2, 0, 0, 126\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n\n\nclass MecademicMeca500(Robot):\n \"\"\"Mecademic Meca500 small robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 135,\n -np.pi / 2, 0, -np.pi / 2, 0,\n 0, 135, 0, 0,\n -np.pi / 2, 38, 0, 120,\n np.pi / 2, 0, 0, 0,\n -np.pi / 2, 0, np.pi, 72\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n\n\nclass PUMA560(Robot):\n \"\"\"PUMA 560 robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 0,\n -np.pi / 2, 0, 0, 0,\n 0, 612.7, 0, 0,\n 0, 
571.6, 0, 163.9,\n -np.pi / 2, 0, 0, 115.7,\n np.pi / 2, 0, np.pi, 92.2\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n\n\nclass UR10(Robot):\n \"\"\"Universal Robots UR10 collaborative robot.\"\"\"\n\n # TODO: add manufacturer's joint limits\n kinematic_chain = MDHKinematicChain(\n np.array([\n 0, 0, 0, 118,\n np.pi / 2, 0, np.pi, 0,\n 0, 612.7, 0, 0,\n 0, 571.6, 0, 163.9,\n -np.pi / 2, 0, 0, 115.7,\n np.pi / 2, 0, np.pi, 92.2\n ])\n )\n\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Init robot.\"\"\"\n super().__init__(deepcopy(self.kinematic_chain), **kwargs)\n", "path": "pybotics/predefined_models.py"}]}
| 1,682 | 369 |
gh_patches_debug_640
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1922
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.106
On the docket:
+ [x] Providing a direct reference to a wheel with a local version fails to resolve #1919
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.105"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.105"
+__version__ = "2.1.106"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.105\"\n+__version__ = \"2.1.106\"\n", "issue": "Release 2.1.106\nOn the docket:\r\n+ [x] Providing a direct reference to a wheel with a local version fails to resolve #1919 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.105\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.106\"\n", "path": "pex/version.py"}]}
| 347 | 98 |
gh_patches_debug_20855
|
rasdani/github-patches
|
git_diff
|
pydantic__pydantic-1328
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Configurable SecretStr
This issue revisits this comment https://github.com/samuelcolvin/pydantic/issues/462#issuecomment-480326378 by @tiangolo.
I think it would be a good idea that there is a standard way of getting secrets exported for propagation to other services.
This is not something that tends to happen in three tier apps where e.g. the db creds are secret but rather a case that happens _a lot_ when dealing with microservice architectures wherein request payloads may serialize and deserialize multiple times through the end to end request lifecycle.
`.json()` to me is semantically like `.export` and as such defaulting to revealing secret makes sense. But that would be a breaking change.
Other approaches:
- `.json(reveal_secrets=True)`
- `.export()`
But maybe we can take the breaking change path via https://github.com/samuelcolvin/pydantic/issues/576 and then:
- `.json(keep_secrets=True)`
To be clear I don't see `.json` as being something used for logging. Something like `structlog` would work with `pydantic.dict()` instead:
```
log.info('something', data=model.dict())
```
I _think_ `.dict` defaulting to maintaining secrets seems right. But we could have, too:
```
log.info('something', data=model.dict(reveal_secrets=True))
```
But than we should make considerations around API consistency across methods and ensure usability is good overall, not just per case.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/examples/types_secret_types.py`
Content:
```
1 from pydantic import BaseModel, SecretStr, SecretBytes, ValidationError
2
3 class SimpleModel(BaseModel):
4 password: SecretStr
5 password_bytes: SecretBytes
6
7 sm = SimpleModel(password='IAmSensitive', password_bytes=b'IAmSensitiveBytes')
8
9 # Standard access methods will not display the secret
10 print(sm)
11 print(sm.password)
12 print(sm.json())
13
14 # Use get_secret_value method to see the secret's content.
15 print(sm.password.get_secret_value())
16 print(sm.password_bytes.get_secret_value())
17
18 try:
19 SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])
20 except ValidationError as e:
21 print(e)
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/examples/types_secret_types.py b/docs/examples/types_secret_types.py
--- a/docs/examples/types_secret_types.py
+++ b/docs/examples/types_secret_types.py
@@ -9,6 +9,7 @@
# Standard access methods will not display the secret
print(sm)
print(sm.password)
+print(sm.dict())
print(sm.json())
# Use get_secret_value method to see the secret's content.
@@ -19,3 +20,26 @@
SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])
except ValidationError as e:
print(e)
+
+# If you want the secret to be dumped as plain-text using the json method,
+# you can use json_encoders in the Config class.
+class SimpleModelDumpable(BaseModel):
+ password: SecretStr
+ password_bytes: SecretBytes
+
+ class Config:
+ json_encoders = {
+ SecretStr: lambda v: v.get_secret_value() if v else None,
+ SecretBytes: lambda v: v.get_secret_value() if v else None,
+ }
+
+sm2 = SimpleModelDumpable(password='IAmSensitive',
+ password_bytes=b'IAmSensitiveBytes')
+
+# Standard access methods will not display the secret
+print(sm2)
+print(sm2.password)
+print(sm2.dict())
+
+# But the json method will
+print(sm2.json())
|
{"golden_diff": "diff --git a/docs/examples/types_secret_types.py b/docs/examples/types_secret_types.py\n--- a/docs/examples/types_secret_types.py\n+++ b/docs/examples/types_secret_types.py\n@@ -9,6 +9,7 @@\n # Standard access methods will not display the secret\n print(sm)\n print(sm.password)\n+print(sm.dict())\n print(sm.json())\n \n # Use get_secret_value method to see the secret's content.\n@@ -19,3 +20,26 @@\n SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])\n except ValidationError as e:\n print(e)\n+\n+# If you want the secret to be dumped as plain-text using the json method,\n+# you can use json_encoders in the Config class.\n+class SimpleModelDumpable(BaseModel):\n+ password: SecretStr\n+ password_bytes: SecretBytes\n+\n+ class Config:\n+ json_encoders = {\n+ SecretStr: lambda v: v.get_secret_value() if v else None,\n+ SecretBytes: lambda v: v.get_secret_value() if v else None,\n+ }\n+\n+sm2 = SimpleModelDumpable(password='IAmSensitive', \n+ password_bytes=b'IAmSensitiveBytes')\n+\n+# Standard access methods will not display the secret\n+print(sm2)\n+print(sm2.password)\n+print(sm2.dict())\n+\n+# But the json method will\n+print(sm2.json())\n", "issue": "Configurable SecretStr\nThis issue revisits this comment https://github.com/samuelcolvin/pydantic/issues/462#issuecomment-480326378 by @tiangolo.\r\n\r\nI think it would be a good idea that there is a standard way of getting secrets exported for propagation to other services.\r\n\r\nThis is not something that tends to happen in three tier apps where e.g. the db creds are secret but rather a case that happens _a lot_ when dealing with microservice architectures wherein request payloads may serialize and deserialize multiple times through the end to end request lifecycle.\r\n\r\n`.json()` to me is semantically like `.export` and as such defaulting to revealing secret makes sense. But that would be a breaking change.\r\n\r\nOther approaches:\r\n\r\n- `.json(reveal_secrets=True)`\r\n- `.export()`\r\n\r\nBut maybe we can take the breaking change path via https://github.com/samuelcolvin/pydantic/issues/576 and then:\r\n\r\n- `.json(keep_secrets=True)`\r\n\r\nTo be clear I don't see `.json` as being something used for logging. Something like `structlog` would work with `pydantic.dict()` instead:\r\n\r\n```\r\nlog.info('something', data=model.dict())\r\n```\r\n\r\nI _think_ `.dict` defaulting to maintaining secrets seems right. 
But we could have, too:\r\n\r\n```\r\nlog.info('something', data=model.dict(reveal_secrets=True))\r\n```\r\n\r\nBut than we should make considerations around API consistency across methods and ensure usability is good overall, not just per case.\n", "before_files": [{"content": "from pydantic import BaseModel, SecretStr, SecretBytes, ValidationError\n\nclass SimpleModel(BaseModel):\n password: SecretStr\n password_bytes: SecretBytes\n\nsm = SimpleModel(password='IAmSensitive', password_bytes=b'IAmSensitiveBytes')\n\n# Standard access methods will not display the secret\nprint(sm)\nprint(sm.password)\nprint(sm.json())\n\n# Use get_secret_value method to see the secret's content.\nprint(sm.password.get_secret_value())\nprint(sm.password_bytes.get_secret_value())\n\ntry:\n SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])\nexcept ValidationError as e:\n print(e)\n", "path": "docs/examples/types_secret_types.py"}], "after_files": [{"content": "from pydantic import BaseModel, SecretStr, SecretBytes, ValidationError\n\nclass SimpleModel(BaseModel):\n password: SecretStr\n password_bytes: SecretBytes\n\nsm = SimpleModel(password='IAmSensitive', password_bytes=b'IAmSensitiveBytes')\n\n# Standard access methods will not display the secret\nprint(sm)\nprint(sm.password)\nprint(sm.dict())\nprint(sm.json())\n\n# Use get_secret_value method to see the secret's content.\nprint(sm.password.get_secret_value())\nprint(sm.password_bytes.get_secret_value())\n\ntry:\n SimpleModel(password=[1, 2, 3], password_bytes=[1, 2, 3])\nexcept ValidationError as e:\n print(e)\n\n# If you want the secret to be dumped as plain-text using the json method,\n# you can use json_encoders in the Config class.\nclass SimpleModelDumpable(BaseModel):\n password: SecretStr\n password_bytes: SecretBytes\n\n class Config:\n json_encoders = {\n SecretStr: lambda v: v.get_secret_value() if v else None,\n SecretBytes: lambda v: v.get_secret_value() if v else None,\n }\n\nsm2 = SimpleModelDumpable(password='IAmSensitive', \n password_bytes=b'IAmSensitiveBytes')\n\n# Standard access methods will not display the secret\nprint(sm2)\nprint(sm2.password)\nprint(sm2.dict())\n\n# But the json method will\nprint(sm2.json())\n", "path": "docs/examples/types_secret_types.py"}]}
| 761 | 307 |
gh_patches_debug_13660
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__pytorch-lightning-3042
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect default cuda device when using single gpu other than cuda:0
## 🐛 Bug
The default `cuda` device is not set properly to the `trainer.root_gpu` in single-GPU mode. Tensors created with `device='cuda'` will be placed on the incorrect GPU, and the dataloader will acquire memory on the incorrect GPU when `pin_memory=True`.
Maybe we'll need to add
`torch.cuda.set_device(self.trainer.root_gpu)` to https://github.com/PyTorchLightning/pytorch-lightning/blob/5dfc7b157e7febab692036b7392dac8b52f41b87/pytorch_lightning/accelerators/gpu_backend.py#L24
as `DDPBackend` did:
https://github.com/PyTorchLightning/pytorch-lightning/blob/5dfc7b157e7febab692036b7392dac8b52f41b87/pytorch_lightning/accelerators/ddp_backend.py#L195
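A minimal sketch of where the proposed call could sit, assuming the `GPUBackend.setup` shown further below (illustrative only, not the final patch):

```python
import torch

# Sketch: pin the default CUDA device before moving the model, so that tensors
# created with device='cuda' land on trainer.root_gpu (method body abridged).
def setup(self, model):
    self.trainer.call_setup_hook(model)
    torch.cuda.set_device(self.trainer.root_gpu)  # proposed addition
    model.cuda(self.trainer.root_gpu)
```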
### To Reproduce
Running the following code will get
`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!`
#### Code sample
```python
import pytorch_lightning as pl
import torch
from torch import nn
from torch.utils import data
class Dataset(data.Dataset):
def __getitem__(self, item):
return torch.zeros(1)
def __len__(self):
return 5
class Model(pl.LightningModule):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.x = nn.Parameter(torch.zeros(1))
def forward(self, *args, **kwargs):
return self.x
def training_step(self, *args, **kwargs):
return self.x + torch.zeros(1, device='cuda') # RuntimeError.
def train_dataloader(self):
return data.DataLoader(Dataset(), num_workers=1, pin_memory=True)
def configure_optimizers(self):
return torch.optim.SGD(self.parameters(), 1.0)
if __name__ == '__main__':
trainer = pl.Trainer(gpus=[1], num_sanity_val_steps=0, max_epochs=1)
model = Model()
trainer.fit(model)
```
### Expected behavior
No `RuntimeError` occurs.
### Environment
* CUDA:
- GPU:
- available:
- version:
* Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0rc16
- tensorboard: 2.3.0
- tqdm: 4.48.2
* System:
- OS: Windows
- architecture:
- 64bit
- WindowsPE
- processor:
- python: 3.7.3
- version: 10.0.18362
### Additional context
<!-- Add any other context about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/accelerators/gpu_backend.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from pytorch_lightning.core import LightningModule
16 from pytorch_lightning.utilities import AMPType
17
18 try:
19 from apex import amp
20 except ImportError:
21 amp = None
22
23
24 class GPUBackend(object):
25 amp_backend: AMPType
26
27 def __init__(self, trainer):
28 self.trainer = trainer
29
30 def setup(self, model):
31
32 # call setup
33 self.trainer.call_setup_hook(model)
34
35 model.cuda(self.trainer.root_gpu)
36
37 # CHOOSE OPTIMIZER
38 # allow for lr schedulers as well
39 optimizers, lr_schedulers, optimizer_frequencies = self.trainer.init_optimizers(model)
40 self.trainer.optimizers = optimizers
41 self.trainer.lr_schedulers = lr_schedulers
42 self.trainer.optimizer_frequencies = optimizer_frequencies
43
44 if self.trainer.amp_backend == AMPType.APEX:
45 model = self._setup_nvidia_apex(model)
46 return model
47
48 def train(self, model):
49 results = self.trainer.run_pretrain_routine(model)
50 return results
51
52 def _setup_nvidia_apex(self, model: LightningModule):
53 model, optimizers = model.configure_apex(amp, model, self.trainer.optimizers, self.trainer.amp_level)
54 self.trainer.optimizers = optimizers
55 self.trainer.reinit_scheduler_properties(self.trainer.optimizers, self.trainer.lr_schedulers)
56 return model
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pytorch_lightning/accelerators/gpu_backend.py b/pytorch_lightning/accelerators/gpu_backend.py
--- a/pytorch_lightning/accelerators/gpu_backend.py
+++ b/pytorch_lightning/accelerators/gpu_backend.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import torch
from pytorch_lightning.core import LightningModule
from pytorch_lightning.utilities import AMPType
@@ -32,6 +33,7 @@
# call setup
self.trainer.call_setup_hook(model)
+ torch.cuda.set_device(self.trainer.root_gpu)
model.cuda(self.trainer.root_gpu)
# CHOOSE OPTIMIZER
|
{"golden_diff": "diff --git a/pytorch_lightning/accelerators/gpu_backend.py b/pytorch_lightning/accelerators/gpu_backend.py\n--- a/pytorch_lightning/accelerators/gpu_backend.py\n+++ b/pytorch_lightning/accelerators/gpu_backend.py\n@@ -12,6 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import torch\n from pytorch_lightning.core import LightningModule\n from pytorch_lightning.utilities import AMPType\n \n@@ -32,6 +33,7 @@\n # call setup\n self.trainer.call_setup_hook(model)\n \n+ torch.cuda.set_device(self.trainer.root_gpu)\n model.cuda(self.trainer.root_gpu)\n \n # CHOOSE OPTIMIZER\n", "issue": "Incorrect default cuda device when using single gpu other than cuda:0\n## \ud83d\udc1b Bug\r\n\r\nThe default `cuda` is not set properly to the `trainer.root_gpu` in single-GPU mode. The tensors created with `device='cuda'` will be placed on the incorrect gpu, and the dataloader will acquire memory on the incorrect gpu when `pin_memory=True`.\r\n\r\nMaybe we'll need to add\r\n`torch.cuda.set_device(self.trainer.root_gpu)` to https://github.com/PyTorchLightning/pytorch-lightning/blob/5dfc7b157e7febab692036b7392dac8b52f41b87/pytorch_lightning/accelerators/gpu_backend.py#L24\r\nas `DDPBackend` did:\r\n\r\nhttps://github.com/PyTorchLightning/pytorch-lightning/blob/5dfc7b157e7febab692036b7392dac8b52f41b87/pytorch_lightning/accelerators/ddp_backend.py#L195\r\n\r\n### To Reproduce\r\n\r\nRunning the following code will get \r\n\r\n`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!`\r\n\r\n#### Code sample\r\n\r\n```python\r\nimport pytorch_lightning as pl\r\nimport torch\r\nfrom torch import nn\r\nfrom torch.utils import data\r\n\r\n\r\nclass Dataset(data.Dataset):\r\n\r\n def __getitem__(self, item):\r\n return torch.zeros(1)\r\n\r\n def __len__(self):\r\n return 5\r\n\r\n\r\nclass Model(pl.LightningModule):\r\n\r\n def __init__(self, *args, **kwargs):\r\n super().__init__(*args, **kwargs)\r\n self.x = nn.Parameter(torch.zeros(1))\r\n\r\n def forward(self, *args, **kwargs):\r\n return self.x\r\n\r\n def training_step(self, *args, **kwargs):\r\n return self.x + torch.zeros(1, device='cuda') # RuntimeError.\r\n\r\n def train_dataloader(self):\r\n return data.DataLoader(Dataset(), num_workers=1, pin_memory=True)\r\n\r\n def configure_optimizers(self):\r\n return torch.optim.SGD(self.parameters(), 1.0)\r\n\r\n\r\nif __name__ == '__main__':\r\n trainer = pl.Trainer(gpus=[1], num_sanity_val_steps=0, max_epochs=1)\r\n model = Model()\r\n trainer.fit(model)\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nNo `RuntimeError` occurs.\r\n\r\n### Environment\r\n\r\n* CUDA:\r\n\t- GPU:\r\n\t- available:\r\n\t- version:\r\n* Packages:\r\n\t- numpy: 1.18.5\r\n\t- pyTorch_debug: False\r\n\t- pyTorch_version: 1.6.0\r\n\t- pytorch-lightning: 0.9.0rc16\r\n\t- tensorboard: 2.3.0\r\n\t- tqdm: 4.48.2\r\n* System:\r\n\t- OS: Windows\r\n\t- architecture:\r\n\t\t- 64bit\r\n\t\t- WindowsPE\r\n\t- processor:\r\n\t- python: 3.7.3\r\n\t- version: 10.0.18362\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pytorch_lightning.core import LightningModule\nfrom pytorch_lightning.utilities import AMPType\n\ntry:\n from apex import amp\nexcept ImportError:\n amp = None\n\n\nclass GPUBackend(object):\n amp_backend: AMPType\n\n def __init__(self, trainer):\n self.trainer = trainer\n\n def setup(self, model):\n\n # call setup\n self.trainer.call_setup_hook(model)\n\n model.cuda(self.trainer.root_gpu)\n\n # CHOOSE OPTIMIZER\n # allow for lr schedulers as well\n optimizers, lr_schedulers, optimizer_frequencies = self.trainer.init_optimizers(model)\n self.trainer.optimizers = optimizers\n self.trainer.lr_schedulers = lr_schedulers\n self.trainer.optimizer_frequencies = optimizer_frequencies\n\n if self.trainer.amp_backend == AMPType.APEX:\n model = self._setup_nvidia_apex(model)\n return model\n\n def train(self, model):\n results = self.trainer.run_pretrain_routine(model)\n return results\n\n def _setup_nvidia_apex(self, model: LightningModule):\n model, optimizers = model.configure_apex(amp, model, self.trainer.optimizers, self.trainer.amp_level)\n self.trainer.optimizers = optimizers\n self.trainer.reinit_scheduler_properties(self.trainer.optimizers, self.trainer.lr_schedulers)\n return model\n", "path": "pytorch_lightning/accelerators/gpu_backend.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport torch\nfrom pytorch_lightning.core import LightningModule\nfrom pytorch_lightning.utilities import AMPType\n\ntry:\n from apex import amp\nexcept ImportError:\n amp = None\n\n\nclass GPUBackend(object):\n amp_backend: AMPType\n\n def __init__(self, trainer):\n self.trainer = trainer\n\n def setup(self, model):\n\n # call setup\n self.trainer.call_setup_hook(model)\n\n torch.cuda.set_device(self.trainer.root_gpu)\n model.cuda(self.trainer.root_gpu)\n\n # CHOOSE OPTIMIZER\n # allow for lr schedulers as well\n optimizers, lr_schedulers, optimizer_frequencies = self.trainer.init_optimizers(model)\n self.trainer.optimizers = optimizers\n self.trainer.lr_schedulers = lr_schedulers\n self.trainer.optimizer_frequencies = optimizer_frequencies\n\n if self.trainer.amp_backend == AMPType.APEX:\n model = self._setup_nvidia_apex(model)\n return model\n\n def train(self, model):\n results = self.trainer.run_pretrain_routine(model)\n return results\n\n def _setup_nvidia_apex(self, model: 
LightningModule):\n model, optimizers = model.configure_apex(amp, model, self.trainer.optimizers, self.trainer.amp_level)\n self.trainer.optimizers = optimizers\n self.trainer.reinit_scheduler_properties(self.trainer.optimizers, self.trainer.lr_schedulers)\n return model\n", "path": "pytorch_lightning/accelerators/gpu_backend.py"}]}
| 1,526 | 172 |
gh_patches_debug_6024
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-340
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SQLAlchemy executemany not detected in hooks
In the Django ORM integration, we record `executemany` calls as the `SQL/Many` operation, but this is missing in the SQLAlchemy integration.
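A sketch of one way to mirror the Django behaviour in the SQLAlchemy hook, reusing the names from the file below (an illustration of the intent rather than the final patch):

```python
from scout_apm.core.tracked_request import TrackedRequest

def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
    # Record executemany calls under "SQL/Many", as the Django ORM integration does.
    operation = "SQL/Many" if executemany else "SQL/Query"
    tracked_request = TrackedRequest.instance()
    span = tracked_request.start_span(operation=operation)
    span.tag("db.statement", statement)
```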
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/sqlalchemy.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from sqlalchemy import event
5
6 from scout_apm.core.tracked_request import TrackedRequest
7
8
9 def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
10 tracked_request = TrackedRequest.instance()
11 span = tracked_request.start_span(operation="SQL/Query")
12 span.tag("db.statement", statement)
13
14
15 def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):
16 tracked_request = TrackedRequest.instance()
17 span = tracked_request.current_span()
18 if span is not None:
19 tracked_request.callset.update(statement, 1, span.duration())
20 if tracked_request.callset.should_capture_backtrace(statement):
21 span.capture_backtrace()
22 tracked_request.stop_span()
23
24
25 def instrument_sqlalchemy(engine):
26 if getattr(engine, "_scout_instrumented", False):
27 return
28 event.listen(engine, "before_cursor_execute", before_cursor_execute)
29 event.listen(engine, "after_cursor_execute", after_cursor_execute)
30 engine._scout_instrumented = True
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/scout_apm/sqlalchemy.py b/src/scout_apm/sqlalchemy.py
--- a/src/scout_apm/sqlalchemy.py
+++ b/src/scout_apm/sqlalchemy.py
@@ -7,8 +7,12 @@
def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
+ if executemany:
+ operation = "SQL/Many"
+ else:
+ operation = "SQL/Query"
tracked_request = TrackedRequest.instance()
- span = tracked_request.start_span(operation="SQL/Query")
+ span = tracked_request.start_span(operation=operation)
span.tag("db.statement", statement)
|
{"golden_diff": "diff --git a/src/scout_apm/sqlalchemy.py b/src/scout_apm/sqlalchemy.py\n--- a/src/scout_apm/sqlalchemy.py\n+++ b/src/scout_apm/sqlalchemy.py\n@@ -7,8 +7,12 @@\n \n \n def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n+ if executemany:\n+ operation = \"SQL/Many\"\n+ else:\n+ operation = \"SQL/Query\"\n tracked_request = TrackedRequest.instance()\n- span = tracked_request.start_span(operation=\"SQL/Query\")\n+ span = tracked_request.start_span(operation=operation)\n span.tag(\"db.statement\", statement)\n", "issue": "SQLAlchemy executemany not detected in hooks\nIn the Django ORM integration, we record `executemany` calls as the `SQL/Many` operation, but this is missing in the SQLAlchemy integration.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom sqlalchemy import event\n\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tracked_request = TrackedRequest.instance()\n span = tracked_request.start_span(operation=\"SQL/Query\")\n span.tag(\"db.statement\", statement)\n\n\ndef after_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tracked_request = TrackedRequest.instance()\n span = tracked_request.current_span()\n if span is not None:\n tracked_request.callset.update(statement, 1, span.duration())\n if tracked_request.callset.should_capture_backtrace(statement):\n span.capture_backtrace()\n tracked_request.stop_span()\n\n\ndef instrument_sqlalchemy(engine):\n if getattr(engine, \"_scout_instrumented\", False):\n return\n event.listen(engine, \"before_cursor_execute\", before_cursor_execute)\n event.listen(engine, \"after_cursor_execute\", after_cursor_execute)\n engine._scout_instrumented = True\n", "path": "src/scout_apm/sqlalchemy.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom sqlalchemy import event\n\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n if executemany:\n operation = \"SQL/Many\"\n else:\n operation = \"SQL/Query\"\n tracked_request = TrackedRequest.instance()\n span = tracked_request.start_span(operation=operation)\n span.tag(\"db.statement\", statement)\n\n\ndef after_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tracked_request = TrackedRequest.instance()\n span = tracked_request.current_span()\n if span is not None:\n tracked_request.callset.update(statement, 1, span.duration())\n if tracked_request.callset.should_capture_backtrace(statement):\n span.capture_backtrace()\n tracked_request.stop_span()\n\n\ndef instrument_sqlalchemy(engine):\n if getattr(engine, \"_scout_instrumented\", False):\n return\n event.listen(engine, \"before_cursor_execute\", before_cursor_execute)\n event.listen(engine, \"after_cursor_execute\", after_cursor_execute)\n engine._scout_instrumented = True\n", "path": "src/scout_apm/sqlalchemy.py"}]}
| 598 | 148 |
gh_patches_debug_38250
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-536
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Certain Genre Detail pages are _very_ slow to load
I think the problem is trying to display the "chants by genre" table - for a genre with lots of example chants, e.g. [Antiphon](http://206.12.88.113/genre/122), I've tried loading the page several times and keep getting a 502 error.
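For context, the view below loops over every distinct `cantus_id` and issues several queries per id, which is an N+1 pattern on a genre with thousands of chants. A single aggregated queryset along these lines avoids the per-id round trips (illustrative only; `genre` stands for a `Genre` instance and the field names are taken from the view below):

```python
from django.db.models import Count

# One query: count chants per cantus_id for this genre, largest groups first.
cantus_id_counts = (
    genre.chant_set.exclude(cantus_id=None)
    .values("cantus_id")
    .annotate(num_chants=Count("id"))
    .order_by("-num_chants")
)
```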
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/views/genre.py`
Content:
```
1 from typing import Dict, List
2
3 from django.views.generic import ListView
4 from django.views.generic.detail import SingleObjectMixin
5 from extra_views import SearchableListMixin
6 from main_app.models import Genre
7
8
9 class GenreDetailView(SingleObjectMixin, ListView):
10 paginate_by = 100
11 template_name = "genre_detail.html"
12
13 def get_genre_cantus_ids(self, display_unpublished=True) -> List[Dict]:
14 """
15 Get a list with data on each unique ``cantus_id`` related to this Genre.
16
17 The list contains dicts and each dict has the following keys:
18
19 ``cantus_id``: The ``cantus_id``
20 ``num_chants``: The number of Chants that have this ``cantus_id``
21 ``first_incipit``: The incipit of first Chant with this ``cantus_id``
22 ``first_incipit_url``: The url of first Chant with this ``cantus_id``
23
24 Returns:
25 List[Dict]: A list of dicts with data on each unique ``cantus_id``
26 """
27 cantus_ids = (self.object.chant_set
28 .exclude(cantus_id=None)
29 .values_list("cantus_id", flat=True)
30 .distinct("cantus_id")
31 )
32 if not display_unpublished:
33 cantus_ids = cantus_ids.filter(source__published=True)
34
35 cantus_ids_list = list(cantus_ids)
36
37 chant_list = []
38 for cantus_id in cantus_ids_list:
39 chants = self.object.chant_set.filter(cantus_id=cantus_id)
40 num_chants = chants.count()
41 first_chant = chants.first()
42 first_incipit_url = first_chant.get_absolute_url()
43 first_incipit = first_chant.incipit
44 chant_list.append(
45 {
46 "cantus_id": cantus_id,
47 "num_chants": num_chants,
48 "first_incipit": first_incipit,
49 "first_incipit_url": first_incipit_url,
50 }
51 )
52 # Sort list based on number of Chants per cantus_id (descending)
53 chant_list = sorted(chant_list, key=lambda k: k["num_chants"], reverse=True)
54 return chant_list
55
56 def get(self, request, *args, **kwargs):
57 self.object = self.get_object(queryset=Genre.objects.all())
58 return super().get(request, *args, **kwargs)
59
60 def get_context_data(self, **kwargs):
61 context = super().get_context_data(**kwargs)
62 context["genre"] = self.object
63 return context
64
65 def get_queryset(self):
66 display_unpublished = self.request.user.is_authenticated
67 search_term = self.request.GET.get("incipit")
68 if not search_term:
69 return self.get_genre_cantus_ids(display_unpublished=display_unpublished)
70 else:
71 search_term = search_term.strip(" ")
72 filtered_chants = [
73 chant
74 for chant in self.get_genre_cantus_ids(display_unpublished=display_unpublished)
75 if search_term.lower() in chant["first_incipit"].lower()
76 ]
77 return filtered_chants
78
79
80 class GenreListView(SearchableListMixin, ListView):
81 model = Genre
82 paginate_by = 100
83 context_object_name = "genres"
84 template_name = "genre_list.html"
85
86 def get_queryset(self):
87 queryset = super().get_queryset()
88 mass_office = self.request.GET.get("mass_office", None)
89 if mass_office in ["Mass", "Office", "Old Hispanic"]:
90 queryset = queryset.filter(mass_office__contains=mass_office)
91 return queryset.order_by("name")
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/main_app/views/genre.py b/django/cantusdb_project/main_app/views/genre.py
--- a/django/cantusdb_project/main_app/views/genre.py
+++ b/django/cantusdb_project/main_app/views/genre.py
@@ -1,81 +1,13 @@
-from typing import Dict, List
-
-from django.views.generic import ListView
-from django.views.generic.detail import SingleObjectMixin
+from django.views.generic import DetailView, ListView
from extra_views import SearchableListMixin
from main_app.models import Genre
-class GenreDetailView(SingleObjectMixin, ListView):
- paginate_by = 100
+class GenreDetailView(DetailView):
+ model = Genre
+ context_object_name = "genre"
template_name = "genre_detail.html"
- def get_genre_cantus_ids(self, display_unpublished=True) -> List[Dict]:
- """
- Get a list with data on each unique ``cantus_id`` related to this Genre.
-
- The list contains dicts and each dict has the following keys:
-
- ``cantus_id``: The ``cantus_id``
- ``num_chants``: The number of Chants that have this ``cantus_id``
- ``first_incipit``: The incipit of first Chant with this ``cantus_id``
- ``first_incipit_url``: The url of first Chant with this ``cantus_id``
-
- Returns:
- List[Dict]: A list of dicts with data on each unique ``cantus_id``
- """
- cantus_ids = (self.object.chant_set
- .exclude(cantus_id=None)
- .values_list("cantus_id", flat=True)
- .distinct("cantus_id")
- )
- if not display_unpublished:
- cantus_ids = cantus_ids.filter(source__published=True)
-
- cantus_ids_list = list(cantus_ids)
-
- chant_list = []
- for cantus_id in cantus_ids_list:
- chants = self.object.chant_set.filter(cantus_id=cantus_id)
- num_chants = chants.count()
- first_chant = chants.first()
- first_incipit_url = first_chant.get_absolute_url()
- first_incipit = first_chant.incipit
- chant_list.append(
- {
- "cantus_id": cantus_id,
- "num_chants": num_chants,
- "first_incipit": first_incipit,
- "first_incipit_url": first_incipit_url,
- }
- )
- # Sort list based on number of Chants per cantus_id (descending)
- chant_list = sorted(chant_list, key=lambda k: k["num_chants"], reverse=True)
- return chant_list
-
- def get(self, request, *args, **kwargs):
- self.object = self.get_object(queryset=Genre.objects.all())
- return super().get(request, *args, **kwargs)
-
- def get_context_data(self, **kwargs):
- context = super().get_context_data(**kwargs)
- context["genre"] = self.object
- return context
-
- def get_queryset(self):
- display_unpublished = self.request.user.is_authenticated
- search_term = self.request.GET.get("incipit")
- if not search_term:
- return self.get_genre_cantus_ids(display_unpublished=display_unpublished)
- else:
- search_term = search_term.strip(" ")
- filtered_chants = [
- chant
- for chant in self.get_genre_cantus_ids(display_unpublished=display_unpublished)
- if search_term.lower() in chant["first_incipit"].lower()
- ]
- return filtered_chants
-
class GenreListView(SearchableListMixin, ListView):
model = Genre
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/genre.py b/django/cantusdb_project/main_app/views/genre.py\n--- a/django/cantusdb_project/main_app/views/genre.py\n+++ b/django/cantusdb_project/main_app/views/genre.py\n@@ -1,81 +1,13 @@\n-from typing import Dict, List\n-\n-from django.views.generic import ListView\n-from django.views.generic.detail import SingleObjectMixin\n+from django.views.generic import DetailView, ListView\n from extra_views import SearchableListMixin\n from main_app.models import Genre\n \n \n-class GenreDetailView(SingleObjectMixin, ListView):\n- paginate_by = 100\n+class GenreDetailView(DetailView):\n+ model = Genre\n+ context_object_name = \"genre\"\n template_name = \"genre_detail.html\"\n \n- def get_genre_cantus_ids(self, display_unpublished=True) -> List[Dict]:\n- \"\"\"\n- Get a list with data on each unique ``cantus_id`` related to this Genre.\n-\n- The list contains dicts and each dict has the following keys:\n-\n- ``cantus_id``: The ``cantus_id``\n- ``num_chants``: The number of Chants that have this ``cantus_id``\n- ``first_incipit``: The incipit of first Chant with this ``cantus_id``\n- ``first_incipit_url``: The url of first Chant with this ``cantus_id``\n-\n- Returns:\n- List[Dict]: A list of dicts with data on each unique ``cantus_id``\n- \"\"\"\n- cantus_ids = (self.object.chant_set\n- .exclude(cantus_id=None)\n- .values_list(\"cantus_id\", flat=True)\n- .distinct(\"cantus_id\")\n- )\n- if not display_unpublished:\n- cantus_ids = cantus_ids.filter(source__published=True)\n- \n- cantus_ids_list = list(cantus_ids)\n-\n- chant_list = []\n- for cantus_id in cantus_ids_list:\n- chants = self.object.chant_set.filter(cantus_id=cantus_id)\n- num_chants = chants.count()\n- first_chant = chants.first()\n- first_incipit_url = first_chant.get_absolute_url()\n- first_incipit = first_chant.incipit\n- chant_list.append(\n- {\n- \"cantus_id\": cantus_id,\n- \"num_chants\": num_chants,\n- \"first_incipit\": first_incipit,\n- \"first_incipit_url\": first_incipit_url,\n- }\n- )\n- # Sort list based on number of Chants per cantus_id (descending)\n- chant_list = sorted(chant_list, key=lambda k: k[\"num_chants\"], reverse=True)\n- return chant_list\n-\n- def get(self, request, *args, **kwargs):\n- self.object = self.get_object(queryset=Genre.objects.all())\n- return super().get(request, *args, **kwargs)\n-\n- def get_context_data(self, **kwargs):\n- context = super().get_context_data(**kwargs)\n- context[\"genre\"] = self.object\n- return context\n-\n- def get_queryset(self):\n- display_unpublished = self.request.user.is_authenticated\n- search_term = self.request.GET.get(\"incipit\")\n- if not search_term:\n- return self.get_genre_cantus_ids(display_unpublished=display_unpublished)\n- else:\n- search_term = search_term.strip(\" \")\n- filtered_chants = [\n- chant\n- for chant in self.get_genre_cantus_ids(display_unpublished=display_unpublished)\n- if search_term.lower() in chant[\"first_incipit\"].lower()\n- ]\n- return filtered_chants\n-\n \n class GenreListView(SearchableListMixin, ListView):\n model = Genre\n", "issue": "Certain Genre Detail pages are _very_ slow to load\nI think the problem is trying to display the \"chants by genre\" table - for a genre with lots of example chants, e.g. 
[Antiphon](http://206.12.88.113/genre/122), I've tried loading the page several times and keep getting a 502 error.\n", "before_files": [{"content": "from typing import Dict, List\n\nfrom django.views.generic import ListView\nfrom django.views.generic.detail import SingleObjectMixin\nfrom extra_views import SearchableListMixin\nfrom main_app.models import Genre\n\n\nclass GenreDetailView(SingleObjectMixin, ListView):\n paginate_by = 100\n template_name = \"genre_detail.html\"\n\n def get_genre_cantus_ids(self, display_unpublished=True) -> List[Dict]:\n \"\"\"\n Get a list with data on each unique ``cantus_id`` related to this Genre.\n\n The list contains dicts and each dict has the following keys:\n\n ``cantus_id``: The ``cantus_id``\n ``num_chants``: The number of Chants that have this ``cantus_id``\n ``first_incipit``: The incipit of first Chant with this ``cantus_id``\n ``first_incipit_url``: The url of first Chant with this ``cantus_id``\n\n Returns:\n List[Dict]: A list of dicts with data on each unique ``cantus_id``\n \"\"\"\n cantus_ids = (self.object.chant_set\n .exclude(cantus_id=None)\n .values_list(\"cantus_id\", flat=True)\n .distinct(\"cantus_id\")\n )\n if not display_unpublished:\n cantus_ids = cantus_ids.filter(source__published=True)\n \n cantus_ids_list = list(cantus_ids)\n\n chant_list = []\n for cantus_id in cantus_ids_list:\n chants = self.object.chant_set.filter(cantus_id=cantus_id)\n num_chants = chants.count()\n first_chant = chants.first()\n first_incipit_url = first_chant.get_absolute_url()\n first_incipit = first_chant.incipit\n chant_list.append(\n {\n \"cantus_id\": cantus_id,\n \"num_chants\": num_chants,\n \"first_incipit\": first_incipit,\n \"first_incipit_url\": first_incipit_url,\n }\n )\n # Sort list based on number of Chants per cantus_id (descending)\n chant_list = sorted(chant_list, key=lambda k: k[\"num_chants\"], reverse=True)\n return chant_list\n\n def get(self, request, *args, **kwargs):\n self.object = self.get_object(queryset=Genre.objects.all())\n return super().get(request, *args, **kwargs)\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"genre\"] = self.object\n return context\n\n def get_queryset(self):\n display_unpublished = self.request.user.is_authenticated\n search_term = self.request.GET.get(\"incipit\")\n if not search_term:\n return self.get_genre_cantus_ids(display_unpublished=display_unpublished)\n else:\n search_term = search_term.strip(\" \")\n filtered_chants = [\n chant\n for chant in self.get_genre_cantus_ids(display_unpublished=display_unpublished)\n if search_term.lower() in chant[\"first_incipit\"].lower()\n ]\n return filtered_chants\n\n\nclass GenreListView(SearchableListMixin, ListView):\n model = Genre\n paginate_by = 100\n context_object_name = \"genres\"\n template_name = \"genre_list.html\"\n\n def get_queryset(self):\n queryset = super().get_queryset()\n mass_office = self.request.GET.get(\"mass_office\", None)\n if mass_office in [\"Mass\", \"Office\", \"Old Hispanic\"]:\n queryset = queryset.filter(mass_office__contains=mass_office)\n return queryset.order_by(\"name\")\n", "path": "django/cantusdb_project/main_app/views/genre.py"}], "after_files": [{"content": "from django.views.generic import DetailView, ListView\nfrom extra_views import SearchableListMixin\nfrom main_app.models import Genre\n\n\nclass GenreDetailView(DetailView):\n model = Genre\n context_object_name = \"genre\"\n template_name = \"genre_detail.html\"\n\n\nclass 
GenreListView(SearchableListMixin, ListView):\n model = Genre\n paginate_by = 100\n context_object_name = \"genres\"\n template_name = \"genre_list.html\"\n\n def get_queryset(self):\n queryset = super().get_queryset()\n mass_office = self.request.GET.get(\"mass_office\", None)\n if mass_office in [\"Mass\", \"Office\", \"Old Hispanic\"]:\n queryset = queryset.filter(mass_office__contains=mass_office)\n return queryset.order_by(\"name\")\n", "path": "django/cantusdb_project/main_app/views/genre.py"}]}
| 1,315 | 865 |
gh_patches_debug_39013
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-2091
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make `MetricsLambda` update its underlying metrics
## 🚀 Feature
As title.
The current state is that we cannot use `MetricsLambda` as a standard metric because it doesn't update/reset its underlying metrics.
Adding this will expand the scope and usability of `MetricsLambda`.
I already have an idea how to implement this based on our discussion https://github.com/pytorch/ignite/pull/1887#discussion_r610902667
So @vfdev-5 please assign the issue to me.
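For reference, the core of the requested behaviour would be to forward `update` to the dependency metrics, roughly along these lines (a sketch based on the `args`/`kwargs` attributes in the file below, not the final implementation):

```python
import itertools

from ignite.metrics.metric import Metric

def update(self, output):
    # Forward the update to every dependency that is itself a Metric.
    for m in itertools.chain(self.args, self.kwargs.values()):
        if isinstance(m, Metric):
            m.update(output)
```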
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/metrics_lambda.py`
Content:
```
1 import itertools
2 from typing import Any, Callable, Optional, Union
3
4 import torch
5
6 from ignite.engine import Engine
7 from ignite.metrics.metric import EpochWise, Metric, MetricUsage, reinit__is_reduced
8
9 __all__ = ["MetricsLambda"]
10
11
12 class MetricsLambda(Metric):
13 """
14 Apply a function to other metrics to obtain a new metric.
15 The result of the new metric is defined to be the result
16 of applying the function to the result of argument metrics.
17
18 When update, this metric does not recursively update the metrics
19 it depends on. When reset, all its dependency metrics would be
20 resetted. When attach, all its dependency metrics would be attached
21 automatically (but partially, e.g :meth:`~ignite.metrics.metric.Metric.is_attached()` will return False).
22
23 Args:
24 f: the function that defines the computation
25 args: Sequence of other metrics or something
26 else that will be fed to ``f`` as arguments.
27 kwargs: Sequence of other metrics or something
28 else that will be fed to ``f`` as keyword arguments.
29
30 Example:
31
32 .. code-block:: python
33
34 precision = Precision(average=False)
35 recall = Recall(average=False)
36
37 def Fbeta(r, p, beta):
38 return torch.mean((1 + beta ** 2) * p * r / (beta ** 2 * p + r + 1e-20)).item()
39
40 F1 = MetricsLambda(Fbeta, recall, precision, 1)
41 F2 = MetricsLambda(Fbeta, recall, precision, 2)
42 F3 = MetricsLambda(Fbeta, recall, precision, 3)
43 F4 = MetricsLambda(Fbeta, recall, precision, 4)
44
45 When check if the metric is attached, if one of its dependency
46 metrics is detached, the metric is considered detached too.
47
48 .. code-block:: python
49
50 engine = ...
51 precision = Precision(average=False)
52
53 aP = precision.mean()
54
55 aP.attach(engine, "aP")
56
57 assert aP.is_attached(engine)
58 # partially attached
59 assert not precision.is_attached(engine)
60
61 precision.detach(engine)
62
63 assert not aP.is_attached(engine)
64 # fully attached
65 assert not precision.is_attached(engine)
66
67 """
68
69 def __init__(self, f: Callable, *args: Any, **kwargs: Any) -> None:
70 self.function = f
71 self.args = args
72 self.kwargs = kwargs
73 self.engine = None # type: Optional[Engine]
74 super(MetricsLambda, self).__init__(device="cpu")
75
76 @reinit__is_reduced
77 def reset(self) -> None:
78 for i in itertools.chain(self.args, self.kwargs.values()):
79 if isinstance(i, Metric):
80 i.reset()
81
82 @reinit__is_reduced
83 def update(self, output: Any) -> None:
84 # NB: this method does not recursively update dependency metrics,
85 # which might cause duplicate update issue. To update this metric,
86 # users should manually update its dependencies.
87 pass
88
89 def compute(self) -> Any:
90 materialized = [_get_value_on_cpu(i) for i in self.args]
91 materialized_kwargs = {k: _get_value_on_cpu(v) for k, v in self.kwargs.items()}
92 return self.function(*materialized, **materialized_kwargs)
93
94 def _internal_attach(self, engine: Engine, usage: MetricUsage) -> None:
95 self.engine = engine
96 for index, metric in enumerate(itertools.chain(self.args, self.kwargs.values())):
97 if isinstance(metric, MetricsLambda):
98 metric._internal_attach(engine, usage)
99 elif isinstance(metric, Metric):
100 # NB : metrics is attached partially
101 # We must not use is_attached() but rather if these events exist
102 if not engine.has_event_handler(metric.started, usage.STARTED):
103 engine.add_event_handler(usage.STARTED, metric.started)
104 if not engine.has_event_handler(metric.iteration_completed, usage.ITERATION_COMPLETED):
105 engine.add_event_handler(usage.ITERATION_COMPLETED, metric.iteration_completed)
106
107 def attach(self, engine: Engine, name: str, usage: Union[str, MetricUsage] = EpochWise()) -> None:
108 usage = self._check_usage(usage)
109 # recursively attach all its dependencies (partially)
110 self._internal_attach(engine, usage)
111 # attach only handler on EPOCH_COMPLETED
112 engine.add_event_handler(usage.COMPLETED, self.completed, name)
113
114 def detach(self, engine: Engine, usage: Union[str, MetricUsage] = EpochWise()) -> None:
115 usage = self._check_usage(usage)
116 # remove from engine
117 super(MetricsLambda, self).detach(engine, usage)
118 self.engine = None
119
120 def is_attached(self, engine: Engine, usage: Union[str, MetricUsage] = EpochWise()) -> bool:
121 usage = self._check_usage(usage)
122 # check recursively the dependencies
123 return super(MetricsLambda, self).is_attached(engine, usage) and self._internal_is_attached(engine, usage)
124
125 def _internal_is_attached(self, engine: Engine, usage: MetricUsage) -> bool:
126 # if no engine, metrics is not attached
127 if engine is None:
128 return False
129 # check recursively if metrics are attached
130 is_detached = False
131 for metric in itertools.chain(self.args, self.kwargs.values()):
132 if isinstance(metric, MetricsLambda):
133 if not metric._internal_is_attached(engine, usage):
134 is_detached = True
135 elif isinstance(metric, Metric):
136 if not engine.has_event_handler(metric.started, usage.STARTED):
137 is_detached = True
138 if not engine.has_event_handler(metric.iteration_completed, usage.ITERATION_COMPLETED):
139 is_detached = True
140 return not is_detached
141
142
143 def _get_value_on_cpu(v: Any) -> Any:
144 if isinstance(v, Metric):
145 v = v.compute()
146 if isinstance(v, torch.Tensor):
147 v = v.cpu()
148 return v
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/metrics/metrics_lambda.py b/ignite/metrics/metrics_lambda.py
--- a/ignite/metrics/metrics_lambda.py
+++ b/ignite/metrics/metrics_lambda.py
@@ -15,9 +15,9 @@
The result of the new metric is defined to be the result
of applying the function to the result of argument metrics.
- When update, this metric does not recursively update the metrics
+ When update, this metric recursively updates the metrics
it depends on. When reset, all its dependency metrics would be
- resetted. When attach, all its dependency metrics would be attached
+ resetted as well. When attach, all its dependency metrics would be attached
automatically (but partially, e.g :meth:`~ignite.metrics.metric.Metric.is_attached()` will return False).
Args:
@@ -71,6 +71,7 @@
self.args = args
self.kwargs = kwargs
self.engine = None # type: Optional[Engine]
+ self._updated = False
super(MetricsLambda, self).__init__(device="cpu")
@reinit__is_reduced
@@ -78,13 +79,21 @@
for i in itertools.chain(self.args, self.kwargs.values()):
if isinstance(i, Metric):
i.reset()
+ self._updated = False
@reinit__is_reduced
def update(self, output: Any) -> None:
- # NB: this method does not recursively update dependency metrics,
- # which might cause duplicate update issue. To update this metric,
- # users should manually update its dependencies.
- pass
+ if self.engine:
+ raise ValueError(
+ "MetricsLambda is already attached to an engine, "
+ "and MetricsLambda can't use update API while it's attached."
+ )
+
+ for i in itertools.chain(self.args, self.kwargs.values()):
+ if isinstance(i, Metric):
+ i.update(output)
+
+ self._updated = True
def compute(self) -> Any:
materialized = [_get_value_on_cpu(i) for i in self.args]
@@ -105,6 +114,10 @@
engine.add_event_handler(usage.ITERATION_COMPLETED, metric.iteration_completed)
def attach(self, engine: Engine, name: str, usage: Union[str, MetricUsage] = EpochWise()) -> None:
+ if self._updated:
+ raise ValueError(
+ "The underlying metrics are already updated, can't attach while using reset/update/compute API."
+ )
usage = self._check_usage(usage)
# recursively attach all its dependencies (partially)
self._internal_attach(engine, usage)
|
{"golden_diff": "diff --git a/ignite/metrics/metrics_lambda.py b/ignite/metrics/metrics_lambda.py\n--- a/ignite/metrics/metrics_lambda.py\n+++ b/ignite/metrics/metrics_lambda.py\n@@ -15,9 +15,9 @@\n The result of the new metric is defined to be the result\n of applying the function to the result of argument metrics.\n \n- When update, this metric does not recursively update the metrics\n+ When update, this metric recursively updates the metrics\n it depends on. When reset, all its dependency metrics would be\n- resetted. When attach, all its dependency metrics would be attached\n+ resetted as well. When attach, all its dependency metrics would be attached\n automatically (but partially, e.g :meth:`~ignite.metrics.metric.Metric.is_attached()` will return False).\n \n Args:\n@@ -71,6 +71,7 @@\n self.args = args\n self.kwargs = kwargs\n self.engine = None # type: Optional[Engine]\n+ self._updated = False\n super(MetricsLambda, self).__init__(device=\"cpu\")\n \n @reinit__is_reduced\n@@ -78,13 +79,21 @@\n for i in itertools.chain(self.args, self.kwargs.values()):\n if isinstance(i, Metric):\n i.reset()\n+ self._updated = False\n \n @reinit__is_reduced\n def update(self, output: Any) -> None:\n- # NB: this method does not recursively update dependency metrics,\n- # which might cause duplicate update issue. To update this metric,\n- # users should manually update its dependencies.\n- pass\n+ if self.engine:\n+ raise ValueError(\n+ \"MetricsLambda is already attached to an engine, \"\n+ \"and MetricsLambda can't use update API while it's attached.\"\n+ )\n+\n+ for i in itertools.chain(self.args, self.kwargs.values()):\n+ if isinstance(i, Metric):\n+ i.update(output)\n+\n+ self._updated = True\n \n def compute(self) -> Any:\n materialized = [_get_value_on_cpu(i) for i in self.args]\n@@ -105,6 +114,10 @@\n engine.add_event_handler(usage.ITERATION_COMPLETED, metric.iteration_completed)\n \n def attach(self, engine: Engine, name: str, usage: Union[str, MetricUsage] = EpochWise()) -> None:\n+ if self._updated:\n+ raise ValueError(\n+ \"The underlying metrics are already updated, can't attach while using reset/update/compute API.\"\n+ )\n usage = self._check_usage(usage)\n # recursively attach all its dependencies (partially)\n self._internal_attach(engine, usage)\n", "issue": "Make `MetricsLambda` update it's underlying metrics\n## \ud83d\ude80 Feature\r\n\r\nAs title.\r\n\r\nThe current state is that we cannot use `MetricsLambda` as a standard metric because it doesn't update/reset it's underlying metrics.\r\nAdding this will expand the scope and usability of `MetricsLambda`\r\n\r\nI've already an idea how to implement this based on our discussion https://github.com/pytorch/ignite/pull/1887#discussion_r610902667\r\n\r\nSo @vfdev-5 please assign the issue to me.\r\n\r\n\n", "before_files": [{"content": "import itertools\nfrom typing import Any, Callable, Optional, Union\n\nimport torch\n\nfrom ignite.engine import Engine\nfrom ignite.metrics.metric import EpochWise, Metric, MetricUsage, reinit__is_reduced\n\n__all__ = [\"MetricsLambda\"]\n\n\nclass MetricsLambda(Metric):\n \"\"\"\n Apply a function to other metrics to obtain a new metric.\n The result of the new metric is defined to be the result\n of applying the function to the result of argument metrics.\n\n When update, this metric does not recursively update the metrics\n it depends on. When reset, all its dependency metrics would be\n resetted. 
When attach, all its dependency metrics would be attached\n automatically (but partially, e.g :meth:`~ignite.metrics.metric.Metric.is_attached()` will return False).\n\n Args:\n f: the function that defines the computation\n args: Sequence of other metrics or something\n else that will be fed to ``f`` as arguments.\n kwargs: Sequence of other metrics or something\n else that will be fed to ``f`` as keyword arguments.\n\n Example:\n\n .. code-block:: python\n\n precision = Precision(average=False)\n recall = Recall(average=False)\n\n def Fbeta(r, p, beta):\n return torch.mean((1 + beta ** 2) * p * r / (beta ** 2 * p + r + 1e-20)).item()\n\n F1 = MetricsLambda(Fbeta, recall, precision, 1)\n F2 = MetricsLambda(Fbeta, recall, precision, 2)\n F3 = MetricsLambda(Fbeta, recall, precision, 3)\n F4 = MetricsLambda(Fbeta, recall, precision, 4)\n\n When check if the metric is attached, if one of its dependency\n metrics is detached, the metric is considered detached too.\n\n .. code-block:: python\n\n engine = ...\n precision = Precision(average=False)\n\n aP = precision.mean()\n\n aP.attach(engine, \"aP\")\n\n assert aP.is_attached(engine)\n # partially attached\n assert not precision.is_attached(engine)\n\n precision.detach(engine)\n\n assert not aP.is_attached(engine)\n # fully attached\n assert not precision.is_attached(engine)\n\n \"\"\"\n\n def __init__(self, f: Callable, *args: Any, **kwargs: Any) -> None:\n self.function = f\n self.args = args\n self.kwargs = kwargs\n self.engine = None # type: Optional[Engine]\n super(MetricsLambda, self).__init__(device=\"cpu\")\n\n @reinit__is_reduced\n def reset(self) -> None:\n for i in itertools.chain(self.args, self.kwargs.values()):\n if isinstance(i, Metric):\n i.reset()\n\n @reinit__is_reduced\n def update(self, output: Any) -> None:\n # NB: this method does not recursively update dependency metrics,\n # which might cause duplicate update issue. 
To update this metric,\n # users should manually update its dependencies.\n pass\n\n def compute(self) -> Any:\n materialized = [_get_value_on_cpu(i) for i in self.args]\n materialized_kwargs = {k: _get_value_on_cpu(v) for k, v in self.kwargs.items()}\n return self.function(*materialized, **materialized_kwargs)\n\n def _internal_attach(self, engine: Engine, usage: MetricUsage) -> None:\n self.engine = engine\n for index, metric in enumerate(itertools.chain(self.args, self.kwargs.values())):\n if isinstance(metric, MetricsLambda):\n metric._internal_attach(engine, usage)\n elif isinstance(metric, Metric):\n # NB : metrics is attached partially\n # We must not use is_attached() but rather if these events exist\n if not engine.has_event_handler(metric.started, usage.STARTED):\n engine.add_event_handler(usage.STARTED, metric.started)\n if not engine.has_event_handler(metric.iteration_completed, usage.ITERATION_COMPLETED):\n engine.add_event_handler(usage.ITERATION_COMPLETED, metric.iteration_completed)\n\n def attach(self, engine: Engine, name: str, usage: Union[str, MetricUsage] = EpochWise()) -> None:\n usage = self._check_usage(usage)\n # recursively attach all its dependencies (partially)\n self._internal_attach(engine, usage)\n # attach only handler on EPOCH_COMPLETED\n engine.add_event_handler(usage.COMPLETED, self.completed, name)\n\n def detach(self, engine: Engine, usage: Union[str, MetricUsage] = EpochWise()) -> None:\n usage = self._check_usage(usage)\n # remove from engine\n super(MetricsLambda, self).detach(engine, usage)\n self.engine = None\n\n def is_attached(self, engine: Engine, usage: Union[str, MetricUsage] = EpochWise()) -> bool:\n usage = self._check_usage(usage)\n # check recursively the dependencies\n return super(MetricsLambda, self).is_attached(engine, usage) and self._internal_is_attached(engine, usage)\n\n def _internal_is_attached(self, engine: Engine, usage: MetricUsage) -> bool:\n # if no engine, metrics is not attached\n if engine is None:\n return False\n # check recursively if metrics are attached\n is_detached = False\n for metric in itertools.chain(self.args, self.kwargs.values()):\n if isinstance(metric, MetricsLambda):\n if not metric._internal_is_attached(engine, usage):\n is_detached = True\n elif isinstance(metric, Metric):\n if not engine.has_event_handler(metric.started, usage.STARTED):\n is_detached = True\n if not engine.has_event_handler(metric.iteration_completed, usage.ITERATION_COMPLETED):\n is_detached = True\n return not is_detached\n\n\ndef _get_value_on_cpu(v: Any) -> Any:\n if isinstance(v, Metric):\n v = v.compute()\n if isinstance(v, torch.Tensor):\n v = v.cpu()\n return v\n", "path": "ignite/metrics/metrics_lambda.py"}], "after_files": [{"content": "import itertools\nfrom typing import Any, Callable, Optional, Union\n\nimport torch\n\nfrom ignite.engine import Engine\nfrom ignite.metrics.metric import EpochWise, Metric, MetricUsage, reinit__is_reduced\n\n__all__ = [\"MetricsLambda\"]\n\n\nclass MetricsLambda(Metric):\n \"\"\"\n Apply a function to other metrics to obtain a new metric.\n The result of the new metric is defined to be the result\n of applying the function to the result of argument metrics.\n\n When update, this metric recursively updates the metrics\n it depends on. When reset, all its dependency metrics would be\n resetted as well. 
When attach, all its dependency metrics would be attached\n automatically (but partially, e.g :meth:`~ignite.metrics.metric.Metric.is_attached()` will return False).\n\n Args:\n f: the function that defines the computation\n args: Sequence of other metrics or something\n else that will be fed to ``f`` as arguments.\n kwargs: Sequence of other metrics or something\n else that will be fed to ``f`` as keyword arguments.\n\n Example:\n\n .. code-block:: python\n\n precision = Precision(average=False)\n recall = Recall(average=False)\n\n def Fbeta(r, p, beta):\n return torch.mean((1 + beta ** 2) * p * r / (beta ** 2 * p + r + 1e-20)).item()\n\n F1 = MetricsLambda(Fbeta, recall, precision, 1)\n F2 = MetricsLambda(Fbeta, recall, precision, 2)\n F3 = MetricsLambda(Fbeta, recall, precision, 3)\n F4 = MetricsLambda(Fbeta, recall, precision, 4)\n\n When check if the metric is attached, if one of its dependency\n metrics is detached, the metric is considered detached too.\n\n .. code-block:: python\n\n engine = ...\n precision = Precision(average=False)\n\n aP = precision.mean()\n\n aP.attach(engine, \"aP\")\n\n assert aP.is_attached(engine)\n # partially attached\n assert not precision.is_attached(engine)\n\n precision.detach(engine)\n\n assert not aP.is_attached(engine)\n # fully attached\n assert not precision.is_attached(engine)\n\n \"\"\"\n\n def __init__(self, f: Callable, *args: Any, **kwargs: Any) -> None:\n self.function = f\n self.args = args\n self.kwargs = kwargs\n self.engine = None # type: Optional[Engine]\n self._updated = False\n super(MetricsLambda, self).__init__(device=\"cpu\")\n\n @reinit__is_reduced\n def reset(self) -> None:\n for i in itertools.chain(self.args, self.kwargs.values()):\n if isinstance(i, Metric):\n i.reset()\n self._updated = False\n\n @reinit__is_reduced\n def update(self, output: Any) -> None:\n if self.engine:\n raise ValueError(\n \"MetricsLambda is already attached to an engine, \"\n \"and MetricsLambda can't use update API while it's attached.\"\n )\n\n for i in itertools.chain(self.args, self.kwargs.values()):\n if isinstance(i, Metric):\n i.update(output)\n\n self._updated = True\n\n def compute(self) -> Any:\n materialized = [_get_value_on_cpu(i) for i in self.args]\n materialized_kwargs = {k: _get_value_on_cpu(v) for k, v in self.kwargs.items()}\n return self.function(*materialized, **materialized_kwargs)\n\n def _internal_attach(self, engine: Engine, usage: MetricUsage) -> None:\n self.engine = engine\n for index, metric in enumerate(itertools.chain(self.args, self.kwargs.values())):\n if isinstance(metric, MetricsLambda):\n metric._internal_attach(engine, usage)\n elif isinstance(metric, Metric):\n # NB : metrics is attached partially\n # We must not use is_attached() but rather if these events exist\n if not engine.has_event_handler(metric.started, usage.STARTED):\n engine.add_event_handler(usage.STARTED, metric.started)\n if not engine.has_event_handler(metric.iteration_completed, usage.ITERATION_COMPLETED):\n engine.add_event_handler(usage.ITERATION_COMPLETED, metric.iteration_completed)\n\n def attach(self, engine: Engine, name: str, usage: Union[str, MetricUsage] = EpochWise()) -> None:\n if self._updated:\n raise ValueError(\n \"The underlying metrics are already updated, can't attach while using reset/update/compute API.\"\n )\n usage = self._check_usage(usage)\n # recursively attach all its dependencies (partially)\n self._internal_attach(engine, usage)\n # attach only handler on EPOCH_COMPLETED\n engine.add_event_handler(usage.COMPLETED, 
self.completed, name)\n\n def detach(self, engine: Engine, usage: Union[str, MetricUsage] = EpochWise()) -> None:\n usage = self._check_usage(usage)\n # remove from engine\n super(MetricsLambda, self).detach(engine, usage)\n self.engine = None\n\n def is_attached(self, engine: Engine, usage: Union[str, MetricUsage] = EpochWise()) -> bool:\n usage = self._check_usage(usage)\n # check recursively the dependencies\n return super(MetricsLambda, self).is_attached(engine, usage) and self._internal_is_attached(engine, usage)\n\n def _internal_is_attached(self, engine: Engine, usage: MetricUsage) -> bool:\n # if no engine, metrics is not attached\n if engine is None:\n return False\n # check recursively if metrics are attached\n is_detached = False\n for metric in itertools.chain(self.args, self.kwargs.values()):\n if isinstance(metric, MetricsLambda):\n if not metric._internal_is_attached(engine, usage):\n is_detached = True\n elif isinstance(metric, Metric):\n if not engine.has_event_handler(metric.started, usage.STARTED):\n is_detached = True\n if not engine.has_event_handler(metric.iteration_completed, usage.ITERATION_COMPLETED):\n is_detached = True\n return not is_detached\n\n\ndef _get_value_on_cpu(v: Any) -> Any:\n if isinstance(v, Metric):\n v = v.compute()\n if isinstance(v, torch.Tensor):\n v = v.cpu()\n return v\n", "path": "ignite/metrics/metrics_lambda.py"}]}
| 2,022 | 598 |
gh_patches_debug_15081
|
rasdani/github-patches
|
git_diff
|
vispy__vispy-713
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement reusable GLSL functions for color space transformations
At least RGB <-> HSV. Put them in `vispy.visuals.glsl`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/visuals/glsl/color.py`
Content:
```
1 """Color-related GLSL functions."""
2
3
4 # -----------------------------------------------------------------------------
5 # Colormaps
6 # -----------------------------------------------------------------------------
7
8 """Texture lookup for a discrete color map stored in a 1*ncolors 2D texture.
9
10 The `get_color()` function returns a RGB color from an index integer
11 referring to the colormap.
12
13
14 Inputs
15 ------
16
17 index (int): The color index.
18
19
20 Template variables
21 ------------------
22
23 $ncolors (int): The number of colors in the colormap.
24
25 $colormap (2D texture sampler): The sampler for the 2D 1*ncolors colormap
26 texture.
27
28
29 Outputs
30 -------
31
32 color (vec3): The color.
33
34 """
35 COLORMAP_TEXTURE = """
36 vec3 get_color(int index) {
37 float x = (float(index) + .5) / float($ncolors);
38 return texture2D($colormap, vec2(x, .5)).rgb;
39 }
40 """
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vispy/visuals/glsl/color.py b/vispy/visuals/glsl/color.py
--- a/vispy/visuals/glsl/color.py
+++ b/vispy/visuals/glsl/color.py
@@ -38,3 +38,33 @@
return texture2D($colormap, vec2(x, .5)).rgb;
}
"""
+
+
+# -----------------------------------------------------------------------------
+# Color space transformations
+# -----------------------------------------------------------------------------
+
+# From http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
+# TODO: unit tests
+HSV_TO_RGB = """
+vec3 hsv_to_rgb(vec3 c)
+{
+ vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
+ vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
+ return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
+}
+"""
+
+
+RGB_TO_HSV = """
+vec3 rgb_to_hsv(vec3 c)
+{
+ vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
+ vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
+ vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
+
+ float d = q.x - min(q.w, q.y);
+ float e = 1.0e-10;
+ return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
+}
+"""
|
{"golden_diff": "diff --git a/vispy/visuals/glsl/color.py b/vispy/visuals/glsl/color.py\n--- a/vispy/visuals/glsl/color.py\n+++ b/vispy/visuals/glsl/color.py\n@@ -38,3 +38,33 @@\n return texture2D($colormap, vec2(x, .5)).rgb;\n }\n \"\"\"\n+\n+\n+# -----------------------------------------------------------------------------\n+# Color space transformations\n+# -----------------------------------------------------------------------------\n+\n+# From http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl\n+# TODO: unit tests\n+HSV_TO_RGB = \"\"\"\n+vec3 hsv_to_rgb(vec3 c)\n+{\n+ vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);\n+ vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);\n+ return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);\n+}\n+\"\"\"\n+\n+\n+RGB_TO_HSV = \"\"\"\n+vec3 rgb_to_hsv(vec3 c)\n+{\n+ vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);\n+ vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));\n+ vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));\n+\n+ float d = q.x - min(q.w, q.y);\n+ float e = 1.0e-10;\n+ return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);\n+}\n+\"\"\"\n", "issue": "Implement reusable GLSL functions for color space transformations\nAt least RGB <-> HSV. Put them in `vispy.visuals.glsl`.\n\n", "before_files": [{"content": "\"\"\"Color-related GLSL functions.\"\"\"\n\n\n# -----------------------------------------------------------------------------\n# Colormaps\n# -----------------------------------------------------------------------------\n\n\"\"\"Texture lookup for a discrete color map stored in a 1*ncolors 2D texture.\n\nThe `get_color()` function returns a RGB color from an index integer\nreferring to the colormap.\n\n\nInputs\n------\n\nindex (int): The color index.\n\n\nTemplate variables\n------------------\n\n$ncolors (int): The number of colors in the colormap.\n\n$colormap (2D texture sampler): The sampler for the 2D 1*ncolors colormap\n texture.\n\n\nOutputs\n-------\n\ncolor (vec3): The color.\n\n\"\"\"\nCOLORMAP_TEXTURE = \"\"\"\nvec3 get_color(int index) {\n float x = (float(index) + .5) / float($ncolors);\n return texture2D($colormap, vec2(x, .5)).rgb;\n}\n\"\"\"\n", "path": "vispy/visuals/glsl/color.py"}], "after_files": [{"content": "\"\"\"Color-related GLSL functions.\"\"\"\n\n\n# -----------------------------------------------------------------------------\n# Colormaps\n# -----------------------------------------------------------------------------\n\n\"\"\"Texture lookup for a discrete color map stored in a 1*ncolors 2D texture.\n\nThe `get_color()` function returns a RGB color from an index integer\nreferring to the colormap.\n\n\nInputs\n------\n\nindex (int): The color index.\n\n\nTemplate variables\n------------------\n\n$ncolors (int): The number of colors in the colormap.\n\n$colormap (2D texture sampler): The sampler for the 2D 1*ncolors colormap\n texture.\n\n\nOutputs\n-------\n\ncolor (vec3): The color.\n\n\"\"\"\nCOLORMAP_TEXTURE = \"\"\"\nvec3 get_color(int index) {\n float x = (float(index) + .5) / float($ncolors);\n return texture2D($colormap, vec2(x, .5)).rgb;\n}\n\"\"\"\n\n\n# -----------------------------------------------------------------------------\n# Color space transformations\n# -----------------------------------------------------------------------------\n\n# From http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl\n# TODO: unit tests\nHSV_TO_RGB = \"\"\"\nvec3 hsv_to_rgb(vec3 c)\n{\n vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);\n vec3 p = abs(fract(c.xxx 
+ K.xyz) * 6.0 - K.www);\n return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);\n}\n\"\"\"\n\n\nRGB_TO_HSV = \"\"\"\nvec3 rgb_to_hsv(vec3 c)\n{\n vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);\n vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));\n vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));\n\n float d = q.x - min(q.w, q.y);\n float e = 1.0e-10;\n return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);\n}\n\"\"\"\n", "path": "vispy/visuals/glsl/color.py"}]}
| 561 | 438 |
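The vispy patch above leaves unit tests as a TODO. As a sanity check for the same RGB <-> HSV math, a minimal pure-Python reference using only the standard-library colorsys module (a sketch, not vispy test code):

```python
import colorsys

rgb = (0.2, 0.4, 0.6)                 # arbitrary example colour, channels in [0, 1]
hsv = colorsys.rgb_to_hsv(*rgb)       # hue, saturation, value, also in [0, 1]
rgb_back = colorsys.hsv_to_rgb(*hsv)

print(hsv)
print(rgb_back)                       # the round trip should reproduce the input colour
```

Since the GLSL snippets also keep every channel in [0, 1], GPU results could be compared against this reference within a small tolerance.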
gh_patches_debug_39445
|
rasdani/github-patches
|
git_diff
|
mampfes__hacs_waste_collection_schedule-1558
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Garden waste type to Cornwall, UK source
### I propose a feature for:
Sources
### Describe your wanted feature
Can you please add Garden to the cornwall_gov_uk.py source?
Change - COLLECTIONS = {"Rubbish", "Recycling"}
to COLLECTIONS = {"Rubbish", "Recycling", "Garden"}
For my house I'm getting the following html snip so I think it should work.
<div id="my-waste-collection">
<h3 class="font-weight-bolder">Current collections</h3>
<div class="row text-center">
<div class="col-12 col-md-4">
<div id="recycling" class="collection text-center service">
<span>Recycling</span>
<span>SAT</span>
<span>30 Dec</span>
</div>
</div>
<div class="col-12 col-md-4">
<div id="rubbish" class="collection text-center service">
<span>Rubbish</span>
<span>TUE</span>
<span>2 Jan</span>
</div>
</div>
<div class="col-12 col-md-4">
<div id="gardenhassubscription" class="collection text-cente r service">
<span>Garden</span>
<span>FRI</span>
<span>22 Dec</span>
</div>
</div>
</div>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py`
Content:
```
1 from datetime import date, datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection
6
7 TITLE = "Cornwall Council"
8 DESCRIPTION = "Source for cornwall.gov.uk services for Cornwall Council"
9 URL = "https://cornwall.gov.uk"
10 TEST_CASES = {
11 "known_uprn": {"uprn": "100040118005"},
12 "unknown_uprn": {"postcode": "TR261SP", "housenumberorname": "7"},
13 }
14
15 SEARCH_URLS = {
16 "uprn_search": "https://www.cornwall.gov.uk/my-area/",
17 "collection_search": "https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False",
18 }
19 COLLECTIONS = {"Rubbish", "Recycling"}
20
21
22 class Source:
23 def __init__(
24 self, uprn=None, postcode=None, housenumberorname=None
25 ): # argX correspond to the args dict in the source configuration
26 self._uprn = uprn
27 self._postcode = postcode
28 self._housenumberorname = housenumberorname
29
30 def fetch(self):
31 entries = []
32 session = requests.Session()
33
34 # Find the UPRN based on the postcode and the property name/number
35 if self._uprn is None:
36 args = {"Postcode": self._postcode}
37 r = session.get(SEARCH_URLS["uprn_search"], params=args)
38 r.raise_for_status()
39 soup = BeautifulSoup(r.text, features="html.parser")
40 propertyUprns = soup.find(id="Uprn").find_all("option")
41 for match in propertyUprns:
42 if match.text.startswith(self._housenumberorname):
43 self._uprn = match["value"]
44
45 # Get the collection days based on the UPRN (either supplied through arguments or searched for above)
46 if self._uprn is not None:
47 args = {"uprn": self._uprn}
48 r = session.get(SEARCH_URLS["collection_search"], params=args)
49 r.raise_for_status()
50 soup = BeautifulSoup(r.text, features="html.parser")
51 for collection in COLLECTIONS:
52 d = (
53 soup.find(id=collection.lower()).find_all("span")[-1].text
54 + " "
55 + str(date.today().year)
56 )
57
58 entries.append(
59 Collection(
60 datetime.strptime(d, "%d %b %Y").date(),
61 collection,
62 )
63 )
64
65 return entries
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py
@@ -2,7 +2,7 @@
import requests
from bs4 import BeautifulSoup
-from waste_collection_schedule import Collection
+from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Cornwall Council"
DESCRIPTION = "Source for cornwall.gov.uk services for Cornwall Council"
@@ -10,13 +10,18 @@
TEST_CASES = {
"known_uprn": {"uprn": "100040118005"},
"unknown_uprn": {"postcode": "TR261SP", "housenumberorname": "7"},
+ "uprn_with_garden": {"uprn": "100040080721"},
}
SEARCH_URLS = {
"uprn_search": "https://www.cornwall.gov.uk/my-area/",
"collection_search": "https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False",
}
-COLLECTIONS = {"Rubbish", "Recycling"}
+ICON_MAP = {
+ "Rubbish": "mdi:delete",
+ "Recycling": "mdi:recycle",
+ "Garden": "mdi:flower",
+}
class Source:
@@ -41,25 +46,29 @@
for match in propertyUprns:
if match.text.startswith(self._housenumberorname):
self._uprn = match["value"]
+ if self._uprn is None:
+ raise Exception(
+ f"No UPRN found for {self._postcode} {self._housenumberorname}"
+ )
# Get the collection days based on the UPRN (either supplied through arguments or searched for above)
- if self._uprn is not None:
- args = {"uprn": self._uprn}
- r = session.get(SEARCH_URLS["collection_search"], params=args)
- r.raise_for_status()
- soup = BeautifulSoup(r.text, features="html.parser")
- for collection in COLLECTIONS:
- d = (
- soup.find(id=collection.lower()).find_all("span")[-1].text
- + " "
- + str(date.today().year)
- )
+ args = {"uprn": self._uprn}
+ r = session.get(SEARCH_URLS["collection_search"], params=args)
+ r.raise_for_status()
+ soup = BeautifulSoup(r.text, features="html.parser")
+ for collection_div in soup.find_all("div", class_="collection"):
+ spans = collection_div.find_all("span")
+ if not spans:
+ continue
+ collection = spans[0].text
+ d = spans[-1].text + " " + str(date.today().year)
- entries.append(
- Collection(
- datetime.strptime(d, "%d %b %Y").date(),
- collection,
- )
+ entries.append(
+ Collection(
+ datetime.strptime(d, "%d %b %Y").date(),
+ collection,
+ icon=ICON_MAP.get(collection),
)
+ )
return entries
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py\n@@ -2,7 +2,7 @@\n \n import requests\n from bs4 import BeautifulSoup\n-from waste_collection_schedule import Collection\n+from waste_collection_schedule import Collection # type: ignore[attr-defined]\n \n TITLE = \"Cornwall Council\"\n DESCRIPTION = \"Source for cornwall.gov.uk services for Cornwall Council\"\n@@ -10,13 +10,18 @@\n TEST_CASES = {\n \"known_uprn\": {\"uprn\": \"100040118005\"},\n \"unknown_uprn\": {\"postcode\": \"TR261SP\", \"housenumberorname\": \"7\"},\n+ \"uprn_with_garden\": {\"uprn\": \"100040080721\"},\n }\n \n SEARCH_URLS = {\n \"uprn_search\": \"https://www.cornwall.gov.uk/my-area/\",\n \"collection_search\": \"https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False\",\n }\n-COLLECTIONS = {\"Rubbish\", \"Recycling\"}\n+ICON_MAP = {\n+ \"Rubbish\": \"mdi:delete\",\n+ \"Recycling\": \"mdi:recycle\",\n+ \"Garden\": \"mdi:flower\",\n+}\n \n \n class Source:\n@@ -41,25 +46,29 @@\n for match in propertyUprns:\n if match.text.startswith(self._housenumberorname):\n self._uprn = match[\"value\"]\n+ if self._uprn is None:\n+ raise Exception(\n+ f\"No UPRN found for {self._postcode} {self._housenumberorname}\"\n+ )\n \n # Get the collection days based on the UPRN (either supplied through arguments or searched for above)\n- if self._uprn is not None:\n- args = {\"uprn\": self._uprn}\n- r = session.get(SEARCH_URLS[\"collection_search\"], params=args)\n- r.raise_for_status()\n- soup = BeautifulSoup(r.text, features=\"html.parser\")\n- for collection in COLLECTIONS:\n- d = (\n- soup.find(id=collection.lower()).find_all(\"span\")[-1].text\n- + \" \"\n- + str(date.today().year)\n- )\n+ args = {\"uprn\": self._uprn}\n+ r = session.get(SEARCH_URLS[\"collection_search\"], params=args)\n+ r.raise_for_status()\n+ soup = BeautifulSoup(r.text, features=\"html.parser\")\n+ for collection_div in soup.find_all(\"div\", class_=\"collection\"):\n+ spans = collection_div.find_all(\"span\")\n+ if not spans:\n+ continue\n+ collection = spans[0].text\n+ d = spans[-1].text + \" \" + str(date.today().year)\n \n- entries.append(\n- Collection(\n- datetime.strptime(d, \"%d %b %Y\").date(),\n- collection,\n- )\n+ entries.append(\n+ Collection(\n+ datetime.strptime(d, \"%d %b %Y\").date(),\n+ collection,\n+ icon=ICON_MAP.get(collection),\n )\n+ )\n \n return entries\n", "issue": "Add Garden waste type to Cornwall, UK source\n### I propose a feature for:\n\nSources\n\n### Describe your wanted feature\n\nCan you please add Garden to the cornwall_gov_uk.py source?\r\n\r\nChange - COLLECTIONS = {\"Rubbish\", \"Recycling\"}\r\nto COLLECTIONS = {\"Rubbish\", \"Recycling\", \"Garden\"}\r\n\r\nFor my house I'm getting the following html snip so I think it should work. 
\r\n\r\n<div id=\"my-waste-collection\">\r\n <h3 class=\"font-weight-bolder\">Current collections</h3>\r\n <div class=\"row text-center\">\r\n <div class=\"col-12 col-md-4\">\r\n <div id=\"recycling\" class=\"collection text-center service\">\r\n <span>Recycling</span>\r\n <span>SAT</span>\r\n <span>30 Dec</span>\r\n </div>\r\n </div>\r\n <div class=\"col-12 col-md-4\">\r\n <div id=\"rubbish\" class=\"collection text-center service\">\r\n <span>Rubbish</span>\r\n <span>TUE</span>\r\n <span>2 Jan</span>\r\n </div>\r\n </div>\r\n <div class=\"col-12 col-md-4\">\r\n <div id=\"gardenhassubscription\" class=\"collection text-cente r service\">\r\n <span>Garden</span>\r\n <span>FRI</span>\r\n <span>22 Dec</span>\r\n </div>\r\n </div>\r\n </div>\n", "before_files": [{"content": "from datetime import date, datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Cornwall Council\"\nDESCRIPTION = \"Source for cornwall.gov.uk services for Cornwall Council\"\nURL = \"https://cornwall.gov.uk\"\nTEST_CASES = {\n \"known_uprn\": {\"uprn\": \"100040118005\"},\n \"unknown_uprn\": {\"postcode\": \"TR261SP\", \"housenumberorname\": \"7\"},\n}\n\nSEARCH_URLS = {\n \"uprn_search\": \"https://www.cornwall.gov.uk/my-area/\",\n \"collection_search\": \"https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False\",\n}\nCOLLECTIONS = {\"Rubbish\", \"Recycling\"}\n\n\nclass Source:\n def __init__(\n self, uprn=None, postcode=None, housenumberorname=None\n ): # argX correspond to the args dict in the source configuration\n self._uprn = uprn\n self._postcode = postcode\n self._housenumberorname = housenumberorname\n\n def fetch(self):\n entries = []\n session = requests.Session()\n\n # Find the UPRN based on the postcode and the property name/number\n if self._uprn is None:\n args = {\"Postcode\": self._postcode}\n r = session.get(SEARCH_URLS[\"uprn_search\"], params=args)\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n propertyUprns = soup.find(id=\"Uprn\").find_all(\"option\")\n for match in propertyUprns:\n if match.text.startswith(self._housenumberorname):\n self._uprn = match[\"value\"]\n\n # Get the collection days based on the UPRN (either supplied through arguments or searched for above)\n if self._uprn is not None:\n args = {\"uprn\": self._uprn}\n r = session.get(SEARCH_URLS[\"collection_search\"], params=args)\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n for collection in COLLECTIONS:\n d = (\n soup.find(id=collection.lower()).find_all(\"span\")[-1].text\n + \" \"\n + str(date.today().year)\n )\n\n entries.append(\n Collection(\n datetime.strptime(d, \"%d %b %Y\").date(),\n collection,\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py"}], "after_files": [{"content": "from datetime import date, datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Cornwall Council\"\nDESCRIPTION = \"Source for cornwall.gov.uk services for Cornwall Council\"\nURL = \"https://cornwall.gov.uk\"\nTEST_CASES = {\n \"known_uprn\": {\"uprn\": \"100040118005\"},\n \"unknown_uprn\": {\"postcode\": \"TR261SP\", \"housenumberorname\": \"7\"},\n \"uprn_with_garden\": {\"uprn\": \"100040080721\"},\n}\n\nSEARCH_URLS = {\n \"uprn_search\": \"https://www.cornwall.gov.uk/my-area/\",\n 
\"collection_search\": \"https://www.cornwall.gov.uk/umbraco/Surface/Waste/MyCollectionDays?subscribe=False\",\n}\nICON_MAP = {\n \"Rubbish\": \"mdi:delete\",\n \"Recycling\": \"mdi:recycle\",\n \"Garden\": \"mdi:flower\",\n}\n\n\nclass Source:\n def __init__(\n self, uprn=None, postcode=None, housenumberorname=None\n ): # argX correspond to the args dict in the source configuration\n self._uprn = uprn\n self._postcode = postcode\n self._housenumberorname = housenumberorname\n\n def fetch(self):\n entries = []\n session = requests.Session()\n\n # Find the UPRN based on the postcode and the property name/number\n if self._uprn is None:\n args = {\"Postcode\": self._postcode}\n r = session.get(SEARCH_URLS[\"uprn_search\"], params=args)\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n propertyUprns = soup.find(id=\"Uprn\").find_all(\"option\")\n for match in propertyUprns:\n if match.text.startswith(self._housenumberorname):\n self._uprn = match[\"value\"]\n if self._uprn is None:\n raise Exception(\n f\"No UPRN found for {self._postcode} {self._housenumberorname}\"\n )\n\n # Get the collection days based on the UPRN (either supplied through arguments or searched for above)\n args = {\"uprn\": self._uprn}\n r = session.get(SEARCH_URLS[\"collection_search\"], params=args)\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n for collection_div in soup.find_all(\"div\", class_=\"collection\"):\n spans = collection_div.find_all(\"span\")\n if not spans:\n continue\n collection = spans[0].text\n d = spans[-1].text + \" \" + str(date.today().year)\n\n entries.append(\n Collection(\n datetime.strptime(d, \"%d %b %Y\").date(),\n collection,\n icon=ICON_MAP.get(collection),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/cornwall_gov_uk.py"}]}
| 1,278 | 782 |
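The Cornwall fix above replaces the fixed COLLECTIONS set with a loop over every `div` carrying the `collection` class, which also matches the `gardenhassubscription` element from the issue. A small sketch of that parsing approach against an abridged version of the issue's HTML, assuming BeautifulSoup is installed:

```python
from bs4 import BeautifulSoup

HTML = """
<div id="my-waste-collection">
  <div id="recycling" class="collection text-center service">
    <span>Recycling</span><span>SAT</span><span>30 Dec</span>
  </div>
  <div id="gardenhassubscription" class="collection text-center service">
    <span>Garden</span><span>FRI</span><span>22 Dec</span>
  </div>
</div>
"""

soup = BeautifulSoup(HTML, "html.parser")
for div in soup.find_all("div", class_="collection"):
    spans = div.find_all("span")
    if spans:
        # First span is the waste type, last span is the next collection date.
        print(spans[0].text, "->", spans[-1].text)
```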
gh_patches_debug_25618
|
rasdani/github-patches
|
git_diff
|
fonttools__fonttools-2014
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove MacOS module in macCreatorType
utils.macCreatorType uses either the `xattr` module or the `MacOS` module to do its thing. But the MacOS module has been removed from Python 3.x. If we only support 3.x, we should remove the `MacOS`-related code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/misc/macCreatorType.py`
Content:
```
1 from fontTools.misc.py23 import *
2 import sys
3 try:
4 import xattr
5 except ImportError:
6 xattr = None
7 try:
8 import MacOS
9 except ImportError:
10 MacOS = None
11
12
13 def _reverseString(s):
14 s = list(s)
15 s.reverse()
16 return strjoin(s)
17
18
19 def getMacCreatorAndType(path):
20 """Returns file creator and file type codes for a path.
21
22 Args:
23 path (str): A file path.
24
25 Returns:
26 A tuple of two :py:class:`fontTools.py23.Tag` objects, the first
27 representing the file creator and the second representing the
28 file type.
29 """
30 if xattr is not None:
31 try:
32 finderInfo = xattr.getxattr(path, 'com.apple.FinderInfo')
33 except (KeyError, IOError):
34 pass
35 else:
36 fileType = Tag(finderInfo[:4])
37 fileCreator = Tag(finderInfo[4:8])
38 return fileCreator, fileType
39 if MacOS is not None:
40 fileCreator, fileType = MacOS.GetCreatorAndType(path)
41 if sys.version_info[:2] < (2, 7) and sys.byteorder == "little":
42 # work around bug in MacOS.GetCreatorAndType() on intel:
43 # http://bugs.python.org/issue1594
44 # (fixed with Python 2.7)
45 fileCreator = _reverseString(fileCreator)
46 fileType = _reverseString(fileType)
47 return fileCreator, fileType
48 else:
49 return None, None
50
51
52 def setMacCreatorAndType(path, fileCreator, fileType):
53 """Set file creator and file type codes for a path.
54
55 Note that if the ``xattr`` module is not installed, no action is
56 taken but no error is raised.
57
58 Args:
59 path (str): A file path.
60 fileCreator: A four-character file creator tag.
61 fileType: A four-character file type tag.
62
63 """
64 if xattr is not None:
65 from fontTools.misc.textTools import pad
66 if not all(len(s) == 4 for s in (fileCreator, fileType)):
67 raise TypeError('arg must be string of 4 chars')
68 finderInfo = pad(bytesjoin([fileType, fileCreator]), 32)
69 xattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)
70 if MacOS is not None:
71 MacOS.SetCreatorAndType(path, fileCreator, fileType)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/Lib/fontTools/misc/macCreatorType.py b/Lib/fontTools/misc/macCreatorType.py
--- a/Lib/fontTools/misc/macCreatorType.py
+++ b/Lib/fontTools/misc/macCreatorType.py
@@ -4,10 +4,6 @@
import xattr
except ImportError:
xattr = None
-try:
- import MacOS
-except ImportError:
- MacOS = None
def _reverseString(s):
@@ -36,17 +32,7 @@
fileType = Tag(finderInfo[:4])
fileCreator = Tag(finderInfo[4:8])
return fileCreator, fileType
- if MacOS is not None:
- fileCreator, fileType = MacOS.GetCreatorAndType(path)
- if sys.version_info[:2] < (2, 7) and sys.byteorder == "little":
- # work around bug in MacOS.GetCreatorAndType() on intel:
- # http://bugs.python.org/issue1594
- # (fixed with Python 2.7)
- fileCreator = _reverseString(fileCreator)
- fileType = _reverseString(fileType)
- return fileCreator, fileType
- else:
- return None, None
+ return None, None
def setMacCreatorAndType(path, fileCreator, fileType):
@@ -67,5 +53,3 @@
raise TypeError('arg must be string of 4 chars')
finderInfo = pad(bytesjoin([fileType, fileCreator]), 32)
xattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)
- if MacOS is not None:
- MacOS.SetCreatorAndType(path, fileCreator, fileType)
|
{"golden_diff": "diff --git a/Lib/fontTools/misc/macCreatorType.py b/Lib/fontTools/misc/macCreatorType.py\n--- a/Lib/fontTools/misc/macCreatorType.py\n+++ b/Lib/fontTools/misc/macCreatorType.py\n@@ -4,10 +4,6 @@\n \timport xattr\n except ImportError:\n \txattr = None\n-try:\n-\timport MacOS\n-except ImportError:\n-\tMacOS = None\n \n \n def _reverseString(s):\n@@ -36,17 +32,7 @@\n \t\t\tfileType = Tag(finderInfo[:4])\n \t\t\tfileCreator = Tag(finderInfo[4:8])\n \t\t\treturn fileCreator, fileType\n-\tif MacOS is not None:\n-\t\tfileCreator, fileType = MacOS.GetCreatorAndType(path)\n-\t\tif sys.version_info[:2] < (2, 7) and sys.byteorder == \"little\":\n-\t\t\t# work around bug in MacOS.GetCreatorAndType() on intel:\n-\t\t\t# http://bugs.python.org/issue1594\n-\t\t\t# (fixed with Python 2.7)\n-\t\t\tfileCreator = _reverseString(fileCreator)\n-\t\t\tfileType = _reverseString(fileType)\n-\t\treturn fileCreator, fileType\n-\telse:\n-\t\treturn None, None\n+\treturn None, None\n \n \n def setMacCreatorAndType(path, fileCreator, fileType):\n@@ -67,5 +53,3 @@\n \t\t\traise TypeError('arg must be string of 4 chars')\n \t\tfinderInfo = pad(bytesjoin([fileType, fileCreator]), 32)\n \t\txattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)\n-\tif MacOS is not None:\n-\t\tMacOS.SetCreatorAndType(path, fileCreator, fileType)\n", "issue": "Remove MacOS module in macCreatorType\nutils.macCreatorType uses either the `xattr` module or the `MacOS` module to do its thing. But the MacOS module has been removed from Python 3.x. If we only support 3.x, we should remove the `MacOS`-related code.\n", "before_files": [{"content": "from fontTools.misc.py23 import *\nimport sys\ntry:\n\timport xattr\nexcept ImportError:\n\txattr = None\ntry:\n\timport MacOS\nexcept ImportError:\n\tMacOS = None\n\n\ndef _reverseString(s):\n\ts = list(s)\n\ts.reverse()\n\treturn strjoin(s)\n\n\ndef getMacCreatorAndType(path):\n\t\"\"\"Returns file creator and file type codes for a path.\n\n\tArgs:\n\t\tpath (str): A file path.\n\n\tReturns:\n\t\tA tuple of two :py:class:`fontTools.py23.Tag` objects, the first\n\t\trepresenting the file creator and the second representing the\n\t\tfile type.\n\t\"\"\"\n\tif xattr is not None:\n\t\ttry:\n\t\t\tfinderInfo = xattr.getxattr(path, 'com.apple.FinderInfo')\n\t\texcept (KeyError, IOError):\n\t\t\tpass\n\t\telse:\n\t\t\tfileType = Tag(finderInfo[:4])\n\t\t\tfileCreator = Tag(finderInfo[4:8])\n\t\t\treturn fileCreator, fileType\n\tif MacOS is not None:\n\t\tfileCreator, fileType = MacOS.GetCreatorAndType(path)\n\t\tif sys.version_info[:2] < (2, 7) and sys.byteorder == \"little\":\n\t\t\t# work around bug in MacOS.GetCreatorAndType() on intel:\n\t\t\t# http://bugs.python.org/issue1594\n\t\t\t# (fixed with Python 2.7)\n\t\t\tfileCreator = _reverseString(fileCreator)\n\t\t\tfileType = _reverseString(fileType)\n\t\treturn fileCreator, fileType\n\telse:\n\t\treturn None, None\n\n\ndef setMacCreatorAndType(path, fileCreator, fileType):\n\t\"\"\"Set file creator and file type codes for a path.\n\n\tNote that if the ``xattr`` module is not installed, no action is\n\ttaken but no error is raised.\n\n\tArgs:\n\t\tpath (str): A file path.\n\t\tfileCreator: A four-character file creator tag.\n\t\tfileType: A four-character file type tag.\n\n\t\"\"\"\n\tif xattr is not None:\n\t\tfrom fontTools.misc.textTools import pad\n\t\tif not all(len(s) == 4 for s in (fileCreator, fileType)):\n\t\t\traise TypeError('arg must be string of 4 chars')\n\t\tfinderInfo = pad(bytesjoin([fileType, fileCreator]), 
32)\n\t\txattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)\n\tif MacOS is not None:\n\t\tMacOS.SetCreatorAndType(path, fileCreator, fileType)\n", "path": "Lib/fontTools/misc/macCreatorType.py"}], "after_files": [{"content": "from fontTools.misc.py23 import *\nimport sys\ntry:\n\timport xattr\nexcept ImportError:\n\txattr = None\n\n\ndef _reverseString(s):\n\ts = list(s)\n\ts.reverse()\n\treturn strjoin(s)\n\n\ndef getMacCreatorAndType(path):\n\t\"\"\"Returns file creator and file type codes for a path.\n\n\tArgs:\n\t\tpath (str): A file path.\n\n\tReturns:\n\t\tA tuple of two :py:class:`fontTools.py23.Tag` objects, the first\n\t\trepresenting the file creator and the second representing the\n\t\tfile type.\n\t\"\"\"\n\tif xattr is not None:\n\t\ttry:\n\t\t\tfinderInfo = xattr.getxattr(path, 'com.apple.FinderInfo')\n\t\texcept (KeyError, IOError):\n\t\t\tpass\n\t\telse:\n\t\t\tfileType = Tag(finderInfo[:4])\n\t\t\tfileCreator = Tag(finderInfo[4:8])\n\t\t\treturn fileCreator, fileType\n\treturn None, None\n\n\ndef setMacCreatorAndType(path, fileCreator, fileType):\n\t\"\"\"Set file creator and file type codes for a path.\n\n\tNote that if the ``xattr`` module is not installed, no action is\n\ttaken but no error is raised.\n\n\tArgs:\n\t\tpath (str): A file path.\n\t\tfileCreator: A four-character file creator tag.\n\t\tfileType: A four-character file type tag.\n\n\t\"\"\"\n\tif xattr is not None:\n\t\tfrom fontTools.misc.textTools import pad\n\t\tif not all(len(s) == 4 for s in (fileCreator, fileType)):\n\t\t\traise TypeError('arg must be string of 4 chars')\n\t\tfinderInfo = pad(bytesjoin([fileType, fileCreator]), 32)\n\t\txattr.setxattr(path, 'com.apple.FinderInfo', finderInfo)\n", "path": "Lib/fontTools/misc/macCreatorType.py"}]}
| 1,017 | 378 |
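With the MacOS-module fallback gone, the xattr branch is the only code path left in the fontTools patch above. A macOS-only sketch of that path in isolation, returning raw byte slices rather than fontTools Tag objects (assumes the third-party xattr package; the file path is hypothetical):

```python
import xattr

def get_mac_creator_and_type(path):
    # Finder info is a fixed-layout blob: bytes 0-3 hold the file type code,
    # bytes 4-7 hold the creator code.
    try:
        info = xattr.getxattr(path, "com.apple.FinderInfo")
    except (KeyError, OSError):
        return None, None
    return info[4:8], info[:4]

print(get_mac_creator_and_type("/tmp/example.txt"))  # hypothetical path
```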
gh_patches_debug_90
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-470
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PermissionError redeclared in exceptions.py shadows built-in PermissionError class
```
class PermissionError(BaseException):
pass
```
Can we remove this and just use the built-in? Or we could rename ours to something different.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/exceptions.py`
Content:
```
1 class RequirementError(BaseException):
2 pass
3
4
5 class DiskError(BaseException):
6 pass
7
8
9 class UnknownFilesystemFormat(BaseException):
10 pass
11
12
13 class ProfileError(BaseException):
14 pass
15
16
17 class SysCallError(BaseException):
18 def __init__(self, message, exit_code):
19 super(SysCallError, self).__init__(message)
20 self.message = message
21 self.exit_code = exit_code
22
23
24 class ProfileNotFound(BaseException):
25 pass
26
27
28 class HardwareIncompatibilityError(BaseException):
29 pass
30
31
32 class PermissionError(BaseException):
33 pass
34
35
36 class UserError(BaseException):
37 pass
38
39
40 class ServiceException(BaseException):
41 pass
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/archinstall/lib/exceptions.py b/archinstall/lib/exceptions.py
--- a/archinstall/lib/exceptions.py
+++ b/archinstall/lib/exceptions.py
@@ -29,10 +29,6 @@
pass
-class PermissionError(BaseException):
- pass
-
-
class UserError(BaseException):
pass
|
{"golden_diff": "diff --git a/archinstall/lib/exceptions.py b/archinstall/lib/exceptions.py\n--- a/archinstall/lib/exceptions.py\n+++ b/archinstall/lib/exceptions.py\n@@ -29,10 +29,6 @@\n \tpass\n \n \n-class PermissionError(BaseException):\n-\tpass\n-\n-\n class UserError(BaseException):\n \tpass\n", "issue": "PermissionError redeclared in exceptions.py shadows built-in PermissionError class\n```\r\nclass PermissionError(BaseException):\r\n\tpass\r\n```\r\n\r\nCan we remove this and just use the built-in? Or we could rename ours to something different.\n", "before_files": [{"content": "class RequirementError(BaseException):\n\tpass\n\n\nclass DiskError(BaseException):\n\tpass\n\n\nclass UnknownFilesystemFormat(BaseException):\n\tpass\n\n\nclass ProfileError(BaseException):\n\tpass\n\n\nclass SysCallError(BaseException):\n\tdef __init__(self, message, exit_code):\n\t\tsuper(SysCallError, self).__init__(message)\n\t\tself.message = message\n\t\tself.exit_code = exit_code\n\n\nclass ProfileNotFound(BaseException):\n\tpass\n\n\nclass HardwareIncompatibilityError(BaseException):\n\tpass\n\n\nclass PermissionError(BaseException):\n\tpass\n\n\nclass UserError(BaseException):\n\tpass\n\n\nclass ServiceException(BaseException):\n\tpass\n", "path": "archinstall/lib/exceptions.py"}], "after_files": [{"content": "class RequirementError(BaseException):\n\tpass\n\n\nclass DiskError(BaseException):\n\tpass\n\n\nclass UnknownFilesystemFormat(BaseException):\n\tpass\n\n\nclass ProfileError(BaseException):\n\tpass\n\n\nclass SysCallError(BaseException):\n\tdef __init__(self, message, exit_code):\n\t\tsuper(SysCallError, self).__init__(message)\n\t\tself.message = message\n\t\tself.exit_code = exit_code\n\n\nclass ProfileNotFound(BaseException):\n\tpass\n\n\nclass HardwareIncompatibilityError(BaseException):\n\tpass\n\n\nclass UserError(BaseException):\n\tpass\n\n\nclass ServiceException(BaseException):\n\tpass\n", "path": "archinstall/lib/exceptions.py"}]}
| 531 | 73 |
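A short illustration (not archinstall code) of why the redeclared class removed above is a problem: it shadows the built-in, so OS-level permission errors raised by the standard library no longer match `except PermissionError`, and the custom class does not even derive from Exception:

```python
import builtins

class PermissionError(BaseException):          # shadows the built-in, as exceptions.py did
    pass

try:
    raise builtins.PermissionError("denied")   # what open() raises on a permission failure
except PermissionError:
    print("caught by the shadowing class")     # never reached
except Exception as exc:
    print("falls through to Exception:", exc)
```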
gh_patches_debug_26469
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-1074
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Apply lazy import for `optuna.dashboard`.
Optuna always imports the dependencies of `optuna.dashboard` (e.g., `bokeh`), which makes unnecessary overhead in many use cases. Similar to #334, we can apply lazy import for them.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/__init__.py`
Content:
```
1 from optuna import dashboard # NOQA
2 from optuna import distributions # NOQA
3 from optuna import exceptions # NOQA
4 from optuna import importance # NOQA
5 from optuna import integration # NOQA
6 from optuna import logging # NOQA
7 from optuna import pruners # NOQA
8 from optuna import samplers # NOQA
9 from optuna import storages # NOQA
10 from optuna import structs # NOQA
11 from optuna import study # NOQA
12 from optuna import trial # NOQA
13 from optuna import version # NOQA
14 from optuna import visualization # NOQA
15
16 from optuna.study import create_study # NOQA
17 from optuna.study import delete_study # NOQA
18 from optuna.study import get_all_study_summaries # NOQA
19 from optuna.study import load_study # NOQA
20 from optuna.study import Study # NOQA
21 from optuna.trial import Trial # NOQA
22 from optuna.version import __version__ # NOQA
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/optuna/__init__.py b/optuna/__init__.py
--- a/optuna/__init__.py
+++ b/optuna/__init__.py
@@ -1,4 +1,6 @@
-from optuna import dashboard # NOQA
+import importlib
+import types
+
from optuna import distributions # NOQA
from optuna import exceptions # NOQA
from optuna import importance # NOQA
@@ -20,3 +22,37 @@
from optuna.study import Study # NOQA
from optuna.trial import Trial # NOQA
from optuna.version import __version__ # NOQA
+from optuna.type_checking import TYPE_CHECKING # NOQA
+
+
+if TYPE_CHECKING:
+ from optuna import dashboard # NOQA
+else:
+ from typing import Any
+
+ class _LazyImport(types.ModuleType):
+ """Module wrapper for lazy import.
+
+ This class wraps specified module and lazily import it when they are actually accessed.
+ Otherwise, `import optuna` becomes slower because it imports all submodules and
+ their dependencies (e.g., bokeh) all at once.
+ Within this project's usage, importlib override this module's attribute on the first
+ access and the imported submodule is directly accessed from the second access.
+
+ Args:
+ name: Name of module to apply lazy import.
+ """
+
+ def __init__(self, name: str) -> None:
+ super(_LazyImport, self).__init__(name)
+ self._name = name
+
+ def _load(self) -> types.ModuleType:
+ module = importlib.import_module(self._name)
+ self.__dict__.update(module.__dict__)
+ return module
+
+ def __getattr__(self, item: str) -> Any:
+ return getattr(self._load(), item)
+
+ dashboard = _LazyImport("optuna.dashboard")
|
{"golden_diff": "diff --git a/optuna/__init__.py b/optuna/__init__.py\n--- a/optuna/__init__.py\n+++ b/optuna/__init__.py\n@@ -1,4 +1,6 @@\n-from optuna import dashboard # NOQA\n+import importlib\n+import types\n+\n from optuna import distributions # NOQA\n from optuna import exceptions # NOQA\n from optuna import importance # NOQA\n@@ -20,3 +22,37 @@\n from optuna.study import Study # NOQA\n from optuna.trial import Trial # NOQA\n from optuna.version import __version__ # NOQA\n+from optuna.type_checking import TYPE_CHECKING # NOQA\n+\n+\n+if TYPE_CHECKING:\n+ from optuna import dashboard # NOQA\n+else:\n+ from typing import Any\n+\n+ class _LazyImport(types.ModuleType):\n+ \"\"\"Module wrapper for lazy import.\n+\n+ This class wraps specified module and lazily import it when they are actually accessed.\n+ Otherwise, `import optuna` becomes slower because it imports all submodules and\n+ their dependencies (e.g., bokeh) all at once.\n+ Within this project's usage, importlib override this module's attribute on the first\n+ access and the imported submodule is directly accessed from the second access.\n+\n+ Args:\n+ name: Name of module to apply lazy import.\n+ \"\"\"\n+\n+ def __init__(self, name: str) -> None:\n+ super(_LazyImport, self).__init__(name)\n+ self._name = name\n+\n+ def _load(self) -> types.ModuleType:\n+ module = importlib.import_module(self._name)\n+ self.__dict__.update(module.__dict__)\n+ return module\n+\n+ def __getattr__(self, item: str) -> Any:\n+ return getattr(self._load(), item)\n+\n+ dashboard = _LazyImport(\"optuna.dashboard\")\n", "issue": "Apply lazy import for `optuna.dashboard`.\nOptuna always imports the dependencies of `optuna.dashboard` (e.g., `bokeh`), which makes unnecessary overhead in many use cases. Similar to #334, we can apply lazy import for them.\n", "before_files": [{"content": "from optuna import dashboard # NOQA\nfrom optuna import distributions # NOQA\nfrom optuna import exceptions # NOQA\nfrom optuna import importance # NOQA\nfrom optuna import integration # NOQA\nfrom optuna import logging # NOQA\nfrom optuna import pruners # NOQA\nfrom optuna import samplers # NOQA\nfrom optuna import storages # NOQA\nfrom optuna import structs # NOQA\nfrom optuna import study # NOQA\nfrom optuna import trial # NOQA\nfrom optuna import version # NOQA\nfrom optuna import visualization # NOQA\n\nfrom optuna.study import create_study # NOQA\nfrom optuna.study import delete_study # NOQA\nfrom optuna.study import get_all_study_summaries # NOQA\nfrom optuna.study import load_study # NOQA\nfrom optuna.study import Study # NOQA\nfrom optuna.trial import Trial # NOQA\nfrom optuna.version import __version__ # NOQA\n", "path": "optuna/__init__.py"}], "after_files": [{"content": "import importlib\nimport types\n\nfrom optuna import distributions # NOQA\nfrom optuna import exceptions # NOQA\nfrom optuna import importance # NOQA\nfrom optuna import integration # NOQA\nfrom optuna import logging # NOQA\nfrom optuna import pruners # NOQA\nfrom optuna import samplers # NOQA\nfrom optuna import storages # NOQA\nfrom optuna import structs # NOQA\nfrom optuna import study # NOQA\nfrom optuna import trial # NOQA\nfrom optuna import version # NOQA\nfrom optuna import visualization # NOQA\n\nfrom optuna.study import create_study # NOQA\nfrom optuna.study import delete_study # NOQA\nfrom optuna.study import get_all_study_summaries # NOQA\nfrom optuna.study import load_study # NOQA\nfrom optuna.study import Study # NOQA\nfrom optuna.trial import Trial # NOQA\nfrom optuna.version 
import __version__ # NOQA\nfrom optuna.type_checking import TYPE_CHECKING # NOQA\n\n\nif TYPE_CHECKING:\n from optuna import dashboard # NOQA\nelse:\n from typing import Any\n\n class _LazyImport(types.ModuleType):\n \"\"\"Module wrapper for lazy import.\n\n This class wraps specified module and lazily import it when they are actually accessed.\n Otherwise, `import optuna` becomes slower because it imports all submodules and\n their dependencies (e.g., bokeh) all at once.\n Within this project's usage, importlib override this module's attribute on the first\n access and the imported submodule is directly accessed from the second access.\n\n Args:\n name: Name of module to apply lazy import.\n \"\"\"\n\n def __init__(self, name: str) -> None:\n super(_LazyImport, self).__init__(name)\n self._name = name\n\n def _load(self) -> types.ModuleType:\n module = importlib.import_module(self._name)\n self.__dict__.update(module.__dict__)\n return module\n\n def __getattr__(self, item: str) -> Any:\n return getattr(self._load(), item)\n\n dashboard = _LazyImport(\"optuna.dashboard\")\n", "path": "optuna/__init__.py"}]}
| 583 | 435 |
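The _LazyImport wrapper introduced by the Optuna patch is a general pattern. A standalone sketch of the same idea applied to a standard-library module (the class name and the choice of `json` are illustrative, not Optuna code):

```python
import importlib
import types
from typing import Any


class LazyImport(types.ModuleType):
    """Defer the real import until an attribute is first accessed."""

    def __init__(self, name: str) -> None:
        super().__init__(name)
        self._name = name

    def _load(self) -> types.ModuleType:
        module = importlib.import_module(self._name)
        self.__dict__.update(module.__dict__)  # later accesses bypass __getattr__
        return module

    def __getattr__(self, item: str) -> Any:
        return getattr(self._load(), item)


json = LazyImport("json")              # nothing has been imported yet
print(json.dumps({"lazy": True}))      # the real import happens on this first access
```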
gh_patches_debug_4624
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-2419
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'install_folder' attribute is not always set
To reproduce, take https://github.com/memsharded/conan-hello example with one addition:
```python
def package(self):
print("self.source_folder:", self.source_folder)
print("self.build_folder:", self.build_folder)
print("self.install_folder:", self.install_folder)
...
```
now package it:
```
conan create . dbely/testing
```
everything goes well, with the output
```
Hello/0.1@dbely/testing: Calling package()
self.source_folder: C:\Users\dbely\.conan\data\Hello\0.1\dbely\testing\build\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7
self.build_folder: C:\Users\dbely\.conan\data\Hello\0.1\dbely\testing\build\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7
self.install_folder: C:\Users\dbely\.conan\data\Hello\0.1\dbely\testing\build\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7
```
Now do it step by step:
```
conan source .
conan install .
conan build .
conan package .
```
All the commands succeed except the last one:
```
PROJECT: Calling package()
self.source_folder: C:\Users\dbely\conan\conan-hello.git
self.build_folder: C:\Users\dbely\conan\conan-hello.git
ERROR: Hello/0.1@PROJECT: Error in package() method, line 21
print("self.install_folder:", self.install_folder)
AttributeError: 'HelloConan' object has no attribute 'install_folder'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/packager.py`
Content:
```
1 import os
2 import shutil
3
4 from conans.client import tools
5 from conans.util.files import mkdir, save, rmdir
6 from conans.util.log import logger
7 from conans.paths import CONANINFO, CONAN_MANIFEST
8 from conans.errors import ConanException, ConanExceptionInUserConanfileMethod, conanfile_exception_formatter
9 from conans.model.manifest import FileTreeManifest
10 from conans.client.output import ScopedOutput
11 from conans.client.file_copier import FileCopier
12
13
14 def create_package(conanfile, source_folder, build_folder, package_folder, install_folder,
15 output, local=False, copy_info=False):
16 """ copies built artifacts, libs, headers, data, etc from build_folder to
17 package folder
18 """
19 mkdir(package_folder)
20
21 # Make the copy of all the patterns
22 output.info("Generating the package")
23 output.info("Package folder %s" % (package_folder))
24
25 try:
26 package_output = ScopedOutput("%s package()" % output.scope, output)
27 output.highlight("Calling package()")
28 conanfile.package_folder = package_folder
29 conanfile.source_folder = source_folder
30 conanfile.build_folder = build_folder
31
32 def recipe_has(conanfile, attribute):
33 return attribute in conanfile.__class__.__dict__
34
35 if source_folder != build_folder:
36 conanfile.copy = FileCopier(source_folder, package_folder, build_folder)
37 with conanfile_exception_formatter(str(conanfile), "package"):
38 with tools.chdir(source_folder):
39 conanfile.package()
40 warn = recipe_has(conanfile, "package")
41 conanfile.copy.report(package_output, warn=warn)
42
43 conanfile.copy = FileCopier(build_folder, package_folder)
44 with tools.chdir(build_folder):
45 with conanfile_exception_formatter(str(conanfile), "package"):
46 conanfile.package()
47 warn = recipe_has(conanfile, "build") and recipe_has(conanfile, "package")
48 conanfile.copy.report(package_output, warn=warn)
49 except Exception as e:
50 if not local:
51 os.chdir(build_folder)
52 try:
53 rmdir(package_folder)
54 except Exception as e_rm:
55 output.error("Unable to remove package folder %s\n%s" % (package_folder, str(e_rm)))
56 output.warn("**** Please delete it manually ****")
57
58 if isinstance(e, ConanExceptionInUserConanfileMethod):
59 raise
60 raise ConanException(e)
61
62 _create_aux_files(install_folder, package_folder, conanfile, copy_info)
63 output.success("Package '%s' created" % os.path.basename(package_folder))
64
65
66 def _create_aux_files(install_folder, package_folder, conanfile, copy_info):
67 """ auxiliary method that creates CONANINFO and manifest in
68 the package_folder
69 """
70 logger.debug("Creating config files to %s" % package_folder)
71 if copy_info:
72 try:
73 shutil.copy(os.path.join(install_folder, CONANINFO), package_folder)
74 except IOError:
75 raise ConanException("%s does not exist inside of your %s folder. "
76 "Try to re-build it again to solve it."
77 % (CONANINFO, install_folder))
78 else:
79 save(os.path.join(package_folder, CONANINFO), conanfile.info.dumps())
80
81 # Create the digest for the package
82 digest = FileTreeManifest.create(package_folder)
83 save(os.path.join(package_folder, CONAN_MANIFEST), str(digest))
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/client/packager.py b/conans/client/packager.py
--- a/conans/client/packager.py
+++ b/conans/client/packager.py
@@ -27,6 +27,7 @@
output.highlight("Calling package()")
conanfile.package_folder = package_folder
conanfile.source_folder = source_folder
+ conanfile.install_folder = install_folder
conanfile.build_folder = build_folder
def recipe_has(conanfile, attribute):
|
{"golden_diff": "diff --git a/conans/client/packager.py b/conans/client/packager.py\n--- a/conans/client/packager.py\n+++ b/conans/client/packager.py\n@@ -27,6 +27,7 @@\n output.highlight(\"Calling package()\")\n conanfile.package_folder = package_folder\n conanfile.source_folder = source_folder\n+ conanfile.install_folder = install_folder\n conanfile.build_folder = build_folder\n \n def recipe_has(conanfile, attribute):\n", "issue": "'install_folder' attribute is not always set\nTo reproduce, take https://github.com/memsharded/conan-hello example with one addition:\r\n```python\r\n def package(self):\r\n print(\"self.source_folder:\", self.source_folder)\r\n print(\"self.build_folder:\", self.build_folder)\r\n print(\"self.install_folder:\", self.install_folder)\r\n ...\r\n```\r\nnow package it:\r\n```\r\nconan create . dbely/testing\r\n```\r\neverything goes well, with the output\r\n```\r\nHello/0.1@dbely/testing: Calling package()\r\nself.source_folder: C:\\Users\\dbely\\.conan\\data\\Hello\\0.1\\dbely\\testing\\build\\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7\r\nself.build_folder: C:\\Users\\dbely\\.conan\\data\\Hello\\0.1\\dbely\\testing\\build\\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7\r\nself.install_folder: C:\\Users\\dbely\\.conan\\data\\Hello\\0.1\\dbely\\testing\\build\\6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7\r\n```\r\nNow do it step by step:\r\n```\r\nconan source .\r\nconan install .\r\nconan build .\r\nconan package .\r\n```\r\nAll the commands succeed except the last one:\r\n```\r\nPROJECT: Calling package()\r\nself.source_folder: C:\\Users\\dbely\\conan\\conan-hello.git\r\nself.build_folder: C:\\Users\\dbely\\conan\\conan-hello.git\r\nERROR: Hello/0.1@PROJECT: Error in package() method, line 21\r\n print(\"self.install_folder:\", self.install_folder)\r\n AttributeError: 'HelloConan' object has no attribute 'install_folder'\r\n```\r\n\n", "before_files": [{"content": "import os\nimport shutil\n\nfrom conans.client import tools\nfrom conans.util.files import mkdir, save, rmdir\nfrom conans.util.log import logger\nfrom conans.paths import CONANINFO, CONAN_MANIFEST\nfrom conans.errors import ConanException, ConanExceptionInUserConanfileMethod, conanfile_exception_formatter\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.client.output import ScopedOutput\nfrom conans.client.file_copier import FileCopier\n\n\ndef create_package(conanfile, source_folder, build_folder, package_folder, install_folder,\n output, local=False, copy_info=False):\n \"\"\" copies built artifacts, libs, headers, data, etc from build_folder to\n package folder\n \"\"\"\n mkdir(package_folder)\n\n # Make the copy of all the patterns\n output.info(\"Generating the package\")\n output.info(\"Package folder %s\" % (package_folder))\n\n try:\n package_output = ScopedOutput(\"%s package()\" % output.scope, output)\n output.highlight(\"Calling package()\")\n conanfile.package_folder = package_folder\n conanfile.source_folder = source_folder\n conanfile.build_folder = build_folder\n\n def recipe_has(conanfile, attribute):\n return attribute in conanfile.__class__.__dict__\n\n if source_folder != build_folder:\n conanfile.copy = FileCopier(source_folder, package_folder, build_folder)\n with conanfile_exception_formatter(str(conanfile), \"package\"):\n with tools.chdir(source_folder):\n conanfile.package()\n warn = recipe_has(conanfile, \"package\")\n conanfile.copy.report(package_output, warn=warn)\n\n conanfile.copy = FileCopier(build_folder, package_folder)\n with tools.chdir(build_folder):\n with 
conanfile_exception_formatter(str(conanfile), \"package\"):\n conanfile.package()\n warn = recipe_has(conanfile, \"build\") and recipe_has(conanfile, \"package\")\n conanfile.copy.report(package_output, warn=warn)\n except Exception as e:\n if not local:\n os.chdir(build_folder)\n try:\n rmdir(package_folder)\n except Exception as e_rm:\n output.error(\"Unable to remove package folder %s\\n%s\" % (package_folder, str(e_rm)))\n output.warn(\"**** Please delete it manually ****\")\n\n if isinstance(e, ConanExceptionInUserConanfileMethod):\n raise\n raise ConanException(e)\n\n _create_aux_files(install_folder, package_folder, conanfile, copy_info)\n output.success(\"Package '%s' created\" % os.path.basename(package_folder))\n\n\ndef _create_aux_files(install_folder, package_folder, conanfile, copy_info):\n \"\"\" auxiliary method that creates CONANINFO and manifest in\n the package_folder\n \"\"\"\n logger.debug(\"Creating config files to %s\" % package_folder)\n if copy_info:\n try:\n shutil.copy(os.path.join(install_folder, CONANINFO), package_folder)\n except IOError:\n raise ConanException(\"%s does not exist inside of your %s folder. \"\n \"Try to re-build it again to solve it.\"\n % (CONANINFO, install_folder))\n else:\n save(os.path.join(package_folder, CONANINFO), conanfile.info.dumps())\n\n # Create the digest for the package\n digest = FileTreeManifest.create(package_folder)\n save(os.path.join(package_folder, CONAN_MANIFEST), str(digest))\n", "path": "conans/client/packager.py"}], "after_files": [{"content": "import os\nimport shutil\n\nfrom conans.client import tools\nfrom conans.util.files import mkdir, save, rmdir\nfrom conans.util.log import logger\nfrom conans.paths import CONANINFO, CONAN_MANIFEST\nfrom conans.errors import ConanException, ConanExceptionInUserConanfileMethod, conanfile_exception_formatter\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.client.output import ScopedOutput\nfrom conans.client.file_copier import FileCopier\n\n\ndef create_package(conanfile, source_folder, build_folder, package_folder, install_folder,\n output, local=False, copy_info=False):\n \"\"\" copies built artifacts, libs, headers, data, etc from build_folder to\n package folder\n \"\"\"\n mkdir(package_folder)\n\n # Make the copy of all the patterns\n output.info(\"Generating the package\")\n output.info(\"Package folder %s\" % (package_folder))\n\n try:\n package_output = ScopedOutput(\"%s package()\" % output.scope, output)\n output.highlight(\"Calling package()\")\n conanfile.package_folder = package_folder\n conanfile.source_folder = source_folder\n conanfile.install_folder = install_folder\n conanfile.build_folder = build_folder\n\n def recipe_has(conanfile, attribute):\n return attribute in conanfile.__class__.__dict__\n\n if source_folder != build_folder:\n conanfile.copy = FileCopier(source_folder, package_folder, build_folder)\n with conanfile_exception_formatter(str(conanfile), \"package\"):\n with tools.chdir(source_folder):\n conanfile.package()\n warn = recipe_has(conanfile, \"package\")\n conanfile.copy.report(package_output, warn=warn)\n\n conanfile.copy = FileCopier(build_folder, package_folder)\n with tools.chdir(build_folder):\n with conanfile_exception_formatter(str(conanfile), \"package\"):\n conanfile.package()\n warn = recipe_has(conanfile, \"build\") and recipe_has(conanfile, \"package\")\n conanfile.copy.report(package_output, warn=warn)\n except Exception as e:\n if not local:\n os.chdir(build_folder)\n try:\n rmdir(package_folder)\n except 
Exception as e_rm:\n output.error(\"Unable to remove package folder %s\\n%s\" % (package_folder, str(e_rm)))\n output.warn(\"**** Please delete it manually ****\")\n\n if isinstance(e, ConanExceptionInUserConanfileMethod):\n raise\n raise ConanException(e)\n\n _create_aux_files(install_folder, package_folder, conanfile, copy_info)\n output.success(\"Package '%s' created\" % os.path.basename(package_folder))\n\n\ndef _create_aux_files(install_folder, package_folder, conanfile, copy_info):\n \"\"\" auxiliary method that creates CONANINFO and manifest in\n the package_folder\n \"\"\"\n logger.debug(\"Creating config files to %s\" % package_folder)\n if copy_info:\n try:\n shutil.copy(os.path.join(install_folder, CONANINFO), package_folder)\n except IOError:\n raise ConanException(\"%s does not exist inside of your %s folder. \"\n \"Try to re-build it again to solve it.\"\n % (CONANINFO, install_folder))\n else:\n save(os.path.join(package_folder, CONANINFO), conanfile.info.dumps())\n\n # Create the digest for the package\n digest = FileTreeManifest.create(package_folder)\n save(os.path.join(package_folder, CONAN_MANIFEST), str(digest))\n", "path": "conans/client/packager.py"}]}
| 1,614 | 110 |
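The patch in the row above adds a single assignment, `conanfile.install_folder = install_folder`, inside `create_package`, so recipes that read `self.install_folder` during a local `conan package .` no longer raise `AttributeError`. Below is a minimal sketch of that pattern; `FolderAwareRecipe` and the folder paths are illustrative stand-ins, not Conan's real classes.

```python
# Sketch of the fix pattern: assign every folder attribute on the recipe object
# before calling package(), mirroring the line added by the patch above.
class FolderAwareRecipe:
    def package(self):
        # A recipe like the one in the issue can now read all three folders.
        print("source_folder:", self.source_folder)
        print("build_folder:", self.build_folder)
        print("install_folder:", self.install_folder)


def run_package(recipe, source_folder, build_folder, install_folder, package_folder):
    recipe.package_folder = package_folder
    recipe.source_folder = source_folder
    recipe.install_folder = install_folder  # the assignment the patch adds
    recipe.build_folder = build_folder
    recipe.package()


run_package(FolderAwareRecipe(), "/tmp/src", "/tmp/build", "/tmp/build", "/tmp/pkg")
```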
gh_patches_debug_15615
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-2832
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hardcoded language ISO in instance info
This line of code
https://github.com/bookwyrm-social/bookwyrm/blob/290b74039297349693f4f139fa58659a19d1e1ac/bookwyrm/views/wellknown.py#L113
needs to be changed so that the actual language of the instance is shown. My instance is set to German, but it tells the world that it is in English.
The problem I have here (and why I have not just solved it right now) is that the language is represented by an ISO 639-1 two-letter code, and I do not see that we have it here. The question is: do we need four-letter codes in `.env`/`settings.py`, or can we somehow derive the two-letter code from this setting?
Anyhow, I am too confused to do it atm.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/wellknown.py`
Content:
```
1 """ responds to various requests to /.well-know """
2
3 from dateutil.relativedelta import relativedelta
4 from django.http import HttpResponseNotFound
5 from django.http import JsonResponse
6 from django.shortcuts import get_object_or_404
7 from django.template.response import TemplateResponse
8 from django.utils import timezone
9 from django.views.decorators.http import require_GET
10
11 from bookwyrm import models
12 from bookwyrm.settings import DOMAIN, VERSION
13
14
15 @require_GET
16 def webfinger(request):
17 """allow other servers to ask about a user"""
18 resource = request.GET.get("resource")
19 if not resource or not resource.startswith("acct:"):
20 return HttpResponseNotFound()
21
22 username = resource.replace("acct:", "")
23 user = get_object_or_404(models.User, username__iexact=username)
24
25 return JsonResponse(
26 {
27 "subject": f"acct:{user.username}",
28 "links": [
29 {
30 "rel": "self",
31 "type": "application/activity+json",
32 "href": user.remote_id,
33 },
34 {
35 "rel": "http://ostatus.org/schema/1.0/subscribe",
36 "template": f"https://{DOMAIN}/ostatus_subscribe?acct={{uri}}",
37 },
38 ],
39 }
40 )
41
42
43 @require_GET
44 def nodeinfo_pointer(_):
45 """direct servers to nodeinfo"""
46 return JsonResponse(
47 {
48 "links": [
49 {
50 "rel": "http://nodeinfo.diaspora.software/ns/schema/2.0",
51 "href": f"https://{DOMAIN}/nodeinfo/2.0",
52 }
53 ]
54 }
55 )
56
57
58 @require_GET
59 def nodeinfo(_):
60 """basic info about the server"""
61 status_count = models.Status.objects.filter(user__local=True, deleted=False).count()
62 user_count = models.User.objects.filter(is_active=True, local=True).count()
63
64 month_ago = timezone.now() - relativedelta(months=1)
65 last_month_count = models.User.objects.filter(
66 is_active=True, local=True, last_active_date__gt=month_ago
67 ).count()
68
69 six_months_ago = timezone.now() - relativedelta(months=6)
70 six_month_count = models.User.objects.filter(
71 is_active=True, local=True, last_active_date__gt=six_months_ago
72 ).count()
73
74 site = models.SiteSettings.get()
75 return JsonResponse(
76 {
77 "version": "2.0",
78 "software": {"name": "bookwyrm", "version": VERSION},
79 "protocols": ["activitypub"],
80 "usage": {
81 "users": {
82 "total": user_count,
83 "activeMonth": last_month_count,
84 "activeHalfyear": six_month_count,
85 },
86 "localPosts": status_count,
87 },
88 "openRegistrations": site.allow_registration,
89 }
90 )
91
92
93 @require_GET
94 def instance_info(_):
95 """let's talk about your cool unique instance"""
96 user_count = models.User.objects.filter(is_active=True, local=True).count()
97 status_count = models.Status.objects.filter(user__local=True, deleted=False).count()
98
99 site = models.SiteSettings.get()
100 logo = site.logo_url
101 return JsonResponse(
102 {
103 "uri": DOMAIN,
104 "title": site.name,
105 "short_description": site.instance_short_description,
106 "description": site.instance_description,
107 "version": VERSION,
108 "stats": {
109 "user_count": user_count,
110 "status_count": status_count,
111 },
112 "thumbnail": logo,
113 "languages": ["en"],
114 "registrations": site.allow_registration,
115 "approval_required": not site.allow_registration
116 and site.allow_invite_requests,
117 "email": site.admin_email,
118 }
119 )
120
121
122 @require_GET
123 def peers(_):
124 """list of federated servers this instance connects with"""
125 names = models.FederatedServer.objects.filter(status="federated").values_list(
126 "server_name", flat=True
127 )
128 return JsonResponse(list(names), safe=False)
129
130
131 @require_GET
132 def host_meta(request):
133 """meta of the host"""
134 return TemplateResponse(request, "host_meta.xml", {"DOMAIN": DOMAIN})
135
136
137 @require_GET
138 def opensearch(request):
139 """Open Search xml spec"""
140 site = models.SiteSettings.get()
141 image = site.favicon_url
142 return TemplateResponse(
143 request, "opensearch.xml", {"image": image, "DOMAIN": DOMAIN}
144 )
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/views/wellknown.py b/bookwyrm/views/wellknown.py
--- a/bookwyrm/views/wellknown.py
+++ b/bookwyrm/views/wellknown.py
@@ -9,7 +9,7 @@
from django.views.decorators.http import require_GET
from bookwyrm import models
-from bookwyrm.settings import DOMAIN, VERSION
+from bookwyrm.settings import DOMAIN, VERSION, LANGUAGE_CODE
@require_GET
@@ -110,7 +110,7 @@
"status_count": status_count,
},
"thumbnail": logo,
- "languages": ["en"],
+ "languages": [LANGUAGE_CODE[:2]],
"registrations": site.allow_registration,
"approval_required": not site.allow_registration
and site.allow_invite_requests,
|
{"golden_diff": "diff --git a/bookwyrm/views/wellknown.py b/bookwyrm/views/wellknown.py\n--- a/bookwyrm/views/wellknown.py\n+++ b/bookwyrm/views/wellknown.py\n@@ -9,7 +9,7 @@\n from django.views.decorators.http import require_GET\n \n from bookwyrm import models\n-from bookwyrm.settings import DOMAIN, VERSION\n+from bookwyrm.settings import DOMAIN, VERSION, LANGUAGE_CODE\n \n \n @require_GET\n@@ -110,7 +110,7 @@\n \"status_count\": status_count,\n },\n \"thumbnail\": logo,\n- \"languages\": [\"en\"],\n+ \"languages\": [LANGUAGE_CODE[:2]],\n \"registrations\": site.allow_registration,\n \"approval_required\": not site.allow_registration\n and site.allow_invite_requests,\n", "issue": "Hardcoded language ISO in instance info\nThis line of code \r\n\r\nhttps://github.com/bookwyrm-social/bookwyrm/blob/290b74039297349693f4f139fa58659a19d1e1ac/bookwyrm/views/wellknown.py#L113\r\n\r\nneeds to be changed that the actual language of the instance is shown. My instance is set to German but it tells the world that it is in English.\r\n\r\nThe problem I have here (and why I just not solved it right now) is that the language is represented by a ISO 639-1 two-letter code and I do not see that we have it here. The question is: Do we need 4 letter codes in `.env`/`settings.py`or can we somehow derive from this setting and get the two letter code? \r\n\r\nAnyhow, I am too confused to do it atm.\n", "before_files": [{"content": "\"\"\" responds to various requests to /.well-know \"\"\"\n\nfrom dateutil.relativedelta import relativedelta\nfrom django.http import HttpResponseNotFound\nfrom django.http import JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.views.decorators.http import require_GET\n\nfrom bookwyrm import models\nfrom bookwyrm.settings import DOMAIN, VERSION\n\n\n@require_GET\ndef webfinger(request):\n \"\"\"allow other servers to ask about a user\"\"\"\n resource = request.GET.get(\"resource\")\n if not resource or not resource.startswith(\"acct:\"):\n return HttpResponseNotFound()\n\n username = resource.replace(\"acct:\", \"\")\n user = get_object_or_404(models.User, username__iexact=username)\n\n return JsonResponse(\n {\n \"subject\": f\"acct:{user.username}\",\n \"links\": [\n {\n \"rel\": \"self\",\n \"type\": \"application/activity+json\",\n \"href\": user.remote_id,\n },\n {\n \"rel\": \"http://ostatus.org/schema/1.0/subscribe\",\n \"template\": f\"https://{DOMAIN}/ostatus_subscribe?acct={{uri}}\",\n },\n ],\n }\n )\n\n\n@require_GET\ndef nodeinfo_pointer(_):\n \"\"\"direct servers to nodeinfo\"\"\"\n return JsonResponse(\n {\n \"links\": [\n {\n \"rel\": \"http://nodeinfo.diaspora.software/ns/schema/2.0\",\n \"href\": f\"https://{DOMAIN}/nodeinfo/2.0\",\n }\n ]\n }\n )\n\n\n@require_GET\ndef nodeinfo(_):\n \"\"\"basic info about the server\"\"\"\n status_count = models.Status.objects.filter(user__local=True, deleted=False).count()\n user_count = models.User.objects.filter(is_active=True, local=True).count()\n\n month_ago = timezone.now() - relativedelta(months=1)\n last_month_count = models.User.objects.filter(\n is_active=True, local=True, last_active_date__gt=month_ago\n ).count()\n\n six_months_ago = timezone.now() - relativedelta(months=6)\n six_month_count = models.User.objects.filter(\n is_active=True, local=True, last_active_date__gt=six_months_ago\n ).count()\n\n site = models.SiteSettings.get()\n return JsonResponse(\n {\n \"version\": \"2.0\",\n \"software\": 
{\"name\": \"bookwyrm\", \"version\": VERSION},\n \"protocols\": [\"activitypub\"],\n \"usage\": {\n \"users\": {\n \"total\": user_count,\n \"activeMonth\": last_month_count,\n \"activeHalfyear\": six_month_count,\n },\n \"localPosts\": status_count,\n },\n \"openRegistrations\": site.allow_registration,\n }\n )\n\n\n@require_GET\ndef instance_info(_):\n \"\"\"let's talk about your cool unique instance\"\"\"\n user_count = models.User.objects.filter(is_active=True, local=True).count()\n status_count = models.Status.objects.filter(user__local=True, deleted=False).count()\n\n site = models.SiteSettings.get()\n logo = site.logo_url\n return JsonResponse(\n {\n \"uri\": DOMAIN,\n \"title\": site.name,\n \"short_description\": site.instance_short_description,\n \"description\": site.instance_description,\n \"version\": VERSION,\n \"stats\": {\n \"user_count\": user_count,\n \"status_count\": status_count,\n },\n \"thumbnail\": logo,\n \"languages\": [\"en\"],\n \"registrations\": site.allow_registration,\n \"approval_required\": not site.allow_registration\n and site.allow_invite_requests,\n \"email\": site.admin_email,\n }\n )\n\n\n@require_GET\ndef peers(_):\n \"\"\"list of federated servers this instance connects with\"\"\"\n names = models.FederatedServer.objects.filter(status=\"federated\").values_list(\n \"server_name\", flat=True\n )\n return JsonResponse(list(names), safe=False)\n\n\n@require_GET\ndef host_meta(request):\n \"\"\"meta of the host\"\"\"\n return TemplateResponse(request, \"host_meta.xml\", {\"DOMAIN\": DOMAIN})\n\n\n@require_GET\ndef opensearch(request):\n \"\"\"Open Search xml spec\"\"\"\n site = models.SiteSettings.get()\n image = site.favicon_url\n return TemplateResponse(\n request, \"opensearch.xml\", {\"image\": image, \"DOMAIN\": DOMAIN}\n )\n", "path": "bookwyrm/views/wellknown.py"}], "after_files": [{"content": "\"\"\" responds to various requests to /.well-know \"\"\"\n\nfrom dateutil.relativedelta import relativedelta\nfrom django.http import HttpResponseNotFound\nfrom django.http import JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.views.decorators.http import require_GET\n\nfrom bookwyrm import models\nfrom bookwyrm.settings import DOMAIN, VERSION, LANGUAGE_CODE\n\n\n@require_GET\ndef webfinger(request):\n \"\"\"allow other servers to ask about a user\"\"\"\n resource = request.GET.get(\"resource\")\n if not resource or not resource.startswith(\"acct:\"):\n return HttpResponseNotFound()\n\n username = resource.replace(\"acct:\", \"\")\n user = get_object_or_404(models.User, username__iexact=username)\n\n return JsonResponse(\n {\n \"subject\": f\"acct:{user.username}\",\n \"links\": [\n {\n \"rel\": \"self\",\n \"type\": \"application/activity+json\",\n \"href\": user.remote_id,\n },\n {\n \"rel\": \"http://ostatus.org/schema/1.0/subscribe\",\n \"template\": f\"https://{DOMAIN}/ostatus_subscribe?acct={{uri}}\",\n },\n ],\n }\n )\n\n\n@require_GET\ndef nodeinfo_pointer(_):\n \"\"\"direct servers to nodeinfo\"\"\"\n return JsonResponse(\n {\n \"links\": [\n {\n \"rel\": \"http://nodeinfo.diaspora.software/ns/schema/2.0\",\n \"href\": f\"https://{DOMAIN}/nodeinfo/2.0\",\n }\n ]\n }\n )\n\n\n@require_GET\ndef nodeinfo(_):\n \"\"\"basic info about the server\"\"\"\n status_count = models.Status.objects.filter(user__local=True, deleted=False).count()\n user_count = models.User.objects.filter(is_active=True, local=True).count()\n\n month_ago = 
timezone.now() - relativedelta(months=1)\n last_month_count = models.User.objects.filter(\n is_active=True, local=True, last_active_date__gt=month_ago\n ).count()\n\n six_months_ago = timezone.now() - relativedelta(months=6)\n six_month_count = models.User.objects.filter(\n is_active=True, local=True, last_active_date__gt=six_months_ago\n ).count()\n\n site = models.SiteSettings.get()\n return JsonResponse(\n {\n \"version\": \"2.0\",\n \"software\": {\"name\": \"bookwyrm\", \"version\": VERSION},\n \"protocols\": [\"activitypub\"],\n \"usage\": {\n \"users\": {\n \"total\": user_count,\n \"activeMonth\": last_month_count,\n \"activeHalfyear\": six_month_count,\n },\n \"localPosts\": status_count,\n },\n \"openRegistrations\": site.allow_registration,\n }\n )\n\n\n@require_GET\ndef instance_info(_):\n \"\"\"let's talk about your cool unique instance\"\"\"\n user_count = models.User.objects.filter(is_active=True, local=True).count()\n status_count = models.Status.objects.filter(user__local=True, deleted=False).count()\n\n site = models.SiteSettings.get()\n logo = site.logo_url\n return JsonResponse(\n {\n \"uri\": DOMAIN,\n \"title\": site.name,\n \"short_description\": site.instance_short_description,\n \"description\": site.instance_description,\n \"version\": VERSION,\n \"stats\": {\n \"user_count\": user_count,\n \"status_count\": status_count,\n },\n \"thumbnail\": logo,\n \"languages\": [LANGUAGE_CODE[:2]],\n \"registrations\": site.allow_registration,\n \"approval_required\": not site.allow_registration\n and site.allow_invite_requests,\n \"email\": site.admin_email,\n }\n )\n\n\n@require_GET\ndef peers(_):\n \"\"\"list of federated servers this instance connects with\"\"\"\n names = models.FederatedServer.objects.filter(status=\"federated\").values_list(\n \"server_name\", flat=True\n )\n return JsonResponse(list(names), safe=False)\n\n\n@require_GET\ndef host_meta(request):\n \"\"\"meta of the host\"\"\"\n return TemplateResponse(request, \"host_meta.xml\", {\"DOMAIN\": DOMAIN})\n\n\n@require_GET\ndef opensearch(request):\n \"\"\"Open Search xml spec\"\"\"\n site = models.SiteSettings.get()\n image = site.favicon_url\n return TemplateResponse(\n request, \"opensearch.xml\", {\"image\": image, \"DOMAIN\": DOMAIN}\n )\n", "path": "bookwyrm/views/wellknown.py"}]}
| 1,745 | 174 |
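The golden diff above answers the question posed in the issue: Django's `LANGUAGE_CODE` setting (e.g. "de-de" or "en-us") begins with the ISO 639-1 code, so slicing its first two characters is enough for the instance-info endpoint. A small sketch of that derivation follows; the sample locale strings are illustrative, not taken from a real instance.

```python
# Derive the two-letter ISO 639-1 code from a Django-style LANGUAGE_CODE value,
# as the patched wellknown.py does with LANGUAGE_CODE[:2].
def instance_languages(language_code):
    return [language_code[:2]]


assert instance_languages("de-de") == ["de"]
assert instance_languages("en-us") == ["en"]
assert instance_languages("fr") == ["fr"]
```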
gh_patches_debug_19455
|
rasdani/github-patches
|
git_diff
|
python__peps-2658
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Links on topic pages are broken
If you go to a page like https://peps.python.org/topic/packaging/ and try to click on any of the PEPs, you get a github 404, and the URL is ``https://peps.python.org/topic/pep-0582``.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pep_sphinx_extensions/__init__.py`
Content:
```
1 """Sphinx extensions for performant PEP processing"""
2
3 from __future__ import annotations
4
5 from typing import TYPE_CHECKING
6
7 from docutils.writers.html5_polyglot import HTMLTranslator
8 from sphinx import environment
9 from sphinx import project
10
11 from pep_sphinx_extensions.pep_processor.html import pep_html_builder
12 from pep_sphinx_extensions.pep_processor.html import pep_html_translator
13 from pep_sphinx_extensions.pep_processor.parsing import pep_parser
14 from pep_sphinx_extensions.pep_processor.parsing import pep_role
15 from pep_sphinx_extensions.pep_processor.transforms import pep_references
16 from pep_sphinx_extensions.pep_zero_generator.pep_index_generator import create_pep_zero
17
18 if TYPE_CHECKING:
19 from sphinx.application import Sphinx
20 from sphinx.config import Config
21
22
23 def find_files(self: environment.BuildEnvironment, config: Config, _b) -> None:
24 """Find all pep source files."""
25 import fnmatch
26 from pathlib import Path
27
28 root = Path(self.project.srcdir).absolute()
29 self.project.docnames = set()
30 for pattern in config.include_patterns:
31 for path in root.glob(pattern):
32 filename = str(path.relative_to(root))
33 if any(fnmatch.fnmatch(filename, pattern) for pattern in config.exclude_patterns):
34 continue
35
36 doc_name = self.project.path2doc(filename)
37 if not doc_name:
38 continue
39
40 if doc_name not in self.project.docnames:
41 self.project.docnames.add(doc_name)
42 continue
43
44 other_files = [str(f.relative_to(root)) for f in root.glob(f"{doc_name}.*")]
45 project.logger.warning(
46 f'multiple files found for the document "{doc_name}": {other_files!r}\n'
47 f'Use {self.doc2path(doc_name)!r} for the build.', once=True)
48
49
50 environment.BuildEnvironment.find_files = find_files
51
52
53 def _depart_maths():
54 pass # No-op callable for the type checker
55
56
57 def _update_config_for_builder(app: Sphinx) -> None:
58 app.env.document_ids = {} # For PEPReferenceRoleTitleText
59 if app.builder.name == "dirhtml":
60 app.env.settings["pep_url"] = "../pep-{:0>4}"
61
62 # internal_builder exists if Sphinx is run by build.py
63 if "internal_builder" not in app.tags:
64 app.connect("build-finished", _post_build) # Post-build tasks
65
66
67 def _post_build(app: Sphinx, exception: Exception | None) -> None:
68 from pathlib import Path
69
70 from build import create_index_file
71
72 if exception is not None:
73 return
74 create_index_file(Path(app.outdir), app.builder.name)
75
76
77 def setup(app: Sphinx) -> dict[str, bool]:
78 """Initialize Sphinx extension."""
79
80 environment.default_settings["pep_url"] = "pep-{:0>4}.html"
81 environment.default_settings["halt_level"] = 2 # Fail on Docutils warning
82
83 # Register plugin logic
84 app.add_builder(pep_html_builder.FileBuilder, override=True)
85 app.add_builder(pep_html_builder.DirectoryBuilder, override=True)
86
87 app.add_source_parser(pep_parser.PEPParser) # Add PEP transforms
88
89 app.set_translator("html", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (html builder)
90 app.set_translator("dirhtml", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (dirhtml builder)
91
92 app.add_role("pep", pep_role.PEPRole(), override=True) # Transform PEP references to links
93
94 app.add_post_transform(pep_references.PEPReferenceRoleTitleText)
95
96 # Register event callbacks
97 app.connect("builder-inited", _update_config_for_builder) # Update configuration values for builder used
98 app.connect("env-before-read-docs", create_pep_zero) # PEP 0 hook
99
100 # Mathematics rendering
101 inline_maths = HTMLTranslator.visit_math, _depart_maths
102 block_maths = HTMLTranslator.visit_math_block, _depart_maths
103 app.add_html_math_renderer("maths_to_html", inline_maths, block_maths) # Render maths to HTML
104
105 # Parallel safety: https://www.sphinx-doc.org/en/master/extdev/index.html#extension-metadata
106 return {"parallel_read_safe": True, "parallel_write_safe": True}
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pep_sphinx_extensions/__init__.py b/pep_sphinx_extensions/__init__.py
--- a/pep_sphinx_extensions/__init__.py
+++ b/pep_sphinx_extensions/__init__.py
@@ -57,7 +57,7 @@
def _update_config_for_builder(app: Sphinx) -> None:
app.env.document_ids = {} # For PEPReferenceRoleTitleText
if app.builder.name == "dirhtml":
- app.env.settings["pep_url"] = "../pep-{:0>4}"
+ app.env.settings["pep_url"] = "/pep-{:0>4}"
# internal_builder exists if Sphinx is run by build.py
if "internal_builder" not in app.tags:
@@ -77,7 +77,7 @@
def setup(app: Sphinx) -> dict[str, bool]:
"""Initialize Sphinx extension."""
- environment.default_settings["pep_url"] = "pep-{:0>4}.html"
+ environment.default_settings["pep_url"] = "/pep-{:0>4}.html"
environment.default_settings["halt_level"] = 2 # Fail on Docutils warning
# Register plugin logic
|
{"golden_diff": "diff --git a/pep_sphinx_extensions/__init__.py b/pep_sphinx_extensions/__init__.py\n--- a/pep_sphinx_extensions/__init__.py\n+++ b/pep_sphinx_extensions/__init__.py\n@@ -57,7 +57,7 @@\n def _update_config_for_builder(app: Sphinx) -> None:\n app.env.document_ids = {} # For PEPReferenceRoleTitleText\n if app.builder.name == \"dirhtml\":\n- app.env.settings[\"pep_url\"] = \"../pep-{:0>4}\"\n+ app.env.settings[\"pep_url\"] = \"/pep-{:0>4}\"\n \n # internal_builder exists if Sphinx is run by build.py\n if \"internal_builder\" not in app.tags:\n@@ -77,7 +77,7 @@\n def setup(app: Sphinx) -> dict[str, bool]:\n \"\"\"Initialize Sphinx extension.\"\"\"\n \n- environment.default_settings[\"pep_url\"] = \"pep-{:0>4}.html\"\n+ environment.default_settings[\"pep_url\"] = \"/pep-{:0>4}.html\"\n environment.default_settings[\"halt_level\"] = 2 # Fail on Docutils warning\n \n # Register plugin logic\n", "issue": "Links on topic pages are broken\nIf you go to a page like https://peps.python.org/topic/packaging/ and try to click on any of the PEPs, you get a github 404, and the URL is ``https://peps.python.org/topic/pep-0582``. \n", "before_files": [{"content": "\"\"\"Sphinx extensions for performant PEP processing\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom docutils.writers.html5_polyglot import HTMLTranslator\nfrom sphinx import environment\nfrom sphinx import project\n\nfrom pep_sphinx_extensions.pep_processor.html import pep_html_builder\nfrom pep_sphinx_extensions.pep_processor.html import pep_html_translator\nfrom pep_sphinx_extensions.pep_processor.parsing import pep_parser\nfrom pep_sphinx_extensions.pep_processor.parsing import pep_role\nfrom pep_sphinx_extensions.pep_processor.transforms import pep_references\nfrom pep_sphinx_extensions.pep_zero_generator.pep_index_generator import create_pep_zero\n\nif TYPE_CHECKING:\n from sphinx.application import Sphinx\n from sphinx.config import Config\n\n\ndef find_files(self: environment.BuildEnvironment, config: Config, _b) -> None:\n \"\"\"Find all pep source files.\"\"\"\n import fnmatch\n from pathlib import Path\n\n root = Path(self.project.srcdir).absolute()\n self.project.docnames = set()\n for pattern in config.include_patterns:\n for path in root.glob(pattern):\n filename = str(path.relative_to(root))\n if any(fnmatch.fnmatch(filename, pattern) for pattern in config.exclude_patterns):\n continue\n\n doc_name = self.project.path2doc(filename)\n if not doc_name:\n continue\n\n if doc_name not in self.project.docnames:\n self.project.docnames.add(doc_name)\n continue\n\n other_files = [str(f.relative_to(root)) for f in root.glob(f\"{doc_name}.*\")]\n project.logger.warning(\n f'multiple files found for the document \"{doc_name}\": {other_files!r}\\n'\n f'Use {self.doc2path(doc_name)!r} for the build.', once=True)\n\n\nenvironment.BuildEnvironment.find_files = find_files\n\n\ndef _depart_maths():\n pass # No-op callable for the type checker\n\n\ndef _update_config_for_builder(app: Sphinx) -> None:\n app.env.document_ids = {} # For PEPReferenceRoleTitleText\n if app.builder.name == \"dirhtml\":\n app.env.settings[\"pep_url\"] = \"../pep-{:0>4}\"\n\n # internal_builder exists if Sphinx is run by build.py\n if \"internal_builder\" not in app.tags:\n app.connect(\"build-finished\", _post_build) # Post-build tasks\n\n\ndef _post_build(app: Sphinx, exception: Exception | None) -> None:\n from pathlib import Path\n\n from build import create_index_file\n\n if exception is not None:\n return\n 
create_index_file(Path(app.outdir), app.builder.name)\n\n\ndef setup(app: Sphinx) -> dict[str, bool]:\n \"\"\"Initialize Sphinx extension.\"\"\"\n\n environment.default_settings[\"pep_url\"] = \"pep-{:0>4}.html\"\n environment.default_settings[\"halt_level\"] = 2 # Fail on Docutils warning\n\n # Register plugin logic\n app.add_builder(pep_html_builder.FileBuilder, override=True)\n app.add_builder(pep_html_builder.DirectoryBuilder, override=True)\n\n app.add_source_parser(pep_parser.PEPParser) # Add PEP transforms\n\n app.set_translator(\"html\", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (html builder)\n app.set_translator(\"dirhtml\", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (dirhtml builder)\n\n app.add_role(\"pep\", pep_role.PEPRole(), override=True) # Transform PEP references to links\n\n app.add_post_transform(pep_references.PEPReferenceRoleTitleText)\n\n # Register event callbacks\n app.connect(\"builder-inited\", _update_config_for_builder) # Update configuration values for builder used\n app.connect(\"env-before-read-docs\", create_pep_zero) # PEP 0 hook\n\n # Mathematics rendering\n inline_maths = HTMLTranslator.visit_math, _depart_maths\n block_maths = HTMLTranslator.visit_math_block, _depart_maths\n app.add_html_math_renderer(\"maths_to_html\", inline_maths, block_maths) # Render maths to HTML\n\n # Parallel safety: https://www.sphinx-doc.org/en/master/extdev/index.html#extension-metadata\n return {\"parallel_read_safe\": True, \"parallel_write_safe\": True}\n", "path": "pep_sphinx_extensions/__init__.py"}], "after_files": [{"content": "\"\"\"Sphinx extensions for performant PEP processing\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom docutils.writers.html5_polyglot import HTMLTranslator\nfrom sphinx import environment\nfrom sphinx import project\n\nfrom pep_sphinx_extensions.pep_processor.html import pep_html_builder\nfrom pep_sphinx_extensions.pep_processor.html import pep_html_translator\nfrom pep_sphinx_extensions.pep_processor.parsing import pep_parser\nfrom pep_sphinx_extensions.pep_processor.parsing import pep_role\nfrom pep_sphinx_extensions.pep_processor.transforms import pep_references\nfrom pep_sphinx_extensions.pep_zero_generator.pep_index_generator import create_pep_zero\n\nif TYPE_CHECKING:\n from sphinx.application import Sphinx\n from sphinx.config import Config\n\n\ndef find_files(self: environment.BuildEnvironment, config: Config, _b) -> None:\n \"\"\"Find all pep source files.\"\"\"\n import fnmatch\n from pathlib import Path\n\n root = Path(self.project.srcdir).absolute()\n self.project.docnames = set()\n for pattern in config.include_patterns:\n for path in root.glob(pattern):\n filename = str(path.relative_to(root))\n if any(fnmatch.fnmatch(filename, pattern) for pattern in config.exclude_patterns):\n continue\n\n doc_name = self.project.path2doc(filename)\n if not doc_name:\n continue\n\n if doc_name not in self.project.docnames:\n self.project.docnames.add(doc_name)\n continue\n\n other_files = [str(f.relative_to(root)) for f in root.glob(f\"{doc_name}.*\")]\n project.logger.warning(\n f'multiple files found for the document \"{doc_name}\": {other_files!r}\\n'\n f'Use {self.doc2path(doc_name)!r} for the build.', once=True)\n\n\nenvironment.BuildEnvironment.find_files = find_files\n\n\ndef _depart_maths():\n pass # No-op callable for the type checker\n\n\ndef _update_config_for_builder(app: Sphinx) -> None:\n app.env.document_ids = {} # For 
PEPReferenceRoleTitleText\n if app.builder.name == \"dirhtml\":\n app.env.settings[\"pep_url\"] = \"/pep-{:0>4}\"\n\n # internal_builder exists if Sphinx is run by build.py\n if \"internal_builder\" not in app.tags:\n app.connect(\"build-finished\", _post_build) # Post-build tasks\n\n\ndef _post_build(app: Sphinx, exception: Exception | None) -> None:\n from pathlib import Path\n\n from build import create_index_file\n\n if exception is not None:\n return\n create_index_file(Path(app.outdir), app.builder.name)\n\n\ndef setup(app: Sphinx) -> dict[str, bool]:\n \"\"\"Initialize Sphinx extension.\"\"\"\n\n environment.default_settings[\"pep_url\"] = \"/pep-{:0>4}.html\"\n environment.default_settings[\"halt_level\"] = 2 # Fail on Docutils warning\n\n # Register plugin logic\n app.add_builder(pep_html_builder.FileBuilder, override=True)\n app.add_builder(pep_html_builder.DirectoryBuilder, override=True)\n\n app.add_source_parser(pep_parser.PEPParser) # Add PEP transforms\n\n app.set_translator(\"html\", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (html builder)\n app.set_translator(\"dirhtml\", pep_html_translator.PEPTranslator) # Docutils Node Visitor overrides (dirhtml builder)\n\n app.add_role(\"pep\", pep_role.PEPRole(), override=True) # Transform PEP references to links\n\n app.add_post_transform(pep_references.PEPReferenceRoleTitleText)\n\n # Register event callbacks\n app.connect(\"builder-inited\", _update_config_for_builder) # Update configuration values for builder used\n app.connect(\"env-before-read-docs\", create_pep_zero) # PEP 0 hook\n\n # Mathematics rendering\n inline_maths = HTMLTranslator.visit_math, _depart_maths\n block_maths = HTMLTranslator.visit_math_block, _depart_maths\n app.add_html_math_renderer(\"maths_to_html\", inline_maths, block_maths) # Render maths to HTML\n\n # Parallel safety: https://www.sphinx-doc.org/en/master/extdev/index.html#extension-metadata\n return {\"parallel_read_safe\": True, \"parallel_write_safe\": True}\n", "path": "pep_sphinx_extensions/__init__.py"}]}
| 1,500 | 269 |
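The diff above swaps the relative `pep_url` templates for root-relative ones. A short sketch of why that matters, using `urllib.parse.urljoin` to mimic how a browser resolves each href from a topic page; the PEP number is the one quoted in the issue.

```python
# From /topic/packaging/, a "../pep-NNNN" href resolves inside /topic/,
# reproducing the broken URL from the issue; a leading "/" resolves from the site root.
from urllib.parse import urljoin

topic_page = "https://peps.python.org/topic/packaging/"

old_href = "../pep-{:0>4}".format(582)  # previous dirhtml template
new_href = "/pep-{:0>4}".format(582)    # template after the patch

print(urljoin(topic_page, old_href))  # https://peps.python.org/topic/pep-0582  (the 404)
print(urljoin(topic_page, new_href))  # https://peps.python.org/pep-0582
```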
gh_patches_debug_30969
|
rasdani/github-patches
|
git_diff
|
opensearch-project__opensearch-build-1395
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[META] Automate artifact signing with OpensearchSignerClient for linux
### Is your feature request related to a problem? Please describe
Signing currently is a long painstaking process. It needs to be automated to save time when releasing artifacts. We need a tool to be able to sign all artifacts -
- opensearch
- opensearch-dashboards
### Describe the solution you'd like
Use the existing `Signer.py` class to sign the artifacts and generate ".sig" file along with ".asc" files.
### Tasks
- [x] #1382
- [x] #1383
- [x] #1385
### Acceptance Criteria
- [ ] User is able to provide the artifact directory and tool is able to sign all the artifacts in the directory
- [ ] User is able to pass the artifact path and the tool is able to sign the artifact
### Next Steps
Next steps would include to extend this process over for mac and windows
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sign_workflow/signer.py`
Content:
```
1 #!/usr/bin/env python
2
3 # SPDX-License-Identifier: Apache-2.0
4 #
5 # The OpenSearch Contributors require contributions made to
6 # this file be licensed under the Apache-2.0 license or a
7 # compatible open source license.
8
9 import logging
10 import os
11
12 from git.git_repository import GitRepository
13
14 """
15 This class is responsible for signing an artifact using the OpenSearch-signer-client and verifying its signature.
16 The signed artifacts will be found in the same location as the original artifacts.
17 """
18
19
20 class Signer:
21 ACCEPTED_FILE_TYPES = [".zip", ".jar", ".war", ".pom", ".module", ".tar.gz"]
22
23 def __init__(self):
24 self.git_repo = GitRepository(self.get_repo_url(), "HEAD", working_subdirectory="src")
25 self.git_repo.execute("./bootstrap")
26 self.git_repo.execute("rm config.cfg")
27
28 def sign_artifact(self, artifact, basepath, signature_type):
29 self.generate_signature_and_verify(artifact, basepath, signature_type)
30
31 def sign_artifacts(self, artifacts, basepath, signature_type):
32 for artifact in artifacts:
33 if not self.is_valid_file_type(artifact):
34 logging.info(f"Skipping signing of file ${artifact}")
35 continue
36 self.generate_signature_and_verify(artifact, basepath, signature_type)
37
38 def generate_signature_and_verify(self, artifact, basepath, signature_type):
39 location = os.path.join(basepath, artifact)
40 self.sign(location, signature_type)
41 self.verify(location + signature_type)
42
43 def is_valid_file_type(self, file_name):
44 return any(
45 file_name.endswith(x) for x in Signer.ACCEPTED_FILE_TYPES
46 )
47
48 def get_repo_url(self):
49 if "GITHUB_TOKEN" in os.environ:
50 return "https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git"
51 return "https://github.com/opensearch-project/opensearch-signer-client.git"
52
53 def sign(self, filename, signature_type):
54 signature_file = filename + signature_type
55 signing_cmd = [
56 "./opensearch-signer-client",
57 "-i",
58 filename,
59 "-o",
60 signature_file,
61 "-p",
62 "pgp",
63 ]
64 self.git_repo.execute(" ".join(signing_cmd))
65
66 def verify(self, filename):
67 verify_cmd = ["gpg", "--verify-files", filename]
68 self.git_repo.execute(" ".join(verify_cmd))
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sign_workflow/signer.py b/src/sign_workflow/signer.py
--- a/src/sign_workflow/signer.py
+++ b/src/sign_workflow/signer.py
@@ -26,12 +26,15 @@
self.git_repo.execute("rm config.cfg")
def sign_artifact(self, artifact, basepath, signature_type):
+ if not self.is_valid_file_type(artifact):
+ logging.info(f"Skipping signing of file {artifact}")
+ return
self.generate_signature_and_verify(artifact, basepath, signature_type)
def sign_artifacts(self, artifacts, basepath, signature_type):
for artifact in artifacts:
if not self.is_valid_file_type(artifact):
- logging.info(f"Skipping signing of file ${artifact}")
+ logging.info(f"Skipping signing of file {artifact}")
continue
self.generate_signature_and_verify(artifact, basepath, signature_type)
@@ -50,8 +53,14 @@
return "https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git"
return "https://github.com/opensearch-project/opensearch-signer-client.git"
+ def __remove_existing_signature__(self, signature_file):
+ if os.path.exists(signature_file):
+ logging.warning(f"Removing existing signature file {signature_file}")
+ os.remove(signature_file)
+
def sign(self, filename, signature_type):
signature_file = filename + signature_type
+ self.__remove_existing_signature__(signature_file)
signing_cmd = [
"./opensearch-signer-client",
"-i",
|
{"golden_diff": "diff --git a/src/sign_workflow/signer.py b/src/sign_workflow/signer.py\n--- a/src/sign_workflow/signer.py\n+++ b/src/sign_workflow/signer.py\n@@ -26,12 +26,15 @@\n self.git_repo.execute(\"rm config.cfg\")\n \n def sign_artifact(self, artifact, basepath, signature_type):\n+ if not self.is_valid_file_type(artifact):\n+ logging.info(f\"Skipping signing of file {artifact}\")\n+ return\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n \n def sign_artifacts(self, artifacts, basepath, signature_type):\n for artifact in artifacts:\n if not self.is_valid_file_type(artifact):\n- logging.info(f\"Skipping signing of file ${artifact}\")\n+ logging.info(f\"Skipping signing of file {artifact}\")\n continue\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n \n@@ -50,8 +53,14 @@\n return \"https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git\"\n return \"https://github.com/opensearch-project/opensearch-signer-client.git\"\n \n+ def __remove_existing_signature__(self, signature_file):\n+ if os.path.exists(signature_file):\n+ logging.warning(f\"Removing existing signature file {signature_file}\")\n+ os.remove(signature_file)\n+\n def sign(self, filename, signature_type):\n signature_file = filename + signature_type\n+ self.__remove_existing_signature__(signature_file)\n signing_cmd = [\n \"./opensearch-signer-client\",\n \"-i\",\n", "issue": "[META] Automate artifact signing with OpensearchSignerClient for linux\n### Is your feature request related to a problem? Please describe\r\n\r\nSigning currently is a long painstaking process. It needs to be automated to save time when releasing artifacts. We need a tool to be able to sign all artifacts - \r\n- opensearch\r\n- opensearch-dashboards\r\n\r\n### Describe the solution you'd like\r\n\r\nUse the existing `Signer.py` class to sign the artifacts and generate \".sig\" file along with \".asc\" files.\r\n\r\n### Tasks\r\n- [x] #1382\r\n- [x] #1383\r\n- [x] #1385\r\n\r\n### Acceptance Criteria\r\n- [ ] User is able to provide the artifact directory and tool is able to sign all the artifacts in the directory\r\n- [ ] User is able to pass the artifact path and the tool is able to sign the artifact\r\n\r\n### Next Steps\r\nNext steps would include to extend this process over for mac and windows\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\nimport os\n\nfrom git.git_repository import GitRepository\n\n\"\"\"\nThis class is responsible for signing an artifact using the OpenSearch-signer-client and verifying its signature.\nThe signed artifacts will be found in the same location as the original artifacts.\n\"\"\"\n\n\nclass Signer:\n ACCEPTED_FILE_TYPES = [\".zip\", \".jar\", \".war\", \".pom\", \".module\", \".tar.gz\"]\n\n def __init__(self):\n self.git_repo = GitRepository(self.get_repo_url(), \"HEAD\", working_subdirectory=\"src\")\n self.git_repo.execute(\"./bootstrap\")\n self.git_repo.execute(\"rm config.cfg\")\n\n def sign_artifact(self, artifact, basepath, signature_type):\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n\n def sign_artifacts(self, artifacts, basepath, signature_type):\n for artifact in artifacts:\n if not self.is_valid_file_type(artifact):\n logging.info(f\"Skipping signing of file ${artifact}\")\n 
continue\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n\n def generate_signature_and_verify(self, artifact, basepath, signature_type):\n location = os.path.join(basepath, artifact)\n self.sign(location, signature_type)\n self.verify(location + signature_type)\n\n def is_valid_file_type(self, file_name):\n return any(\n file_name.endswith(x) for x in Signer.ACCEPTED_FILE_TYPES\n )\n\n def get_repo_url(self):\n if \"GITHUB_TOKEN\" in os.environ:\n return \"https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git\"\n return \"https://github.com/opensearch-project/opensearch-signer-client.git\"\n\n def sign(self, filename, signature_type):\n signature_file = filename + signature_type\n signing_cmd = [\n \"./opensearch-signer-client\",\n \"-i\",\n filename,\n \"-o\",\n signature_file,\n \"-p\",\n \"pgp\",\n ]\n self.git_repo.execute(\" \".join(signing_cmd))\n\n def verify(self, filename):\n verify_cmd = [\"gpg\", \"--verify-files\", filename]\n self.git_repo.execute(\" \".join(verify_cmd))\n", "path": "src/sign_workflow/signer.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\nimport os\n\nfrom git.git_repository import GitRepository\n\n\"\"\"\nThis class is responsible for signing an artifact using the OpenSearch-signer-client and verifying its signature.\nThe signed artifacts will be found in the same location as the original artifacts.\n\"\"\"\n\n\nclass Signer:\n ACCEPTED_FILE_TYPES = [\".zip\", \".jar\", \".war\", \".pom\", \".module\", \".tar.gz\"]\n\n def __init__(self):\n self.git_repo = GitRepository(self.get_repo_url(), \"HEAD\", working_subdirectory=\"src\")\n self.git_repo.execute(\"./bootstrap\")\n self.git_repo.execute(\"rm config.cfg\")\n\n def sign_artifact(self, artifact, basepath, signature_type):\n if not self.is_valid_file_type(artifact):\n logging.info(f\"Skipping signing of file {artifact}\")\n return\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n\n def sign_artifacts(self, artifacts, basepath, signature_type):\n for artifact in artifacts:\n if not self.is_valid_file_type(artifact):\n logging.info(f\"Skipping signing of file {artifact}\")\n continue\n self.generate_signature_and_verify(artifact, basepath, signature_type)\n\n def generate_signature_and_verify(self, artifact, basepath, signature_type):\n location = os.path.join(basepath, artifact)\n self.sign(location, signature_type)\n self.verify(location + signature_type)\n\n def is_valid_file_type(self, file_name):\n return any(\n file_name.endswith(x) for x in Signer.ACCEPTED_FILE_TYPES\n )\n\n def get_repo_url(self):\n if \"GITHUB_TOKEN\" in os.environ:\n return \"https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git\"\n return \"https://github.com/opensearch-project/opensearch-signer-client.git\"\n\n def __remove_existing_signature__(self, signature_file):\n if os.path.exists(signature_file):\n logging.warning(f\"Removing existing signature file {signature_file}\")\n os.remove(signature_file)\n\n def sign(self, filename, signature_type):\n signature_file = filename + signature_type\n self.__remove_existing_signature__(signature_file)\n signing_cmd = [\n \"./opensearch-signer-client\",\n \"-i\",\n filename,\n \"-o\",\n signature_file,\n \"-p\",\n \"pgp\",\n ]\n self.git_repo.execute(\" 
\".join(signing_cmd))\n\n def verify(self, filename):\n verify_cmd = [\"gpg\", \"--verify-files\", filename]\n self.git_repo.execute(\" \".join(verify_cmd))\n", "path": "src/sign_workflow/signer.py"}]}
| 1,126 | 346 |
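Besides gating `sign_artifact` on the accepted file types, the diff above adds a guard that deletes any stale signature file before re-signing. A minimal sketch of that guard in isolation; the tarball path is a placeholder, and the actual signer-client invocation is elided to a comment.

```python
# Remove a leftover .sig/.asc file before generating a new signature,
# mirroring __remove_existing_signature__ from the patched Signer class.
import logging
import os


def remove_existing_signature(signature_file):
    if os.path.exists(signature_file):
        logging.warning("Removing existing signature file %s", signature_file)
        os.remove(signature_file)


def sign(filename, signature_type):
    signature_file = filename + signature_type
    remove_existing_signature(signature_file)
    # ... call ./opensearch-signer-client -i <filename> -o <signature_file> -p pgp ...


sign("/tmp/opensearch-1.2.0-linux-x64.tar.gz", ".sig")
```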
gh_patches_debug_40057
|
rasdani/github-patches
|
git_diff
|
larq__larq-97
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support epoch level logging for QuantizedLogger
Currently this only works on a per-batch basis and not with TensorBoard using `update_freq="epoch"`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `larq/callbacks.py`
Content:
```
1 import tensorflow as tf
2 import numpy as np
3
4
5 class QuantizationLogger(tf.keras.callbacks.Callback):
6 """Callback that adds quantization specific metrics.
7
8 !!! note ""
9 In order for metrics to be picked up by TensorBoard this callback needs to be
10 applied before the TensorBoard callback and use the same update frequency.
11
12 !!! example
13 ```python
14 callbacks = [
15 QuantizationLogger(update_freq=100),
16 tf.keras.callbacks.TensorBoard(update_freq=100),
17 ]
18 model.fit(X_train, Y_train, callbacks=callbacks)
19 ```
20
21 # Metrics
22 - `changed_quantization_ration`: The ration of quantized weights in each layer that
23 changed during the weight update.
24
25 # Arguments
26 update_freq: `'batch'` or integer. When using `'batch'`, computes the metrics after
27 each batch. If using an integer the callback will compute the metrics every
28 `update_freq` batches. Note that computing too frequently can slow down training.
29 """
30
31 def __init__(self, update_freq="batch"):
32 self.previous_weights = {}
33 self.update_freq = update_freq if update_freq != "batch" else 1
34
35 def on_batch_end(self, batch, logs=None):
36 should_log = batch > 0 and (batch + 1) % self.update_freq == 0
37 should_store = (batch + 2) % self.update_freq == 0
38
39 if should_log or should_store:
40 ops = []
41 op_names = []
42 for layer in self.model.layers:
43 if hasattr(layer, "quantized_weights"):
44 for i, weight in enumerate(layer.quantized_weights):
45 ops.append(weight)
46 op_names.append(layer.name if i == 0 else f"{layer.name}_{i}")
47
48 for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):
49 if should_log:
50 logs[f"changed_quantization_ration/{key.replace(':', '_')}"] = 1 - (
51 np.count_nonzero(value == self.previous_weights[key])
52 / value.size
53 )
54 if should_store:
55 self.previous_weights[key] = value
56
57 if should_log and not should_store:
58 # We don't need it in the next batch anymore
59 self.previous_weights = {}
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/larq/callbacks.py b/larq/callbacks.py
--- a/larq/callbacks.py
+++ b/larq/callbacks.py
@@ -11,10 +11,7 @@
!!! example
```python
- callbacks = [
- QuantizationLogger(update_freq=100),
- tf.keras.callbacks.TensorBoard(update_freq=100),
- ]
+ callbacks = [QuantizationLogger(), tf.keras.callbacks.TensorBoard()]
model.fit(X_train, Y_train, callbacks=callbacks)
```
@@ -23,19 +20,18 @@
changed during the weight update.
# Arguments
- update_freq: `'batch'` or integer. When using `'batch'`, computes the metrics after
- each batch. If using an integer the callback will compute the metrics every
- `update_freq` batches. Note that computing too frequently can slow down training.
+ update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`, computes the
+ metrics after each batch. The same applies for `'epoch'`. If using an integer
+ the callback will compute the metrics every `update_freq` batches.
+ Note that computing too frequently can slow down training.
"""
- def __init__(self, update_freq="batch"):
- self.previous_weights = {}
+ def __init__(self, update_freq="epoch"):
+ self.batch_previous_weights = {}
+ self.epoch_previous_weights = {}
self.update_freq = update_freq if update_freq != "batch" else 1
- def on_batch_end(self, batch, logs=None):
- should_log = batch > 0 and (batch + 1) % self.update_freq == 0
- should_store = (batch + 2) % self.update_freq == 0
-
+ def _maybe_log_and_store(self, storage, logs, should_log=True, should_store=True):
if should_log or should_store:
ops = []
op_names = []
@@ -46,14 +42,29 @@
op_names.append(layer.name if i == 0 else f"{layer.name}_{i}")
for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):
+ value = value.astype(np.int8)
if should_log:
logs[f"changed_quantization_ration/{key.replace(':', '_')}"] = 1 - (
- np.count_nonzero(value == self.previous_weights[key])
- / value.size
+ np.count_nonzero(value == storage[key]) / value.size
)
if should_store:
- self.previous_weights[key] = value
+ storage[key] = value
if should_log and not should_store:
# We don't need it in the next batch anymore
- self.previous_weights = {}
+ storage = {}
+
+ def on_batch_end(self, batch, logs=None):
+ if self.update_freq != "epoch":
+ self._maybe_log_and_store(
+ self.batch_previous_weights,
+ logs,
+ should_log=batch > 0 and (batch + 1) % self.update_freq == 0,
+ should_store=(batch + 2) % self.update_freq == 0,
+ )
+
+ def on_train_begin(self, logs=None):
+ self._maybe_log_and_store(self.epoch_previous_weights, logs, should_log=False)
+
+ def on_epoch_end(self, epoch, logs=None):
+ self._maybe_log_and_store(self.epoch_previous_weights, logs)
|
{"golden_diff": "diff --git a/larq/callbacks.py b/larq/callbacks.py\n--- a/larq/callbacks.py\n+++ b/larq/callbacks.py\n@@ -11,10 +11,7 @@\n \n !!! example\n ```python\n- callbacks = [\n- QuantizationLogger(update_freq=100),\n- tf.keras.callbacks.TensorBoard(update_freq=100),\n- ]\n+ callbacks = [QuantizationLogger(), tf.keras.callbacks.TensorBoard()]\n model.fit(X_train, Y_train, callbacks=callbacks)\n ```\n \n@@ -23,19 +20,18 @@\n changed during the weight update.\n \n # Arguments\n- update_freq: `'batch'` or integer. When using `'batch'`, computes the metrics after\n- each batch. If using an integer the callback will compute the metrics every\n- `update_freq` batches. Note that computing too frequently can slow down training.\n+ update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`, computes the\n+ metrics after each batch. The same applies for `'epoch'`. If using an integer\n+ the callback will compute the metrics every `update_freq` batches.\n+ Note that computing too frequently can slow down training.\n \"\"\"\n \n- def __init__(self, update_freq=\"batch\"):\n- self.previous_weights = {}\n+ def __init__(self, update_freq=\"epoch\"):\n+ self.batch_previous_weights = {}\n+ self.epoch_previous_weights = {}\n self.update_freq = update_freq if update_freq != \"batch\" else 1\n \n- def on_batch_end(self, batch, logs=None):\n- should_log = batch > 0 and (batch + 1) % self.update_freq == 0\n- should_store = (batch + 2) % self.update_freq == 0\n-\n+ def _maybe_log_and_store(self, storage, logs, should_log=True, should_store=True):\n if should_log or should_store:\n ops = []\n op_names = []\n@@ -46,14 +42,29 @@\n op_names.append(layer.name if i == 0 else f\"{layer.name}_{i}\")\n \n for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):\n+ value = value.astype(np.int8)\n if should_log:\n logs[f\"changed_quantization_ration/{key.replace(':', '_')}\"] = 1 - (\n- np.count_nonzero(value == self.previous_weights[key])\n- / value.size\n+ np.count_nonzero(value == storage[key]) / value.size\n )\n if should_store:\n- self.previous_weights[key] = value\n+ storage[key] = value\n \n if should_log and not should_store:\n # We don't need it in the next batch anymore\n- self.previous_weights = {}\n+ storage = {}\n+\n+ def on_batch_end(self, batch, logs=None):\n+ if self.update_freq != \"epoch\":\n+ self._maybe_log_and_store(\n+ self.batch_previous_weights,\n+ logs,\n+ should_log=batch > 0 and (batch + 1) % self.update_freq == 0,\n+ should_store=(batch + 2) % self.update_freq == 0,\n+ )\n+\n+ def on_train_begin(self, logs=None):\n+ self._maybe_log_and_store(self.epoch_previous_weights, logs, should_log=False)\n+\n+ def on_epoch_end(self, epoch, logs=None):\n+ self._maybe_log_and_store(self.epoch_previous_weights, logs)\n", "issue": "Support epoch level logging for QuantizedLogger\nCurrently this only work on a per batch bases and not with tensorboard using `update_freq=\"epoch\"`\n", "before_files": [{"content": "import tensorflow as tf\nimport numpy as np\n\n\nclass QuantizationLogger(tf.keras.callbacks.Callback):\n \"\"\"Callback that adds quantization specific metrics.\n\n !!! note \"\"\n In order for metrics to be picked up by TensorBoard this callback needs to be\n applied before the TensorBoard callback and use the same update frequency.\n\n !!! 
example\n ```python\n callbacks = [\n QuantizationLogger(update_freq=100),\n tf.keras.callbacks.TensorBoard(update_freq=100),\n ]\n model.fit(X_train, Y_train, callbacks=callbacks)\n ```\n\n # Metrics\n - `changed_quantization_ration`: The ration of quantized weights in each layer that\n changed during the weight update.\n\n # Arguments\n update_freq: `'batch'` or integer. When using `'batch'`, computes the metrics after\n each batch. If using an integer the callback will compute the metrics every\n `update_freq` batches. Note that computing too frequently can slow down training.\n \"\"\"\n\n def __init__(self, update_freq=\"batch\"):\n self.previous_weights = {}\n self.update_freq = update_freq if update_freq != \"batch\" else 1\n\n def on_batch_end(self, batch, logs=None):\n should_log = batch > 0 and (batch + 1) % self.update_freq == 0\n should_store = (batch + 2) % self.update_freq == 0\n\n if should_log or should_store:\n ops = []\n op_names = []\n for layer in self.model.layers:\n if hasattr(layer, \"quantized_weights\"):\n for i, weight in enumerate(layer.quantized_weights):\n ops.append(weight)\n op_names.append(layer.name if i == 0 else f\"{layer.name}_{i}\")\n\n for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):\n if should_log:\n logs[f\"changed_quantization_ration/{key.replace(':', '_')}\"] = 1 - (\n np.count_nonzero(value == self.previous_weights[key])\n / value.size\n )\n if should_store:\n self.previous_weights[key] = value\n\n if should_log and not should_store:\n # We don't need it in the next batch anymore\n self.previous_weights = {}\n", "path": "larq/callbacks.py"}], "after_files": [{"content": "import tensorflow as tf\nimport numpy as np\n\n\nclass QuantizationLogger(tf.keras.callbacks.Callback):\n \"\"\"Callback that adds quantization specific metrics.\n\n !!! note \"\"\n In order for metrics to be picked up by TensorBoard this callback needs to be\n applied before the TensorBoard callback and use the same update frequency.\n\n !!! example\n ```python\n callbacks = [QuantizationLogger(), tf.keras.callbacks.TensorBoard()]\n model.fit(X_train, Y_train, callbacks=callbacks)\n ```\n\n # Metrics\n - `changed_quantization_ration`: The ration of quantized weights in each layer that\n changed during the weight update.\n\n # Arguments\n update_freq: `'batch'` or `'epoch'` or integer. When using `'batch'`, computes the\n metrics after each batch. The same applies for `'epoch'`. 
If using an integer\n the callback will compute the metrics every `update_freq` batches.\n Note that computing too frequently can slow down training.\n \"\"\"\n\n def __init__(self, update_freq=\"epoch\"):\n self.batch_previous_weights = {}\n self.epoch_previous_weights = {}\n self.update_freq = update_freq if update_freq != \"batch\" else 1\n\n def _maybe_log_and_store(self, storage, logs, should_log=True, should_store=True):\n if should_log or should_store:\n ops = []\n op_names = []\n for layer in self.model.layers:\n if hasattr(layer, \"quantized_weights\"):\n for i, weight in enumerate(layer.quantized_weights):\n ops.append(weight)\n op_names.append(layer.name if i == 0 else f\"{layer.name}_{i}\")\n\n for key, value in zip(op_names, tf.keras.backend.batch_get_value(ops)):\n value = value.astype(np.int8)\n if should_log:\n logs[f\"changed_quantization_ration/{key.replace(':', '_')}\"] = 1 - (\n np.count_nonzero(value == storage[key]) / value.size\n )\n if should_store:\n storage[key] = value\n\n if should_log and not should_store:\n # We don't need it in the next batch anymore\n storage = {}\n\n def on_batch_end(self, batch, logs=None):\n if self.update_freq != \"epoch\":\n self._maybe_log_and_store(\n self.batch_previous_weights,\n logs,\n should_log=batch > 0 and (batch + 1) % self.update_freq == 0,\n should_store=(batch + 2) % self.update_freq == 0,\n )\n\n def on_train_begin(self, logs=None):\n self._maybe_log_and_store(self.epoch_previous_weights, logs, should_log=False)\n\n def on_epoch_end(self, epoch, logs=None):\n self._maybe_log_and_store(self.epoch_previous_weights, logs)\n", "path": "larq/callbacks.py"}]}
| 887 | 776 |
gh_patches_debug_11194 | rasdani/github-patches | git_diff | Textualize__textual-4266 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`Switch.action_toggle` name clash
Similar to #4214 `Switch` has an `action_toggle` method whose name and form clash with `DOMNode.action_toggle`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/widgets/_switch.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, ClassVar
4
5 if TYPE_CHECKING:
6 from ..app import RenderResult
7 from ..binding import Binding, BindingType
8 from ..events import Click
9 from ..geometry import Size
10 from ..message import Message
11 from ..reactive import reactive
12 from ..scrollbar import ScrollBarRender
13 from ..widget import Widget
14
15 if TYPE_CHECKING:
16 from typing_extensions import Self
17
18
19 class Switch(Widget, can_focus=True):
20 """A switch widget that represents a boolean value.
21
22 Can be toggled by clicking on it or through its [bindings][textual.widgets.Switch.BINDINGS].
23
24 The switch widget also contains [component classes][textual.widgets.Switch.COMPONENT_CLASSES]
25 that enable more customization.
26 """
27
28 BINDINGS: ClassVar[list[BindingType]] = [
29 Binding("enter,space", "toggle", "Toggle", show=False),
30 ]
31 """
32 | Key(s) | Description |
33 | :- | :- |
34 | enter,space | Toggle the switch state. |
35 """
36
37 COMPONENT_CLASSES: ClassVar[set[str]] = {
38 "switch--slider",
39 }
40 """
41 | Class | Description |
42 | :- | :- |
43 | `switch--slider` | Targets the slider of the switch. |
44 """
45
46 DEFAULT_CSS = """
47 Switch {
48 border: tall transparent;
49 background: $boost;
50 height: auto;
51 width: auto;
52 padding: 0 2;
53 }
54
55 Switch > .switch--slider {
56 background: $panel-darken-2;
57 color: $panel-lighten-2;
58 }
59
60 Switch:hover {
61 border: tall $background;
62 }
63
64 Switch:focus {
65 border: tall $accent;
66 }
67
68 Switch.-on {
69
70 }
71
72 Switch.-on > .switch--slider {
73 color: $success;
74 }
75 """
76
77 value: reactive[bool] = reactive(False, init=False)
78 """The value of the switch; `True` for on and `False` for off."""
79
80 slider_pos = reactive(0.0)
81 """The position of the slider."""
82
83 class Changed(Message):
84 """Posted when the status of the switch changes.
85
86 Can be handled using `on_switch_changed` in a subclass of `Switch`
87 or in a parent widget in the DOM.
88
89 Attributes:
90 value: The value that the switch was changed to.
91 switch: The `Switch` widget that was changed.
92 """
93
94 def __init__(self, switch: Switch, value: bool) -> None:
95 super().__init__()
96 self.value: bool = value
97 self.switch: Switch = switch
98
99 @property
100 def control(self) -> Switch:
101 """Alias for self.switch."""
102 return self.switch
103
104 def __init__(
105 self,
106 value: bool = False,
107 *,
108 animate: bool = True,
109 name: str | None = None,
110 id: str | None = None,
111 classes: str | None = None,
112 disabled: bool = False,
113 ):
114 """Initialise the switch.
115
116 Args:
117 value: The initial value of the switch.
118 animate: True if the switch should animate when toggled.
119 name: The name of the switch.
120 id: The ID of the switch in the DOM.
121 classes: The CSS classes of the switch.
122 disabled: Whether the switch is disabled or not.
123 """
124 super().__init__(name=name, id=id, classes=classes, disabled=disabled)
125 if value:
126 self.slider_pos = 1.0
127 self.set_reactive(Switch.value, value)
128 self._should_animate = animate
129
130 def watch_value(self, value: bool) -> None:
131 target_slider_pos = 1.0 if value else 0.0
132 if self._should_animate:
133 self.animate(
134 "slider_pos",
135 target_slider_pos,
136 duration=0.3,
137 level="basic",
138 )
139 else:
140 self.slider_pos = target_slider_pos
141 self.post_message(self.Changed(self, self.value))
142
143 def watch_slider_pos(self, slider_pos: float) -> None:
144 self.set_class(slider_pos == 1, "-on")
145
146 def render(self) -> RenderResult:
147 style = self.get_component_rich_style("switch--slider")
148 return ScrollBarRender(
149 virtual_size=100,
150 window_size=50,
151 position=self.slider_pos * 50,
152 style=style,
153 vertical=False,
154 )
155
156 def get_content_width(self, container: Size, viewport: Size) -> int:
157 return 4
158
159 def get_content_height(self, container: Size, viewport: Size, width: int) -> int:
160 return 1
161
162 async def _on_click(self, event: Click) -> None:
163 """Toggle the state of the switch."""
164 event.stop()
165 self.toggle()
166
167 def action_toggle(self) -> None:
168 """Toggle the state of the switch."""
169 self.toggle()
170
171 def toggle(self) -> Self:
172 """Toggle the switch value.
173
174 As a result of the value changing, a `Switch.Changed` message will
175 be posted.
176
177 Returns:
178 The `Switch` instance.
179 """
180 self.value = not self.value
181 return self
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/textual/widgets/_switch.py b/src/textual/widgets/_switch.py
--- a/src/textual/widgets/_switch.py
+++ b/src/textual/widgets/_switch.py
@@ -26,7 +26,7 @@
"""
BINDINGS: ClassVar[list[BindingType]] = [
- Binding("enter,space", "toggle", "Toggle", show=False),
+ Binding("enter,space", "toggle_switch", "Toggle", show=False),
]
"""
| Key(s) | Description |
@@ -164,7 +164,7 @@
event.stop()
self.toggle()
- def action_toggle(self) -> None:
+ def action_toggle_switch(self) -> None:
"""Toggle the state of the switch."""
self.toggle()
|
{"golden_diff": "diff --git a/src/textual/widgets/_switch.py b/src/textual/widgets/_switch.py\n--- a/src/textual/widgets/_switch.py\n+++ b/src/textual/widgets/_switch.py\n@@ -26,7 +26,7 @@\n \"\"\"\n \n BINDINGS: ClassVar[list[BindingType]] = [\n- Binding(\"enter,space\", \"toggle\", \"Toggle\", show=False),\n+ Binding(\"enter,space\", \"toggle_switch\", \"Toggle\", show=False),\n ]\n \"\"\"\n | Key(s) | Description |\n@@ -164,7 +164,7 @@\n event.stop()\n self.toggle()\n \n- def action_toggle(self) -> None:\n+ def action_toggle_switch(self) -> None:\n \"\"\"Toggle the state of the switch.\"\"\"\n self.toggle()\n", "issue": "`Switch.action_toggle` name clash\nSimilar to #4214 `Switch` has an `action_toggle` method whose name and form clash with `DOMNode.action_toggle`.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, ClassVar\n\nif TYPE_CHECKING:\n from ..app import RenderResult\nfrom ..binding import Binding, BindingType\nfrom ..events import Click\nfrom ..geometry import Size\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom ..scrollbar import ScrollBarRender\nfrom ..widget import Widget\n\nif TYPE_CHECKING:\n from typing_extensions import Self\n\n\nclass Switch(Widget, can_focus=True):\n \"\"\"A switch widget that represents a boolean value.\n\n Can be toggled by clicking on it or through its [bindings][textual.widgets.Switch.BINDINGS].\n\n The switch widget also contains [component classes][textual.widgets.Switch.COMPONENT_CLASSES]\n that enable more customization.\n \"\"\"\n\n BINDINGS: ClassVar[list[BindingType]] = [\n Binding(\"enter,space\", \"toggle\", \"Toggle\", show=False),\n ]\n \"\"\"\n | Key(s) | Description |\n | :- | :- |\n | enter,space | Toggle the switch state. |\n \"\"\"\n\n COMPONENT_CLASSES: ClassVar[set[str]] = {\n \"switch--slider\",\n }\n \"\"\"\n | Class | Description |\n | :- | :- |\n | `switch--slider` | Targets the slider of the switch. 
|\n \"\"\"\n\n DEFAULT_CSS = \"\"\"\n Switch {\n border: tall transparent;\n background: $boost;\n height: auto;\n width: auto;\n padding: 0 2;\n }\n\n Switch > .switch--slider {\n background: $panel-darken-2;\n color: $panel-lighten-2;\n }\n\n Switch:hover {\n border: tall $background;\n }\n\n Switch:focus {\n border: tall $accent;\n }\n\n Switch.-on {\n\n }\n\n Switch.-on > .switch--slider {\n color: $success;\n }\n \"\"\"\n\n value: reactive[bool] = reactive(False, init=False)\n \"\"\"The value of the switch; `True` for on and `False` for off.\"\"\"\n\n slider_pos = reactive(0.0)\n \"\"\"The position of the slider.\"\"\"\n\n class Changed(Message):\n \"\"\"Posted when the status of the switch changes.\n\n Can be handled using `on_switch_changed` in a subclass of `Switch`\n or in a parent widget in the DOM.\n\n Attributes:\n value: The value that the switch was changed to.\n switch: The `Switch` widget that was changed.\n \"\"\"\n\n def __init__(self, switch: Switch, value: bool) -> None:\n super().__init__()\n self.value: bool = value\n self.switch: Switch = switch\n\n @property\n def control(self) -> Switch:\n \"\"\"Alias for self.switch.\"\"\"\n return self.switch\n\n def __init__(\n self,\n value: bool = False,\n *,\n animate: bool = True,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ):\n \"\"\"Initialise the switch.\n\n Args:\n value: The initial value of the switch.\n animate: True if the switch should animate when toggled.\n name: The name of the switch.\n id: The ID of the switch in the DOM.\n classes: The CSS classes of the switch.\n disabled: Whether the switch is disabled or not.\n \"\"\"\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n if value:\n self.slider_pos = 1.0\n self.set_reactive(Switch.value, value)\n self._should_animate = animate\n\n def watch_value(self, value: bool) -> None:\n target_slider_pos = 1.0 if value else 0.0\n if self._should_animate:\n self.animate(\n \"slider_pos\",\n target_slider_pos,\n duration=0.3,\n level=\"basic\",\n )\n else:\n self.slider_pos = target_slider_pos\n self.post_message(self.Changed(self, self.value))\n\n def watch_slider_pos(self, slider_pos: float) -> None:\n self.set_class(slider_pos == 1, \"-on\")\n\n def render(self) -> RenderResult:\n style = self.get_component_rich_style(\"switch--slider\")\n return ScrollBarRender(\n virtual_size=100,\n window_size=50,\n position=self.slider_pos * 50,\n style=style,\n vertical=False,\n )\n\n def get_content_width(self, container: Size, viewport: Size) -> int:\n return 4\n\n def get_content_height(self, container: Size, viewport: Size, width: int) -> int:\n return 1\n\n async def _on_click(self, event: Click) -> None:\n \"\"\"Toggle the state of the switch.\"\"\"\n event.stop()\n self.toggle()\n\n def action_toggle(self) -> None:\n \"\"\"Toggle the state of the switch.\"\"\"\n self.toggle()\n\n def toggle(self) -> Self:\n \"\"\"Toggle the switch value.\n\n As a result of the value changing, a `Switch.Changed` message will\n be posted.\n\n Returns:\n The `Switch` instance.\n \"\"\"\n self.value = not self.value\n return self\n", "path": "src/textual/widgets/_switch.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, ClassVar\n\nif TYPE_CHECKING:\n from ..app import RenderResult\nfrom ..binding import Binding, BindingType\nfrom ..events import Click\nfrom ..geometry import Size\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom 
..scrollbar import ScrollBarRender\nfrom ..widget import Widget\n\nif TYPE_CHECKING:\n from typing_extensions import Self\n\n\nclass Switch(Widget, can_focus=True):\n \"\"\"A switch widget that represents a boolean value.\n\n Can be toggled by clicking on it or through its [bindings][textual.widgets.Switch.BINDINGS].\n\n The switch widget also contains [component classes][textual.widgets.Switch.COMPONENT_CLASSES]\n that enable more customization.\n \"\"\"\n\n BINDINGS: ClassVar[list[BindingType]] = [\n Binding(\"enter,space\", \"toggle_switch\", \"Toggle\", show=False),\n ]\n \"\"\"\n | Key(s) | Description |\n | :- | :- |\n | enter,space | Toggle the switch state. |\n \"\"\"\n\n COMPONENT_CLASSES: ClassVar[set[str]] = {\n \"switch--slider\",\n }\n \"\"\"\n | Class | Description |\n | :- | :- |\n | `switch--slider` | Targets the slider of the switch. |\n \"\"\"\n\n DEFAULT_CSS = \"\"\"\n Switch {\n border: tall transparent;\n background: $boost;\n height: auto;\n width: auto;\n padding: 0 2;\n }\n\n Switch > .switch--slider {\n background: $panel-darken-2;\n color: $panel-lighten-2;\n }\n\n Switch:hover {\n border: tall $background;\n }\n\n Switch:focus {\n border: tall $accent;\n }\n\n Switch.-on {\n\n }\n\n Switch.-on > .switch--slider {\n color: $success;\n }\n \"\"\"\n\n value: reactive[bool] = reactive(False, init=False)\n \"\"\"The value of the switch; `True` for on and `False` for off.\"\"\"\n\n slider_pos = reactive(0.0)\n \"\"\"The position of the slider.\"\"\"\n\n class Changed(Message):\n \"\"\"Posted when the status of the switch changes.\n\n Can be handled using `on_switch_changed` in a subclass of `Switch`\n or in a parent widget in the DOM.\n\n Attributes:\n value: The value that the switch was changed to.\n switch: The `Switch` widget that was changed.\n \"\"\"\n\n def __init__(self, switch: Switch, value: bool) -> None:\n super().__init__()\n self.value: bool = value\n self.switch: Switch = switch\n\n @property\n def control(self) -> Switch:\n \"\"\"Alias for self.switch.\"\"\"\n return self.switch\n\n def __init__(\n self,\n value: bool = False,\n *,\n animate: bool = True,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ):\n \"\"\"Initialise the switch.\n\n Args:\n value: The initial value of the switch.\n animate: True if the switch should animate when toggled.\n name: The name of the switch.\n id: The ID of the switch in the DOM.\n classes: The CSS classes of the switch.\n disabled: Whether the switch is disabled or not.\n \"\"\"\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n if value:\n self.slider_pos = 1.0\n self.set_reactive(Switch.value, value)\n self._should_animate = animate\n\n def watch_value(self, value: bool) -> None:\n target_slider_pos = 1.0 if value else 0.0\n if self._should_animate:\n self.animate(\n \"slider_pos\",\n target_slider_pos,\n duration=0.3,\n level=\"basic\",\n )\n else:\n self.slider_pos = target_slider_pos\n self.post_message(self.Changed(self, self.value))\n\n def watch_slider_pos(self, slider_pos: float) -> None:\n self.set_class(slider_pos == 1, \"-on\")\n\n def render(self) -> RenderResult:\n style = self.get_component_rich_style(\"switch--slider\")\n return ScrollBarRender(\n virtual_size=100,\n window_size=50,\n position=self.slider_pos * 50,\n style=style,\n vertical=False,\n )\n\n def get_content_width(self, container: Size, viewport: Size) -> int:\n return 4\n\n def get_content_height(self, container: Size, viewport: Size, width: int) -> 
int:\n return 1\n\n async def _on_click(self, event: Click) -> None:\n \"\"\"Toggle the state of the switch.\"\"\"\n event.stop()\n self.toggle()\n\n def action_toggle_switch(self) -> None:\n \"\"\"Toggle the state of the switch.\"\"\"\n self.toggle()\n\n def toggle(self) -> Self:\n \"\"\"Toggle the switch value.\n\n As a result of the value changing, a `Switch.Changed` message will\n be posted.\n\n Returns:\n The `Switch` instance.\n \"\"\"\n self.value = not self.value\n return self\n", "path": "src/textual/widgets/_switch.py"}]}
| 1,894 | 172 |
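As a quick illustration of why the rename in this record matters, here is a minimal, framework-agnostic sketch of name-based action dispatch — the `Node`, `Switch`, and `run_action` names are illustrative stand-ins, not Textual's actual internals:

```python
# Illustrative sketch only: the names below are stand-ins, not Textual's internals.
class Node:
    def action_toggle(self, attribute: str) -> None:
        # Base-class action: toggle a named boolean attribute.
        setattr(self, attribute, not getattr(self, attribute, False))


class Switch(Node):
    value = False

    # Widget-specific action under a distinct name, as in the patch above,
    # so it no longer shadows Node.action_toggle with a different signature.
    def action_toggle_switch(self) -> None:
        self.value = not self.value


def run_action(node: Node, name: str, *args: str) -> None:
    # Frameworks commonly dispatch key bindings to methods named action_<name>.
    getattr(node, f"action_{name}")(*args)


switch = Switch()
run_action(switch, "toggle_switch")      # widget action: value -> True
run_action(switch, "toggle", "value")    # base action still reachable: value -> False
print(switch.value)                      # False
```

With both actions sharing one name, whichever method won would break the other caller's signature; renaming the widget-level action keeps both call forms usable.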
gh_patches_debug_9359 | rasdani/github-patches | git_diff | saleor__saleor-4919 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add description and deprecation support to filters
`FilterInputObjectType` doesn't provide a way to document fields. We could add two fields to the meta-class: `descriptions = {field: description}` and `deprecations = {field: reason}`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/core/types/filter_input.py`
Content:
```
1 import six
2 from graphene import InputField, InputObjectType
3 from graphene.types.inputobjecttype import InputObjectTypeOptions
4 from graphene.types.utils import yank_fields_from_attrs
5 from graphene_django.filter.utils import get_filterset_class
6
7 from .converter import convert_form_field
8
9
10 class FilterInputObjectType(InputObjectType):
11 """Class for storing and serving django-filtres as graphQL input.
12
13 FilterSet class which inherits from django-filters.FilterSet should be
14 provided with using fitlerset_class argument.
15 """
16
17 @classmethod
18 def __init_subclass_with_meta__(
19 cls, _meta=None, model=None, filterset_class=None, fields=None, **options
20 ):
21 cls.custom_filterset_class = filterset_class
22 cls.filterset_class = None
23 cls.fields = fields
24 cls.model = model
25
26 if not _meta:
27 _meta = InputObjectTypeOptions(cls)
28
29 fields = cls.get_filtering_args_from_filterset()
30 fields = yank_fields_from_attrs(fields, _as=InputField)
31 if _meta.fields:
32 _meta.fields.update(fields)
33 else:
34 _meta.fields = fields
35
36 super().__init_subclass_with_meta__(_meta=_meta, **options)
37
38 @classmethod
39 def get_filtering_args_from_filterset(cls):
40 """Retrieve the filtering arguments from the queryset.
41
42 Inspect a FilterSet and produce the arguments to pass to a Graphene field.
43 These arguments will be available to filter against in the GraphQL.
44 """
45 if not cls.custom_filterset_class:
46 assert cls.model and cls.fields, (
47 "Provide filterset class or model and fields requested to "
48 "create default filterset"
49 )
50
51 meta = dict(model=cls.model, fields=cls.fields)
52 cls.filterset_class = get_filterset_class(cls.custom_filterset_class, **meta)
53
54 args = {}
55 for name, filter_field in six.iteritems(cls.filterset_class.base_filters):
56 input_class = getattr(filter_field, "input_class", None)
57 if input_class:
58 field_type = convert_form_field(filter_field)
59 else:
60 field_type = convert_form_field(filter_field.field)
61 field_type.description = filter_field.label
62 kwargs = getattr(field_type, "kwargs", {})
63 field_type.kwargs = kwargs
64 args[name] = field_type
65 return args
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/graphql/core/types/filter_input.py b/saleor/graphql/core/types/filter_input.py
--- a/saleor/graphql/core/types/filter_input.py
+++ b/saleor/graphql/core/types/filter_input.py
@@ -58,7 +58,7 @@
field_type = convert_form_field(filter_field)
else:
field_type = convert_form_field(filter_field.field)
- field_type.description = filter_field.label
+ field_type.description = getattr(filter_field, "help_text", "")
kwargs = getattr(field_type, "kwargs", {})
field_type.kwargs = kwargs
args[name] = field_type
|
{"golden_diff": "diff --git a/saleor/graphql/core/types/filter_input.py b/saleor/graphql/core/types/filter_input.py\n--- a/saleor/graphql/core/types/filter_input.py\n+++ b/saleor/graphql/core/types/filter_input.py\n@@ -58,7 +58,7 @@\n field_type = convert_form_field(filter_field)\n else:\n field_type = convert_form_field(filter_field.field)\n- field_type.description = filter_field.label\n+ field_type.description = getattr(filter_field, \"help_text\", \"\")\n kwargs = getattr(field_type, \"kwargs\", {})\n field_type.kwargs = kwargs\n args[name] = field_type\n", "issue": "Add description and deprecation support to filters\n`FilterInputObjectType` doesn't provide a way to document fields. We could add two fields to the meta-class: `descriptions = {field: description}` and `deprecations = {field: reason}`.\n", "before_files": [{"content": "import six\nfrom graphene import InputField, InputObjectType\nfrom graphene.types.inputobjecttype import InputObjectTypeOptions\nfrom graphene.types.utils import yank_fields_from_attrs\nfrom graphene_django.filter.utils import get_filterset_class\n\nfrom .converter import convert_form_field\n\n\nclass FilterInputObjectType(InputObjectType):\n \"\"\"Class for storing and serving django-filtres as graphQL input.\n\n FilterSet class which inherits from django-filters.FilterSet should be\n provided with using fitlerset_class argument.\n \"\"\"\n\n @classmethod\n def __init_subclass_with_meta__(\n cls, _meta=None, model=None, filterset_class=None, fields=None, **options\n ):\n cls.custom_filterset_class = filterset_class\n cls.filterset_class = None\n cls.fields = fields\n cls.model = model\n\n if not _meta:\n _meta = InputObjectTypeOptions(cls)\n\n fields = cls.get_filtering_args_from_filterset()\n fields = yank_fields_from_attrs(fields, _as=InputField)\n if _meta.fields:\n _meta.fields.update(fields)\n else:\n _meta.fields = fields\n\n super().__init_subclass_with_meta__(_meta=_meta, **options)\n\n @classmethod\n def get_filtering_args_from_filterset(cls):\n \"\"\"Retrieve the filtering arguments from the queryset.\n\n Inspect a FilterSet and produce the arguments to pass to a Graphene field.\n These arguments will be available to filter against in the GraphQL.\n \"\"\"\n if not cls.custom_filterset_class:\n assert cls.model and cls.fields, (\n \"Provide filterset class or model and fields requested to \"\n \"create default filterset\"\n )\n\n meta = dict(model=cls.model, fields=cls.fields)\n cls.filterset_class = get_filterset_class(cls.custom_filterset_class, **meta)\n\n args = {}\n for name, filter_field in six.iteritems(cls.filterset_class.base_filters):\n input_class = getattr(filter_field, \"input_class\", None)\n if input_class:\n field_type = convert_form_field(filter_field)\n else:\n field_type = convert_form_field(filter_field.field)\n field_type.description = filter_field.label\n kwargs = getattr(field_type, \"kwargs\", {})\n field_type.kwargs = kwargs\n args[name] = field_type\n return args\n", "path": "saleor/graphql/core/types/filter_input.py"}], "after_files": [{"content": "import six\nfrom graphene import InputField, InputObjectType\nfrom graphene.types.inputobjecttype import InputObjectTypeOptions\nfrom graphene.types.utils import yank_fields_from_attrs\nfrom graphene_django.filter.utils import get_filterset_class\n\nfrom .converter import convert_form_field\n\n\nclass FilterInputObjectType(InputObjectType):\n \"\"\"Class for storing and serving django-filtres as graphQL input.\n\n FilterSet class which inherits from django-filters.FilterSet 
should be\n provided with using fitlerset_class argument.\n \"\"\"\n\n @classmethod\n def __init_subclass_with_meta__(\n cls, _meta=None, model=None, filterset_class=None, fields=None, **options\n ):\n cls.custom_filterset_class = filterset_class\n cls.filterset_class = None\n cls.fields = fields\n cls.model = model\n\n if not _meta:\n _meta = InputObjectTypeOptions(cls)\n\n fields = cls.get_filtering_args_from_filterset()\n fields = yank_fields_from_attrs(fields, _as=InputField)\n if _meta.fields:\n _meta.fields.update(fields)\n else:\n _meta.fields = fields\n\n super().__init_subclass_with_meta__(_meta=_meta, **options)\n\n @classmethod\n def get_filtering_args_from_filterset(cls):\n \"\"\"Retrieve the filtering arguments from the queryset.\n\n Inspect a FilterSet and produce the arguments to pass to a Graphene field.\n These arguments will be available to filter against in the GraphQL.\n \"\"\"\n if not cls.custom_filterset_class:\n assert cls.model and cls.fields, (\n \"Provide filterset class or model and fields requested to \"\n \"create default filterset\"\n )\n\n meta = dict(model=cls.model, fields=cls.fields)\n cls.filterset_class = get_filterset_class(cls.custom_filterset_class, **meta)\n\n args = {}\n for name, filter_field in six.iteritems(cls.filterset_class.base_filters):\n input_class = getattr(filter_field, \"input_class\", None)\n if input_class:\n field_type = convert_form_field(filter_field)\n else:\n field_type = convert_form_field(filter_field.field)\n field_type.description = getattr(filter_field, \"help_text\", \"\")\n kwargs = getattr(field_type, \"kwargs\", {})\n field_type.kwargs = kwargs\n args[name] = field_type\n return args\n", "path": "saleor/graphql/core/types/filter_input.py"}]}
| 938 | 135 |
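The merged fix reads each filter's `help_text` for the GraphQL description; the issue itself also floats meta-level `descriptions`/`deprecations` mappings. A small, pure-Python sketch of how such mappings could be applied to already-converted fields — the `FieldType` class and the field names are illustrative, not Saleor's or Graphene's API:

```python
# Illustrative sketch only: FieldType and the field names are stand-ins.
class FieldType:
    def __init__(self, name: str) -> None:
        self.name = name
        self.description = ""
        self.deprecation_reason = None


def apply_meta(fields, descriptions, deprecations):
    # Attach per-field documentation after the filter fields have been generated.
    for name, field in fields.items():
        field.description = descriptions.get(name, field.description)
        field.deprecation_reason = deprecations.get(name, field.deprecation_reason)
    return fields


fields = {name: FieldType(name) for name in ("is_published", "price")}
apply_meta(
    fields,
    descriptions={"is_published": "Filter by publication status."},
    deprecations={"price": "Deprecated; use price_range instead."},
)
print(fields["price"].deprecation_reason)  # Deprecated; use price_range instead.
```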
gh_patches_debug_15934 | rasdani/github-patches | git_diff | python-trio__trio-502 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
trio.Path.iterdir wrapping is broken
Given `pathlib.Path.iterdir` returns a generator that does IO access on each iteration, `trio.Path.iterdir` is currently broken given it currently only generates the generator asynchronously (which I suppose is pointless given there is no need for IO at generator creation)
The solution would be to modify `trio.Path.iterdir` to return an async generator, however this means creating a special case given the current implementation is only an async wrapper on `pathlib.Path.iterdir`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `trio/_path.py`
Content:
```
1 from functools import wraps, partial
2 import os
3 import types
4 import pathlib
5
6 import trio
7 from trio._util import async_wraps, fspath
8
9 __all__ = ['Path']
10
11
12 # python3.5 compat: __fspath__ does not exist in 3.5, so unwrap any trio.Path
13 # being passed to any wrapped method
14 def unwrap_paths(args):
15 new_args = []
16 for arg in args:
17 if isinstance(arg, Path):
18 arg = arg._wrapped
19 new_args.append(arg)
20 return new_args
21
22
23 # re-wrap return value from methods that return new instances of pathlib.Path
24 def rewrap_path(value):
25 if isinstance(value, pathlib.Path):
26 value = Path(value)
27 return value
28
29
30 def _forward_factory(cls, attr_name, attr):
31 @wraps(attr)
32 def wrapper(self, *args, **kwargs):
33 args = unwrap_paths(args)
34 attr = getattr(self._wrapped, attr_name)
35 value = attr(*args, **kwargs)
36 return rewrap_path(value)
37
38 return wrapper
39
40
41 def _forward_magic(cls, attr):
42 sentinel = object()
43
44 @wraps(attr)
45 def wrapper(self, other=sentinel):
46 if other is sentinel:
47 return attr(self._wrapped)
48 if isinstance(other, cls):
49 other = other._wrapped
50 value = attr(self._wrapped, other)
51 return rewrap_path(value)
52
53 return wrapper
54
55
56 def thread_wrapper_factory(cls, meth_name):
57 @async_wraps(cls, pathlib.Path, meth_name)
58 async def wrapper(self, *args, **kwargs):
59 args = unwrap_paths(args)
60 meth = getattr(self._wrapped, meth_name)
61 func = partial(meth, *args, **kwargs)
62 value = await trio.run_sync_in_worker_thread(func)
63 return rewrap_path(value)
64
65 return wrapper
66
67
68 class AsyncAutoWrapperType(type):
69 def __init__(cls, name, bases, attrs):
70 super().__init__(name, bases, attrs)
71
72 cls._forward = []
73 type(cls).generate_forwards(cls, attrs)
74 type(cls).generate_wraps(cls, attrs)
75 type(cls).generate_magic(cls, attrs)
76
77 def generate_forwards(cls, attrs):
78 # forward functions of _forwards
79 for attr_name, attr in cls._forwards.__dict__.items():
80 if attr_name.startswith('_') or attr_name in attrs:
81 continue
82
83 if isinstance(attr, property):
84 cls._forward.append(attr_name)
85 elif isinstance(attr, types.FunctionType):
86 wrapper = _forward_factory(cls, attr_name, attr)
87 setattr(cls, attr_name, wrapper)
88 else:
89 raise TypeError(attr_name, type(attr))
90
91 def generate_wraps(cls, attrs):
92 # generate wrappers for functions of _wraps
93 for attr_name, attr in cls._wraps.__dict__.items():
94 if attr_name.startswith('_') or attr_name in attrs:
95 continue
96
97 if isinstance(attr, classmethod):
98 setattr(cls, attr_name, attr)
99 elif isinstance(attr, types.FunctionType):
100 wrapper = thread_wrapper_factory(cls, attr_name)
101 setattr(cls, attr_name, wrapper)
102 else:
103 raise TypeError(attr_name, type(attr))
104
105 def generate_magic(cls, attrs):
106 # generate wrappers for magic
107 for attr_name in cls._forward_magic:
108 attr = getattr(cls._forwards, attr_name)
109 wrapper = _forward_magic(cls, attr)
110 setattr(cls, attr_name, wrapper)
111
112
113 class Path(metaclass=AsyncAutoWrapperType):
114 """A :class:`pathlib.Path` wrapper that executes blocking methods in
115 :meth:`trio.run_sync_in_worker_thread`.
116
117 """
118
119 _wraps = pathlib.Path
120 _forwards = pathlib.PurePath
121 _forward_magic = [
122 '__str__', '__bytes__', '__truediv__', '__rtruediv__', '__eq__',
123 '__lt__', '__le__', '__gt__', '__ge__'
124 ]
125
126 def __init__(self, *args):
127 args = unwrap_paths(args)
128
129 self._wrapped = pathlib.Path(*args)
130
131 def __getattr__(self, name):
132 if name in self._forward:
133 value = getattr(self._wrapped, name)
134 return rewrap_path(value)
135 raise AttributeError(name)
136
137 def __dir__(self):
138 return super().__dir__() + self._forward
139
140 def __repr__(self):
141 return 'trio.Path({})'.format(repr(str(self)))
142
143 def __fspath__(self):
144 return fspath(self._wrapped)
145
146 @wraps(pathlib.Path.open)
147 async def open(self, *args, **kwargs):
148 """Open the file pointed to by the path, like the :func:`trio.open_file`
149 function does.
150
151 """
152
153 func = partial(self._wrapped.open, *args, **kwargs)
154 value = await trio.run_sync_in_worker_thread(func)
155 return trio.wrap_file(value)
156
157
158 # The value of Path.absolute.__doc__ makes a reference to
159 # :meth:~pathlib.Path.absolute, which does not exist. Removing this makes more
160 # sense than inventing our own special docstring for this.
161 del Path.absolute.__doc__
162
163 # python3.5 compat
164 if hasattr(os, 'PathLike'):
165 os.PathLike.register(Path)
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/trio/_path.py b/trio/_path.py
--- a/trio/_path.py
+++ b/trio/_path.py
@@ -128,6 +128,28 @@
self._wrapped = pathlib.Path(*args)
+ async def iterdir(self):
+ """
+ Like :meth:`pathlib.Path.iterdir`, but async.
+
+ This is an async method that returns a synchronous iterator, so you
+ use it like::
+
+ for subpath in await mypath.iterdir():
+ ...
+
+ Note that it actually loads the whole directory list into memory
+ immediately, during the initial call. (See `issue #501
+ <https://github.com/python-trio/trio/issues/501>`__ for discussion.)
+
+ """
+
+ def _load_items():
+ return list(self._wrapped.iterdir())
+
+ items = await trio.run_sync_in_worker_thread(_load_items)
+ return (Path(item) for item in items)
+
def __getattr__(self, name):
if name in self._forward:
value = getattr(self._wrapped, name)
|
{"golden_diff": "diff --git a/trio/_path.py b/trio/_path.py\n--- a/trio/_path.py\n+++ b/trio/_path.py\n@@ -128,6 +128,28 @@\n \n self._wrapped = pathlib.Path(*args)\n \n+ async def iterdir(self):\n+ \"\"\"\n+ Like :meth:`pathlib.Path.iterdir`, but async.\n+\n+ This is an async method that returns a synchronous iterator, so you\n+ use it like::\n+ \n+ for subpath in await mypath.iterdir():\n+ ...\n+ \n+ Note that it actually loads the whole directory list into memory\n+ immediately, during the initial call. (See `issue #501\n+ <https://github.com/python-trio/trio/issues/501>`__ for discussion.)\n+ \n+ \"\"\"\n+\n+ def _load_items():\n+ return list(self._wrapped.iterdir())\n+\n+ items = await trio.run_sync_in_worker_thread(_load_items)\n+ return (Path(item) for item in items)\n+\n def __getattr__(self, name):\n if name in self._forward:\n value = getattr(self._wrapped, name)\n", "issue": "trio.Path.iterdir wrapping is broken\nGiven `pathlib.Path.iterdir` returns a generator that does IO access on each iteration, `trio.Path.iterdir` is currently broken given it currently only generates the generator asynchronously (which I suppose is pointless given there is no need for IO at generator creation)\r\n\r\nThe solution would be to modify `trio.Path.iterdir` to return an async generator, however this means creating a special case given the current implementation is only an async wrapper on `pathlib.Path.iterdir`.\n", "before_files": [{"content": "from functools import wraps, partial\nimport os\nimport types\nimport pathlib\n\nimport trio\nfrom trio._util import async_wraps, fspath\n\n__all__ = ['Path']\n\n\n# python3.5 compat: __fspath__ does not exist in 3.5, so unwrap any trio.Path\n# being passed to any wrapped method\ndef unwrap_paths(args):\n new_args = []\n for arg in args:\n if isinstance(arg, Path):\n arg = arg._wrapped\n new_args.append(arg)\n return new_args\n\n\n# re-wrap return value from methods that return new instances of pathlib.Path\ndef rewrap_path(value):\n if isinstance(value, pathlib.Path):\n value = Path(value)\n return value\n\n\ndef _forward_factory(cls, attr_name, attr):\n @wraps(attr)\n def wrapper(self, *args, **kwargs):\n args = unwrap_paths(args)\n attr = getattr(self._wrapped, attr_name)\n value = attr(*args, **kwargs)\n return rewrap_path(value)\n\n return wrapper\n\n\ndef _forward_magic(cls, attr):\n sentinel = object()\n\n @wraps(attr)\n def wrapper(self, other=sentinel):\n if other is sentinel:\n return attr(self._wrapped)\n if isinstance(other, cls):\n other = other._wrapped\n value = attr(self._wrapped, other)\n return rewrap_path(value)\n\n return wrapper\n\n\ndef thread_wrapper_factory(cls, meth_name):\n @async_wraps(cls, pathlib.Path, meth_name)\n async def wrapper(self, *args, **kwargs):\n args = unwrap_paths(args)\n meth = getattr(self._wrapped, meth_name)\n func = partial(meth, *args, **kwargs)\n value = await trio.run_sync_in_worker_thread(func)\n return rewrap_path(value)\n\n return wrapper\n\n\nclass AsyncAutoWrapperType(type):\n def __init__(cls, name, bases, attrs):\n super().__init__(name, bases, attrs)\n\n cls._forward = []\n type(cls).generate_forwards(cls, attrs)\n type(cls).generate_wraps(cls, attrs)\n type(cls).generate_magic(cls, attrs)\n\n def generate_forwards(cls, attrs):\n # forward functions of _forwards\n for attr_name, attr in cls._forwards.__dict__.items():\n if attr_name.startswith('_') or attr_name in attrs:\n continue\n\n if isinstance(attr, property):\n cls._forward.append(attr_name)\n elif isinstance(attr, types.FunctionType):\n 
wrapper = _forward_factory(cls, attr_name, attr)\n setattr(cls, attr_name, wrapper)\n else:\n raise TypeError(attr_name, type(attr))\n\n def generate_wraps(cls, attrs):\n # generate wrappers for functions of _wraps\n for attr_name, attr in cls._wraps.__dict__.items():\n if attr_name.startswith('_') or attr_name in attrs:\n continue\n\n if isinstance(attr, classmethod):\n setattr(cls, attr_name, attr)\n elif isinstance(attr, types.FunctionType):\n wrapper = thread_wrapper_factory(cls, attr_name)\n setattr(cls, attr_name, wrapper)\n else:\n raise TypeError(attr_name, type(attr))\n\n def generate_magic(cls, attrs):\n # generate wrappers for magic\n for attr_name in cls._forward_magic:\n attr = getattr(cls._forwards, attr_name)\n wrapper = _forward_magic(cls, attr)\n setattr(cls, attr_name, wrapper)\n\n\nclass Path(metaclass=AsyncAutoWrapperType):\n \"\"\"A :class:`pathlib.Path` wrapper that executes blocking methods in\n :meth:`trio.run_sync_in_worker_thread`.\n\n \"\"\"\n\n _wraps = pathlib.Path\n _forwards = pathlib.PurePath\n _forward_magic = [\n '__str__', '__bytes__', '__truediv__', '__rtruediv__', '__eq__',\n '__lt__', '__le__', '__gt__', '__ge__'\n ]\n\n def __init__(self, *args):\n args = unwrap_paths(args)\n\n self._wrapped = pathlib.Path(*args)\n\n def __getattr__(self, name):\n if name in self._forward:\n value = getattr(self._wrapped, name)\n return rewrap_path(value)\n raise AttributeError(name)\n\n def __dir__(self):\n return super().__dir__() + self._forward\n\n def __repr__(self):\n return 'trio.Path({})'.format(repr(str(self)))\n\n def __fspath__(self):\n return fspath(self._wrapped)\n\n @wraps(pathlib.Path.open)\n async def open(self, *args, **kwargs):\n \"\"\"Open the file pointed to by the path, like the :func:`trio.open_file`\n function does.\n\n \"\"\"\n\n func = partial(self._wrapped.open, *args, **kwargs)\n value = await trio.run_sync_in_worker_thread(func)\n return trio.wrap_file(value)\n\n\n# The value of Path.absolute.__doc__ makes a reference to\n# :meth:~pathlib.Path.absolute, which does not exist. 
Removing this makes more\n# sense than inventing our own special docstring for this.\ndel Path.absolute.__doc__\n\n# python3.5 compat\nif hasattr(os, 'PathLike'):\n os.PathLike.register(Path)\n", "path": "trio/_path.py"}], "after_files": [{"content": "from functools import wraps, partial\nimport os\nimport types\nimport pathlib\n\nimport trio\nfrom trio._util import async_wraps, fspath\n\n__all__ = ['Path']\n\n\n# python3.5 compat: __fspath__ does not exist in 3.5, so unwrap any trio.Path\n# being passed to any wrapped method\ndef unwrap_paths(args):\n new_args = []\n for arg in args:\n if isinstance(arg, Path):\n arg = arg._wrapped\n new_args.append(arg)\n return new_args\n\n\n# re-wrap return value from methods that return new instances of pathlib.Path\ndef rewrap_path(value):\n if isinstance(value, pathlib.Path):\n value = Path(value)\n return value\n\n\ndef _forward_factory(cls, attr_name, attr):\n @wraps(attr)\n def wrapper(self, *args, **kwargs):\n args = unwrap_paths(args)\n attr = getattr(self._wrapped, attr_name)\n value = attr(*args, **kwargs)\n return rewrap_path(value)\n\n return wrapper\n\n\ndef _forward_magic(cls, attr):\n sentinel = object()\n\n @wraps(attr)\n def wrapper(self, other=sentinel):\n if other is sentinel:\n return attr(self._wrapped)\n if isinstance(other, cls):\n other = other._wrapped\n value = attr(self._wrapped, other)\n return rewrap_path(value)\n\n return wrapper\n\n\ndef thread_wrapper_factory(cls, meth_name):\n @async_wraps(cls, pathlib.Path, meth_name)\n async def wrapper(self, *args, **kwargs):\n args = unwrap_paths(args)\n meth = getattr(self._wrapped, meth_name)\n func = partial(meth, *args, **kwargs)\n value = await trio.run_sync_in_worker_thread(func)\n return rewrap_path(value)\n\n return wrapper\n\n\nclass AsyncAutoWrapperType(type):\n def __init__(cls, name, bases, attrs):\n super().__init__(name, bases, attrs)\n\n cls._forward = []\n type(cls).generate_forwards(cls, attrs)\n type(cls).generate_wraps(cls, attrs)\n type(cls).generate_magic(cls, attrs)\n\n def generate_forwards(cls, attrs):\n # forward functions of _forwards\n for attr_name, attr in cls._forwards.__dict__.items():\n if attr_name.startswith('_') or attr_name in attrs:\n continue\n\n if isinstance(attr, property):\n cls._forward.append(attr_name)\n elif isinstance(attr, types.FunctionType):\n wrapper = _forward_factory(cls, attr_name, attr)\n setattr(cls, attr_name, wrapper)\n else:\n raise TypeError(attr_name, type(attr))\n\n def generate_wraps(cls, attrs):\n # generate wrappers for functions of _wraps\n for attr_name, attr in cls._wraps.__dict__.items():\n if attr_name.startswith('_') or attr_name in attrs:\n continue\n\n if isinstance(attr, classmethod):\n setattr(cls, attr_name, attr)\n elif isinstance(attr, types.FunctionType):\n wrapper = thread_wrapper_factory(cls, attr_name)\n setattr(cls, attr_name, wrapper)\n else:\n raise TypeError(attr_name, type(attr))\n\n def generate_magic(cls, attrs):\n # generate wrappers for magic\n for attr_name in cls._forward_magic:\n attr = getattr(cls._forwards, attr_name)\n wrapper = _forward_magic(cls, attr)\n setattr(cls, attr_name, wrapper)\n\n\nclass Path(metaclass=AsyncAutoWrapperType):\n \"\"\"A :class:`pathlib.Path` wrapper that executes blocking methods in\n :meth:`trio.run_sync_in_worker_thread`.\n\n \"\"\"\n\n _wraps = pathlib.Path\n _forwards = pathlib.PurePath\n _forward_magic = [\n '__str__', '__bytes__', '__truediv__', '__rtruediv__', '__eq__',\n '__lt__', '__le__', '__gt__', '__ge__'\n ]\n\n def __init__(self, *args):\n args = 
unwrap_paths(args)\n\n self._wrapped = pathlib.Path(*args)\n\n async def iterdir(self):\n \"\"\"\n Like :meth:`pathlib.Path.iterdir`, but async.\n\n This is an async method that returns a synchronous iterator, so you\n use it like::\n \n for subpath in await mypath.iterdir():\n ...\n \n Note that it actually loads the whole directory list into memory\n immediately, during the initial call. (See `issue #501\n <https://github.com/python-trio/trio/issues/501>`__ for discussion.)\n \n \"\"\"\n\n def _load_items():\n return list(self._wrapped.iterdir())\n\n items = await trio.run_sync_in_worker_thread(_load_items)\n return (Path(item) for item in items)\n\n def __getattr__(self, name):\n if name in self._forward:\n value = getattr(self._wrapped, name)\n return rewrap_path(value)\n raise AttributeError(name)\n\n def __dir__(self):\n return super().__dir__() + self._forward\n\n def __repr__(self):\n return 'trio.Path({})'.format(repr(str(self)))\n\n def __fspath__(self):\n return fspath(self._wrapped)\n\n @wraps(pathlib.Path.open)\n async def open(self, *args, **kwargs):\n \"\"\"Open the file pointed to by the path, like the :func:`trio.open_file`\n function does.\n\n \"\"\"\n\n func = partial(self._wrapped.open, *args, **kwargs)\n value = await trio.run_sync_in_worker_thread(func)\n return trio.wrap_file(value)\n\n\n# The value of Path.absolute.__doc__ makes a reference to\n# :meth:~pathlib.Path.absolute, which does not exist. Removing this makes more\n# sense than inventing our own special docstring for this.\ndel Path.absolute.__doc__\n\n# python3.5 compat\nif hasattr(os, 'PathLike'):\n os.PathLike.register(Path)\n", "path": "trio/_path.py"}]}
| 1,906 | 260 |
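For reference, this is how the patched `trio.Path.iterdir` is used from application code: the call is awaited once (the directory listing is loaded in a worker thread) and the result is then iterated synchronously. The sketch assumes trio is installed; the path is illustrative.

```python
import trio


async def main() -> None:
    # One await loads the whole directory listing; iteration itself does no I/O.
    for subpath in await trio.Path(".").iterdir():
        print(subpath)


trio.run(main)
```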
gh_patches_debug_15652 | rasdani/github-patches | git_diff | airctic__icevision-1058 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
getting_started_object_detection.ipynb fails after CentripetalNet support merge
## 🐛 Bug
**Describe the bug**
`getting_started_object_detection.ipynb` fails to run with the following error.
`AttributeError: 'VFNet' object has no attribute 'mask_head'
`
**To Reproduce**
Steps to reproduce the behavior: Run the getting started notebook.
**Expected behavior**
Model should instantiate.
**Screenshots**

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/models/mmdet/utils.py`
Content:
```
1 __all__ = [
2 "MMDetBackboneConfig",
3 "mmdet_configs_path",
4 "param_groups",
5 "MMDetBackboneConfig",
6 "create_model_config",
7 ]
8
9 from icevision.imports import *
10 from icevision.utils import *
11 from icevision.backbones import BackboneConfig
12 from icevision.models.mmdet.download_configs import download_mmdet_configs
13 from mmdet.models.detectors import *
14 from mmcv import Config
15 from mmdet.models.backbones.ssd_vgg import SSDVGG
16 from mmdet.models.backbones.csp_darknet import CSPDarknet
17 from mmdet.models.backbones.swin import SwinTransformer
18 from mmdet.models.backbones.hourglass import HourglassNet
19
20
21 mmdet_configs_path = download_mmdet_configs()
22
23
24 class MMDetBackboneConfig(BackboneConfig):
25 def __init__(self, model_name, config_path, weights_url):
26 self.model_name = model_name
27 self.config_path = config_path
28 self.weights_url = weights_url
29 self.pretrained: bool
30
31 def __call__(self, pretrained: bool = True) -> "MMDetBackboneConfig":
32 self.pretrained = pretrained
33 return self
34
35
36 def param_groups(model):
37 body = model.backbone
38
39 layers = []
40
41 # add the backbone
42 if isinstance(body, SSDVGG):
43 layers += [body.features]
44 elif isinstance(body, CSPDarknet):
45 layers += [body.stem.conv.conv, body.stem.conv.bn]
46 layers += [body.stage1, body.stage2, body.stage3, body.stage4]
47
48 elif isinstance(body, HourglassNet):
49 layers += [
50 body.stem,
51 body.hourglass_modules,
52 body.inters,
53 body.conv1x1s,
54 body.out_convs,
55 body.remap_convs,
56 body.relu,
57 ]
58
59 elif isinstance(body, SwinTransformer):
60 layers += [
61 body.patch_embed.adap_padding,
62 body.patch_embed.projection,
63 body.patch_embed.norm,
64 body.drop_after_pos,
65 body.stages,
66 ]
67 # Swin backbone for two-stage detector has norm0 attribute
68 if getattr(body, "norm0", False):
69 layers += [body.norm0]
70
71 layers += [body.norm1, body.norm2, body.norm3]
72 else:
73 layers += [nn.Sequential(body.conv1, body.bn1)]
74 layers += [getattr(body, l) for l in body.res_layers]
75
76 # add the neck module if it exists (DETR doesn't have a neck module)
77 layers += [module for name, module in model.named_modules() if name == "neck"]
78
79 # add the head
80 if isinstance(model, SingleStageDetector):
81 layers += [model.bbox_head]
82
83 # YOLACT has mask_head and segm_head
84 if getattr(model, "mask_head"):
85 layers += [model.mask_head]
86 if getattr(model, "segm_head"):
87 layers += [model.segm_head]
88
89 elif isinstance(model, TwoStageDetector):
90 layers += [nn.Sequential(model.rpn_head, model.roi_head)]
91 else:
92 raise RuntimeError(
93 "{model} must inherit either from SingleStageDetector or TwoStageDetector class"
94 )
95
96 _param_groups = [list(layer.parameters()) for layer in layers]
97 check_all_model_params_in_groups2(model, _param_groups)
98 return _param_groups
99
100
101 def create_model_config(
102 backbone: MMDetBackboneConfig,
103 pretrained: bool = True,
104 checkpoints_path: Optional[Union[str, Path]] = "checkpoints",
105 force_download=False,
106 cfg_options=None,
107 ):
108
109 model_name = backbone.model_name
110 config_path = backbone.config_path
111 weights_url = backbone.weights_url
112
113 # download weights
114 weights_path = None
115 if pretrained and weights_url:
116 save_dir = Path(checkpoints_path) / model_name
117 save_dir.mkdir(exist_ok=True, parents=True)
118
119 fname = Path(weights_url).name
120 weights_path = save_dir / fname
121
122 if not weights_path.exists() or force_download:
123 download_url(url=weights_url, save_path=str(weights_path))
124
125 cfg = Config.fromfile(config_path)
126
127 if cfg_options is not None:
128 cfg.merge_from_dict(cfg_options)
129
130 return cfg, weights_path
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/icevision/models/mmdet/utils.py b/icevision/models/mmdet/utils.py
--- a/icevision/models/mmdet/utils.py
+++ b/icevision/models/mmdet/utils.py
@@ -6,6 +6,7 @@
"create_model_config",
]
+from numpy import False_
from icevision.imports import *
from icevision.utils import *
from icevision.backbones import BackboneConfig
@@ -81,9 +82,9 @@
layers += [model.bbox_head]
# YOLACT has mask_head and segm_head
- if getattr(model, "mask_head"):
+ if hasattr(model, "mask_head"):
layers += [model.mask_head]
- if getattr(model, "segm_head"):
+ if hasattr(model, "segm_head"):
layers += [model.segm_head]
elif isinstance(model, TwoStageDetector):
|
{"golden_diff": "diff --git a/icevision/models/mmdet/utils.py b/icevision/models/mmdet/utils.py\n--- a/icevision/models/mmdet/utils.py\n+++ b/icevision/models/mmdet/utils.py\n@@ -6,6 +6,7 @@\n \"create_model_config\",\n ]\n \n+from numpy import False_\n from icevision.imports import *\n from icevision.utils import *\n from icevision.backbones import BackboneConfig\n@@ -81,9 +82,9 @@\n layers += [model.bbox_head]\n \n # YOLACT has mask_head and segm_head\n- if getattr(model, \"mask_head\"):\n+ if hasattr(model, \"mask_head\"):\n layers += [model.mask_head]\n- if getattr(model, \"segm_head\"):\n+ if hasattr(model, \"segm_head\"):\n layers += [model.segm_head]\n \n elif isinstance(model, TwoStageDetector):\n", "issue": "getting_started_object_detection.ipynb fails after CentripetalNet support merge\n## \ud83d\udc1b Bug\r\n**Describe the bug**\r\n`getting_started_object_detection.ipynb` fails to run with the following error.\r\n\r\n`AttributeError: 'VFNet' object has no attribute 'mask_head'\r\n`\r\n**To Reproduce**\r\nSteps to reproduce the behavior: Run the getting started notebook.\r\n\r\n**Expected behavior**\r\nModel should instantiate.\r\n\r\n**Screenshots**\r\n\n", "before_files": [{"content": "__all__ = [\n \"MMDetBackboneConfig\",\n \"mmdet_configs_path\",\n \"param_groups\",\n \"MMDetBackboneConfig\",\n \"create_model_config\",\n]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.backbones import BackboneConfig\nfrom icevision.models.mmdet.download_configs import download_mmdet_configs\nfrom mmdet.models.detectors import *\nfrom mmcv import Config\nfrom mmdet.models.backbones.ssd_vgg import SSDVGG\nfrom mmdet.models.backbones.csp_darknet import CSPDarknet\nfrom mmdet.models.backbones.swin import SwinTransformer\nfrom mmdet.models.backbones.hourglass import HourglassNet\n\n\nmmdet_configs_path = download_mmdet_configs()\n\n\nclass MMDetBackboneConfig(BackboneConfig):\n def __init__(self, model_name, config_path, weights_url):\n self.model_name = model_name\n self.config_path = config_path\n self.weights_url = weights_url\n self.pretrained: bool\n\n def __call__(self, pretrained: bool = True) -> \"MMDetBackboneConfig\":\n self.pretrained = pretrained\n return self\n\n\ndef param_groups(model):\n body = model.backbone\n\n layers = []\n\n # add the backbone\n if isinstance(body, SSDVGG):\n layers += [body.features]\n elif isinstance(body, CSPDarknet):\n layers += [body.stem.conv.conv, body.stem.conv.bn]\n layers += [body.stage1, body.stage2, body.stage3, body.stage4]\n\n elif isinstance(body, HourglassNet):\n layers += [\n body.stem,\n body.hourglass_modules,\n body.inters,\n body.conv1x1s,\n body.out_convs,\n body.remap_convs,\n body.relu,\n ]\n\n elif isinstance(body, SwinTransformer):\n layers += [\n body.patch_embed.adap_padding,\n body.patch_embed.projection,\n body.patch_embed.norm,\n body.drop_after_pos,\n body.stages,\n ]\n # Swin backbone for two-stage detector has norm0 attribute\n if getattr(body, \"norm0\", False):\n layers += [body.norm0]\n\n layers += [body.norm1, body.norm2, body.norm3]\n else:\n layers += [nn.Sequential(body.conv1, body.bn1)]\n layers += [getattr(body, l) for l in body.res_layers]\n\n # add the neck module if it exists (DETR doesn't have a neck module)\n layers += [module for name, module in model.named_modules() if name == \"neck\"]\n\n # add the head\n if isinstance(model, SingleStageDetector):\n layers += [model.bbox_head]\n\n # YOLACT has mask_head and segm_head\n if getattr(model, \"mask_head\"):\n layers += 
[model.mask_head]\n if getattr(model, \"segm_head\"):\n layers += [model.segm_head]\n\n elif isinstance(model, TwoStageDetector):\n layers += [nn.Sequential(model.rpn_head, model.roi_head)]\n else:\n raise RuntimeError(\n \"{model} must inherit either from SingleStageDetector or TwoStageDetector class\"\n )\n\n _param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, _param_groups)\n return _param_groups\n\n\ndef create_model_config(\n backbone: MMDetBackboneConfig,\n pretrained: bool = True,\n checkpoints_path: Optional[Union[str, Path]] = \"checkpoints\",\n force_download=False,\n cfg_options=None,\n):\n\n model_name = backbone.model_name\n config_path = backbone.config_path\n weights_url = backbone.weights_url\n\n # download weights\n weights_path = None\n if pretrained and weights_url:\n save_dir = Path(checkpoints_path) / model_name\n save_dir.mkdir(exist_ok=True, parents=True)\n\n fname = Path(weights_url).name\n weights_path = save_dir / fname\n\n if not weights_path.exists() or force_download:\n download_url(url=weights_url, save_path=str(weights_path))\n\n cfg = Config.fromfile(config_path)\n\n if cfg_options is not None:\n cfg.merge_from_dict(cfg_options)\n\n return cfg, weights_path\n", "path": "icevision/models/mmdet/utils.py"}], "after_files": [{"content": "__all__ = [\n \"MMDetBackboneConfig\",\n \"mmdet_configs_path\",\n \"param_groups\",\n \"MMDetBackboneConfig\",\n \"create_model_config\",\n]\n\nfrom numpy import False_\nfrom icevision.imports import *\nfrom icevision.utils import *\nfrom icevision.backbones import BackboneConfig\nfrom icevision.models.mmdet.download_configs import download_mmdet_configs\nfrom mmdet.models.detectors import *\nfrom mmcv import Config\nfrom mmdet.models.backbones.ssd_vgg import SSDVGG\nfrom mmdet.models.backbones.csp_darknet import CSPDarknet\nfrom mmdet.models.backbones.swin import SwinTransformer\nfrom mmdet.models.backbones.hourglass import HourglassNet\n\n\nmmdet_configs_path = download_mmdet_configs()\n\n\nclass MMDetBackboneConfig(BackboneConfig):\n def __init__(self, model_name, config_path, weights_url):\n self.model_name = model_name\n self.config_path = config_path\n self.weights_url = weights_url\n self.pretrained: bool\n\n def __call__(self, pretrained: bool = True) -> \"MMDetBackboneConfig\":\n self.pretrained = pretrained\n return self\n\n\ndef param_groups(model):\n body = model.backbone\n\n layers = []\n\n # add the backbone\n if isinstance(body, SSDVGG):\n layers += [body.features]\n elif isinstance(body, CSPDarknet):\n layers += [body.stem.conv.conv, body.stem.conv.bn]\n layers += [body.stage1, body.stage2, body.stage3, body.stage4]\n\n elif isinstance(body, HourglassNet):\n layers += [\n body.stem,\n body.hourglass_modules,\n body.inters,\n body.conv1x1s,\n body.out_convs,\n body.remap_convs,\n body.relu,\n ]\n\n elif isinstance(body, SwinTransformer):\n layers += [\n body.patch_embed.adap_padding,\n body.patch_embed.projection,\n body.patch_embed.norm,\n body.drop_after_pos,\n body.stages,\n ]\n # Swin backbone for two-stage detector has norm0 attribute\n if getattr(body, \"norm0\", False):\n layers += [body.norm0]\n\n layers += [body.norm1, body.norm2, body.norm3]\n else:\n layers += [nn.Sequential(body.conv1, body.bn1)]\n layers += [getattr(body, l) for l in body.res_layers]\n\n # add the neck module if it exists (DETR doesn't have a neck module)\n layers += [module for name, module in model.named_modules() if name == \"neck\"]\n\n # add the head\n if isinstance(model, 
SingleStageDetector):\n layers += [model.bbox_head]\n\n # YOLACT has mask_head and segm_head\n if hasattr(model, \"mask_head\"):\n layers += [model.mask_head]\n if hasattr(model, \"segm_head\"):\n layers += [model.segm_head]\n\n elif isinstance(model, TwoStageDetector):\n layers += [nn.Sequential(model.rpn_head, model.roi_head)]\n else:\n raise RuntimeError(\n \"{model} must inherit either from SingleStageDetector or TwoStageDetector class\"\n )\n\n _param_groups = [list(layer.parameters()) for layer in layers]\n check_all_model_params_in_groups2(model, _param_groups)\n return _param_groups\n\n\ndef create_model_config(\n backbone: MMDetBackboneConfig,\n pretrained: bool = True,\n checkpoints_path: Optional[Union[str, Path]] = \"checkpoints\",\n force_download=False,\n cfg_options=None,\n):\n\n model_name = backbone.model_name\n config_path = backbone.config_path\n weights_url = backbone.weights_url\n\n # download weights\n weights_path = None\n if pretrained and weights_url:\n save_dir = Path(checkpoints_path) / model_name\n save_dir.mkdir(exist_ok=True, parents=True)\n\n fname = Path(weights_url).name\n weights_path = save_dir / fname\n\n if not weights_path.exists() or force_download:\n download_url(url=weights_url, save_path=str(weights_path))\n\n cfg = Config.fromfile(config_path)\n\n if cfg_options is not None:\n cfg.merge_from_dict(cfg_options)\n\n return cfg, weights_path\n", "path": "icevision/models/mmdet/utils.py"}]}
| 1,639 | 198 |
gh_patches_debug_32487
|
rasdani/github-patches
|
git_diff
|
PlasmaPy__PlasmaPy-2730
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expand examples in docstrings of `HarrisSheet` and its methods
Currently, the docstring for `HarrisSheet` ([permalink](https://github.com/PlasmaPy/PlasmaPy/blob/2c1ee2e74e86d9519d1a306a6f78413683ca9a02/src/plasmapy/plasma/equilibria1d.py#L12)) doesn't contain any examples. It would be helpful to add a simple working example that shows how to use it.
One possibility would be to adapt some of the [tests](https://github.com/PlasmaPy/PlasmaPy/blob/2c1ee2e74e86d9519d1a306a6f78413683ca9a02/tests/plasma/test_equilibria1d.py).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/plasmapy/plasma/equilibria1d.py`
Content:
```
1 """Functionality for representing one-dimensional equilibria."""
2
3 __all__ = ["HarrisSheet"]
4
5 import astropy.constants as const
6 import astropy.units as u
7 import numpy as np
8
9 from plasmapy.utils.decorators.validators import validate_quantities
10
11
12 class HarrisSheet:
13 r"""
14 Define a Harris Sheet Equilibrium.
15
16 Magnetic field will be in the :math:`±x` direction and the current
17 density will be in the :math:`±z` direction in a :math:`\hat{x} ×
18 \hat{y} = \hat{z}` coordinate system.
19
20 Parameters
21 ----------
22 B0 : `~astropy.units.Quantity`
23 Magnitude of magnetic field in the limit of :math:`y → ∞` in
24 units convertible to teslas.
25
26 delta : `~astropy.units.Quantity`
27 The thickness of the current sheet in units convertible to
28 meters.
29
30 P0 : `~astropy.units.Quantity`
31 The plasma pressure in the limit of :math:`y → ∞` in units
32 convertible to pascals.
33
34 Notes
35 -----
36 A current sheet is current limited to a surface.
37
38 A Harris sheet is a 1D ideal MHD equilibrium. In resistive MHD if
39 there is any resistivity, it won't be a true equilibrium since the
40 resistivity will gradually smooth the profile out over time.
41
42 A Harris sheet is often used as the initial condition for
43 simulations of magnetic reconnection.
44
45 Examples
46 --------
47 >>> import astropy.units as u
48 >>> harris_sheet = HarrisSheet(delta=3 * u.m, B0=2 * u.T)
49 >>> harris_sheet.magnetic_field(y=5 * u.m)
50 <Quantity 1.8622... T>
51 """
52
53 def __init__(self, B0, delta, P0=0 * u.Pa) -> None:
54 self.B0 = B0
55 self.delta = delta
56 self.P0 = P0
57
58 @validate_quantities
59 def magnetic_field(self, y: u.Quantity[u.m]) -> u.Quantity[u.T]:
60 r"""
61 Compute the magnetic field.
62
63 In this equation, :math:`B_0` is the asymptotic magnitude of the
64 magnetic field for :math:`y → ±∞` and :math:`δ` is the thickness
65 of the sheet.
66
67 .. math::
68
69 B_x(y) = B_0 \tanh \left( \frac{y}{δ} \right)
70
71 Parameters
72 ----------
73 y : `~astropy.units.Quantity`
74 Orthogonal distance from the current sheet center.
75 """
76 return self.B0 * np.tanh(u.rad * y / self.delta)
77
78 @validate_quantities
79 def current_density(self, y: u.Quantity[u.m]) -> u.Quantity[u.A / u.m**2]:
80 r"""
81 Compute the current density.
82
83 .. math::
84
85 J_z(y) = - \frac{B_0}{δ μ_0) \mathrm{sech}^2 \left( \frac{y}{δ} \right)
86
87 Parameters
88 ----------
89 y : `~astropy.units.Quantity`
90 Orthogonal distance from the current sheet center.
91 """
92 return (
93 -self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2
94 )
95
96 @validate_quantities
97 def plasma_pressure(self, y: u.Quantity[u.m]) -> u.Quantity[u.Pa]:
98 r"""
99 Compute plasma pressure.
100
101 .. math::
102
103 p(y) = \frac{B_0^2}{2 μ_0} \mathrm{sech}^2 \left( \frac{y}{δ} \right) + p_0
104
105 Parameters
106 ----------
107 y : `~astropy.units.Quantity`
108 Orthogonal distance from the current sheet center.
109 """
110 return (
111 self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)
112 + self.P0
113 )
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/plasmapy/plasma/equilibria1d.py b/src/plasmapy/plasma/equilibria1d.py
--- a/src/plasmapy/plasma/equilibria1d.py
+++ b/src/plasmapy/plasma/equilibria1d.py
@@ -72,6 +72,18 @@
----------
y : `~astropy.units.Quantity`
Orthogonal distance from the current sheet center.
+
+ Examples
+ --------
+ >>> import astropy.units as u
+ >>> B0 = 1 * u.T
+ >>> delta = 1 * u.m
+ >>> P0 = 0 * u.Pa
+ >>> hs = HarrisSheet(B0, delta, P0)
+ >>> y = [-2, 0, 2] * u.m
+ >>> hs.magnetic_field(y)
+ <Quantity [-0.96402758007, 0, 0.96402758007] T>
+
"""
return self.B0 * np.tanh(u.rad * y / self.delta)
@@ -88,6 +100,17 @@
----------
y : `~astropy.units.Quantity`
Orthogonal distance from the current sheet center.
+
+ Examples
+ --------
+ >>> import astropy.units as u
+ >>> B0 = 1 * u.T
+ >>> delta = 1 * u.m
+ >>> P0 = 0 * u.Pa
+ >>> hs = HarrisSheet(B0, delta, P0)
+ >>> y = [-2, 0, 2] * u.m
+ >>> hs.current_density(y)
+ <Quantity [-56222.1400445, -795774.715459, -56222.1400445] A / m2>
"""
return (
-self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2
@@ -106,6 +129,17 @@
----------
y : `~astropy.units.Quantity`
Orthogonal distance from the current sheet center.
+
+ Examples
+ --------
+ >>> import astropy.units as u
+ >>> B0 = 1 * u.T
+ >>> delta = 1 * u.m
+ >>> P0 = 0 * u.Pa
+ >>> hs = HarrisSheet(B0, delta, P0)
+ >>> y = [-2, 0, 2] * u.m
+ >>> hs.plasma_pressure(y)
+ <Quantity [28111.07, 397887.36, 28111.07] Pa>
"""
return (
self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)
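As a rough numerical cross-check of the doctest values added above — an illustrative sketch using plain NumPy and μ0 ≈ 4π×10⁻⁷ T·m/A rather than the `HarrisSheet` class itself, with values rounded:

```python
import numpy as np

B0, delta, P0 = 1.0, 1.0, 0.0        # T, m, Pa
mu0 = 4e-7 * np.pi                   # vacuum permeability (approximate)
y = np.array([-2.0, 0.0, 2.0])       # m

Bx = B0 * np.tanh(y / delta)                          # ~[-0.964, 0.0, 0.964] T
Jz = -B0 / (delta * mu0) / np.cosh(y / delta) ** 2    # ~[-5.6e4, -8.0e5, -5.6e4] A/m^2
p = B0**2 / (2 * mu0) / np.cosh(y / delta) ** 2 + P0  # ~[2.8e4, 4.0e5, 2.8e4] Pa

print(Bx, Jz, p, sep="\n")
```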
|
{"golden_diff": "diff --git a/src/plasmapy/plasma/equilibria1d.py b/src/plasmapy/plasma/equilibria1d.py\n--- a/src/plasmapy/plasma/equilibria1d.py\n+++ b/src/plasmapy/plasma/equilibria1d.py\n@@ -72,6 +72,18 @@\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n+\n+ Examples\n+ --------\n+ >>> import astropy.units as u\n+ >>> B0 = 1 * u.T\n+ >>> delta = 1 * u.m\n+ >>> P0 = 0 * u.Pa\n+ >>> hs = HarrisSheet(B0, delta, P0)\n+ >>> y = [-2, 0, 2] * u.m\n+ >>> hs.magnetic_field(y)\n+ <Quantity [-0.96402758007, 0, 0.96402758007] T>\n+\n \"\"\"\n return self.B0 * np.tanh(u.rad * y / self.delta)\n \n@@ -88,6 +100,17 @@\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n+\n+ Examples\n+ --------\n+ >>> import astropy.units as u\n+ >>> B0 = 1 * u.T\n+ >>> delta = 1 * u.m\n+ >>> P0 = 0 * u.Pa\n+ >>> hs = HarrisSheet(B0, delta, P0)\n+ >>> y = [-2, 0, 2] * u.m\n+ >>> hs.current_density(y)\n+ <Quantity [-56222.1400445, -795774.715459, -56222.1400445] A / m2>\n \"\"\"\n return (\n -self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2\n@@ -106,6 +129,17 @@\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n+\n+ Examples\n+ --------\n+ >>> import astropy.units as u\n+ >>> B0 = 1 * u.T\n+ >>> delta = 1 * u.m\n+ >>> P0 = 0 * u.Pa\n+ >>> hs = HarrisSheet(B0, delta, P0)\n+ >>> y = [-2, 0, 2] * u.m\n+ >>> hs.plasma_pressure(y)\n+ <Quantity [28111.07, 397887.36, 28111.07] Pa>\n \"\"\"\n return (\n self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)\n", "issue": "Expands examples in docstrings of `HarrisSheet` and its methods\nCurrently, the docstring for `HarrisSheet` ([permalink](https://github.com/PlasmaPy/PlasmaPy/blob/2c1ee2e74e86d9519d1a306a6f78413683ca9a02/src/plasmapy/plasma/equilibria1d.py#L12)) doesn't contain any examples. It would be helpful to add a simple working example that shows how to use it.\r\n\r\nOne possibility would be to adapt some of the [tests](https://github.com/PlasmaPy/PlasmaPy/blob/2c1ee2e74e86d9519d1a306a6f78413683ca9a02/tests/plasma/test_equilibria1d.py).\n", "before_files": [{"content": "\"\"\"Functionality for representing one-dimensional equilibria.\"\"\"\n\n__all__ = [\"HarrisSheet\"]\n\nimport astropy.constants as const\nimport astropy.units as u\nimport numpy as np\n\nfrom plasmapy.utils.decorators.validators import validate_quantities\n\n\nclass HarrisSheet:\n r\"\"\"\n Define a Harris Sheet Equilibrium.\n\n Magnetic field will be in the :math:`\u00b1x` direction and the current\n density will be in the :math:`\u00b1z` direction in a :math:`\\hat{x} \u00d7\n \\hat{y} = \\hat{z}` coordinate system.\n\n Parameters\n ----------\n B0 : `~astropy.units.Quantity`\n Magnitude of magnetic field in the limit of :math:`y \u2192 \u221e` in\n units convertible to teslas.\n\n delta : `~astropy.units.Quantity`\n The thickness of the current sheet in units convertible to\n meters.\n\n P0 : `~astropy.units.Quantity`\n The plasma pressure in the limit of :math:`y \u2192 \u221e` in units\n convertible to pascals.\n\n Notes\n -----\n A current sheet is current limited to a surface.\n\n A Harris sheet is a 1D ideal MHD equilibrium. 
In resistive MHD if\n there is any resistivity, it won't be a true equilibrium since the\n resistivity will gradually smooth the profile out over time.\n\n A Harris sheet is often used as the initial condition for\n simulations of magnetic reconnection.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> harris_sheet = HarrisSheet(delta=3 * u.m, B0=2 * u.T)\n >>> harris_sheet.magnetic_field(y=5 * u.m)\n <Quantity 1.8622... T>\n \"\"\"\n\n def __init__(self, B0, delta, P0=0 * u.Pa) -> None:\n self.B0 = B0\n self.delta = delta\n self.P0 = P0\n\n @validate_quantities\n def magnetic_field(self, y: u.Quantity[u.m]) -> u.Quantity[u.T]:\n r\"\"\"\n Compute the magnetic field.\n\n In this equation, :math:`B_0` is the asymptotic magnitude of the\n magnetic field for :math:`y \u2192 \u00b1\u221e` and :math:`\u03b4` is the thickness\n of the sheet.\n\n .. math::\n\n B_x(y) = B_0 \\tanh \\left( \\frac{y}{\u03b4} \\right)\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n \"\"\"\n return self.B0 * np.tanh(u.rad * y / self.delta)\n\n @validate_quantities\n def current_density(self, y: u.Quantity[u.m]) -> u.Quantity[u.A / u.m**2]:\n r\"\"\"\n Compute the current density.\n\n .. math::\n\n J_z(y) = - \\frac{B_0}{\u03b4 \u03bc_0) \\mathrm{sech}^2 \\left( \\frac{y}{\u03b4} \\right)\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n \"\"\"\n return (\n -self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2\n )\n\n @validate_quantities\n def plasma_pressure(self, y: u.Quantity[u.m]) -> u.Quantity[u.Pa]:\n r\"\"\"\n Compute plasma pressure.\n\n .. math::\n\n p(y) = \\frac{B_0^2}{2 \u03bc_0} \\mathrm{sech}^2 \\left( \\frac{y}{\u03b4} \\right) + p_0\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n \"\"\"\n return (\n self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)\n + self.P0\n )\n", "path": "src/plasmapy/plasma/equilibria1d.py"}], "after_files": [{"content": "\"\"\"Functionality for representing one-dimensional equilibria.\"\"\"\n\n__all__ = [\"HarrisSheet\"]\n\nimport astropy.constants as const\nimport astropy.units as u\nimport numpy as np\n\nfrom plasmapy.utils.decorators.validators import validate_quantities\n\n\nclass HarrisSheet:\n r\"\"\"\n Define a Harris Sheet Equilibrium.\n\n Magnetic field will be in the :math:`\u00b1x` direction and the current\n density will be in the :math:`\u00b1z` direction in a :math:`\\hat{x} \u00d7\n \\hat{y} = \\hat{z}` coordinate system.\n\n Parameters\n ----------\n B0 : `~astropy.units.Quantity`\n Magnitude of magnetic field in the limit of :math:`y \u2192 \u221e` in\n units convertible to teslas.\n\n delta : `~astropy.units.Quantity`\n The thickness of the current sheet in units convertible to\n meters.\n\n P0 : `~astropy.units.Quantity`\n The plasma pressure in the limit of :math:`y \u2192 \u221e` in units\n convertible to pascals.\n\n Notes\n -----\n A current sheet is current limited to a surface.\n\n A Harris sheet is a 1D ideal MHD equilibrium. 
In resistive MHD if\n there is any resistivity, it won't be a true equilibrium since the\n resistivity will gradually smooth the profile out over time.\n\n A Harris sheet is often used as the initial condition for\n simulations of magnetic reconnection.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> harris_sheet = HarrisSheet(delta=3 * u.m, B0=2 * u.T)\n >>> harris_sheet.magnetic_field(y=5 * u.m)\n <Quantity 1.8622... T>\n \"\"\"\n\n def __init__(self, B0, delta, P0=0 * u.Pa) -> None:\n self.B0 = B0\n self.delta = delta\n self.P0 = P0\n\n @validate_quantities\n def magnetic_field(self, y: u.Quantity[u.m]) -> u.Quantity[u.T]:\n r\"\"\"\n Compute the magnetic field.\n\n In this equation, :math:`B_0` is the asymptotic magnitude of the\n magnetic field for :math:`y \u2192 \u00b1\u221e` and :math:`\u03b4` is the thickness\n of the sheet.\n\n .. math::\n\n B_x(y) = B_0 \\tanh \\left( \\frac{y}{\u03b4} \\right)\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> B0 = 1 * u.T\n >>> delta = 1 * u.m\n >>> P0 = 0 * u.Pa\n >>> hs = HarrisSheet(B0, delta, P0)\n >>> y = [-2, 0, 2] * u.m\n >>> hs.magnetic_field(y)\n <Quantity [-0.96402758007, 0, 0.96402758007] T>\n\n \"\"\"\n return self.B0 * np.tanh(u.rad * y / self.delta)\n\n @validate_quantities\n def current_density(self, y: u.Quantity[u.m]) -> u.Quantity[u.A / u.m**2]:\n r\"\"\"\n Compute the current density.\n\n .. math::\n\n J_z(y) = - \\frac{B_0}{\u03b4 \u03bc_0) \\mathrm{sech}^2 \\left( \\frac{y}{\u03b4} \\right)\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> B0 = 1 * u.T\n >>> delta = 1 * u.m\n >>> P0 = 0 * u.Pa\n >>> hs = HarrisSheet(B0, delta, P0)\n >>> y = [-2, 0, 2] * u.m\n >>> hs.current_density(y)\n <Quantity [-56222.1400445, -795774.715459, -56222.1400445] A / m2>\n \"\"\"\n return (\n -self.B0 / (self.delta * const.mu0) * np.cosh(u.rad * y / self.delta) ** -2\n )\n\n @validate_quantities\n def plasma_pressure(self, y: u.Quantity[u.m]) -> u.Quantity[u.Pa]:\n r\"\"\"\n Compute plasma pressure.\n\n .. math::\n\n p(y) = \\frac{B_0^2}{2 \u03bc_0} \\mathrm{sech}^2 \\left( \\frac{y}{\u03b4} \\right) + p_0\n\n Parameters\n ----------\n y : `~astropy.units.Quantity`\n Orthogonal distance from the current sheet center.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> B0 = 1 * u.T\n >>> delta = 1 * u.m\n >>> P0 = 0 * u.Pa\n >>> hs = HarrisSheet(B0, delta, P0)\n >>> y = [-2, 0, 2] * u.m\n >>> hs.plasma_pressure(y)\n <Quantity [28111.07, 397887.36, 28111.07] Pa>\n \"\"\"\n return (\n self.B0**2 / (2 * const.mu0) * (np.cosh(u.rad * y / self.delta) ** -2)\n + self.P0\n )\n", "path": "src/plasmapy/plasma/equilibria1d.py"}]}
| 1,607 | 685 |
gh_patches_debug_26647
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-1989
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Proposal] Warn and/or fail if default admin's password hasn't been changed
Feature description
---------
Proposal: warn and/or fail if default `admin`'s password hasn't been changed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/api/management/commands/create_admin.py`
Content:
```
1 from django.contrib.auth.management.commands import createsuperuser
2 from django.core.management import CommandError
3
4
5 class Command(createsuperuser.Command):
6 help = "Non-interactively create an admin user"
7
8 def add_arguments(self, parser):
9 super().add_arguments(parser)
10 parser.add_argument("--password", default=None, help="The password for the admin.")
11
12 def handle(self, *args, **options):
13 password = options.get("password")
14 username = options.get("username")
15
16 if password and not username:
17 raise CommandError("--username is required if specifying --password")
18
19 try:
20 super().handle(*args, **options)
21 except Exception as err:
22 if "is already taken" in str(err):
23 self.stderr.write(f"User {username} already exists.")
24 else:
25 raise
26
27 if password:
28 database = options.get("database")
29 db = self.UserModel._default_manager.db_manager(database)
30 user = db.get(username=username)
31 user.set_password(password)
32 self.stderr.write(f"Setting password for User {username}.")
33 user.save()
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/backend/api/management/commands/create_admin.py b/backend/api/management/commands/create_admin.py
--- a/backend/api/management/commands/create_admin.py
+++ b/backend/api/management/commands/create_admin.py
@@ -13,9 +13,17 @@
password = options.get("password")
username = options.get("username")
- if password and not username:
+ if not username:
+ self.stderr.write("Error: Blank username isn't allowed.")
raise CommandError("--username is required if specifying --password")
+ if not password:
+ self.stderr.write("Error: Blank password isn't allowed.")
+ raise CommandError("--password is required")
+
+ if password == "password":
+ self.stdout.write(self.style.WARNING("Warning: You should change the default password."))
+
try:
super().handle(*args, **options)
except Exception as err:
@@ -24,10 +32,10 @@
else:
raise
- if password:
- database = options.get("database")
- db = self.UserModel._default_manager.db_manager(database)
- user = db.get(username=username)
- user.set_password(password)
- self.stderr.write(f"Setting password for User {username}.")
- user.save()
+ database = options.get("database")
+ db = self.UserModel._default_manager.db_manager(database)
+ user = db.get(username=username)
+ user.set_password(password)
+ message = f"Setting password for User {username}."
+ self.stdout.write(self.style.SUCCESS(message))
+ user.save()
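Stripped of the Django plumbing, the validation this patch adds reduces to a few ordered checks; the sketch below is a standalone approximation (the real command subclasses `createsuperuser.Command` and reports through `self.stdout`/`self.stderr` with styled messages):

```python
# Standalone approximation of the patched create_admin checks.
def validate_admin_args(username: str, password: str) -> None:
    if not username:
        raise ValueError("Error: Blank username isn't allowed.")
    if not password:
        raise ValueError("Error: Blank password isn't allowed.")
    if password == "password":
        print("Warning: You should change the default password.")
    print(f"Setting password for User {username}.")

validate_admin_args("admin", "password")  # succeeds, but warns about the default password
validate_admin_args("admin", "s3cret!")   # succeeds without a warning
# validate_admin_args("admin", "")        # would raise: blank password isn't allowed
```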
|
{"golden_diff": "diff --git a/backend/api/management/commands/create_admin.py b/backend/api/management/commands/create_admin.py\n--- a/backend/api/management/commands/create_admin.py\n+++ b/backend/api/management/commands/create_admin.py\n@@ -13,9 +13,17 @@\n password = options.get(\"password\")\n username = options.get(\"username\")\n \n- if password and not username:\n+ if not username:\n+ self.stderr.write(\"Error: Blank username isn't allowed.\")\n raise CommandError(\"--username is required if specifying --password\")\n \n+ if not password:\n+ self.stderr.write(\"Error: Blank password isn't allowed.\")\n+ raise CommandError(\"--password is required\")\n+\n+ if password == \"password\":\n+ self.stdout.write(self.style.WARNING(\"Warning: You should change the default password.\"))\n+\n try:\n super().handle(*args, **options)\n except Exception as err:\n@@ -24,10 +32,10 @@\n else:\n raise\n \n- if password:\n- database = options.get(\"database\")\n- db = self.UserModel._default_manager.db_manager(database)\n- user = db.get(username=username)\n- user.set_password(password)\n- self.stderr.write(f\"Setting password for User {username}.\")\n- user.save()\n+ database = options.get(\"database\")\n+ db = self.UserModel._default_manager.db_manager(database)\n+ user = db.get(username=username)\n+ user.set_password(password)\n+ message = f\"Setting password for User {username}.\"\n+ self.stdout.write(self.style.SUCCESS(message))\n+ user.save()\n", "issue": "[Proposal] Warn and/or fail if default admin's password hasn't been changed\nFeature description\r\n---------\r\nProposal: warn and/or fail if default `admin`'s password hasn't been changed.\n", "before_files": [{"content": "from django.contrib.auth.management.commands import createsuperuser\nfrom django.core.management import CommandError\n\n\nclass Command(createsuperuser.Command):\n help = \"Non-interactively create an admin user\"\n\n def add_arguments(self, parser):\n super().add_arguments(parser)\n parser.add_argument(\"--password\", default=None, help=\"The password for the admin.\")\n\n def handle(self, *args, **options):\n password = options.get(\"password\")\n username = options.get(\"username\")\n\n if password and not username:\n raise CommandError(\"--username is required if specifying --password\")\n\n try:\n super().handle(*args, **options)\n except Exception as err:\n if \"is already taken\" in str(err):\n self.stderr.write(f\"User {username} already exists.\")\n else:\n raise\n\n if password:\n database = options.get(\"database\")\n db = self.UserModel._default_manager.db_manager(database)\n user = db.get(username=username)\n user.set_password(password)\n self.stderr.write(f\"Setting password for User {username}.\")\n user.save()\n", "path": "backend/api/management/commands/create_admin.py"}], "after_files": [{"content": "from django.contrib.auth.management.commands import createsuperuser\nfrom django.core.management import CommandError\n\n\nclass Command(createsuperuser.Command):\n help = \"Non-interactively create an admin user\"\n\n def add_arguments(self, parser):\n super().add_arguments(parser)\n parser.add_argument(\"--password\", default=None, help=\"The password for the admin.\")\n\n def handle(self, *args, **options):\n password = options.get(\"password\")\n username = options.get(\"username\")\n\n if not username:\n self.stderr.write(\"Error: Blank username isn't allowed.\")\n raise CommandError(\"--username is required if specifying --password\")\n\n if not password:\n self.stderr.write(\"Error: Blank password isn't 
allowed.\")\n raise CommandError(\"--password is required\")\n\n if password == \"password\":\n self.stdout.write(self.style.WARNING(\"Warning: You should change the default password.\"))\n\n try:\n super().handle(*args, **options)\n except Exception as err:\n if \"is already taken\" in str(err):\n self.stderr.write(f\"User {username} already exists.\")\n else:\n raise\n\n database = options.get(\"database\")\n db = self.UserModel._default_manager.db_manager(database)\n user = db.get(username=username)\n user.set_password(password)\n message = f\"Setting password for User {username}.\"\n self.stdout.write(self.style.SUCCESS(message))\n user.save()\n", "path": "backend/api/management/commands/create_admin.py"}]}
| 595 | 350 |
gh_patches_debug_21351
|
rasdani/github-patches
|
git_diff
|
microsoft__ptvsd-343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Running program to completion now prints `Terminated` in the console window
* Python program contains `print("Hello world")`
* Debug the above code without any breakpoints
* I.e. let it run to completion
This did not happen in the past.
The output is as follows:
```
pydev debugger: New process is launching (breakpoints won't work in the new process).
pydev debugger: To debug that process please enable 'Attach to subprocess automatically while debugging?' option in the debugger settings.
a
Terminated
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/daemon.py`
Content:
```
1 import atexit
2 import os
3 import platform
4 import signal
5 import sys
6
7 from ptvsd import wrapper
8 from ptvsd.socket import close_socket
9
10
11 def _wait_on_exit():
12 if sys.__stdout__ is not None:
13 try:
14 import msvcrt
15 except ImportError:
16 sys.__stdout__.write('Press Enter to continue . . . ')
17 sys.__stdout__.flush()
18 sys.__stdin__.read(1)
19 else:
20 sys.__stdout__.write('Press any key to continue . . . ')
21 sys.__stdout__.flush()
22 msvcrt.getch()
23
24
25 class DaemonClosedError(RuntimeError):
26 """Indicates that a Daemon was unexpectedly closed."""
27 def __init__(self, msg='closed'):
28 super(DaemonClosedError, self).__init__(msg)
29
30
31 class Daemon(object):
32 """The process-level manager for the VSC protocol debug adapter."""
33
34 exitcode = 0
35
36 def __init__(self, wait_on_exit=_wait_on_exit,
37 addhandlers=True, killonclose=True):
38 self.wait_on_exit = wait_on_exit
39 self.addhandlers = addhandlers
40 self.killonclose = killonclose
41
42 self._closed = False
43
44 self._pydevd = None
45 self._server = None
46 self._client = None
47 self._adapter = None
48
49 @property
50 def pydevd(self):
51 return self._pydevd
52
53 @property
54 def server(self):
55 return self._server
56
57 @property
58 def client(self):
59 return self._client
60
61 @property
62 def adapter(self):
63 return self._adapter
64
65 def start(self, server=None):
66 """Return the "socket" to use for pydevd after setting it up."""
67 if self._closed:
68 raise DaemonClosedError()
69 if self._pydevd is not None:
70 raise RuntimeError('already started')
71 self._pydevd = wrapper.PydevdSocket(
72 self._handle_pydevd_message,
73 self._handle_pydevd_close,
74 self._getpeername,
75 self._getsockname,
76 )
77 self._server = server
78 return self._pydevd
79
80 def set_connection(self, client):
81 """Set the client socket to use for the debug adapter.
82
83 A VSC message loop is started for the client.
84 """
85 if self._closed:
86 raise DaemonClosedError()
87 if self._pydevd is None:
88 raise RuntimeError('not started yet')
89 if self._client is not None:
90 raise RuntimeError('connection already set')
91 self._client = client
92
93 self._adapter = wrapper.VSCodeMessageProcessor(
94 client,
95 self._pydevd.pydevd_notify,
96 self._pydevd.pydevd_request,
97 self._handle_vsc_disconnect,
98 self._handle_vsc_close,
99 )
100 name = 'ptvsd.Client' if self._server is None else 'ptvsd.Server'
101 self._adapter.start(name)
102 if self.addhandlers:
103 self._add_atexit_handler()
104 self._set_signal_handlers()
105 return self._adapter
106
107 def close(self):
108 """Stop all loops and release all resources."""
109 if self._closed:
110 raise DaemonClosedError('already closed')
111 self._closed = True
112
113 if self._adapter is not None:
114 normal, abnormal = self._adapter._wait_options()
115 if (normal and not self.exitcode) or (abnormal and self.exitcode):
116 self.wait_on_exit()
117
118 if self._pydevd is not None:
119 close_socket(self._pydevd)
120 if self._client is not None:
121 self._release_connection()
122
123 # internal methods
124
125 def _add_atexit_handler(self):
126 def handler():
127 if not self._closed:
128 self.close()
129 if self._adapter is not None:
130 # TODO: Do this in VSCodeMessageProcessor.close()?
131 self._adapter._wait_for_server_thread()
132 atexit.register(handler)
133
134 def _set_signal_handlers(self):
135 if platform.system() == 'Windows':
136 return None
137
138 def handler(signum, frame):
139 if not self._closed:
140 self.close()
141 sys.exit(0)
142 signal.signal(signal.SIGHUP, handler)
143
144 def _release_connection(self):
145 if self._adapter is not None:
146 # TODO: This is not correct in the "attach" case.
147 self._adapter.handle_pydevd_stopped(self.exitcode)
148 self._adapter.close()
149 close_socket(self._client)
150
151 # internal methods for PyDevdSocket().
152
153 def _handle_pydevd_message(self, cmdid, seq, text):
154 if self._adapter is not None:
155 self._adapter.on_pydevd_event(cmdid, seq, text)
156
157 def _handle_pydevd_close(self):
158 if self._closed:
159 return
160 self.close()
161
162 def _getpeername(self):
163 if self._client is None:
164 raise NotImplementedError
165 return self._client.getpeername()
166
167 def _getsockname(self):
168 if self._client is None:
169 raise NotImplementedError
170 return self._client.getsockname()
171
172 # internal methods for VSCodeMessageProcessor
173
174 def _handle_vsc_disconnect(self, kill=False):
175 if not self._closed:
176 self.close()
177 if kill and self.killonclose:
178 os.kill(os.getpid(), signal.SIGTERM)
179
180 def _handle_vsc_close(self):
181 if self._closed:
182 return
183 self.close()
184
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ptvsd/daemon.py b/ptvsd/daemon.py
--- a/ptvsd/daemon.py
+++ b/ptvsd/daemon.py
@@ -40,6 +40,7 @@
self.killonclose = killonclose
self._closed = False
+ self._exiting_via_atexit_handler = False
self._pydevd = None
self._server = None
@@ -124,6 +125,7 @@
def _add_atexit_handler(self):
def handler():
+ self._exiting_via_atexit_handler = True
if not self._closed:
self.close()
if self._adapter is not None:
@@ -174,7 +176,7 @@
def _handle_vsc_disconnect(self, kill=False):
if not self._closed:
self.close()
- if kill and self.killonclose:
+ if kill and self.killonclose and not self._exiting_via_atexit_handler:
os.kill(os.getpid(), signal.SIGTERM)
def _handle_vsc_close(self):
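The core of the fix is a guard flag: when the interpreter is already running its `atexit` handlers (a normal run to completion), the disconnect path must not SIGTERM its own process — that self-kill is the usual source of the shell's stray "Terminated" line. A minimal, self-contained sketch of the pattern, with names mirroring the diff and the rest of the daemon omitted:

```python
import atexit
import os
import signal


class MiniDaemon:
    def __init__(self):
        self._exiting_via_atexit_handler = False
        atexit.register(self._atexit_handler)

    def _atexit_handler(self):
        # Normal interpreter shutdown: remember it so the disconnect path below
        # does not kill a process that is already exiting cleanly.
        self._exiting_via_atexit_handler = True

    def handle_disconnect(self, kill=True):
        if kill and not self._exiting_via_atexit_handler:
            # Only reached for a genuine early disconnect (e.g. the IDE detached).
            os.kill(os.getpid(), signal.SIGTERM)


daemon = MiniDaemon()
daemon.handle_disconnect(kill=False)  # demo call; a real kill would end this process
```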
|
{"golden_diff": "diff --git a/ptvsd/daemon.py b/ptvsd/daemon.py\n--- a/ptvsd/daemon.py\n+++ b/ptvsd/daemon.py\n@@ -40,6 +40,7 @@\n self.killonclose = killonclose\n \n self._closed = False\n+ self._exiting_via_atexit_handler = False\n \n self._pydevd = None\n self._server = None\n@@ -124,6 +125,7 @@\n \n def _add_atexit_handler(self):\n def handler():\n+ self._exiting_via_atexit_handler = True\n if not self._closed:\n self.close()\n if self._adapter is not None:\n@@ -174,7 +176,7 @@\n def _handle_vsc_disconnect(self, kill=False):\n if not self._closed:\n self.close()\n- if kill and self.killonclose:\n+ if kill and self.killonclose and not self._exiting_via_atexit_handler:\n os.kill(os.getpid(), signal.SIGTERM)\n \n def _handle_vsc_close(self):\n", "issue": "Running program to completion now prints `Terminated` in the console window\n* Python program contains `print(\"Hello world\")`\r\n* Debug the above code without any breakpoints\r\n* I.e. let it run to completion\r\n\r\nThis did not happen in the past.\r\n\r\nThe output is as follows:\r\n```\r\npydev debugger: New process is launching (breakpoints won't work in the new process).\r\npydev debugger: To debug that process please enable 'Attach to subprocess automatically while debugging?' option in the debugger settings.\r\n\r\na\r\nTerminated\r\n```\n", "before_files": [{"content": "import atexit\nimport os\nimport platform\nimport signal\nimport sys\n\nfrom ptvsd import wrapper\nfrom ptvsd.socket import close_socket\n\n\ndef _wait_on_exit():\n if sys.__stdout__ is not None:\n try:\n import msvcrt\n except ImportError:\n sys.__stdout__.write('Press Enter to continue . . . ')\n sys.__stdout__.flush()\n sys.__stdin__.read(1)\n else:\n sys.__stdout__.write('Press any key to continue . . . ')\n sys.__stdout__.flush()\n msvcrt.getch()\n\n\nclass DaemonClosedError(RuntimeError):\n \"\"\"Indicates that a Daemon was unexpectedly closed.\"\"\"\n def __init__(self, msg='closed'):\n super(DaemonClosedError, self).__init__(msg)\n\n\nclass Daemon(object):\n \"\"\"The process-level manager for the VSC protocol debug adapter.\"\"\"\n\n exitcode = 0\n\n def __init__(self, wait_on_exit=_wait_on_exit,\n addhandlers=True, killonclose=True):\n self.wait_on_exit = wait_on_exit\n self.addhandlers = addhandlers\n self.killonclose = killonclose\n\n self._closed = False\n\n self._pydevd = None\n self._server = None\n self._client = None\n self._adapter = None\n\n @property\n def pydevd(self):\n return self._pydevd\n\n @property\n def server(self):\n return self._server\n\n @property\n def client(self):\n return self._client\n\n @property\n def adapter(self):\n return self._adapter\n\n def start(self, server=None):\n \"\"\"Return the \"socket\" to use for pydevd after setting it up.\"\"\"\n if self._closed:\n raise DaemonClosedError()\n if self._pydevd is not None:\n raise RuntimeError('already started')\n self._pydevd = wrapper.PydevdSocket(\n self._handle_pydevd_message,\n self._handle_pydevd_close,\n self._getpeername,\n self._getsockname,\n )\n self._server = server\n return self._pydevd\n\n def set_connection(self, client):\n \"\"\"Set the client socket to use for the debug adapter.\n\n A VSC message loop is started for the client.\n \"\"\"\n if self._closed:\n raise DaemonClosedError()\n if self._pydevd is None:\n raise RuntimeError('not started yet')\n if self._client is not None:\n raise RuntimeError('connection already set')\n self._client = client\n\n self._adapter = wrapper.VSCodeMessageProcessor(\n client,\n self._pydevd.pydevd_notify,\n 
self._pydevd.pydevd_request,\n self._handle_vsc_disconnect,\n self._handle_vsc_close,\n )\n name = 'ptvsd.Client' if self._server is None else 'ptvsd.Server'\n self._adapter.start(name)\n if self.addhandlers:\n self._add_atexit_handler()\n self._set_signal_handlers()\n return self._adapter\n\n def close(self):\n \"\"\"Stop all loops and release all resources.\"\"\"\n if self._closed:\n raise DaemonClosedError('already closed')\n self._closed = True\n\n if self._adapter is not None:\n normal, abnormal = self._adapter._wait_options()\n if (normal and not self.exitcode) or (abnormal and self.exitcode):\n self.wait_on_exit()\n\n if self._pydevd is not None:\n close_socket(self._pydevd)\n if self._client is not None:\n self._release_connection()\n\n # internal methods\n\n def _add_atexit_handler(self):\n def handler():\n if not self._closed:\n self.close()\n if self._adapter is not None:\n # TODO: Do this in VSCodeMessageProcessor.close()?\n self._adapter._wait_for_server_thread()\n atexit.register(handler)\n\n def _set_signal_handlers(self):\n if platform.system() == 'Windows':\n return None\n\n def handler(signum, frame):\n if not self._closed:\n self.close()\n sys.exit(0)\n signal.signal(signal.SIGHUP, handler)\n\n def _release_connection(self):\n if self._adapter is not None:\n # TODO: This is not correct in the \"attach\" case.\n self._adapter.handle_pydevd_stopped(self.exitcode)\n self._adapter.close()\n close_socket(self._client)\n\n # internal methods for PyDevdSocket().\n\n def _handle_pydevd_message(self, cmdid, seq, text):\n if self._adapter is not None:\n self._adapter.on_pydevd_event(cmdid, seq, text)\n\n def _handle_pydevd_close(self):\n if self._closed:\n return\n self.close()\n\n def _getpeername(self):\n if self._client is None:\n raise NotImplementedError\n return self._client.getpeername()\n\n def _getsockname(self):\n if self._client is None:\n raise NotImplementedError\n return self._client.getsockname()\n\n # internal methods for VSCodeMessageProcessor\n\n def _handle_vsc_disconnect(self, kill=False):\n if not self._closed:\n self.close()\n if kill and self.killonclose:\n os.kill(os.getpid(), signal.SIGTERM)\n\n def _handle_vsc_close(self):\n if self._closed:\n return\n self.close()\n", "path": "ptvsd/daemon.py"}], "after_files": [{"content": "import atexit\nimport os\nimport platform\nimport signal\nimport sys\n\nfrom ptvsd import wrapper\nfrom ptvsd.socket import close_socket\n\n\ndef _wait_on_exit():\n if sys.__stdout__ is not None:\n try:\n import msvcrt\n except ImportError:\n sys.__stdout__.write('Press Enter to continue . . . ')\n sys.__stdout__.flush()\n sys.__stdin__.read(1)\n else:\n sys.__stdout__.write('Press any key to continue . . . 
')\n sys.__stdout__.flush()\n msvcrt.getch()\n\n\nclass DaemonClosedError(RuntimeError):\n \"\"\"Indicates that a Daemon was unexpectedly closed.\"\"\"\n def __init__(self, msg='closed'):\n super(DaemonClosedError, self).__init__(msg)\n\n\nclass Daemon(object):\n \"\"\"The process-level manager for the VSC protocol debug adapter.\"\"\"\n\n exitcode = 0\n\n def __init__(self, wait_on_exit=_wait_on_exit,\n addhandlers=True, killonclose=True):\n self.wait_on_exit = wait_on_exit\n self.addhandlers = addhandlers\n self.killonclose = killonclose\n\n self._closed = False\n self._exiting_via_atexit_handler = False\n\n self._pydevd = None\n self._server = None\n self._client = None\n self._adapter = None\n\n @property\n def pydevd(self):\n return self._pydevd\n\n @property\n def server(self):\n return self._server\n\n @property\n def client(self):\n return self._client\n\n @property\n def adapter(self):\n return self._adapter\n\n def start(self, server=None):\n \"\"\"Return the \"socket\" to use for pydevd after setting it up.\"\"\"\n if self._closed:\n raise DaemonClosedError()\n if self._pydevd is not None:\n raise RuntimeError('already started')\n self._pydevd = wrapper.PydevdSocket(\n self._handle_pydevd_message,\n self._handle_pydevd_close,\n self._getpeername,\n self._getsockname,\n )\n self._server = server\n return self._pydevd\n\n def set_connection(self, client):\n \"\"\"Set the client socket to use for the debug adapter.\n\n A VSC message loop is started for the client.\n \"\"\"\n if self._closed:\n raise DaemonClosedError()\n if self._pydevd is None:\n raise RuntimeError('not started yet')\n if self._client is not None:\n raise RuntimeError('connection already set')\n self._client = client\n\n self._adapter = wrapper.VSCodeMessageProcessor(\n client,\n self._pydevd.pydevd_notify,\n self._pydevd.pydevd_request,\n self._handle_vsc_disconnect,\n self._handle_vsc_close,\n )\n name = 'ptvsd.Client' if self._server is None else 'ptvsd.Server'\n self._adapter.start(name)\n if self.addhandlers:\n self._add_atexit_handler()\n self._set_signal_handlers()\n return self._adapter\n\n def close(self):\n \"\"\"Stop all loops and release all resources.\"\"\"\n if self._closed:\n raise DaemonClosedError('already closed')\n self._closed = True\n\n if self._adapter is not None:\n normal, abnormal = self._adapter._wait_options()\n if (normal and not self.exitcode) or (abnormal and self.exitcode):\n self.wait_on_exit()\n\n if self._pydevd is not None:\n close_socket(self._pydevd)\n if self._client is not None:\n self._release_connection()\n\n # internal methods\n\n def _add_atexit_handler(self):\n def handler():\n self._exiting_via_atexit_handler = True\n if not self._closed:\n self.close()\n if self._adapter is not None:\n # TODO: Do this in VSCodeMessageProcessor.close()?\n self._adapter._wait_for_server_thread()\n atexit.register(handler)\n\n def _set_signal_handlers(self):\n if platform.system() == 'Windows':\n return None\n\n def handler(signum, frame):\n if not self._closed:\n self.close()\n sys.exit(0)\n signal.signal(signal.SIGHUP, handler)\n\n def _release_connection(self):\n if self._adapter is not None:\n # TODO: This is not correct in the \"attach\" case.\n self._adapter.handle_pydevd_stopped(self.exitcode)\n self._adapter.close()\n close_socket(self._client)\n\n # internal methods for PyDevdSocket().\n\n def _handle_pydevd_message(self, cmdid, seq, text):\n if self._adapter is not None:\n self._adapter.on_pydevd_event(cmdid, seq, text)\n\n def _handle_pydevd_close(self):\n if self._closed:\n 
return\n self.close()\n\n def _getpeername(self):\n if self._client is None:\n raise NotImplementedError\n return self._client.getpeername()\n\n def _getsockname(self):\n if self._client is None:\n raise NotImplementedError\n return self._client.getsockname()\n\n # internal methods for VSCodeMessageProcessor\n\n def _handle_vsc_disconnect(self, kill=False):\n if not self._closed:\n self.close()\n if kill and self.killonclose and not self._exiting_via_atexit_handler:\n os.kill(os.getpid(), signal.SIGTERM)\n\n def _handle_vsc_close(self):\n if self._closed:\n return\n self.close()\n", "path": "ptvsd/daemon.py"}]}
| 2,024 | 247 |
gh_patches_debug_40653
|
rasdani/github-patches
|
git_diff
|
crytic__slither-240
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improvements on solc-version detectors
The allowed version is out of date:
https://github.com/crytic/slither/blob/0891f9a8a5e5e096084476e4b2bd292c3685f251/slither/detectors/attributes/incorrect_solc.py#L39
Due to the frequent solc releases, we might want to change the logic to allow future releases.
Additionally:
- 0.5.5 should not be used: https://twitter.com/ethchris/status/1105903546602528768
- the wiki link is incorrect
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/detectors/attributes/incorrect_solc.py`
Content:
```
1 """
2 Check if an incorrect version of solc is used
3 """
4
5 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
6 import re
7
8 # group:
9 # 0: ^ > >= < <= (optional)
10 # 1: ' ' (optional)
11 # 2: version number
12 # 3: version number
13 # 4: version number
14 PATTERN = re.compile('(\^|>|>=|<|<=)?([ ]+)?(\d+)\.(\d+)\.(\d+)')
15
16 class IncorrectSolc(AbstractDetector):
17 """
18 Check if an old version of solc is used
19 """
20
21 ARGUMENT = 'solc-version'
22 HELP = 'Incorrect Solidity version (< 0.4.24 or complex pragma)'
23 IMPACT = DetectorClassification.INFORMATIONAL
24 CONFIDENCE = DetectorClassification.HIGH
25
26 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-version-of-solidity'
27
28 WIKI_TITLE = 'Incorrect versions of Solidity'
29 WIKI_DESCRIPTION = '''
30 Solc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.
31 We recommend avoiding complex pragma statement.'''
32 WIKI_RECOMMENDATION = 'Use Solidity 0.4.25 or 0.5.2.'
33
34 COMPLEX_PRAGMA = "is too complex"
35 OLD_VERSION = "allows old versions"
36 LESS_THAN = "uses lesser than"
37
38 # Indicates the allowed versions.
39 ALLOWED_VERSIONS = ["0.4.24", "0.4.25", "0.5.2", "0.5.3"]
40
41 def _check_version(self, version):
42 op = version[0]
43 if op and not op in ['>', '>=', '^']:
44 return self.LESS_THAN
45 version_number = '.'.join(version[2:])
46 if version_number not in self.ALLOWED_VERSIONS:
47 return self.OLD_VERSION
48 return None
49
50 def _check_pragma(self, version):
51 versions = PATTERN.findall(version)
52 if len(versions) == 1:
53 version = versions[0]
54 return self._check_version(version)
55 elif len(versions) == 2:
56 version_left = versions[0]
57 version_right = versions[1]
58 # Only allow two elements if the second one is
59 # <0.5.0 or <0.6.0
60 if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:
61 return self.COMPLEX_PRAGMA
62 return self._check_version(version_left)
63 else:
64 return self.COMPLEX_PRAGMA
65 def _detect(self):
66 """
67 Detects pragma statements that allow for outdated solc versions.
68 :return: Returns the relevant JSON data for the findings.
69 """
70 # Detect all version related pragmas and check if they are disallowed.
71 results = []
72 pragma = self.slither.pragma_directives
73 disallowed_pragmas = []
74 detected_version = False
75 for p in pragma:
76 # Skip any pragma directives which do not refer to version
77 if len(p.directive) < 1 or p.directive[0] != "solidity":
78 continue
79
80 # This is version, so we test if this is disallowed.
81 detected_version = True
82 reason = self._check_pragma(p.version)
83 if reason:
84 disallowed_pragmas.append((reason, p))
85
86 # If we found any disallowed pragmas, we output our findings.
87 if disallowed_pragmas:
88 for (reason, p) in disallowed_pragmas:
89 info = f"Pragma version \"{p.version}\" {reason} ({p.source_mapping_str})\n"
90
91 json = self.generate_json_result(info)
92 self.add_pragma_to_json(p, json)
93 results.append(json)
94
95 return results
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py
--- a/slither/detectors/attributes/incorrect_solc.py
+++ b/slither/detectors/attributes/incorrect_solc.py
@@ -23,31 +23,43 @@
IMPACT = DetectorClassification.INFORMATIONAL
CONFIDENCE = DetectorClassification.HIGH
- WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-version-of-solidity'
+ WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity'
WIKI_TITLE = 'Incorrect versions of Solidity'
WIKI_DESCRIPTION = '''
Solc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.
We recommend avoiding complex pragma statement.'''
- WIKI_RECOMMENDATION = 'Use Solidity 0.4.25 or 0.5.2.'
+ WIKI_RECOMMENDATION = '''
+Use Solidity 0.4.25 or 0.5.3. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''
- COMPLEX_PRAGMA = "is too complex"
- OLD_VERSION = "allows old versions"
- LESS_THAN = "uses lesser than"
+ COMPLEX_PRAGMA_TXT = "is too complex"
+ OLD_VERSION_TXT = "allows old versions"
+ LESS_THAN_TXT = "uses lesser than"
+
+ TOO_RECENT_VERSION_TXT = "necessitates versions too recent to be trusted. Consider deploying with 0.5.3"
+ BUGGY_VERSION_TXT = "is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)"
# Indicates the allowed versions.
- ALLOWED_VERSIONS = ["0.4.24", "0.4.25", "0.5.2", "0.5.3"]
+ ALLOWED_VERSIONS = ["0.4.25", "0.4.26", "0.5.3"]
+ # Indicates the versions too recent.
+ TOO_RECENT_VERSIONS = ["0.5.4", "0.5.7", "0.5.8", "0.5.9", "0.5.10"]
+ # Indicates the versions that should not be used.
+ BUGGY_VERSIONS = ["0.4.22", "0.5.5", "0.5.6", "^0.4.22", "^0.5.5", "^0.5.6"]
def _check_version(self, version):
op = version[0]
if op and not op in ['>', '>=', '^']:
- return self.LESS_THAN
+ return self.LESS_THAN_TXT
version_number = '.'.join(version[2:])
if version_number not in self.ALLOWED_VERSIONS:
- return self.OLD_VERSION
+ if version_number in self.TOO_RECENT_VERSIONS:
+ return self.TOO_RECENT_VERSION_TXT
+ return self.OLD_VERSION_TXT
return None
def _check_pragma(self, version):
+ if version in self.BUGGY_VERSIONS:
+ return self.BUGGY_VERSION_TXT
versions = PATTERN.findall(version)
if len(versions) == 1:
version = versions[0]
@@ -58,10 +70,10 @@
# Only allow two elements if the second one is
# <0.5.0 or <0.6.0
if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:
- return self.COMPLEX_PRAGMA
+ return self.COMPLEX_PRAGMA_TXT
return self._check_version(version_left)
else:
- return self.COMPLEX_PRAGMA
+ return self.COMPLEX_PRAGMA_TXT
def _detect(self):
"""
Detects pragma statements that allow for outdated solc versions.
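To see what the patched detector accepts in practice, its classification logic can be exercised on its own. The regex and version lists below are copied from the diff; the two-part `>=x <y` range handling is collapsed into the 'too complex' bucket for brevity:

```python
import re

PATTERN = re.compile(r'(\^|>|>=|<|<=)?([ ]+)?(\d+)\.(\d+)\.(\d+)')
ALLOWED = {"0.4.25", "0.4.26", "0.5.3"}
TOO_RECENT = {"0.5.4", "0.5.7", "0.5.8", "0.5.9", "0.5.10"}
BUGGY = {"0.4.22", "0.5.5", "0.5.6", "^0.4.22", "^0.5.5", "^0.5.6"}

def classify(pragma: str) -> str:
    if pragma in BUGGY:
        return "buggy version"
    versions = PATTERN.findall(pragma)
    if len(versions) != 1:          # the real detector special-cases two-part ranges
        return "too complex"
    op, _, *nums = versions[0]
    if op and op not in ('>', '>=', '^'):
        return "uses lesser than"
    number = '.'.join(nums)
    if number in ALLOWED:
        return "ok"
    return "too recent" if number in TOO_RECENT else "allows old versions"

for pragma in ("^0.4.24", "0.5.3", "^0.5.5", "<0.5.0", "0.5.8"):
    print(pragma, "->", classify(pragma))
# ^0.4.24 -> allows old versions, 0.5.3 -> ok, ^0.5.5 -> buggy version,
# <0.5.0 -> uses lesser than, 0.5.8 -> too recent
```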
|
{"golden_diff": "diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py\n--- a/slither/detectors/attributes/incorrect_solc.py\n+++ b/slither/detectors/attributes/incorrect_solc.py\n@@ -23,31 +23,43 @@\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n \n- WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-version-of-solidity'\n+ WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity'\n \n WIKI_TITLE = 'Incorrect versions of Solidity'\n WIKI_DESCRIPTION = '''\n Solc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\n We recommend avoiding complex pragma statement.'''\n- WIKI_RECOMMENDATION = 'Use Solidity 0.4.25 or 0.5.2.'\n+ WIKI_RECOMMENDATION = '''\n+Use Solidity 0.4.25 or 0.5.3. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''\n \n- COMPLEX_PRAGMA = \"is too complex\"\n- OLD_VERSION = \"allows old versions\"\n- LESS_THAN = \"uses lesser than\"\n+ COMPLEX_PRAGMA_TXT = \"is too complex\"\n+ OLD_VERSION_TXT = \"allows old versions\"\n+ LESS_THAN_TXT = \"uses lesser than\"\n+\n+ TOO_RECENT_VERSION_TXT = \"necessitates versions too recent to be trusted. Consider deploying with 0.5.3\"\n+ BUGGY_VERSION_TXT = \"is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)\"\n \n # Indicates the allowed versions.\n- ALLOWED_VERSIONS = [\"0.4.24\", \"0.4.25\", \"0.5.2\", \"0.5.3\"]\n+ ALLOWED_VERSIONS = [\"0.4.25\", \"0.4.26\", \"0.5.3\"]\n+ # Indicates the versions too recent.\n+ TOO_RECENT_VERSIONS = [\"0.5.4\", \"0.5.7\", \"0.5.8\", \"0.5.9\", \"0.5.10\"]\n+ # Indicates the versions that should not be used.\n+ BUGGY_VERSIONS = [\"0.4.22\", \"0.5.5\", \"0.5.6\", \"^0.4.22\", \"^0.5.5\", \"^0.5.6\"]\n \n def _check_version(self, version):\n op = version[0]\n if op and not op in ['>', '>=', '^']:\n- return self.LESS_THAN\n+ return self.LESS_THAN_TXT\n version_number = '.'.join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n- return self.OLD_VERSION\n+ if version_number in self.TOO_RECENT_VERSIONS:\n+ return self.TOO_RECENT_VERSION_TXT\n+ return self.OLD_VERSION_TXT\n return None\n \n def _check_pragma(self, version):\n+ if version in self.BUGGY_VERSIONS:\n+ return self.BUGGY_VERSION_TXT\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n@@ -58,10 +70,10 @@\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:\n- return self.COMPLEX_PRAGMA\n+ return self.COMPLEX_PRAGMA_TXT\n return self._check_version(version_left)\n else:\n- return self.COMPLEX_PRAGMA\n+ return self.COMPLEX_PRAGMA_TXT\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n", "issue": "Improvements on solc-version detectors\nThe allowed version is out of date:\r\nhttps://github.com/crytic/slither/blob/0891f9a8a5e5e096084476e4b2bd292c3685f251/slither/detectors/attributes/incorrect_solc.py#L39\r\n\r\nDue to the frequent solc release, we might want to change the logic to allow future releases.\r\n\r\nAdditionally:\r\n- 0.5.5 should not be used: https://twitter.com/ethchris/status/1105903546602528768\r\n- the wiki link is incorrect\n", "before_files": [{"content": "\"\"\"\n Check if an incorrect version of solc is 
used\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nimport re\n\n# group:\n# 0: ^ > >= < <= (optional)\n# 1: ' ' (optional)\n# 2: version number\n# 3: version number\n# 4: version number\nPATTERN = re.compile('(\\^|>|>=|<|<=)?([ ]+)?(\\d+)\\.(\\d+)\\.(\\d+)')\n\nclass IncorrectSolc(AbstractDetector):\n \"\"\"\n Check if an old version of solc is used\n \"\"\"\n\n ARGUMENT = 'solc-version'\n HELP = 'Incorrect Solidity version (< 0.4.24 or complex pragma)'\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-version-of-solidity'\n\n WIKI_TITLE = 'Incorrect versions of Solidity'\n WIKI_DESCRIPTION = '''\nSolc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\nWe recommend avoiding complex pragma statement.'''\n WIKI_RECOMMENDATION = 'Use Solidity 0.4.25 or 0.5.2.'\n\n COMPLEX_PRAGMA = \"is too complex\"\n OLD_VERSION = \"allows old versions\"\n LESS_THAN = \"uses lesser than\"\n\n # Indicates the allowed versions.\n ALLOWED_VERSIONS = [\"0.4.24\", \"0.4.25\", \"0.5.2\", \"0.5.3\"]\n\n def _check_version(self, version):\n op = version[0]\n if op and not op in ['>', '>=', '^']:\n return self.LESS_THAN\n version_number = '.'.join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n return self.OLD_VERSION\n return None\n\n def _check_pragma(self, version):\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n return self._check_version(version)\n elif len(versions) == 2:\n version_left = versions[0]\n version_right = versions[1]\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:\n return self.COMPLEX_PRAGMA\n return self._check_version(version_left)\n else:\n return self.COMPLEX_PRAGMA\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n :return: Returns the relevant JSON data for the findings.\n \"\"\"\n # Detect all version related pragmas and check if they are disallowed.\n results = []\n pragma = self.slither.pragma_directives\n disallowed_pragmas = []\n detected_version = False\n for p in pragma:\n # Skip any pragma directives which do not refer to version\n if len(p.directive) < 1 or p.directive[0] != \"solidity\":\n continue\n\n # This is version, so we test if this is disallowed.\n detected_version = True\n reason = self._check_pragma(p.version)\n if reason:\n disallowed_pragmas.append((reason, p))\n\n # If we found any disallowed pragmas, we output our findings.\n if disallowed_pragmas:\n for (reason, p) in disallowed_pragmas:\n info = f\"Pragma version \\\"{p.version}\\\" {reason} ({p.source_mapping_str})\\n\"\n\n json = self.generate_json_result(info)\n self.add_pragma_to_json(p, json)\n results.append(json)\n\n return results\n", "path": "slither/detectors/attributes/incorrect_solc.py"}], "after_files": [{"content": "\"\"\"\n Check if an incorrect version of solc is used\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nimport re\n\n# group:\n# 0: ^ > >= < <= (optional)\n# 1: ' ' (optional)\n# 2: version number\n# 3: version number\n# 4: version number\nPATTERN = re.compile('(\\^|>|>=|<|<=)?([ ]+)?(\\d+)\\.(\\d+)\\.(\\d+)')\n\nclass IncorrectSolc(AbstractDetector):\n \"\"\"\n Check if an old version of solc is 
used\n \"\"\"\n\n ARGUMENT = 'solc-version'\n HELP = 'Incorrect Solidity version (< 0.4.24 or complex pragma)'\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity'\n\n WIKI_TITLE = 'Incorrect versions of Solidity'\n WIKI_DESCRIPTION = '''\nSolc frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\nWe recommend avoiding complex pragma statement.'''\n WIKI_RECOMMENDATION = '''\nUse Solidity 0.4.25 or 0.5.3. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''\n\n COMPLEX_PRAGMA_TXT = \"is too complex\"\n OLD_VERSION_TXT = \"allows old versions\"\n LESS_THAN_TXT = \"uses lesser than\"\n\n TOO_RECENT_VERSION_TXT = \"necessitates versions too recent to be trusted. Consider deploying with 0.5.3\"\n BUGGY_VERSION_TXT = \"is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)\"\n\n # Indicates the allowed versions.\n ALLOWED_VERSIONS = [\"0.4.25\", \"0.4.26\", \"0.5.3\"]\n # Indicates the versions too recent.\n TOO_RECENT_VERSIONS = [\"0.5.4\", \"0.5.7\", \"0.5.8\", \"0.5.9\", \"0.5.10\"]\n # Indicates the versions that should not be used.\n BUGGY_VERSIONS = [\"0.4.22\", \"0.5.5\", \"0.5.6\", \"^0.4.22\", \"^0.5.5\", \"^0.5.6\"]\n\n def _check_version(self, version):\n op = version[0]\n if op and not op in ['>', '>=', '^']:\n return self.LESS_THAN_TXT\n version_number = '.'.join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n if version_number in self.TOO_RECENT_VERSIONS:\n return self.TOO_RECENT_VERSION_TXT\n return self.OLD_VERSION_TXT\n return None\n\n def _check_pragma(self, version):\n if version in self.BUGGY_VERSIONS:\n return self.BUGGY_VERSION_TXT\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n return self._check_version(version)\n elif len(versions) == 2:\n version_left = versions[0]\n version_right = versions[1]\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0')]:\n return self.COMPLEX_PRAGMA_TXT\n return self._check_version(version_left)\n else:\n return self.COMPLEX_PRAGMA_TXT\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n :return: Returns the relevant JSON data for the findings.\n \"\"\"\n # Detect all version related pragmas and check if they are disallowed.\n results = []\n pragma = self.slither.pragma_directives\n disallowed_pragmas = []\n detected_version = False\n for p in pragma:\n # Skip any pragma directives which do not refer to version\n if len(p.directive) < 1 or p.directive[0] != \"solidity\":\n continue\n\n # This is version, so we test if this is disallowed.\n detected_version = True\n reason = self._check_pragma(p.version)\n if reason:\n disallowed_pragmas.append((reason, p))\n\n # If we found any disallowed pragmas, we output our findings.\n if disallowed_pragmas:\n for (reason, p) in disallowed_pragmas:\n info = f\"Pragma version \\\"{p.version}\\\" {reason} ({p.source_mapping_str})\\n\"\n\n json = self.generate_json_result(info)\n self.add_pragma_to_json(p, json)\n results.append(json)\n\n return results\n", "path": "slither/detectors/attributes/incorrect_solc.py"}]}
| 1,468 | 926 |
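For reference, a minimal sketch of how the version-matching regex from the detector above behaves on a two-sided pragma. The sample pragma string is illustrative; the pattern itself is the one shown in the entry.

```python
# Sketch: PATTERN splits a pragma expression into
# (operator, spacing, major, minor, patch) tuples via findall().
import re

PATTERN = re.compile(r'(\^|>|>=|<|<=)?([ ]+)?(\d+)\.(\d+)\.(\d+)')

print(PATTERN.findall(">=0.4.24 <0.6.0"))
# -> [('>=', '', '0', '4', '24'), ('<', '', '0', '6', '0')]
```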
gh_patches_debug_5974
|
rasdani/github-patches
|
git_diff
|
google__pytype-20
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
python -m pip install -U . doesn't work.
It ought to be possible to install pytype using pip by running
```
python -m pip install -U .
```
but doing so causes an error message.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # pylint: disable=bad-indentation
4
5 from distutils.core import setup
6
7 import glob
8 import os
9
10
11 def scan_package_data(path, pattern):
12 result = []
13 for subdir, _, _ in os.walk(path):
14 full_pattern = os.path.join(subdir, pattern)
15 if glob.glob(full_pattern):
16 # Once we know that it matches files, we store the pattern itself.
17 result.append(full_pattern)
18 return result
19
20
21 typeshed = scan_package_data('typeshed', '*.pyi')
22 assert 'typeshed/stdlib/2.7/*.pyi' in typeshed
23
24
25 setup(
26 name='pytype',
27 version='0.2',
28 description='Python type inferencer',
29 maintainer='Google',
30 maintainer_email='[email protected]',
31 url='http://github.com/google/pytype',
32 packages=['pytype',
33 'pytype/pyc',
34 'pytype/pytd',
35 'pytype/pytd/parse',
36 ],
37 scripts=['scripts/pytype', 'scripts/pytd'],
38 package_data={'pytype': ['pytd/builtins/*',
39 'pytd/stdlib/*',
40 ] + typeshed},
41 requires=['ply (>=3.4)'],
42 install_requires=['ply>=3.4'],
43 )
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -36,7 +36,8 @@
],
scripts=['scripts/pytype', 'scripts/pytd'],
package_data={'pytype': ['pytd/builtins/*',
- 'pytd/stdlib/*',
+ 'pytd/stdlib/os/*.pytd',
+ 'pytd/stdlib/*.pytd',
] + typeshed},
requires=['ply (>=3.4)'],
install_requires=['ply>=3.4'],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -36,7 +36,8 @@\n ],\n scripts=['scripts/pytype', 'scripts/pytd'],\n package_data={'pytype': ['pytd/builtins/*',\n- 'pytd/stdlib/*',\n+ 'pytd/stdlib/os/*.pytd',\n+ 'pytd/stdlib/*.pytd',\n ] + typeshed},\n requires=['ply (>=3.4)'],\n install_requires=['ply>=3.4'],\n", "issue": "python -m pip install -U . doesn't work.\nIt ought to be possible to install pytype using pip by running\n\n```\npython -m pip install -U .\n```\n\nbut doing so causes an error message.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# pylint: disable=bad-indentation\n\nfrom distutils.core import setup\n\nimport glob\nimport os\n\n\ndef scan_package_data(path, pattern):\n result = []\n for subdir, _, _ in os.walk(path):\n full_pattern = os.path.join(subdir, pattern)\n if glob.glob(full_pattern):\n # Once we know that it matches files, we store the pattern itself.\n result.append(full_pattern)\n return result\n\n\ntypeshed = scan_package_data('typeshed', '*.pyi')\nassert 'typeshed/stdlib/2.7/*.pyi' in typeshed\n\n\nsetup(\n name='pytype',\n version='0.2',\n description='Python type inferencer',\n maintainer='Google',\n maintainer_email='[email protected]',\n url='http://github.com/google/pytype',\n packages=['pytype',\n 'pytype/pyc',\n 'pytype/pytd',\n 'pytype/pytd/parse',\n ],\n scripts=['scripts/pytype', 'scripts/pytd'],\n package_data={'pytype': ['pytd/builtins/*',\n 'pytd/stdlib/*',\n ] + typeshed},\n requires=['ply (>=3.4)'],\n install_requires=['ply>=3.4'],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# pylint: disable=bad-indentation\n\nfrom distutils.core import setup\n\nimport glob\nimport os\n\n\ndef scan_package_data(path, pattern):\n result = []\n for subdir, _, _ in os.walk(path):\n full_pattern = os.path.join(subdir, pattern)\n if glob.glob(full_pattern):\n # Once we know that it matches files, we store the pattern itself.\n result.append(full_pattern)\n return result\n\n\ntypeshed = scan_package_data('typeshed', '*.pyi')\nassert 'typeshed/stdlib/2.7/*.pyi' in typeshed\n\n\nsetup(\n name='pytype',\n version='0.2',\n description='Python type inferencer',\n maintainer='Google',\n maintainer_email='[email protected]',\n url='http://github.com/google/pytype',\n packages=['pytype',\n 'pytype/pyc',\n 'pytype/pytd',\n 'pytype/pytd/parse',\n ],\n scripts=['scripts/pytype', 'scripts/pytd'],\n package_data={'pytype': ['pytd/builtins/*',\n 'pytd/stdlib/os/*.pytd',\n 'pytd/stdlib/*.pytd',\n ] + typeshed},\n requires=['ply (>=3.4)'],\n install_requires=['ply>=3.4'],\n)\n", "path": "setup.py"}]}
| 668 | 119 |
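A brief, hypothetical illustration of why the golden diff above narrows the `package_data` globs: a bare `pytd/stdlib/*` pattern can also match the `pytd/stdlib/os` subdirectory, and the install step then likely fails when it tries to copy that entry as a regular file. The paths below are placeholders for a local checkout.

```python
# Sketch (placeholder paths): the bare pattern may pick up a subdirectory,
# while the *.pytd patterns only match the data files themselves.
import glob
import os

bare = glob.glob("pytd/stdlib/*")
print("directories matched:", [m for m in bare if os.path.isdir(m)])

narrowed = glob.glob("pytd/stdlib/*.pytd") + glob.glob("pytd/stdlib/os/*.pytd")
print("data files matched:", narrowed)
```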
gh_patches_debug_35793
|
rasdani/github-patches
|
git_diff
|
CTPUG__wafer-221
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate page created
On https://wafertest.debconf.org, I created the following page: https://wafertest.debconf.org/debconf-16-bursaries-instructions, when my Wafer pages page loaded, I saw that there existed two new pages with that title.
When I visit https://wafertest.debconf.org/debconf-16-bursaries-instructions, wafer gives me a debug page that says "get() returned more than one Page -- it returned 2!"
Here is the traceback: http://paste.debian.net/415666/
For now I'll just delete the duplicate page, but @stefanor mentioned that a unique index for pages may be required.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wafer/pages/admin.py`
Content:
```
1 from django.contrib import admin
2
3 from wafer.pages.models import File, Page
4
5 from wafer.compare.admin import CompareVersionAdmin, DateModifiedFilter
6
7
8 class PageAdmin(CompareVersionAdmin, admin.ModelAdmin):
9 prepopulated_fields = {"slug": ("name",)}
10 list_display = ('name', 'slug', 'get_people_display_names', 'get_in_schedule')
11
12 list_filter = (DateModifiedFilter,)
13
14
15
16 admin.site.register(Page, PageAdmin)
17 admin.site.register(File)
18
```
Path: `wafer/pages/models.py`
Content:
```
1 import logging
2 logger = logging.getLogger(__name__)
3
4 from django.utils.translation import ugettext_lazy as _
5 from django.core.urlresolvers import reverse
6 from django.conf import settings
7 from django.db import models
8 from django.db.models.signals import post_save
9 from django.utils.encoding import python_2_unicode_compatible
10
11
12 from markitup.fields import MarkupField
13 from wafer.menu import MenuError, refresh_menu_cache
14
15
16 @python_2_unicode_compatible
17 class File(models.Model):
18 """A file for use in page markup."""
19 name = models.CharField(max_length=255)
20 description = models.TextField()
21 item = models.FileField(upload_to='pages_files')
22
23 def __str__(self):
24 return u'%s' % (self.name,)
25
26
27 @python_2_unicode_compatible
28 class Page(models.Model):
29 """An extra page for the site."""
30 name = models.CharField(max_length=255)
31 slug = models.SlugField(help_text=_("Last component of the page URL"))
32 parent = models.ForeignKey('self', null=True, blank=True)
33 content = MarkupField(
34 help_text=_("Markdown contents for the page."))
35 include_in_menu = models.BooleanField(
36 help_text=_("Whether to include in menus."),
37 default=False)
38 exclude_from_static = models.BooleanField(
39 help_text=_("Whether to exclude this page from the static version of"
40 " the site (Container pages, etc.)"),
41 default=False)
42 files = models.ManyToManyField(
43 File, related_name="pages", blank=True,
44 help_text=_("Images and other files for use in"
45 " the content markdown field."))
46
47 people = models.ManyToManyField(settings.AUTH_USER_MODEL,
48 related_name='pages', blank=True,
49 help_text=_("People associated with this page for display in the"
50 " schedule (Session chairs, panelists, etc.)"))
51
52 def __str__(self):
53 return u'%s' % (self.name,)
54
55 def get_path(self):
56 path, parent = [self.slug], self.parent
57 while parent is not None:
58 path.insert(0, parent.slug)
59 parent = parent.parent
60 return path
61
62 def get_absolute_url(self):
63 url = "/".join(self.get_path())
64 return reverse('wafer_page', args=(url,))
65
66 def get_in_schedule(self):
67 if self.scheduleitem_set.all():
68 return True
69 return False
70
71 def get_people_display_names(self):
72 names = [person.userprofile.display_name()
73 for person in self.people.all()]
74 if len(names) > 2:
75 comma_names = ', '.join(names[:-1])
76 return comma_names + ' and ' + names[-1]
77 else:
78 return ' and '.join(names)
79
80 get_in_schedule.short_description = 'Added to schedule'
81 get_in_schedule.boolean = True
82
83 get_people_display_names.short_description = 'People'
84
85 class Model:
86 unique_together = (('parent', 'slug'),)
87
88
89 def page_menus(root_menu):
90 """Add page menus."""
91 for page in Page.objects.filter(include_in_menu=True):
92 path = page.get_path()
93 menu = path[0] if len(path) > 1 else None
94 try:
95 root_menu.add_item(page.name, page.get_absolute_url(), menu=menu)
96 except MenuError as e:
97 logger.error("Bad menu item %r for page with slug %r."
98 % (e, page.slug))
99
100
101 post_save.connect(refresh_menu_cache, sender=Page)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wafer/pages/admin.py b/wafer/pages/admin.py
--- a/wafer/pages/admin.py
+++ b/wafer/pages/admin.py
@@ -7,7 +7,8 @@
class PageAdmin(CompareVersionAdmin, admin.ModelAdmin):
prepopulated_fields = {"slug": ("name",)}
- list_display = ('name', 'slug', 'get_people_display_names', 'get_in_schedule')
+ list_display = ('name', 'slug', 'get_absolute_url',
+ 'get_people_display_names', 'get_in_schedule')
list_filter = (DateModifiedFilter,)
diff --git a/wafer/pages/models.py b/wafer/pages/models.py
--- a/wafer/pages/models.py
+++ b/wafer/pages/models.py
@@ -3,6 +3,7 @@
from django.utils.translation import ugettext_lazy as _
from django.core.urlresolvers import reverse
+from django.core.exceptions import ValidationError, NON_FIELD_ERRORS
from django.conf import settings
from django.db import models
from django.db.models.signals import post_save
@@ -63,6 +64,8 @@
url = "/".join(self.get_path())
return reverse('wafer_page', args=(url,))
+ get_absolute_url.short_description = 'page url'
+
def get_in_schedule(self):
if self.scheduleitem_set.all():
return True
@@ -85,6 +88,35 @@
class Model:
unique_together = (('parent', 'slug'),)
+ def clean(self):
+ keys = [self.pk]
+ parent = self.parent
+ while parent is not None:
+ if parent.pk in keys:
+ raise ValidationError(
+ {
+ NON_FIELD_ERRORS: [
+ _("Circular reference in parent."),
+ ],
+ })
+ keys.append(parent.pk)
+ parent = parent.parent
+ return super(Page, self).clean()
+
+ def validate_unique(self, exclude=None):
+ existing = Page.objects.filter(slug=self.slug, parent=self.parent)
+ # We could be updating the page, so don't fail if the existing
+ # entry is this page.
+ if existing.count() > 1 or (existing.count() == 1 and
+ existing.first().pk != self.pk):
+ raise ValidationError(
+ {
+ NON_FIELD_ERRORS: [
+ _("Duplicate parent/slug combination."),
+ ],
+ })
+ return super(Page, self).validate_unique(exclude)
+
def page_menus(root_menu):
"""Add page menus."""
|
{"golden_diff": "diff --git a/wafer/pages/admin.py b/wafer/pages/admin.py\n--- a/wafer/pages/admin.py\n+++ b/wafer/pages/admin.py\n@@ -7,7 +7,8 @@\n \n class PageAdmin(CompareVersionAdmin, admin.ModelAdmin):\n prepopulated_fields = {\"slug\": (\"name\",)}\n- list_display = ('name', 'slug', 'get_people_display_names', 'get_in_schedule')\n+ list_display = ('name', 'slug', 'get_absolute_url',\n+ 'get_people_display_names', 'get_in_schedule')\n \n list_filter = (DateModifiedFilter,)\n \ndiff --git a/wafer/pages/models.py b/wafer/pages/models.py\n--- a/wafer/pages/models.py\n+++ b/wafer/pages/models.py\n@@ -3,6 +3,7 @@\n \n from django.utils.translation import ugettext_lazy as _\n from django.core.urlresolvers import reverse\n+from django.core.exceptions import ValidationError, NON_FIELD_ERRORS\n from django.conf import settings\n from django.db import models\n from django.db.models.signals import post_save\n@@ -63,6 +64,8 @@\n url = \"/\".join(self.get_path())\n return reverse('wafer_page', args=(url,))\n \n+ get_absolute_url.short_description = 'page url'\n+\n def get_in_schedule(self):\n if self.scheduleitem_set.all():\n return True\n@@ -85,6 +88,35 @@\n class Model:\n unique_together = (('parent', 'slug'),)\n \n+ def clean(self):\n+ keys = [self.pk]\n+ parent = self.parent\n+ while parent is not None:\n+ if parent.pk in keys:\n+ raise ValidationError(\n+ {\n+ NON_FIELD_ERRORS: [\n+ _(\"Circular reference in parent.\"),\n+ ],\n+ })\n+ keys.append(parent.pk)\n+ parent = parent.parent\n+ return super(Page, self).clean()\n+\n+ def validate_unique(self, exclude=None):\n+ existing = Page.objects.filter(slug=self.slug, parent=self.parent)\n+ # We could be updating the page, so don't fail if the existing\n+ # entry is this page.\n+ if existing.count() > 1 or (existing.count() == 1 and\n+ existing.first().pk != self.pk):\n+ raise ValidationError(\n+ {\n+ NON_FIELD_ERRORS: [\n+ _(\"Duplicate parent/slug combination.\"),\n+ ],\n+ })\n+ return super(Page, self).validate_unique(exclude)\n+\n \n def page_menus(root_menu):\n \"\"\"Add page menus.\"\"\"\n", "issue": "Duplicate page created\nOn https://wafertest.debconf.org, I created the following page: https://wafertest.debconf.org/debconf-16-bursaries-instructions, when my Wafer pages page loaded, I saw that there existed two new pages with that title.\n\nWhen I visit https://wafertest.debconf.org/debconf-16-bursaries-instructions, wafer gives me a debug page that says \"get() returned more than one Page -- it returned 2!\"\n\nHere is the traceback: http://paste.debian.net/415666/\n\nFor now I'll just delete the duplicate page, but @stefanor mentioned that a unique index for pages may be required.\n\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom wafer.pages.models import File, Page\n\nfrom wafer.compare.admin import CompareVersionAdmin, DateModifiedFilter\n\n\nclass PageAdmin(CompareVersionAdmin, admin.ModelAdmin):\n prepopulated_fields = {\"slug\": (\"name\",)}\n list_display = ('name', 'slug', 'get_people_display_names', 'get_in_schedule')\n\n list_filter = (DateModifiedFilter,)\n\n\n\nadmin.site.register(Page, PageAdmin)\nadmin.site.register(File)\n", "path": "wafer/pages/admin.py"}, {"content": "import logging\nlogger = logging.getLogger(__name__)\n\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.core.urlresolvers import reverse\nfrom django.conf import settings\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.utils.encoding import 
python_2_unicode_compatible\n\n\nfrom markitup.fields import MarkupField\nfrom wafer.menu import MenuError, refresh_menu_cache\n\n\n@python_2_unicode_compatible\nclass File(models.Model):\n \"\"\"A file for use in page markup.\"\"\"\n name = models.CharField(max_length=255)\n description = models.TextField()\n item = models.FileField(upload_to='pages_files')\n\n def __str__(self):\n return u'%s' % (self.name,)\n\n\n@python_2_unicode_compatible\nclass Page(models.Model):\n \"\"\"An extra page for the site.\"\"\"\n name = models.CharField(max_length=255)\n slug = models.SlugField(help_text=_(\"Last component of the page URL\"))\n parent = models.ForeignKey('self', null=True, blank=True)\n content = MarkupField(\n help_text=_(\"Markdown contents for the page.\"))\n include_in_menu = models.BooleanField(\n help_text=_(\"Whether to include in menus.\"),\n default=False)\n exclude_from_static = models.BooleanField(\n help_text=_(\"Whether to exclude this page from the static version of\"\n \" the site (Container pages, etc.)\"),\n default=False)\n files = models.ManyToManyField(\n File, related_name=\"pages\", blank=True,\n help_text=_(\"Images and other files for use in\"\n \" the content markdown field.\"))\n\n people = models.ManyToManyField(settings.AUTH_USER_MODEL,\n related_name='pages', blank=True,\n help_text=_(\"People associated with this page for display in the\"\n \" schedule (Session chairs, panelists, etc.)\"))\n\n def __str__(self):\n return u'%s' % (self.name,)\n\n def get_path(self):\n path, parent = [self.slug], self.parent\n while parent is not None:\n path.insert(0, parent.slug)\n parent = parent.parent\n return path\n\n def get_absolute_url(self):\n url = \"/\".join(self.get_path())\n return reverse('wafer_page', args=(url,))\n\n def get_in_schedule(self):\n if self.scheduleitem_set.all():\n return True\n return False\n\n def get_people_display_names(self):\n names = [person.userprofile.display_name()\n for person in self.people.all()]\n if len(names) > 2:\n comma_names = ', '.join(names[:-1])\n return comma_names + ' and ' + names[-1]\n else:\n return ' and '.join(names)\n\n get_in_schedule.short_description = 'Added to schedule'\n get_in_schedule.boolean = True\n\n get_people_display_names.short_description = 'People'\n\n class Model:\n unique_together = (('parent', 'slug'),)\n\n\ndef page_menus(root_menu):\n \"\"\"Add page menus.\"\"\"\n for page in Page.objects.filter(include_in_menu=True):\n path = page.get_path()\n menu = path[0] if len(path) > 1 else None\n try:\n root_menu.add_item(page.name, page.get_absolute_url(), menu=menu)\n except MenuError as e:\n logger.error(\"Bad menu item %r for page with slug %r.\"\n % (e, page.slug))\n\n\npost_save.connect(refresh_menu_cache, sender=Page)\n", "path": "wafer/pages/models.py"}], "after_files": [{"content": "from django.contrib import admin\n\nfrom wafer.pages.models import File, Page\n\nfrom wafer.compare.admin import CompareVersionAdmin, DateModifiedFilter\n\n\nclass PageAdmin(CompareVersionAdmin, admin.ModelAdmin):\n prepopulated_fields = {\"slug\": (\"name\",)}\n list_display = ('name', 'slug', 'get_absolute_url',\n 'get_people_display_names', 'get_in_schedule')\n\n list_filter = (DateModifiedFilter,)\n\n\n\nadmin.site.register(Page, PageAdmin)\nadmin.site.register(File)\n", "path": "wafer/pages/admin.py"}, {"content": "import logging\nlogger = logging.getLogger(__name__)\n\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.core.urlresolvers import reverse\nfrom django.core.exceptions import 
ValidationError, NON_FIELD_ERRORS\nfrom django.conf import settings\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.utils.encoding import python_2_unicode_compatible\n\n\nfrom markitup.fields import MarkupField\nfrom wafer.menu import MenuError, refresh_menu_cache\n\n\n@python_2_unicode_compatible\nclass File(models.Model):\n \"\"\"A file for use in page markup.\"\"\"\n name = models.CharField(max_length=255)\n description = models.TextField()\n item = models.FileField(upload_to='pages_files')\n\n def __str__(self):\n return u'%s' % (self.name,)\n\n\n@python_2_unicode_compatible\nclass Page(models.Model):\n \"\"\"An extra page for the site.\"\"\"\n name = models.CharField(max_length=255)\n slug = models.SlugField(help_text=_(\"Last component of the page URL\"))\n parent = models.ForeignKey('self', null=True, blank=True)\n content = MarkupField(\n help_text=_(\"Markdown contents for the page.\"))\n include_in_menu = models.BooleanField(\n help_text=_(\"Whether to include in menus.\"),\n default=False)\n exclude_from_static = models.BooleanField(\n help_text=_(\"Whether to exclude this page from the static version of\"\n \" the site (Container pages, etc.)\"),\n default=False)\n files = models.ManyToManyField(\n File, related_name=\"pages\", blank=True,\n help_text=_(\"Images and other files for use in\"\n \" the content markdown field.\"))\n\n people = models.ManyToManyField(settings.AUTH_USER_MODEL,\n related_name='pages', blank=True,\n help_text=_(\"People associated with this page for display in the\"\n \" schedule (Session chairs, panelists, etc.)\"))\n\n def __str__(self):\n return u'%s' % (self.name,)\n\n def get_path(self):\n path, parent = [self.slug], self.parent\n while parent is not None:\n path.insert(0, parent.slug)\n parent = parent.parent\n return path\n\n def get_absolute_url(self):\n url = \"/\".join(self.get_path())\n return reverse('wafer_page', args=(url,))\n\n get_absolute_url.short_description = 'page url'\n\n def get_in_schedule(self):\n if self.scheduleitem_set.all():\n return True\n return False\n\n def get_people_display_names(self):\n names = [person.userprofile.display_name()\n for person in self.people.all()]\n if len(names) > 2:\n comma_names = ', '.join(names[:-1])\n return comma_names + ' and ' + names[-1]\n else:\n return ' and '.join(names)\n\n get_in_schedule.short_description = 'Added to schedule'\n get_in_schedule.boolean = True\n\n get_people_display_names.short_description = 'People'\n\n class Model:\n unique_together = (('parent', 'slug'),)\n\n def clean(self):\n keys = [self.pk]\n parent = self.parent\n while parent is not None:\n if parent.pk in keys:\n raise ValidationError(\n {\n NON_FIELD_ERRORS: [\n _(\"Circular reference in parent.\"),\n ],\n })\n keys.append(parent.pk)\n parent = parent.parent\n return super(Page, self).clean()\n\n def validate_unique(self, exclude=None):\n existing = Page.objects.filter(slug=self.slug, parent=self.parent)\n # We could be updating the page, so don't fail if the existing\n # entry is this page.\n if existing.count() > 1 or (existing.count() == 1 and\n existing.first().pk != self.pk):\n raise ValidationError(\n {\n NON_FIELD_ERRORS: [\n _(\"Duplicate parent/slug combination.\"),\n ],\n })\n return super(Page, self).validate_unique(exclude)\n\n\ndef page_menus(root_menu):\n \"\"\"Add page menus.\"\"\"\n for page in Page.objects.filter(include_in_menu=True):\n path = page.get_path()\n menu = path[0] if len(path) > 1 else None\n try:\n root_menu.add_item(page.name, 
page.get_absolute_url(), menu=menu)\n except MenuError as e:\n logger.error(\"Bad menu item %r for page with slug %r.\"\n % (e, page.slug))\n\n\npost_save.connect(refresh_menu_cache, sender=Page)\n", "path": "wafer/pages/models.py"}]}
| 1,499 | 557 |
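One point worth keeping in mind about the fix above: Django runs `clean()` and `validate_unique()` only when `full_clean()` is invoked (which the admin's ModelForm does); a bare `Model.save()` skips them. A minimal sketch with made-up field values:

```python
# Sketch (hypothetical data): the duplicate parent/slug check fires on
# full_clean(), not on a plain save().
from django.core.exceptions import ValidationError
from wafer.pages.models import Page

page = Page(name="Bursaries", slug="debconf-16-bursaries-instructions",
            parent=None, content="Instructions for bursary applicants.")
try:
    page.full_clean()   # calls clean_fields(), clean() and validate_unique()
    page.save()
except ValidationError as e:
    print(e.message_dict)
```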
gh_patches_debug_31451
|
rasdani/github-patches
|
git_diff
|
getredash__redash-831
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Download DataSheets as Excel file.
Csv file with utf-8 is hard to use in excel. So I want to download data sheets as .xlsx file
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/handlers/query_results.py`
Content:
```
1 import csv
2 import json
3 import cStringIO
4 import time
5
6 from flask import make_response, request
7 from flask.ext.restful import abort
8
9 from redash import models, settings, utils
10 from redash.wsgi import api
11 from redash.tasks import QueryTask, record_event
12 from redash.permissions import require_permission, not_view_only, has_access
13 from redash.handlers.base import BaseResource, get_object_or_404
14
15
16 class QueryResultListAPI(BaseResource):
17 @require_permission('execute_query')
18 def post(self):
19 params = request.get_json(force=True)
20 data_source = models.DataSource.get_by_id_and_org(params.get('data_source_id'), self.current_org)
21
22 if not has_access(data_source.groups, self.current_user, not_view_only):
23 return {'job': {'status': 4, 'error': 'You do not have permission to run queries with this data source.'}}, 403
24
25 self.record_event({
26 'action': 'execute_query',
27 'timestamp': int(time.time()),
28 'object_id': data_source.id,
29 'object_type': 'data_source',
30 'query': params['query']
31 })
32
33 max_age = int(params.get('max_age', -1))
34
35 if max_age == 0:
36 query_result = None
37 else:
38 query_result = models.QueryResult.get_latest(data_source, params['query'], max_age)
39
40 if query_result:
41 return {'query_result': query_result.to_dict()}
42 else:
43 query_id = params.get('query_id', 'adhoc')
44 job = QueryTask.add_task(params['query'], data_source,
45 metadata={"Username": self.current_user.name, "Query ID": query_id})
46 return {'job': job.to_dict()}
47
48
49 ONE_YEAR = 60 * 60 * 24 * 365.25
50
51
52 class QueryResultAPI(BaseResource):
53 @staticmethod
54 def add_cors_headers(headers):
55 if 'Origin' in request.headers:
56 origin = request.headers['Origin']
57
58 if origin in settings.ACCESS_CONTROL_ALLOW_ORIGIN:
59 headers['Access-Control-Allow-Origin'] = origin
60 headers['Access-Control-Allow-Credentials'] = str(settings.ACCESS_CONTROL_ALLOW_CREDENTIALS).lower()
61
62 @require_permission('view_query')
63 def options(self, query_id=None, query_result_id=None, filetype='json'):
64 headers = {}
65 self.add_cors_headers(headers)
66
67 if settings.ACCESS_CONTROL_REQUEST_METHOD:
68 headers['Access-Control-Request-Method'] = settings.ACCESS_CONTROL_REQUEST_METHOD
69
70 if settings.ACCESS_CONTROL_ALLOW_HEADERS:
71 headers['Access-Control-Allow-Headers'] = settings.ACCESS_CONTROL_ALLOW_HEADERS
72
73 return make_response("", 200, headers)
74
75 @require_permission('view_query')
76 def get(self, query_id=None, query_result_id=None, filetype='json'):
77 should_cache = query_result_id is not None
78 if query_result_id is None and query_id is not None:
79 query = get_object_or_404(models.Query.get_by_id_and_org, query_id, self.current_org)
80 if query:
81 query_result_id = query._data['latest_query_data']
82
83 if query_result_id:
84 query_result = get_object_or_404(models.QueryResult.get_by_id_and_org, query_result_id, self.current_org)
85
86 if query_result:
87 if isinstance(self.current_user, models.ApiUser):
88 event = {
89 'user_id': None,
90 'org_id': self.current_org.id,
91 'action': 'api_get',
92 'timestamp': int(time.time()),
93 'api_key': self.current_user.id,
94 'file_type': filetype
95 }
96
97 if query_id:
98 event['object_type'] = 'query'
99 event['object_id'] = query_id
100 else:
101 event['object_type'] = 'query_result'
102 event['object_id'] = query_result_id
103
104 record_event.delay(event)
105
106 if filetype == 'json':
107 response = self.make_json_response(query_result)
108 else:
109 response = self.make_csv_response(query_result)
110
111 if len(settings.ACCESS_CONTROL_ALLOW_ORIGIN) > 0:
112 self.add_cors_headers(response.headers)
113
114 if should_cache:
115 response.headers.add_header('Cache-Control', 'max-age=%d' % ONE_YEAR)
116
117 return response
118
119 else:
120 abort(404)
121
122 def make_json_response(self, query_result):
123 data = json.dumps({'query_result': query_result.to_dict()}, cls=utils.JSONEncoder)
124 return make_response(data, 200, {})
125
126 @staticmethod
127 def make_csv_response(query_result):
128 s = cStringIO.StringIO()
129
130 query_data = json.loads(query_result.data)
131 writer = csv.DictWriter(s, fieldnames=[col['name'] for col in query_data['columns']])
132 writer.writer = utils.UnicodeWriter(s)
133 writer.writeheader()
134 for row in query_data['rows']:
135 writer.writerow(row)
136
137 headers = {'Content-Type': "text/csv; charset=UTF-8"}
138 return make_response(s.getvalue(), 200, headers)
139
140
141 api.add_org_resource(QueryResultListAPI, '/api/query_results', endpoint='query_results')
142 api.add_org_resource(QueryResultAPI,
143 '/api/query_results/<query_result_id>',
144 '/api/queries/<query_id>/results.<filetype>',
145 '/api/queries/<query_id>/results/<query_result_id>.<filetype>',
146 endpoint='query_result')
147
148
149 class JobAPI(BaseResource):
150 def get(self, job_id):
151 # TODO: if finished, include the query result
152 job = QueryTask(job_id=job_id)
153 return {'job': job.to_dict()}
154
155 def delete(self, job_id):
156 job = QueryTask(job_id=job_id)
157 job.cancel()
158
159 api.add_org_resource(JobAPI, '/api/jobs/<job_id>', endpoint='job')
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redash/handlers/query_results.py b/redash/handlers/query_results.py
--- a/redash/handlers/query_results.py
+++ b/redash/handlers/query_results.py
@@ -5,7 +5,7 @@
from flask import make_response, request
from flask.ext.restful import abort
-
+import xlsxwriter
from redash import models, settings, utils
from redash.wsgi import api
from redash.tasks import QueryTask, record_event
@@ -105,6 +105,8 @@
if filetype == 'json':
response = self.make_json_response(query_result)
+ elif filetype == 'xlsx':
+ response = self.make_excel_response(query_result)
else:
response = self.make_csv_response(query_result)
@@ -137,6 +139,28 @@
headers = {'Content-Type': "text/csv; charset=UTF-8"}
return make_response(s.getvalue(), 200, headers)
+ @staticmethod
+ def make_excel_response(query_result):
+ s = cStringIO.StringIO()
+
+ query_data = json.loads(query_result.data)
+ book = xlsxwriter.Workbook(s)
+ sheet = book.add_worksheet("result")
+
+ column_names = []
+ for (c, col) in enumerate(query_data['columns']):
+ sheet.write(0, c, col['name'])
+ column_names.append(col['name'])
+
+ for (r, row) in enumerate(query_data['rows']):
+ for (c, name) in enumerate(column_names):
+ sheet.write(r+1, c, row[name])
+
+ book.close()
+
+ headers = {'Content-Type': "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"}
+ return make_response(s.getvalue(), 200, headers)
+
api.add_org_resource(QueryResultListAPI, '/api/query_results', endpoint='query_results')
api.add_org_resource(QueryResultAPI,
|
{"golden_diff": "diff --git a/redash/handlers/query_results.py b/redash/handlers/query_results.py\n--- a/redash/handlers/query_results.py\n+++ b/redash/handlers/query_results.py\n@@ -5,7 +5,7 @@\n \n from flask import make_response, request\n from flask.ext.restful import abort\n-\n+import xlsxwriter\n from redash import models, settings, utils\n from redash.wsgi import api\n from redash.tasks import QueryTask, record_event\n@@ -105,6 +105,8 @@\n \n if filetype == 'json':\n response = self.make_json_response(query_result)\n+ elif filetype == 'xlsx':\n+ response = self.make_excel_response(query_result)\n else:\n response = self.make_csv_response(query_result)\n \n@@ -137,6 +139,28 @@\n headers = {'Content-Type': \"text/csv; charset=UTF-8\"}\n return make_response(s.getvalue(), 200, headers)\n \n+ @staticmethod\n+ def make_excel_response(query_result):\n+ s = cStringIO.StringIO()\n+\n+ query_data = json.loads(query_result.data)\n+ book = xlsxwriter.Workbook(s)\n+ sheet = book.add_worksheet(\"result\")\n+\n+ column_names = []\n+ for (c, col) in enumerate(query_data['columns']):\n+ sheet.write(0, c, col['name'])\n+ column_names.append(col['name'])\n+\n+ for (r, row) in enumerate(query_data['rows']):\n+ for (c, name) in enumerate(column_names):\n+ sheet.write(r+1, c, row[name])\n+\n+ book.close()\n+\n+ headers = {'Content-Type': \"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet\"}\n+ return make_response(s.getvalue(), 200, headers)\n+\n \n api.add_org_resource(QueryResultListAPI, '/api/query_results', endpoint='query_results')\n api.add_org_resource(QueryResultAPI,\n", "issue": "Download DataSheets as Excel file.\nCsv file with utf-8 is hard to use in excel. So I want to download data sheets as .xlsx file\n\n", "before_files": [{"content": "import csv\nimport json\nimport cStringIO\nimport time\n\nfrom flask import make_response, request\nfrom flask.ext.restful import abort\n\nfrom redash import models, settings, utils\nfrom redash.wsgi import api\nfrom redash.tasks import QueryTask, record_event\nfrom redash.permissions import require_permission, not_view_only, has_access\nfrom redash.handlers.base import BaseResource, get_object_or_404\n\n\nclass QueryResultListAPI(BaseResource):\n @require_permission('execute_query')\n def post(self):\n params = request.get_json(force=True)\n data_source = models.DataSource.get_by_id_and_org(params.get('data_source_id'), self.current_org)\n\n if not has_access(data_source.groups, self.current_user, not_view_only):\n return {'job': {'status': 4, 'error': 'You do not have permission to run queries with this data source.'}}, 403\n\n self.record_event({\n 'action': 'execute_query',\n 'timestamp': int(time.time()),\n 'object_id': data_source.id,\n 'object_type': 'data_source',\n 'query': params['query']\n })\n\n max_age = int(params.get('max_age', -1))\n\n if max_age == 0:\n query_result = None\n else:\n query_result = models.QueryResult.get_latest(data_source, params['query'], max_age)\n\n if query_result:\n return {'query_result': query_result.to_dict()}\n else:\n query_id = params.get('query_id', 'adhoc')\n job = QueryTask.add_task(params['query'], data_source,\n metadata={\"Username\": self.current_user.name, \"Query ID\": query_id})\n return {'job': job.to_dict()}\n\n\nONE_YEAR = 60 * 60 * 24 * 365.25\n\n\nclass QueryResultAPI(BaseResource):\n @staticmethod\n def add_cors_headers(headers):\n if 'Origin' in request.headers:\n origin = request.headers['Origin']\n\n if origin in settings.ACCESS_CONTROL_ALLOW_ORIGIN:\n 
headers['Access-Control-Allow-Origin'] = origin\n headers['Access-Control-Allow-Credentials'] = str(settings.ACCESS_CONTROL_ALLOW_CREDENTIALS).lower()\n\n @require_permission('view_query')\n def options(self, query_id=None, query_result_id=None, filetype='json'):\n headers = {}\n self.add_cors_headers(headers)\n\n if settings.ACCESS_CONTROL_REQUEST_METHOD:\n headers['Access-Control-Request-Method'] = settings.ACCESS_CONTROL_REQUEST_METHOD\n\n if settings.ACCESS_CONTROL_ALLOW_HEADERS:\n headers['Access-Control-Allow-Headers'] = settings.ACCESS_CONTROL_ALLOW_HEADERS\n\n return make_response(\"\", 200, headers)\n\n @require_permission('view_query')\n def get(self, query_id=None, query_result_id=None, filetype='json'):\n should_cache = query_result_id is not None\n if query_result_id is None and query_id is not None:\n query = get_object_or_404(models.Query.get_by_id_and_org, query_id, self.current_org)\n if query:\n query_result_id = query._data['latest_query_data']\n\n if query_result_id:\n query_result = get_object_or_404(models.QueryResult.get_by_id_and_org, query_result_id, self.current_org)\n\n if query_result:\n if isinstance(self.current_user, models.ApiUser):\n event = {\n 'user_id': None,\n 'org_id': self.current_org.id,\n 'action': 'api_get',\n 'timestamp': int(time.time()),\n 'api_key': self.current_user.id,\n 'file_type': filetype\n }\n\n if query_id:\n event['object_type'] = 'query'\n event['object_id'] = query_id\n else:\n event['object_type'] = 'query_result'\n event['object_id'] = query_result_id\n\n record_event.delay(event)\n\n if filetype == 'json':\n response = self.make_json_response(query_result)\n else:\n response = self.make_csv_response(query_result)\n\n if len(settings.ACCESS_CONTROL_ALLOW_ORIGIN) > 0:\n self.add_cors_headers(response.headers)\n\n if should_cache:\n response.headers.add_header('Cache-Control', 'max-age=%d' % ONE_YEAR)\n\n return response\n\n else:\n abort(404)\n\n def make_json_response(self, query_result):\n data = json.dumps({'query_result': query_result.to_dict()}, cls=utils.JSONEncoder)\n return make_response(data, 200, {})\n\n @staticmethod\n def make_csv_response(query_result):\n s = cStringIO.StringIO()\n\n query_data = json.loads(query_result.data)\n writer = csv.DictWriter(s, fieldnames=[col['name'] for col in query_data['columns']])\n writer.writer = utils.UnicodeWriter(s)\n writer.writeheader()\n for row in query_data['rows']:\n writer.writerow(row)\n\n headers = {'Content-Type': \"text/csv; charset=UTF-8\"}\n return make_response(s.getvalue(), 200, headers)\n\n\napi.add_org_resource(QueryResultListAPI, '/api/query_results', endpoint='query_results')\napi.add_org_resource(QueryResultAPI,\n '/api/query_results/<query_result_id>',\n '/api/queries/<query_id>/results.<filetype>',\n '/api/queries/<query_id>/results/<query_result_id>.<filetype>',\n endpoint='query_result')\n\n\nclass JobAPI(BaseResource):\n def get(self, job_id):\n # TODO: if finished, include the query result\n job = QueryTask(job_id=job_id)\n return {'job': job.to_dict()}\n\n def delete(self, job_id):\n job = QueryTask(job_id=job_id)\n job.cancel()\n\napi.add_org_resource(JobAPI, '/api/jobs/<job_id>', endpoint='job')\n", "path": "redash/handlers/query_results.py"}], "after_files": [{"content": "import csv\nimport json\nimport cStringIO\nimport time\n\nfrom flask import make_response, request\nfrom flask.ext.restful import abort\nimport xlsxwriter\nfrom redash import models, settings, utils\nfrom redash.wsgi import api\nfrom redash.tasks import QueryTask, record_event\nfrom 
redash.permissions import require_permission, not_view_only, has_access\nfrom redash.handlers.base import BaseResource, get_object_or_404\n\n\nclass QueryResultListAPI(BaseResource):\n @require_permission('execute_query')\n def post(self):\n params = request.get_json(force=True)\n data_source = models.DataSource.get_by_id_and_org(params.get('data_source_id'), self.current_org)\n\n if not has_access(data_source.groups, self.current_user, not_view_only):\n return {'job': {'status': 4, 'error': 'You do not have permission to run queries with this data source.'}}, 403\n\n self.record_event({\n 'action': 'execute_query',\n 'timestamp': int(time.time()),\n 'object_id': data_source.id,\n 'object_type': 'data_source',\n 'query': params['query']\n })\n\n max_age = int(params.get('max_age', -1))\n\n if max_age == 0:\n query_result = None\n else:\n query_result = models.QueryResult.get_latest(data_source, params['query'], max_age)\n\n if query_result:\n return {'query_result': query_result.to_dict()}\n else:\n query_id = params.get('query_id', 'adhoc')\n job = QueryTask.add_task(params['query'], data_source,\n metadata={\"Username\": self.current_user.name, \"Query ID\": query_id})\n return {'job': job.to_dict()}\n\n\nONE_YEAR = 60 * 60 * 24 * 365.25\n\n\nclass QueryResultAPI(BaseResource):\n @staticmethod\n def add_cors_headers(headers):\n if 'Origin' in request.headers:\n origin = request.headers['Origin']\n\n if origin in settings.ACCESS_CONTROL_ALLOW_ORIGIN:\n headers['Access-Control-Allow-Origin'] = origin\n headers['Access-Control-Allow-Credentials'] = str(settings.ACCESS_CONTROL_ALLOW_CREDENTIALS).lower()\n\n @require_permission('view_query')\n def options(self, query_id=None, query_result_id=None, filetype='json'):\n headers = {}\n self.add_cors_headers(headers)\n\n if settings.ACCESS_CONTROL_REQUEST_METHOD:\n headers['Access-Control-Request-Method'] = settings.ACCESS_CONTROL_REQUEST_METHOD\n\n if settings.ACCESS_CONTROL_ALLOW_HEADERS:\n headers['Access-Control-Allow-Headers'] = settings.ACCESS_CONTROL_ALLOW_HEADERS\n\n return make_response(\"\", 200, headers)\n\n @require_permission('view_query')\n def get(self, query_id=None, query_result_id=None, filetype='json'):\n should_cache = query_result_id is not None\n if query_result_id is None and query_id is not None:\n query = get_object_or_404(models.Query.get_by_id_and_org, query_id, self.current_org)\n if query:\n query_result_id = query._data['latest_query_data']\n\n if query_result_id:\n query_result = get_object_or_404(models.QueryResult.get_by_id_and_org, query_result_id, self.current_org)\n\n if query_result:\n if isinstance(self.current_user, models.ApiUser):\n event = {\n 'user_id': None,\n 'org_id': self.current_org.id,\n 'action': 'api_get',\n 'timestamp': int(time.time()),\n 'api_key': self.current_user.id,\n 'file_type': filetype\n }\n\n if query_id:\n event['object_type'] = 'query'\n event['object_id'] = query_id\n else:\n event['object_type'] = 'query_result'\n event['object_id'] = query_result_id\n\n record_event.delay(event)\n\n if filetype == 'json':\n response = self.make_json_response(query_result)\n elif filetype == 'xlsx':\n response = self.make_excel_response(query_result)\n else:\n response = self.make_csv_response(query_result)\n\n if len(settings.ACCESS_CONTROL_ALLOW_ORIGIN) > 0:\n self.add_cors_headers(response.headers)\n\n if should_cache:\n response.headers.add_header('Cache-Control', 'max-age=%d' % ONE_YEAR)\n\n return response\n\n else:\n abort(404)\n\n def make_json_response(self, query_result):\n data = 
json.dumps({'query_result': query_result.to_dict()}, cls=utils.JSONEncoder)\n return make_response(data, 200, {})\n\n @staticmethod\n def make_csv_response(query_result):\n s = cStringIO.StringIO()\n\n query_data = json.loads(query_result.data)\n writer = csv.DictWriter(s, fieldnames=[col['name'] for col in query_data['columns']])\n writer.writer = utils.UnicodeWriter(s)\n writer.writeheader()\n for row in query_data['rows']:\n writer.writerow(row)\n\n headers = {'Content-Type': \"text/csv; charset=UTF-8\"}\n return make_response(s.getvalue(), 200, headers)\n\n @staticmethod\n def make_excel_response(query_result):\n s = cStringIO.StringIO()\n\n query_data = json.loads(query_result.data)\n book = xlsxwriter.Workbook(s)\n sheet = book.add_worksheet(\"result\")\n\n column_names = []\n for (c, col) in enumerate(query_data['columns']):\n sheet.write(0, c, col['name'])\n column_names.append(col['name'])\n\n for (r, row) in enumerate(query_data['rows']):\n for (c, name) in enumerate(column_names):\n sheet.write(r+1, c, row[name])\n\n book.close()\n\n headers = {'Content-Type': \"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet\"}\n return make_response(s.getvalue(), 200, headers)\n\n\napi.add_org_resource(QueryResultListAPI, '/api/query_results', endpoint='query_results')\napi.add_org_resource(QueryResultAPI,\n '/api/query_results/<query_result_id>',\n '/api/queries/<query_id>/results.<filetype>',\n '/api/queries/<query_id>/results/<query_result_id>.<filetype>',\n endpoint='query_result')\n\n\nclass JobAPI(BaseResource):\n def get(self, job_id):\n # TODO: if finished, include the query result\n job = QueryTask(job_id=job_id)\n return {'job': job.to_dict()}\n\n def delete(self, job_id):\n job = QueryTask(job_id=job_id)\n job.cancel()\n\napi.add_org_resource(JobAPI, '/api/jobs/<job_id>', endpoint='job')\n", "path": "redash/handlers/query_results.py"}]}
| 1,933 | 436 |
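With the patch above applied, the existing results route (`/api/queries/<query_id>/results.<filetype>`) also accepts `xlsx`. A hedged usage sketch: the host, query id and API key are placeholders, and passing `api_key` as a query parameter is an assumption about how the deployment authenticates.

```python
# Sketch (placeholder host/key): download a query result as an Excel workbook.
import requests

resp = requests.get(
    "https://redash.example.com/api/queries/42/results.xlsx",
    params={"api_key": "YOUR_API_KEY"},
)
resp.raise_for_status()
with open("result.xlsx", "wb") as f:
    f.write(resp.content)
```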
gh_patches_debug_7804
|
rasdani/github-patches
|
git_diff
|
apache__airflow-15207
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Specify that exit code -9 is due to RAM
Related to https://github.com/apache/airflow/issues/9655
It would be nice to add a message when you get this error with some info, like 'This probably is because a lack of RAM' or something like that.
I have found the code where the -9 is assigned but have no idea how to add a logging message.
self.process = None
if self._rc is None:
# Something else reaped it before we had a chance, so let's just "guess" at an error code.
self._rc = -9
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `airflow/task/task_runner/standard_task_runner.py`
Content:
```
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 """Standard task runner"""
19 import logging
20 import os
21 from typing import Optional
22
23 import psutil
24 from setproctitle import setproctitle # pylint: disable=no-name-in-module
25
26 from airflow.settings import CAN_FORK
27 from airflow.task.task_runner.base_task_runner import BaseTaskRunner
28 from airflow.utils.process_utils import reap_process_group
29
30
31 class StandardTaskRunner(BaseTaskRunner):
32 """Standard runner for all tasks."""
33
34 def __init__(self, local_task_job):
35 super().__init__(local_task_job)
36 self._rc = None
37 self.dag = local_task_job.task_instance.task.dag
38
39 def start(self):
40 if CAN_FORK and not self.run_as_user:
41 self.process = self._start_by_fork()
42 else:
43 self.process = self._start_by_exec()
44
45 def _start_by_exec(self):
46 subprocess = self.run_command()
47 return psutil.Process(subprocess.pid)
48
49 def _start_by_fork(self): # pylint: disable=inconsistent-return-statements
50 pid = os.fork()
51 if pid:
52 self.log.info("Started process %d to run task", pid)
53 return psutil.Process(pid)
54 else:
55 import signal
56
57 from airflow import settings
58 from airflow.cli.cli_parser import get_parser
59 from airflow.sentry import Sentry
60
61 signal.signal(signal.SIGINT, signal.SIG_DFL)
62 signal.signal(signal.SIGTERM, signal.SIG_DFL)
63 # Start a new process group
64 os.setpgid(0, 0)
65
66 # Force a new SQLAlchemy session. We can't share open DB handles
67 # between process. The cli code will re-create this as part of its
68 # normal startup
69 settings.engine.pool.dispose()
70 settings.engine.dispose()
71
72 parser = get_parser()
73 # [1:] - remove "airflow" from the start of the command
74 args = parser.parse_args(self._command[1:])
75
76 self.log.info('Running: %s', self._command)
77 self.log.info('Job %s: Subtask %s', self._task_instance.job_id, self._task_instance.task_id)
78
79 proc_title = "airflow task runner: {0.dag_id} {0.task_id} {0.execution_date}"
80 if hasattr(args, "job_id"):
81 proc_title += " {0.job_id}"
82 setproctitle(proc_title.format(args))
83
84 try:
85 args.func(args, dag=self.dag)
86 return_code = 0
87 except Exception: # pylint: disable=broad-except
88 return_code = 1
89 finally:
90 # Explicitly flush any pending exception to Sentry if enabled
91 Sentry.flush()
92 logging.shutdown()
93 os._exit(return_code) # pylint: disable=protected-access
94
95 def return_code(self, timeout: int = 0) -> Optional[int]:
96 # We call this multiple times, but we can only wait on the process once
97 if self._rc is not None or not self.process:
98 return self._rc
99
100 try:
101 self._rc = self.process.wait(timeout=timeout)
102 self.process = None
103 except psutil.TimeoutExpired:
104 pass
105
106 return self._rc
107
108 def terminate(self):
109 if self.process is None:
110 return
111
112 # Reap the child process - it may already be finished
113 _ = self.return_code(timeout=0)
114
115 if self.process and self.process.is_running():
116 rcs = reap_process_group(self.process.pid, self.log)
117 self._rc = rcs.get(self.process.pid)
118
119 self.process = None
120
121 if self._rc is None:
122 # Something else reaped it before we had a chance, so let's just "guess" at an error code.
123 self._rc = -9
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/airflow/task/task_runner/standard_task_runner.py b/airflow/task/task_runner/standard_task_runner.py
--- a/airflow/task/task_runner/standard_task_runner.py
+++ b/airflow/task/task_runner/standard_task_runner.py
@@ -121,3 +121,11 @@
if self._rc is None:
# Something else reaped it before we had a chance, so let's just "guess" at an error code.
self._rc = -9
+
+ if self._rc == -9:
+ # If either we or psutil gives out a -9 return code, it likely means
+ # an OOM happened
+ self.log.error(
+ 'Job %s was killed before it finished (likely due to running out of memory)',
+ self._task_instance.job_id,
+ )
|
{"golden_diff": "diff --git a/airflow/task/task_runner/standard_task_runner.py b/airflow/task/task_runner/standard_task_runner.py\n--- a/airflow/task/task_runner/standard_task_runner.py\n+++ b/airflow/task/task_runner/standard_task_runner.py\n@@ -121,3 +121,11 @@\n if self._rc is None:\n # Something else reaped it before we had a chance, so let's just \"guess\" at an error code.\n self._rc = -9\n+\n+ if self._rc == -9:\n+ # If either we or psutil gives out a -9 return code, it likely means\n+ # an OOM happened\n+ self.log.error(\n+ 'Job %s was killed before it finished (likely due to running out of memory)',\n+ self._task_instance.job_id,\n+ )\n", "issue": "Specify that exit code -9 is due to RAM\nRelated to https://github.com/apache/airflow/issues/9655\r\n\r\nIt would be nice to add a message when you get this error with some info, like 'This probably is because a lack of RAM' or something like that. \r\n\r\nI have found the code where the -9 is assigned but have no idea how to add a logging message. \r\n\r\n self.process = None\r\n\r\n if self._rc is None:\r\n # Something else reaped it before we had a chance, so let's just \"guess\" at an error code.\r\n self._rc = -9\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Standard task runner\"\"\"\nimport logging\nimport os\nfrom typing import Optional\n\nimport psutil\nfrom setproctitle import setproctitle # pylint: disable=no-name-in-module\n\nfrom airflow.settings import CAN_FORK\nfrom airflow.task.task_runner.base_task_runner import BaseTaskRunner\nfrom airflow.utils.process_utils import reap_process_group\n\n\nclass StandardTaskRunner(BaseTaskRunner):\n \"\"\"Standard runner for all tasks.\"\"\"\n\n def __init__(self, local_task_job):\n super().__init__(local_task_job)\n self._rc = None\n self.dag = local_task_job.task_instance.task.dag\n\n def start(self):\n if CAN_FORK and not self.run_as_user:\n self.process = self._start_by_fork()\n else:\n self.process = self._start_by_exec()\n\n def _start_by_exec(self):\n subprocess = self.run_command()\n return psutil.Process(subprocess.pid)\n\n def _start_by_fork(self): # pylint: disable=inconsistent-return-statements\n pid = os.fork()\n if pid:\n self.log.info(\"Started process %d to run task\", pid)\n return psutil.Process(pid)\n else:\n import signal\n\n from airflow import settings\n from airflow.cli.cli_parser import get_parser\n from airflow.sentry import Sentry\n\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n signal.signal(signal.SIGTERM, signal.SIG_DFL)\n # Start a new process group\n os.setpgid(0, 0)\n\n # Force a new SQLAlchemy session. We can't share open DB handles\n # between process. 
The cli code will re-create this as part of its\n # normal startup\n settings.engine.pool.dispose()\n settings.engine.dispose()\n\n parser = get_parser()\n # [1:] - remove \"airflow\" from the start of the command\n args = parser.parse_args(self._command[1:])\n\n self.log.info('Running: %s', self._command)\n self.log.info('Job %s: Subtask %s', self._task_instance.job_id, self._task_instance.task_id)\n\n proc_title = \"airflow task runner: {0.dag_id} {0.task_id} {0.execution_date}\"\n if hasattr(args, \"job_id\"):\n proc_title += \" {0.job_id}\"\n setproctitle(proc_title.format(args))\n\n try:\n args.func(args, dag=self.dag)\n return_code = 0\n except Exception: # pylint: disable=broad-except\n return_code = 1\n finally:\n # Explicitly flush any pending exception to Sentry if enabled\n Sentry.flush()\n logging.shutdown()\n os._exit(return_code) # pylint: disable=protected-access\n\n def return_code(self, timeout: int = 0) -> Optional[int]:\n # We call this multiple times, but we can only wait on the process once\n if self._rc is not None or not self.process:\n return self._rc\n\n try:\n self._rc = self.process.wait(timeout=timeout)\n self.process = None\n except psutil.TimeoutExpired:\n pass\n\n return self._rc\n\n def terminate(self):\n if self.process is None:\n return\n\n # Reap the child process - it may already be finished\n _ = self.return_code(timeout=0)\n\n if self.process and self.process.is_running():\n rcs = reap_process_group(self.process.pid, self.log)\n self._rc = rcs.get(self.process.pid)\n\n self.process = None\n\n if self._rc is None:\n # Something else reaped it before we had a chance, so let's just \"guess\" at an error code.\n self._rc = -9\n", "path": "airflow/task/task_runner/standard_task_runner.py"}], "after_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Standard task runner\"\"\"\nimport logging\nimport os\nfrom typing import Optional\n\nimport psutil\nfrom setproctitle import setproctitle # pylint: disable=no-name-in-module\n\nfrom airflow.settings import CAN_FORK\nfrom airflow.task.task_runner.base_task_runner import BaseTaskRunner\nfrom airflow.utils.process_utils import reap_process_group\n\n\nclass StandardTaskRunner(BaseTaskRunner):\n \"\"\"Standard runner for all tasks.\"\"\"\n\n def __init__(self, local_task_job):\n super().__init__(local_task_job)\n self._rc = None\n self.dag = local_task_job.task_instance.task.dag\n\n def start(self):\n if CAN_FORK and not self.run_as_user:\n self.process = self._start_by_fork()\n else:\n self.process = self._start_by_exec()\n\n def _start_by_exec(self):\n subprocess = self.run_command()\n return psutil.Process(subprocess.pid)\n\n def _start_by_fork(self): # pylint: disable=inconsistent-return-statements\n pid = os.fork()\n if pid:\n self.log.info(\"Started process %d to run task\", pid)\n return psutil.Process(pid)\n else:\n import signal\n\n from airflow import settings\n from airflow.cli.cli_parser import get_parser\n from airflow.sentry import Sentry\n\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n signal.signal(signal.SIGTERM, signal.SIG_DFL)\n # Start a new process group\n os.setpgid(0, 0)\n\n # Force a new SQLAlchemy session. We can't share open DB handles\n # between process. The cli code will re-create this as part of its\n # normal startup\n settings.engine.pool.dispose()\n settings.engine.dispose()\n\n parser = get_parser()\n # [1:] - remove \"airflow\" from the start of the command\n args = parser.parse_args(self._command[1:])\n\n self.log.info('Running: %s', self._command)\n self.log.info('Job %s: Subtask %s', self._task_instance.job_id, self._task_instance.task_id)\n\n proc_title = \"airflow task runner: {0.dag_id} {0.task_id} {0.execution_date}\"\n if hasattr(args, \"job_id\"):\n proc_title += \" {0.job_id}\"\n setproctitle(proc_title.format(args))\n\n try:\n args.func(args, dag=self.dag)\n return_code = 0\n except Exception: # pylint: disable=broad-except\n return_code = 1\n finally:\n # Explicitly flush any pending exception to Sentry if enabled\n Sentry.flush()\n logging.shutdown()\n os._exit(return_code) # pylint: disable=protected-access\n\n def return_code(self, timeout: int = 0) -> Optional[int]:\n # We call this multiple times, but we can only wait on the process once\n if self._rc is not None or not self.process:\n return self._rc\n\n try:\n self._rc = self.process.wait(timeout=timeout)\n self.process = None\n except psutil.TimeoutExpired:\n pass\n\n return self._rc\n\n def terminate(self):\n if self.process is None:\n return\n\n # Reap the child process - it may already be finished\n _ = self.return_code(timeout=0)\n\n if self.process and self.process.is_running():\n rcs = reap_process_group(self.process.pid, self.log)\n self._rc = rcs.get(self.process.pid)\n\n self.process = None\n\n if self._rc is None:\n # Something else reaped it before we had a chance, so let's just \"guess\" at an error code.\n self._rc = -9\n\n if self._rc == -9:\n # If either we or psutil gives out a -9 return code, it likely means\n # an OOM happened\n self.log.error(\n 'Job %s was killed before it finished (likely due to running out of memory)',\n self._task_instance.job_id,\n )\n", "path": "airflow/task/task_runner/standard_task_runner.py"}]}
| 1,654 | 190 |
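As a side illustration of the Airflow record above, here is a POSIX-only sketch of the -9 (SIGKILL) detection its patch adds, written with plain `subprocess` and `logging` instead of Airflow's task runner classes; the function name and command are made-up examples:

```python
import logging
import signal
import subprocess
import sys

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("task_runner_sketch")


def wait_and_report(cmd):
    """Run a command and flag a -9 exit the way the patched terminate() path does."""
    proc = subprocess.Popen(cmd)
    rc = proc.wait()
    # On POSIX, Popen reports "killed by signal N" as return code -N, so -9 is SIGKILL,
    # which the patch treats as a likely out-of-memory kill.
    if rc == -signal.SIGKILL:
        log.error("Job was killed before it finished (likely due to running out of memory)")
    return rc


if __name__ == "__main__":
    print(wait_and_report([sys.executable, "-c", "pass"]))
```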
gh_patches_debug_14224
|
rasdani/github-patches
|
git_diff
|
privacyidea__privacyidea-3176
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix recovery link in email
The email that is sent when resetting a password contains a faulty link.
Currently `https://piserver/#/reset/...` is sent. It has to be `https://piserver/#!/reset/..`.
This happens here:
https://github.com/privacyidea/privacyidea/blob/35f0963b1bbacfbbad128c12ba0a0953fc17a339/privacyidea/lib/passwordreset.py#L67
Also see https://community.privacyidea.org/t/q-password-reset-what-policies-needed/2493/4
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `privacyidea/lib/passwordreset.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # 2015-01-06 Cornelius Kölbel <[email protected]>
4 # The Password reset functions
5 #
6 # This code is free software; you can redistribute it and/or
7 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
8 # License as published by the Free Software Foundation; either
9 # version 3 of the License, or any later version.
10 #
11 # This code is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public
17 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 #
20 from privacyidea.models import PasswordReset
21 from privacyidea.lib.crypto import (hash_with_pepper, verify_with_pepper,
22 generate_password)
23 import logging
24 from privacyidea.lib.log import log_with
25 from privacyidea.lib.error import UserError, privacyIDEAError, ConfigAdminError
26 from privacyidea.lib.smtpserver import send_email_identifier
27 from privacyidea.lib.config import get_from_config
28 from privacyidea.lib.resolver import get_resolver_list
29 from privacyidea.lib.policy import ACTION, SCOPE, Match
30 from sqlalchemy import and_
31 from datetime import datetime
32
33
34 __doc__ = """
35 This is the library for creating a recovery code for password reset.
36 The recovery code is sent to the user.
37
38 The salted/peppered hash of the recovery code is stored in the passwordreset
39 database table.
40
41 This module is tested in tests/test_lib_passwordreset.py
42 """
43
44 log = logging.getLogger(__name__)
45
46 BODY = u"""Someone requested to reset the password within privacyIDEA.
47
48 To reset your user password please visit the link
49
50 {0!s}/reset/{1!s}@{2!s}/{3!s}
51 """
52
53
54 @log_with(log)
55 def create_recoverycode(user, email=None, expiration_seconds=3600,
56 recoverycode=None, base_url=""):
57 """
58 Create and send a password recovery code
59
60 :param user: User for whom the password reset code should be sent
61 :type user: User Object
62 :param email: The optional email of the user
63 :param recoverycode: Only used for testing purpose
64 :return: bool
65 """
66 base_url = base_url.strip("recover")
67 base_url += "#"
68 recoverycode = recoverycode or generate_password(size=24)
69 hash_code = hash_with_pepper(recoverycode)
70 # send this recoverycode
71 #
72 pwreset = PasswordReset(hash_code, username=user.login,
73 realm=user.realm,
74 expiration_seconds=expiration_seconds)
75 pwreset.save()
76
77 res = False
78 if not user:
79 raise UserError("User required for recovery token.")
80 user_email = user.info.get("email")
81 if email and email.lower() != user_email.lower():
82 raise UserError("The email does not match the users email.")
83
84 identifier = get_from_config("recovery.identifier")
85 if identifier:
86 # send email
87 r = send_email_identifier(identifier, user_email,
88 "Your password reset",
89 BODY.format(base_url,
90 user.login, user.realm,
91 recoverycode))
92 if not r:
93 raise privacyIDEAError("Failed to send email. {0!s}".format(r))
94 else:
95 raise ConfigAdminError("Missing configuration "
96 "recovery.identifier.")
97 res = True
98 return res
99
100
101 @log_with(log)
102 def check_recoverycode(user, recoverycode):
103 """
104 Check if the given recovery code is a valid recovery code for this user
105
106 :param user: User, who wants to reset his password.
107 :type user: User object
108 :param recoverycode: The recovery code
109 :type recoverycode: str
110 :return: True is code was correct
111 """
112 recoverycode_valid = False
113 # delete old entries
114 r = PasswordReset.query.filter(and_(PasswordReset.expiration <
115 datetime.now())).delete()
116 log.debug("{0!s} old password recoverycodes deleted.".format(r))
117 sql_query = PasswordReset.query.filter(and_(PasswordReset.username ==
118 user.login,
119 PasswordReset.realm
120 == user.realm))
121 for pwr in sql_query:
122 if verify_with_pepper(pwr.recoverycode, recoverycode):
123 recoverycode_valid = True
124 log.debug(u"Found valid recoverycode for user {0!r}".format(user))
125 # Delete the recovery code, so that it can only be used once!
126 r = pwr.delete()
127 log.debug("{0!s} used password recoverycode deleted.".format(r))
128
129 return recoverycode_valid
130
131
132 @log_with(log)
133 def is_password_reset(g):
134 """
135 Check if password reset is allowed.
136
137 We need to check, if a user policy with password_reset exists AND if an
138 editable resolver exists. Otherwise password_reset does not make any sense.
139
140 :return: True or False
141 """
142 rlist = get_resolver_list(editable=True)
143 log.debug("Number of editable resolvers: {0!s}".format(len(rlist)))
144 pwreset = Match.generic(g, scope=SCOPE.USER,
145 action=ACTION.PASSWORDRESET).allowed(write_to_audit_log=False)
146 log.debug("Password reset allowed via policies: {0!s}".format(pwreset))
147 return bool(rlist and pwreset)
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/privacyidea/lib/passwordreset.py b/privacyidea/lib/passwordreset.py
--- a/privacyidea/lib/passwordreset.py
+++ b/privacyidea/lib/passwordreset.py
@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
#
+# 2022-06-06 Cornelius Kölbel <[email protected]>
+# Fix recovery link
# 2015-01-06 Cornelius Kölbel <[email protected]>
# The Password reset functions
#
@@ -64,7 +66,7 @@
:return: bool
"""
base_url = base_url.strip("recover")
- base_url += "#"
+ base_url += "#!"
recoverycode = recoverycode or generate_password(size=24)
hash_code = hash_with_pepper(recoverycode)
# send this recoverycode
|
{"golden_diff": "diff --git a/privacyidea/lib/passwordreset.py b/privacyidea/lib/passwordreset.py\n--- a/privacyidea/lib/passwordreset.py\n+++ b/privacyidea/lib/passwordreset.py\n@@ -1,5 +1,7 @@\n # -*- coding: utf-8 -*-\n #\n+# 2022-06-06 Cornelius K\u00f6lbel <[email protected]>\n+# Fix recovery link\n # 2015-01-06 Cornelius K\u00f6lbel <[email protected]>\n # The Password reset functions\n #\n@@ -64,7 +66,7 @@\n :return: bool\n \"\"\"\n base_url = base_url.strip(\"recover\")\n- base_url += \"#\"\n+ base_url += \"#!\"\n recoverycode = recoverycode or generate_password(size=24)\n hash_code = hash_with_pepper(recoverycode)\n # send this recoverycode\n", "issue": "Fix recovery link in email\nThe email that is sent when resetting a password, contains a faulty link.\r\n\r\nCurrentlly `https://piserver/#/reset/...` is sent. It has to be `https://piserver/#!/reset/..`.\r\n\r\nThis happens here:\r\n\r\nhttps://github.com/privacyidea/privacyidea/blob/35f0963b1bbacfbbad128c12ba0a0953fc17a339/privacyidea/lib/passwordreset.py#L67\r\n\r\nAlso see https://community.privacyidea.org/t/q-password-reset-what-policies-needed/2493/4\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2015-01-06 Cornelius K\u00f6lbel <[email protected]>\n# The Password reset functions\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n#\nfrom privacyidea.models import PasswordReset\nfrom privacyidea.lib.crypto import (hash_with_pepper, verify_with_pepper,\n generate_password)\nimport logging\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.error import UserError, privacyIDEAError, ConfigAdminError\nfrom privacyidea.lib.smtpserver import send_email_identifier\nfrom privacyidea.lib.config import get_from_config\nfrom privacyidea.lib.resolver import get_resolver_list\nfrom privacyidea.lib.policy import ACTION, SCOPE, Match\nfrom sqlalchemy import and_\nfrom datetime import datetime\n\n\n__doc__ = \"\"\"\nThis is the library for creating a recovery code for password reset.\nThe recovery code is sent to the user.\n\nThe salted/peppered hash of the recovery code is stored in the passwordreset\ndatabase table.\n\nThis module is tested in tests/test_lib_passwordreset.py\n\"\"\"\n\nlog = logging.getLogger(__name__)\n\nBODY = u\"\"\"Someone requested to reset the password within privacyIDEA.\n\nTo reset your user password please visit the link\n\n{0!s}/reset/{1!s}@{2!s}/{3!s}\n\"\"\"\n\n\n@log_with(log)\ndef create_recoverycode(user, email=None, expiration_seconds=3600,\n recoverycode=None, base_url=\"\"):\n \"\"\"\n Create and send a password recovery code\n\n :param user: User for whom the password reset code should be sent\n :type user: User Object\n :param email: The optional email of the user\n :param recoverycode: Only used for testing purpose\n :return: bool\n \"\"\"\n base_url = base_url.strip(\"recover\")\n base_url += \"#\"\n recoverycode = recoverycode or generate_password(size=24)\n hash_code = hash_with_pepper(recoverycode)\n # send this recoverycode\n #\n pwreset = PasswordReset(hash_code, username=user.login,\n realm=user.realm,\n expiration_seconds=expiration_seconds)\n pwreset.save()\n\n res = False\n if not user:\n raise UserError(\"User required for recovery token.\")\n user_email = user.info.get(\"email\")\n if email and email.lower() != user_email.lower():\n raise UserError(\"The email does not match the users email.\")\n\n identifier = get_from_config(\"recovery.identifier\")\n if identifier:\n # send email\n r = send_email_identifier(identifier, user_email,\n \"Your password reset\",\n BODY.format(base_url,\n user.login, user.realm,\n recoverycode))\n if not r:\n raise privacyIDEAError(\"Failed to send email. 
{0!s}\".format(r))\n else:\n raise ConfigAdminError(\"Missing configuration \"\n \"recovery.identifier.\")\n res = True\n return res\n\n\n@log_with(log)\ndef check_recoverycode(user, recoverycode):\n \"\"\"\n Check if the given recovery code is a valid recovery code for this user\n\n :param user: User, who wants to reset his password.\n :type user: User object\n :param recoverycode: The recovery code\n :type recoverycode: str\n :return: True is code was correct\n \"\"\"\n recoverycode_valid = False\n # delete old entries\n r = PasswordReset.query.filter(and_(PasswordReset.expiration <\n datetime.now())).delete()\n log.debug(\"{0!s} old password recoverycodes deleted.\".format(r))\n sql_query = PasswordReset.query.filter(and_(PasswordReset.username ==\n user.login,\n PasswordReset.realm\n == user.realm))\n for pwr in sql_query:\n if verify_with_pepper(pwr.recoverycode, recoverycode):\n recoverycode_valid = True\n log.debug(u\"Found valid recoverycode for user {0!r}\".format(user))\n # Delete the recovery code, so that it can only be used once!\n r = pwr.delete()\n log.debug(\"{0!s} used password recoverycode deleted.\".format(r))\n\n return recoverycode_valid\n\n\n@log_with(log)\ndef is_password_reset(g):\n \"\"\"\n Check if password reset is allowed.\n\n We need to check, if a user policy with password_reset exists AND if an\n editable resolver exists. Otherwise password_reset does not make any sense.\n\n :return: True or False\n \"\"\"\n rlist = get_resolver_list(editable=True)\n log.debug(\"Number of editable resolvers: {0!s}\".format(len(rlist)))\n pwreset = Match.generic(g, scope=SCOPE.USER,\n action=ACTION.PASSWORDRESET).allowed(write_to_audit_log=False)\n log.debug(\"Password reset allowed via policies: {0!s}\".format(pwreset))\n return bool(rlist and pwreset)\n", "path": "privacyidea/lib/passwordreset.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2022-06-06 Cornelius K\u00f6lbel <[email protected]>\n# Fix recovery link\n# 2015-01-06 Cornelius K\u00f6lbel <[email protected]>\n# The Password reset functions\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n#\nfrom privacyidea.models import PasswordReset\nfrom privacyidea.lib.crypto import (hash_with_pepper, verify_with_pepper,\n generate_password)\nimport logging\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.error import UserError, privacyIDEAError, ConfigAdminError\nfrom privacyidea.lib.smtpserver import send_email_identifier\nfrom privacyidea.lib.config import get_from_config\nfrom privacyidea.lib.resolver import get_resolver_list\nfrom privacyidea.lib.policy import ACTION, SCOPE, Match\nfrom sqlalchemy import and_\nfrom datetime import datetime\n\n\n__doc__ = \"\"\"\nThis is the library for creating a recovery code for password reset.\nThe recovery code is sent to the user.\n\nThe salted/peppered hash of the recovery code is stored in the passwordreset\ndatabase table.\n\nThis module is tested in tests/test_lib_passwordreset.py\n\"\"\"\n\nlog = logging.getLogger(__name__)\n\nBODY = u\"\"\"Someone requested to reset the password within privacyIDEA.\n\nTo reset your user password please visit the link\n\n{0!s}/reset/{1!s}@{2!s}/{3!s}\n\"\"\"\n\n\n@log_with(log)\ndef create_recoverycode(user, email=None, expiration_seconds=3600,\n recoverycode=None, base_url=\"\"):\n \"\"\"\n Create and send a password recovery code\n\n :param user: User for whom the password reset code should be sent\n :type user: User Object\n :param email: The optional email of the user\n :param recoverycode: Only used for testing purpose\n :return: bool\n \"\"\"\n base_url = base_url.strip(\"recover\")\n base_url += \"#!\"\n recoverycode = recoverycode or generate_password(size=24)\n hash_code = hash_with_pepper(recoverycode)\n # send this recoverycode\n #\n pwreset = PasswordReset(hash_code, username=user.login,\n realm=user.realm,\n expiration_seconds=expiration_seconds)\n pwreset.save()\n\n res = False\n if not user:\n raise UserError(\"User required for recovery token.\")\n user_email = user.info.get(\"email\")\n if email and email.lower() != user_email.lower():\n raise UserError(\"The email does not match the users email.\")\n\n identifier = get_from_config(\"recovery.identifier\")\n if identifier:\n # send email\n r = send_email_identifier(identifier, user_email,\n \"Your password reset\",\n BODY.format(base_url,\n user.login, user.realm,\n recoverycode))\n if not r:\n raise privacyIDEAError(\"Failed to send email. 
{0!s}\".format(r))\n else:\n raise ConfigAdminError(\"Missing configuration \"\n \"recovery.identifier.\")\n res = True\n return res\n\n\n@log_with(log)\ndef check_recoverycode(user, recoverycode):\n \"\"\"\n Check if the given recovery code is a valid recovery code for this user\n\n :param user: User, who wants to reset his password.\n :type user: User object\n :param recoverycode: The recovery code\n :type recoverycode: str\n :return: True is code was correct\n \"\"\"\n recoverycode_valid = False\n # delete old entries\n r = PasswordReset.query.filter(and_(PasswordReset.expiration <\n datetime.now())).delete()\n log.debug(\"{0!s} old password recoverycodes deleted.\".format(r))\n sql_query = PasswordReset.query.filter(and_(PasswordReset.username ==\n user.login,\n PasswordReset.realm\n == user.realm))\n for pwr in sql_query:\n if verify_with_pepper(pwr.recoverycode, recoverycode):\n recoverycode_valid = True\n log.debug(u\"Found valid recoverycode for user {0!r}\".format(user))\n # Delete the recovery code, so that it can only be used once!\n r = pwr.delete()\n log.debug(\"{0!s} used password recoverycode deleted.\".format(r))\n\n return recoverycode_valid\n\n\n@log_with(log)\ndef is_password_reset(g):\n \"\"\"\n Check if password reset is allowed.\n\n We need to check, if a user policy with password_reset exists AND if an\n editable resolver exists. Otherwise password_reset does not make any sense.\n\n :return: True or False\n \"\"\"\n rlist = get_resolver_list(editable=True)\n log.debug(\"Number of editable resolvers: {0!s}\".format(len(rlist)))\n pwreset = Match.generic(g, scope=SCOPE.USER,\n action=ACTION.PASSWORDRESET).allowed(write_to_audit_log=False)\n log.debug(\"Password reset allowed via policies: {0!s}\".format(pwreset))\n return bool(rlist and pwreset)\n", "path": "privacyidea/lib/passwordreset.py"}]}
| 1,925 | 209 |
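Below is a minimal standalone sketch of the recovery-link construction this record patches, kept free of privacyIDEA imports; the base URL, login, realm and code values are made-up examples:

```python
BODY = """Someone requested to reset the password within privacyIDEA.

To reset your user password please visit the link

{0!s}/reset/{1!s}@{2!s}/{3!s}
"""


def build_recovery_link(base_url, login, realm, recoverycode):
    # Same transformation as the patched create_recoverycode(): drop the trailing
    # "recover" route and append "#!" so the UI route resolves.
    # (str.strip() removes those characters from both ends; kept here as in the source.)
    base_url = base_url.strip("recover")
    base_url += "#!"
    return BODY.format(base_url, login, realm, recoverycode)


link = build_recovery_link("https://piserver/recover", "hans", "realm1", "abc123")
# The body now points at "https://piserver/#!/reset/hans@realm1/abc123"
assert "#!/reset/" in link
print(link)
```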
gh_patches_debug_1134
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-2982
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OR failing since at least 2019-06-09
OR has been failing since 2019-06-09
Based on automated runs it appears that OR has not run successfully in 2 days (2019-06-09).
```
loaded Open States pupa settings...
or (scrape, import)
bills: {}
votes: {}
08:01:13 CRITICAL pupa: Session(s) 2019-2020 Interim were reported by Oregon.get_session_list() but were not found in Oregon.legislative_sessions or Oregon.ignored_scraped_sessions.
```
Visit http://bobsled.openstates.org for more info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/or/__init__.py`
Content:
```
1 from pupa.scrape import Jurisdiction, Organization
2 from .people import ORPersonScraper
3 # from .committees import ORCommitteeScraper
4 from .bills import ORBillScraper
5 from .votes import ORVoteScraper
6
7
8 class Oregon(Jurisdiction):
9 division_id = "ocd-division/country:us/state:or"
10 classification = "government"
11 name = "Oregon"
12 url = "https://olis.leg.state.or.us"
13 scrapers = {
14 'people': ORPersonScraper,
15 # 'committees': ORCommitteeScraper,
16 'bills': ORBillScraper,
17 'votes': ORVoteScraper
18 }
19 legislative_sessions = [
20 {
21 "_scraped_name": "2007 Regular Session",
22 "identifier": "2007 Regular Session",
23 "name": "2007 Regular Session"
24 },
25 {
26 "_scraped_name": "2008 Special Session",
27 "identifier": "2008 Special Session",
28 "name": "2008 Special Session"
29 },
30 {
31 "_scraped_name": "2009 Regular Session",
32 "identifier": "2009 Regular Session",
33 "name": "2009 Regular Session"
34 },
35 {
36 "_scraped_name": "2010 Special Session",
37 "identifier": "2012 Special Session",
38 "name": "2010 Special Session"
39 },
40 {
41 "_scraped_name": "2011 Regular Session",
42 "identifier": "2011 Regular Session",
43 "name": "2011 Regular Session"
44 },
45 {
46 "_scraped_name": "2012 Regular Session",
47 "identifier": "2012 Regular Session",
48 "name": "2012 Regular Session"
49 },
50 {
51 "_scraped_name": "2012 Special Session",
52 "identifier": "2012 Special Session",
53 "name": "2012 Speical Session"
54 },
55 {
56 "_scraped_name": "2013 Regular Session",
57 "identifier": "2013 Regular Session",
58 "name": "2013 Regular Session"
59 },
60 {
61 "_scraped_name": "2013 Special Session",
62 "identifier": "2013 Special Session",
63 "name": "2013 Special Session"
64 },
65 {
66 "_scraped_name": "2014 Regular Session",
67 "identifier": "2014 Regular Session",
68 "name": "2014 Regular Session"
69 },
70 {
71 "_scraped_name": "2015 Regular Session",
72 "identifier": "2015 Regular Session",
73 "name": "2015 Regular Session"
74 },
75 {
76 "_scraped_name": "2016 Regular Session",
77 "identifier": "2016 Regular Session",
78 "name": "2016 Regular Session"
79 },
80 {
81 "_scraped_name": "2017 Regular Session",
82 "end_date": "2017-07-10",
83 "identifier": "2017 Regular Session",
84 "name": "2017 Regular Session",
85 "start_date": "2017-02-01"
86 },
87 {
88 "_scraped_name": "2018 Regular Session",
89 "identifier": "2018 Regular Session",
90 "name": "2018 Regular Session",
91 "start_date": "2018-02-05",
92 "end_date": "2018-03-09",
93 },
94 {
95 "_scraped_name": "2018 1st Special Session",
96 "identifier": "2018 Special Session",
97 "name": "2018 Special Session",
98 "start_date": "2018-05-21",
99 "end_date": "2018-05-21",
100 },
101 {
102 "_scraped_name": "2019 Regular Session",
103 "identifier": "2019 Regular Session",
104 "name": "2019 Regular Session",
105 "start_date": "2019-01-22",
106 "end_date": "2019-06-30",
107 },
108 ]
109 ignored_scraped_sessions = [
110 "Today",
111 "2017-2018 Interim",
112 "2015-2016 Interim",
113 "2013 1st Special Session",
114 "2012 1st Special Session",
115 "2013 - 2014 Interim",
116 "2011 - 2012 Interim",
117 "2009 - 2010 Interim",
118 "2007 - 2008 Interim"
119 ]
120
121 def get_organizations(self):
122 legislature_name = "Oregon Legislative Assembly"
123
124 legislature = Organization(name=legislature_name,
125 classification="legislature")
126 upper = Organization('Senate', classification='upper',
127 parent_id=legislature._id)
128 lower = Organization('House', classification='lower',
129 parent_id=legislature._id)
130
131 yield legislature
132 yield upper
133 yield lower
134
135 def get_session_list(self):
136 from .apiclient import OregonLegislatorODataClient
137 sessions = OregonLegislatorODataClient(None).all_sessions()
138 sessions = [s['SessionName'] for s in sessions]
139 return sessions
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openstates/or/__init__.py b/openstates/or/__init__.py
--- a/openstates/or/__init__.py
+++ b/openstates/or/__init__.py
@@ -108,6 +108,7 @@
]
ignored_scraped_sessions = [
"Today",
+ "2019-2020 Interim",
"2017-2018 Interim",
"2015-2016 Interim",
"2013 1st Special Session",
|
{"golden_diff": "diff --git a/openstates/or/__init__.py b/openstates/or/__init__.py\n--- a/openstates/or/__init__.py\n+++ b/openstates/or/__init__.py\n@@ -108,6 +108,7 @@\n ]\n ignored_scraped_sessions = [\n \"Today\",\n+ \"2019-2020 Interim\",\n \"2017-2018 Interim\",\n \"2015-2016 Interim\",\n \"2013 1st Special Session\",\n", "issue": "OR failing since at least 2019-06-09\nOR has been failing since 2019-06-09\n\nBased on automated runs it appears that OR has not run successfully in 2 days (2019-06-09).\n\n\n```\n loaded Open States pupa settings...\nor (scrape, import)\n bills: {}\n votes: {}\n08:01:13 CRITICAL pupa: Session(s) 2019-2020 Interim were reported by Oregon.get_session_list() but were not found in Oregon.legislative_sessions or Oregon.ignored_scraped_sessions.\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\nfrom .people import ORPersonScraper\n# from .committees import ORCommitteeScraper\nfrom .bills import ORBillScraper\nfrom .votes import ORVoteScraper\n\n\nclass Oregon(Jurisdiction):\n division_id = \"ocd-division/country:us/state:or\"\n classification = \"government\"\n name = \"Oregon\"\n url = \"https://olis.leg.state.or.us\"\n scrapers = {\n 'people': ORPersonScraper,\n # 'committees': ORCommitteeScraper,\n 'bills': ORBillScraper,\n 'votes': ORVoteScraper\n }\n legislative_sessions = [\n {\n \"_scraped_name\": \"2007 Regular Session\",\n \"identifier\": \"2007 Regular Session\",\n \"name\": \"2007 Regular Session\"\n },\n {\n \"_scraped_name\": \"2008 Special Session\",\n \"identifier\": \"2008 Special Session\",\n \"name\": \"2008 Special Session\"\n },\n {\n \"_scraped_name\": \"2009 Regular Session\",\n \"identifier\": \"2009 Regular Session\",\n \"name\": \"2009 Regular Session\"\n },\n {\n \"_scraped_name\": \"2010 Special Session\",\n \"identifier\": \"2012 Special Session\",\n \"name\": \"2010 Special Session\"\n },\n {\n \"_scraped_name\": \"2011 Regular Session\",\n \"identifier\": \"2011 Regular Session\",\n \"name\": \"2011 Regular Session\"\n },\n {\n \"_scraped_name\": \"2012 Regular Session\",\n \"identifier\": \"2012 Regular Session\",\n \"name\": \"2012 Regular Session\"\n },\n {\n \"_scraped_name\": \"2012 Special Session\",\n \"identifier\": \"2012 Special Session\",\n \"name\": \"2012 Speical Session\"\n },\n {\n \"_scraped_name\": \"2013 Regular Session\",\n \"identifier\": \"2013 Regular Session\",\n \"name\": \"2013 Regular Session\"\n },\n {\n \"_scraped_name\": \"2013 Special Session\",\n \"identifier\": \"2013 Special Session\",\n \"name\": \"2013 Special Session\"\n },\n {\n \"_scraped_name\": \"2014 Regular Session\",\n \"identifier\": \"2014 Regular Session\",\n \"name\": \"2014 Regular Session\"\n },\n {\n \"_scraped_name\": \"2015 Regular Session\",\n \"identifier\": \"2015 Regular Session\",\n \"name\": \"2015 Regular Session\"\n },\n {\n \"_scraped_name\": \"2016 Regular Session\",\n \"identifier\": \"2016 Regular Session\",\n \"name\": \"2016 Regular Session\"\n },\n {\n \"_scraped_name\": \"2017 Regular Session\",\n \"end_date\": \"2017-07-10\",\n \"identifier\": \"2017 Regular Session\",\n \"name\": \"2017 Regular Session\",\n \"start_date\": \"2017-02-01\"\n },\n {\n \"_scraped_name\": \"2018 Regular Session\",\n \"identifier\": \"2018 Regular Session\",\n \"name\": \"2018 Regular Session\",\n \"start_date\": \"2018-02-05\",\n \"end_date\": \"2018-03-09\",\n },\n {\n \"_scraped_name\": \"2018 1st Special Session\",\n \"identifier\": 
\"2018 Special Session\",\n \"name\": \"2018 Special Session\",\n \"start_date\": \"2018-05-21\",\n \"end_date\": \"2018-05-21\",\n },\n {\n \"_scraped_name\": \"2019 Regular Session\",\n \"identifier\": \"2019 Regular Session\",\n \"name\": \"2019 Regular Session\",\n \"start_date\": \"2019-01-22\",\n \"end_date\": \"2019-06-30\",\n },\n ]\n ignored_scraped_sessions = [\n \"Today\",\n \"2017-2018 Interim\",\n \"2015-2016 Interim\",\n \"2013 1st Special Session\",\n \"2012 1st Special Session\",\n \"2013 - 2014 Interim\",\n \"2011 - 2012 Interim\",\n \"2009 - 2010 Interim\",\n \"2007 - 2008 Interim\"\n ]\n\n def get_organizations(self):\n legislature_name = \"Oregon Legislative Assembly\"\n\n legislature = Organization(name=legislature_name,\n classification=\"legislature\")\n upper = Organization('Senate', classification='upper',\n parent_id=legislature._id)\n lower = Organization('House', classification='lower',\n parent_id=legislature._id)\n\n yield legislature\n yield upper\n yield lower\n\n def get_session_list(self):\n from .apiclient import OregonLegislatorODataClient\n sessions = OregonLegislatorODataClient(None).all_sessions()\n sessions = [s['SessionName'] for s in sessions]\n return sessions\n", "path": "openstates/or/__init__.py"}], "after_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\nfrom .people import ORPersonScraper\n# from .committees import ORCommitteeScraper\nfrom .bills import ORBillScraper\nfrom .votes import ORVoteScraper\n\n\nclass Oregon(Jurisdiction):\n division_id = \"ocd-division/country:us/state:or\"\n classification = \"government\"\n name = \"Oregon\"\n url = \"https://olis.leg.state.or.us\"\n scrapers = {\n 'people': ORPersonScraper,\n # 'committees': ORCommitteeScraper,\n 'bills': ORBillScraper,\n 'votes': ORVoteScraper\n }\n legislative_sessions = [\n {\n \"_scraped_name\": \"2007 Regular Session\",\n \"identifier\": \"2007 Regular Session\",\n \"name\": \"2007 Regular Session\"\n },\n {\n \"_scraped_name\": \"2008 Special Session\",\n \"identifier\": \"2008 Special Session\",\n \"name\": \"2008 Special Session\"\n },\n {\n \"_scraped_name\": \"2009 Regular Session\",\n \"identifier\": \"2009 Regular Session\",\n \"name\": \"2009 Regular Session\"\n },\n {\n \"_scraped_name\": \"2010 Special Session\",\n \"identifier\": \"2012 Special Session\",\n \"name\": \"2010 Special Session\"\n },\n {\n \"_scraped_name\": \"2011 Regular Session\",\n \"identifier\": \"2011 Regular Session\",\n \"name\": \"2011 Regular Session\"\n },\n {\n \"_scraped_name\": \"2012 Regular Session\",\n \"identifier\": \"2012 Regular Session\",\n \"name\": \"2012 Regular Session\"\n },\n {\n \"_scraped_name\": \"2012 Special Session\",\n \"identifier\": \"2012 Special Session\",\n \"name\": \"2012 Speical Session\"\n },\n {\n \"_scraped_name\": \"2013 Regular Session\",\n \"identifier\": \"2013 Regular Session\",\n \"name\": \"2013 Regular Session\"\n },\n {\n \"_scraped_name\": \"2013 Special Session\",\n \"identifier\": \"2013 Special Session\",\n \"name\": \"2013 Special Session\"\n },\n {\n \"_scraped_name\": \"2014 Regular Session\",\n \"identifier\": \"2014 Regular Session\",\n \"name\": \"2014 Regular Session\"\n },\n {\n \"_scraped_name\": \"2015 Regular Session\",\n \"identifier\": \"2015 Regular Session\",\n \"name\": \"2015 Regular Session\"\n },\n {\n \"_scraped_name\": \"2016 Regular Session\",\n \"identifier\": \"2016 Regular Session\",\n \"name\": \"2016 Regular Session\"\n },\n {\n \"_scraped_name\": \"2017 Regular Session\",\n \"end_date\": 
\"2017-07-10\",\n \"identifier\": \"2017 Regular Session\",\n \"name\": \"2017 Regular Session\",\n \"start_date\": \"2017-02-01\"\n },\n {\n \"_scraped_name\": \"2018 Regular Session\",\n \"identifier\": \"2018 Regular Session\",\n \"name\": \"2018 Regular Session\",\n \"start_date\": \"2018-02-05\",\n \"end_date\": \"2018-03-09\",\n },\n {\n \"_scraped_name\": \"2018 1st Special Session\",\n \"identifier\": \"2018 Special Session\",\n \"name\": \"2018 Special Session\",\n \"start_date\": \"2018-05-21\",\n \"end_date\": \"2018-05-21\",\n },\n {\n \"_scraped_name\": \"2019 Regular Session\",\n \"identifier\": \"2019 Regular Session\",\n \"name\": \"2019 Regular Session\",\n \"start_date\": \"2019-01-22\",\n \"end_date\": \"2019-06-30\",\n },\n ]\n ignored_scraped_sessions = [\n \"Today\",\n \"2019-2020 Interim\",\n \"2017-2018 Interim\",\n \"2015-2016 Interim\",\n \"2013 1st Special Session\",\n \"2012 1st Special Session\",\n \"2013 - 2014 Interim\",\n \"2011 - 2012 Interim\",\n \"2009 - 2010 Interim\",\n \"2007 - 2008 Interim\"\n ]\n\n def get_organizations(self):\n legislature_name = \"Oregon Legislative Assembly\"\n\n legislature = Organization(name=legislature_name,\n classification=\"legislature\")\n upper = Organization('Senate', classification='upper',\n parent_id=legislature._id)\n lower = Organization('House', classification='lower',\n parent_id=legislature._id)\n\n yield legislature\n yield upper\n yield lower\n\n def get_session_list(self):\n from .apiclient import OregonLegislatorODataClient\n sessions = OregonLegislatorODataClient(None).all_sessions()\n sessions = [s['SessionName'] for s in sessions]\n return sessions\n", "path": "openstates/or/__init__.py"}]}
| 1,987 | 123 |
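For illustration, a stripped-down version of the session check that produced the CRITICAL error above; this is not pupa's actual implementation, just the same set arithmetic, with the session lists abbreviated:

```python
known_sessions = {"2018 Regular Session", "2019 Regular Session"}  # abbreviated
ignored_scraped_sessions = {
    "Today",
    "2019-2020 Interim",  # the entry the patch above adds
    "2017-2018 Interim",
}


def check_sessions(scraped):
    """Raise if any scraped session is neither a known session nor explicitly ignored."""
    unaccounted = set(scraped) - known_sessions - ignored_scraped_sessions
    if unaccounted:
        raise RuntimeError(
            "Session(s) %s were reported by get_session_list() but were not found "
            "in legislative_sessions or ignored_scraped_sessions."
            % ", ".join(sorted(unaccounted))
        )


# With "2019-2020 Interim" in the ignored set this passes; remove it from the set
# above to reproduce the CRITICAL error from the record.
check_sessions(["2019 Regular Session", "2019-2020 Interim", "Today"])
print("all scraped sessions accounted for")
```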
gh_patches_debug_16048
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-473
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError exception raised on up/down commands.
### Description
When running the command `up` or `down` with an integer argument, the following exception is raised:
```
Traceback (most recent call last):
File "/home/david/.pwndbg/pwndbg/commands/__init__.py", line 109, in __call__
return self.function(*args, **kwargs)
File "/home/david/.pwndbg/pwndbg/commands/__init__.py", line 200, in _OnlyWhenRunning
return function(*a, **kw)
File "/home/david/.pwndbg/pwndbg/commands/ida.py", line 46, in up
for i in range(n):
TypeError: 'str' object cannot be interpreted as an integer
```
### Steps to reproduce
Open any binary and attempt to do `up 2` during debugging.
### My setup
pwndbg> version
Gdb: 7.12.0.20161007-git
Python: 3.6.5rc1 (default, Mar 14 2018, 06:54:23) [GCC 7.3.0]
Pwndbg: 1.0.0 build: f69b81e
Capstone: 4.0.1024
Unicorn: 1.0.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/ida.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6 from __future__ import unicode_literals
7
8 import bz2
9 import datetime
10 import os
11
12 import gdb
13
14 import pwndbg.commands
15 import pwndbg.commands.context
16 import pwndbg.ida
17 import pwndbg.regs
18 from pwndbg.gdbutils.functions import GdbFunction
19
20
21 @pwndbg.commands.ParsedCommand
22 @pwndbg.commands.OnlyWhenRunning
23 @pwndbg.events.stop
24 @pwndbg.ida.withIDA
25 def j(*args):
26 """
27 Synchronize IDA's cursor with GDB
28 """
29 try:
30 pc = int(gdb.selected_frame().pc())
31 pwndbg.ida.Jump(pc)
32 except Exception:
33 pass
34
35
36
37 @pwndbg.commands.Command
38 @pwndbg.commands.OnlyWhenRunning
39 def up(n=1):
40 """
41 Select and print stack frame that called this one.
42 An argument says how many frames up to go.
43 """
44 f = gdb.selected_frame()
45
46 for i in range(n):
47 o = f.older()
48 if o:
49 o.select()
50
51 bt = pwndbg.commands.context.context_backtrace(with_banner=False)
52 print('\n'.join(bt))
53
54 j()
55
56
57 @pwndbg.commands.Command
58 @pwndbg.commands.OnlyWhenRunning
59 def down(n=1):
60 """
61 Select and print stack frame called by this one.
62 An argument says how many frames down to go.
63 """
64 f = gdb.selected_frame()
65
66 for i in range(n):
67 o = f.newer()
68 if o:
69 o.select()
70
71 bt = pwndbg.commands.context.context_backtrace(with_banner=False)
72 print('\n'.join(bt))
73
74 j()
75
76
77 @pwndbg.commands.Command
78 @pwndbg.ida.withIDA
79 def save_ida():
80 """Save the IDA database"""
81 if not pwndbg.ida.available():
82 return
83
84 path = pwndbg.ida.GetIdbPath()
85
86 # Need to handle emulated paths for Wine
87 if path.startswith('Z:'):
88 path = path[2:].replace('\\', '/')
89 pwndbg.ida.SaveBase(path)
90
91 basename = os.path.basename(path)
92 dirname = os.path.dirname(path)
93 backups = os.path.join(dirname, 'ida-backup')
94
95 if not os.path.isdir(backups):
96 os.mkdir(backups)
97
98 basename, ext = os.path.splitext(basename)
99 basename += '-%s' % datetime.datetime.now().isoformat()
100 basename += ext
101
102 # Windows doesn't like colons in paths
103 basename = basename.replace(':', '_')
104
105 full_path = os.path.join(backups, basename)
106
107 pwndbg.ida.SaveBase(full_path)
108
109 data = open(full_path, 'rb').read()
110
111 # Compress!
112 full_path_compressed = full_path + '.bz2'
113 bz2.BZ2File(full_path_compressed, 'w').write(data)
114
115 # Remove old version
116 os.unlink(full_path)
117
118 save_ida()
119
120
121 @GdbFunction()
122 def ida(name):
123
124 """Evaluate ida.LocByName() on the supplied value."""
125 name = name.string()
126 result = pwndbg.ida.LocByName(name)
127
128 if 0xffffe000 <= result <= 0xffffffff or 0xffffffffffffe000 <= result <= 0xffffffffffffffff:
129 raise ValueError("ida.LocByName(%r) == BADADDR" % name)
130
131 return result
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwndbg/commands/ida.py b/pwndbg/commands/ida.py
--- a/pwndbg/commands/ida.py
+++ b/pwndbg/commands/ida.py
@@ -43,10 +43,10 @@
"""
f = gdb.selected_frame()
- for i in range(n):
- o = f.older()
- if o:
- o.select()
+ for i in range(int(n)):
+ if f.older():
+ f = f.older()
+ f.select()
bt = pwndbg.commands.context.context_backtrace(with_banner=False)
print('\n'.join(bt))
@@ -63,10 +63,10 @@
"""
f = gdb.selected_frame()
- for i in range(n):
- o = f.newer()
- if o:
- o.select()
+ for i in range(int(n)):
+ if f.newer():
+ f = f.newer()
+ f.select()
bt = pwndbg.commands.context.context_backtrace(with_banner=False)
print('\n'.join(bt))
|
{"golden_diff": "diff --git a/pwndbg/commands/ida.py b/pwndbg/commands/ida.py\n--- a/pwndbg/commands/ida.py\n+++ b/pwndbg/commands/ida.py\n@@ -43,10 +43,10 @@\n \"\"\"\n f = gdb.selected_frame()\n \n- for i in range(n):\n- o = f.older()\n- if o:\n- o.select()\n+ for i in range(int(n)):\n+ if f.older():\n+ f = f.older()\n+ f.select()\n \n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n@@ -63,10 +63,10 @@\n \"\"\"\n f = gdb.selected_frame()\n \n- for i in range(n):\n- o = f.newer()\n- if o:\n- o.select()\n+ for i in range(int(n)):\n+ if f.newer():\n+ f = f.newer()\n+ f.select()\n \n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n", "issue": "TypeError exception raised on up/down commands.\n### Description\r\n\r\nWhen running the command `up` or `down` with an integer argument, the following exception is raised:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/david/.pwndbg/pwndbg/commands/__init__.py\", line 109, in __call__\r\n return self.function(*args, **kwargs)\r\n File \"/home/david/.pwndbg/pwndbg/commands/__init__.py\", line 200, in _OnlyWhenRunning\r\n return function(*a, **kw)\r\n File \"/home/david/.pwndbg/pwndbg/commands/ida.py\", line 46, in up\r\n for i in range(n):\r\nTypeError: 'str' object cannot be interpreted as an integer\r\n```\r\n\r\n### Steps to reproduce\r\n\r\nOpen any binary and attempt to do `up 2` during debugging.\r\n\r\n### My setup\r\n\r\npwndbg> version\r\nGdb: 7.12.0.20161007-git\r\nPython: 3.6.5rc1 (default, Mar 14 2018, 06:54:23) [GCC 7.3.0]\r\nPwndbg: 1.0.0 build: f69b81e\r\nCapstone: 4.0.1024\r\nUnicorn: 1.0.1\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport bz2\nimport datetime\nimport os\n\nimport gdb\n\nimport pwndbg.commands\nimport pwndbg.commands.context\nimport pwndbg.ida\nimport pwndbg.regs\nfrom pwndbg.gdbutils.functions import GdbFunction\n\n\[email protected]\[email protected]\[email protected]\[email protected]\ndef j(*args):\n \"\"\"\n Synchronize IDA's cursor with GDB\n \"\"\"\n try:\n pc = int(gdb.selected_frame().pc())\n pwndbg.ida.Jump(pc)\n except Exception:\n pass\n\n\n\[email protected]\[email protected]\ndef up(n=1):\n \"\"\"\n Select and print stack frame that called this one.\n An argument says how many frames up to go.\n \"\"\"\n f = gdb.selected_frame()\n\n for i in range(n):\n o = f.older()\n if o:\n o.select()\n\n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n\n j()\n\n\[email protected]\[email protected]\ndef down(n=1):\n \"\"\"\n Select and print stack frame called by this one.\n An argument says how many frames down to go.\n \"\"\"\n f = gdb.selected_frame()\n\n for i in range(n):\n o = f.newer()\n if o:\n o.select()\n\n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n\n j()\n\n\[email protected]\[email protected]\ndef save_ida():\n \"\"\"Save the IDA database\"\"\"\n if not pwndbg.ida.available():\n return\n\n path = pwndbg.ida.GetIdbPath()\n\n # Need to handle emulated paths for Wine\n if path.startswith('Z:'):\n path = path[2:].replace('\\\\', '/')\n pwndbg.ida.SaveBase(path)\n\n basename = os.path.basename(path)\n dirname = os.path.dirname(path)\n backups = os.path.join(dirname, 'ida-backup')\n\n if not os.path.isdir(backups):\n 
os.mkdir(backups)\n\n basename, ext = os.path.splitext(basename)\n basename += '-%s' % datetime.datetime.now().isoformat()\n basename += ext\n\n # Windows doesn't like colons in paths\n basename = basename.replace(':', '_')\n\n full_path = os.path.join(backups, basename)\n\n pwndbg.ida.SaveBase(full_path)\n\n data = open(full_path, 'rb').read()\n\n # Compress!\n full_path_compressed = full_path + '.bz2'\n bz2.BZ2File(full_path_compressed, 'w').write(data)\n\n # Remove old version\n os.unlink(full_path)\n\nsave_ida()\n\n\n@GdbFunction()\ndef ida(name):\n\n \"\"\"Evaluate ida.LocByName() on the supplied value.\"\"\"\n name = name.string()\n result = pwndbg.ida.LocByName(name)\n\n if 0xffffe000 <= result <= 0xffffffff or 0xffffffffffffe000 <= result <= 0xffffffffffffffff:\n raise ValueError(\"ida.LocByName(%r) == BADADDR\" % name)\n\n return result\n", "path": "pwndbg/commands/ida.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport bz2\nimport datetime\nimport os\n\nimport gdb\n\nimport pwndbg.commands\nimport pwndbg.commands.context\nimport pwndbg.ida\nimport pwndbg.regs\nfrom pwndbg.gdbutils.functions import GdbFunction\n\n\[email protected]\[email protected]\[email protected]\[email protected]\ndef j(*args):\n \"\"\"\n Synchronize IDA's cursor with GDB\n \"\"\"\n try:\n pc = int(gdb.selected_frame().pc())\n pwndbg.ida.Jump(pc)\n except Exception:\n pass\n\n\n\[email protected]\[email protected]\ndef up(n=1):\n \"\"\"\n Select and print stack frame that called this one.\n An argument says how many frames up to go.\n \"\"\"\n f = gdb.selected_frame()\n\n for i in range(int(n)):\n if f.older():\n f = f.older()\n f.select()\n\n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n\n j()\n\n\[email protected]\[email protected]\ndef down(n=1):\n \"\"\"\n Select and print stack frame called by this one.\n An argument says how many frames down to go.\n \"\"\"\n f = gdb.selected_frame()\n\n for i in range(int(n)):\n if f.newer():\n f = f.newer()\n f.select()\n\n bt = pwndbg.commands.context.context_backtrace(with_banner=False)\n print('\\n'.join(bt))\n\n j()\n\n\[email protected]\[email protected]\ndef save_ida():\n \"\"\"Save the IDA database\"\"\"\n if not pwndbg.ida.available():\n return\n\n path = pwndbg.ida.GetIdbPath()\n\n # Need to handle emulated paths for Wine\n if path.startswith('Z:'):\n path = path[2:].replace('\\\\', '/')\n pwndbg.ida.SaveBase(path)\n\n basename = os.path.basename(path)\n dirname = os.path.dirname(path)\n backups = os.path.join(dirname, 'ida-backup')\n\n if not os.path.isdir(backups):\n os.mkdir(backups)\n\n basename, ext = os.path.splitext(basename)\n basename += '-%s' % datetime.datetime.now().isoformat()\n basename += ext\n\n # Windows doesn't like colons in paths\n basename = basename.replace(':', '_')\n\n full_path = os.path.join(backups, basename)\n\n pwndbg.ida.SaveBase(full_path)\n\n data = open(full_path, 'rb').read()\n\n # Compress!\n full_path_compressed = full_path + '.bz2'\n bz2.BZ2File(full_path_compressed, 'w').write(data)\n\n # Remove old version\n os.unlink(full_path)\n\nsave_ida()\n\n\n@GdbFunction()\ndef ida(name):\n\n \"\"\"Evaluate ida.LocByName() on the supplied value.\"\"\"\n name = name.string()\n result = pwndbg.ida.LocByName(name)\n\n if 0xffffe000 <= result <= 0xffffffff or 0xffffffffffffe000 <= result <= 
0xffffffffffffffff:\n raise ValueError(\"ida.LocByName(%r) == BADADDR\" % name)\n\n return result\n", "path": "pwndbg/commands/ida.py"}]}
| 1,666 | 250 |
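A small out-of-GDB reproduction of the bug this record fixes; `FakeFrame` is a stand-in for `gdb.Frame` used only to keep the sketch runnable, and the loop mirrors the patched `up()` logic:

```python
class FakeFrame:
    """Stand-in for gdb.Frame so the sketch runs outside GDB."""

    def __init__(self, depth=0):
        self.depth = depth

    def older(self):
        return FakeFrame(self.depth + 1)

    def select(self):
        print("selected frame at depth", self.depth)


def up(frame, n="1"):
    # GDB hands command arguments over as strings, so range(n) with n="2"
    # raises TypeError; int(n) is the coercion added by the patch.
    for _ in range(int(n)):
        if frame.older():
            frame = frame.older()
        frame.select()


up(FakeFrame(), "2")  # works; without int(n) this is exactly the TypeError from the issue
```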
gh_patches_debug_9390
|
rasdani/github-patches
|
git_diff
|
pandas-dev__pandas-18844
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TST: make _skip_if into pytest decorators
- [X] _skip_if_32bit (#18693)
- [X] _skip_if_no_mpl (#18427)
- [X] _skip_if_mpl_1_5 (#18682)
- [x] _skip_if_no_scipy (#18794)
- [x] _skip_if_no_lzma (#18820)
- [x] _skip_if_no_xarray (#18814)
- [X] _skip_if_windows_python_3 (#18693)
- [X] _skip_if_windows (#18693)
- [x] _skip_if_no_pathlib (#18765)
- [x] _skip_if_no_localpath (#18765)
- [x] skip_if_no_ne (#18820)
- [x] _skip_if_has_locale (#18745)
- [x] _skip_if_not_us_locale (#18745)
- [ ] _skip_if_no_mock
- [x] _skip_if_no_ipython (#18814)
- [ ] skip_if_no_package
we should move the ``_skip_if_*`` functions out of ``pandas.util.testing`` to another (private) module,
then we can add [skipif decorators](http://pytest.readthedocs.io/en/reorganize-docs/new-docs/user/skipping.html)
and use them like this:
```
@skip_if_windows_py3
def test_.......():
```
rather than calling ``tm._skip_if_windows_py390`` in the body of the function (sometimes you also need to do that, so we leave the functions themselves as well).
this makes much more idiomatic and readable pytest code and removes the need to roll your own when using the decorator.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandas/conftest.py`
Content:
```
1 import pytest
2
3 from distutils.version import LooseVersion
4 import numpy
5 import pandas
6 import pandas.util.testing as tm
7 import dateutil
8
9
10 def pytest_addoption(parser):
11 parser.addoption("--skip-slow", action="store_true",
12 help="skip slow tests")
13 parser.addoption("--skip-network", action="store_true",
14 help="skip network tests")
15 parser.addoption("--run-high-memory", action="store_true",
16 help="run high memory tests")
17 parser.addoption("--only-slow", action="store_true",
18 help="run only slow tests")
19
20
21 def pytest_runtest_setup(item):
22 if 'slow' in item.keywords and item.config.getoption("--skip-slow"):
23 pytest.skip("skipping due to --skip-slow")
24
25 if 'slow' not in item.keywords and item.config.getoption("--only-slow"):
26 pytest.skip("skipping due to --only-slow")
27
28 if 'network' in item.keywords and item.config.getoption("--skip-network"):
29 pytest.skip("skipping due to --skip-network")
30
31 if 'high_memory' in item.keywords and not item.config.getoption(
32 "--run-high-memory"):
33 pytest.skip(
34 "skipping high memory test since --run-high-memory was not set")
35
36
37 # Configurations for all tests and all test modules
38
39 @pytest.fixture(autouse=True)
40 def configure_tests():
41 pandas.set_option('chained_assignment', 'raise')
42
43
44 # For running doctests: make np and pd names available
45
46 @pytest.fixture(autouse=True)
47 def add_imports(doctest_namespace):
48 doctest_namespace['np'] = numpy
49 doctest_namespace['pd'] = pandas
50
51
52 @pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])
53 def spmatrix(request):
54 tm._skip_if_no_scipy()
55 from scipy import sparse
56 return getattr(sparse, request.param + '_matrix')
57
58
59 @pytest.fixture
60 def ip():
61 """
62 Get an instance of IPython.InteractiveShell.
63
64 Will raise a skip if IPython is not installed.
65 """
66
67 pytest.importorskip('IPython', minversion="6.0.0")
68 from IPython.core.interactiveshell import InteractiveShell
69 return InteractiveShell()
70
71
72 is_dateutil_le_261 = pytest.mark.skipif(
73 LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),
74 reason="dateutil api change version")
75 is_dateutil_gt_261 = pytest.mark.skipif(
76 LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),
77 reason="dateutil stable version")
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -3,7 +3,6 @@
from distutils.version import LooseVersion
import numpy
import pandas
-import pandas.util.testing as tm
import dateutil
@@ -51,7 +50,6 @@
@pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])
def spmatrix(request):
- tm._skip_if_no_scipy()
from scipy import sparse
return getattr(sparse, request.param + '_matrix')
|
{"golden_diff": "diff --git a/pandas/conftest.py b/pandas/conftest.py\n--- a/pandas/conftest.py\n+++ b/pandas/conftest.py\n@@ -3,7 +3,6 @@\n from distutils.version import LooseVersion\n import numpy\n import pandas\n-import pandas.util.testing as tm\n import dateutil\n \n \n@@ -51,7 +50,6 @@\n \n @pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])\n def spmatrix(request):\n- tm._skip_if_no_scipy()\n from scipy import sparse\n return getattr(sparse, request.param + '_matrix')\n", "issue": "TST: make _skip_if into pytest decorators\n- [X] _skip_if_32bit (#18693)\r\n- [X] _skip_if_no_mpl (#18427)\r\n- [X] _skip_if_mpl_1_5 (#18682)\r\n- [x] _skip_if_no_scipy (#18794)\r\n- [x] _skip_if_no_lzma (#18820)\r\n- [x] _skip_if_no_xarray (#18814)\r\n- [X] _skip_if_windows_python_3 (#18693)\r\n- [X] _skip_if_windows (#18693)\r\n- [x] _skip_if_no_pathlib (#18765) \r\n- [x] _skip_if_no_localpath (#18765)\r\n- [x] skip_if_no_ne (#18820)\r\n- [x] _skip_if_has_locale (#18745) \r\n- [x] _skip_if_not_us_locale (#18745)\r\n- [ ] _skip_if_no_mock\r\n- [x] _skip_if_no_ipython (#18814)\r\n- [ ] skip_if_no_package\r\n\r\nwe should move the ``_skip_if_*`` functions out of ``pandas.util.testing`` to another (private module)\r\n\r\nthen we can add [skipif decorators](http://pytest.readthedocs.io/en/reorganize-docs/new-docs/user/skipping.html)\r\n\r\nand use like this\r\n\r\n```\r\n@skip_if_windows_py3\r\ndef test_.......():\r\n```\r\n\r\nrather than calling ``tm._skip_if_windows_py390`` in the body of the function (sometimes you also need to do that, so we leave the functions themselves as well).\r\n\r\nthis makes much more idiomatic and readable pytest code and removes the need to roll your own when using the decorator.\r\n\n", "before_files": [{"content": "import pytest\n\nfrom distutils.version import LooseVersion\nimport numpy\nimport pandas\nimport pandas.util.testing as tm\nimport dateutil\n\n\ndef pytest_addoption(parser):\n parser.addoption(\"--skip-slow\", action=\"store_true\",\n help=\"skip slow tests\")\n parser.addoption(\"--skip-network\", action=\"store_true\",\n help=\"skip network tests\")\n parser.addoption(\"--run-high-memory\", action=\"store_true\",\n help=\"run high memory tests\")\n parser.addoption(\"--only-slow\", action=\"store_true\",\n help=\"run only slow tests\")\n\n\ndef pytest_runtest_setup(item):\n if 'slow' in item.keywords and item.config.getoption(\"--skip-slow\"):\n pytest.skip(\"skipping due to --skip-slow\")\n\n if 'slow' not in item.keywords and item.config.getoption(\"--only-slow\"):\n pytest.skip(\"skipping due to --only-slow\")\n\n if 'network' in item.keywords and item.config.getoption(\"--skip-network\"):\n pytest.skip(\"skipping due to --skip-network\")\n\n if 'high_memory' in item.keywords and not item.config.getoption(\n \"--run-high-memory\"):\n pytest.skip(\n \"skipping high memory test since --run-high-memory was not set\")\n\n\n# Configurations for all tests and all test modules\n\[email protected](autouse=True)\ndef configure_tests():\n pandas.set_option('chained_assignment', 'raise')\n\n\n# For running doctests: make np and pd names available\n\[email protected](autouse=True)\ndef add_imports(doctest_namespace):\n doctest_namespace['np'] = numpy\n doctest_namespace['pd'] = pandas\n\n\[email protected](params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])\ndef spmatrix(request):\n tm._skip_if_no_scipy()\n from scipy import sparse\n return getattr(sparse, request.param + '_matrix')\n\n\[email protected]\ndef ip():\n \"\"\"\n Get an instance 
of IPython.InteractiveShell.\n\n Will raise a skip if IPython is not installed.\n \"\"\"\n\n pytest.importorskip('IPython', minversion=\"6.0.0\")\n from IPython.core.interactiveshell import InteractiveShell\n return InteractiveShell()\n\n\nis_dateutil_le_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),\n reason=\"dateutil api change version\")\nis_dateutil_gt_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),\n reason=\"dateutil stable version\")\n", "path": "pandas/conftest.py"}], "after_files": [{"content": "import pytest\n\nfrom distutils.version import LooseVersion\nimport numpy\nimport pandas\nimport dateutil\n\n\ndef pytest_addoption(parser):\n parser.addoption(\"--skip-slow\", action=\"store_true\",\n help=\"skip slow tests\")\n parser.addoption(\"--skip-network\", action=\"store_true\",\n help=\"skip network tests\")\n parser.addoption(\"--run-high-memory\", action=\"store_true\",\n help=\"run high memory tests\")\n parser.addoption(\"--only-slow\", action=\"store_true\",\n help=\"run only slow tests\")\n\n\ndef pytest_runtest_setup(item):\n if 'slow' in item.keywords and item.config.getoption(\"--skip-slow\"):\n pytest.skip(\"skipping due to --skip-slow\")\n\n if 'slow' not in item.keywords and item.config.getoption(\"--only-slow\"):\n pytest.skip(\"skipping due to --only-slow\")\n\n if 'network' in item.keywords and item.config.getoption(\"--skip-network\"):\n pytest.skip(\"skipping due to --skip-network\")\n\n if 'high_memory' in item.keywords and not item.config.getoption(\n \"--run-high-memory\"):\n pytest.skip(\n \"skipping high memory test since --run-high-memory was not set\")\n\n\n# Configurations for all tests and all test modules\n\[email protected](autouse=True)\ndef configure_tests():\n pandas.set_option('chained_assignment', 'raise')\n\n\n# For running doctests: make np and pd names available\n\[email protected](autouse=True)\ndef add_imports(doctest_namespace):\n doctest_namespace['np'] = numpy\n doctest_namespace['pd'] = pandas\n\n\[email protected](params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])\ndef spmatrix(request):\n from scipy import sparse\n return getattr(sparse, request.param + '_matrix')\n\n\[email protected]\ndef ip():\n \"\"\"\n Get an instance of IPython.InteractiveShell.\n\n Will raise a skip if IPython is not installed.\n \"\"\"\n\n pytest.importorskip('IPython', minversion=\"6.0.0\")\n from IPython.core.interactiveshell import InteractiveShell\n return InteractiveShell()\n\n\nis_dateutil_le_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),\n reason=\"dateutil api change version\")\nis_dateutil_gt_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),\n reason=\"dateutil stable version\")\n", "path": "pandas/conftest.py"}]}
| 1,398 | 148 |
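The pandas `conftest.py` entry above replaces ad-hoc `tm._skip_if_*` calls inside fixtures with reusable pytest skip markers. As a rough illustration of the decorator pattern the issue asks for — the module and marker names below are illustrative, not the actual pandas helpers — a minimal sketch might look like this:

```python
# conftest_skips.py -- hypothetical module of reusable skip markers
import importlib
import sys

import pytest


def _has_module(name):
    """Return True when an optional dependency can be imported."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False


# Each marker is built once and applied as a decorator wherever it is needed.
skip_if_no_scipy = pytest.mark.skipif(
    not _has_module("scipy"), reason="scipy is not installed"
)
skip_if_windows = pytest.mark.skipif(
    sys.platform == "win32", reason="test is skipped on Windows"
)


@skip_if_no_scipy
def test_sparse_identity_roundtrip():
    from scipy import sparse

    mat = sparse.csr_matrix([[1, 0], [0, 1]])
    assert mat.nnz == 2
```

Decorating tests or fixtures keeps the skip condition visible at the definition site instead of buried inside the function body, which is the motivation behind the issue's checklist.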
gh_patches_debug_38130 | rasdani/github-patches | git_diff | modoboa__modoboa-726 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create alias with tag (+) in recipient address with internal domain
It's impossible to create a new alias with a tag in the recipient address.
Example:
- I have the [email protected] mailbox
- I would like to create the [email protected] alias with [email protected] as the recipient
I get this error: « Local recipient [email protected] not found ».
Proposed solution:
- use Alias.extmboxes to record this email address with its tag
Do you see another solution?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modoboa/extensions/admin/forms/alias.py`
Content:
```
1 from django import forms
2 from django.utils.translation import ugettext as _, ugettext_lazy
3 from django.http import QueryDict
4 from modoboa.lib.exceptions import BadRequest, NotFound, Conflict
5 from modoboa.lib.emailutils import split_mailbox
6 from modoboa.lib.formutils import (
7 DynamicForm
8 )
9 from modoboa.extensions.admin.models import (
10 Domain, Mailbox, Alias
11 )
12
13
14 class AliasForm(forms.ModelForm, DynamicForm):
15 email = forms.EmailField(
16 label=ugettext_lazy("Email address"),
17 help_text=ugettext_lazy(
18 "The distribution list address. Use the '*' character to create a "
19 "'catchall' address (ex: *@domain.tld)."
20 ),
21 widget=forms.TextInput(attrs={"class": "form-control"})
22 )
23 recipients = forms.EmailField(
24 label=ugettext_lazy("Recipients"), required=False,
25 help_text=ugettext_lazy(
26 "Mailbox(es) this alias will point to. Indicate only one address "
27 "per input, press ENTER to add a new input."
28 ),
29 widget=forms.TextInput(attrs={"class": "form-control"})
30 )
31
32 class Meta:
33 model = Alias
34 fields = ("enabled",)
35
36 def __init__(self, user, *args, **kwargs):
37 self.user = user
38 super(AliasForm, self).__init__(*args, **kwargs)
39 self.fields.keyOrder = ['email', 'recipients', 'enabled']
40
41 if len(args) and isinstance(args[0], QueryDict):
42 if "instance" in kwargs:
43 if not kwargs["instance"].domain.enabled:
44 del self.fields["enabled"]
45 self._load_from_qdict(args[0], "recipients", forms.EmailField)
46 elif "instance" in kwargs:
47 dlist = kwargs["instance"]
48 self.fields["email"].initial = dlist.full_address
49 if not dlist.domain.enabled:
50 self.fields["enabled"].widget.attrs["disabled"] = "disabled"
51 cpt = 1
52 for al in dlist.aliases.all():
53 name = "recipients_%d" % cpt
54 self._create_field(forms.EmailField, name, al.full_address, 2)
55 cpt += 1
56 for mb in dlist.mboxes.all():
57 name = "recipients_%d" % (cpt)
58 self._create_field(forms.EmailField, name, mb.full_address, 2)
59 cpt += 1
60 for addr in dlist.extmboxes.split(','):
61 if addr == "":
62 continue
63 name = "recipients_%d" % (cpt)
64 self._create_field(forms.EmailField, name, addr, 2)
65 cpt += 1
66
67 def clean_email(self):
68 localpart, domname = split_mailbox(self.cleaned_data["email"])
69 try:
70 domain = Domain.objects.get(name=domname)
71 except Domain.DoesNotExist:
72 raise forms.ValidationError(_("Domain does not exist"))
73 if not self.user.can_access(domain):
74 raise forms.ValidationError(
75 _("You don't have access to this domain")
76 )
77 return self.cleaned_data["email"].lower()
78
79 def set_recipients(self):
80 """Recipients dispatching
81
82 We make a difference between 'local' recipients (the ones hosted
83 by Modoboa) and 'external' recipients.
84 """
85 self.ext_rcpts = []
86 self.int_rcpts = []
87 total = 0
88
89 for k, v in self.cleaned_data.items():
90 if not k.startswith("recipients"):
91 continue
92 if v == "":
93 continue
94 local_part, domname = split_mailbox(v)
95 if domname is None:
96 raise BadRequest(
97 u"%s %s" % (_("Invalid mailbox"), v)
98 )
99 try:
100 domain = Domain.objects.get(name=domname)
101 except Domain.DoesNotExist:
102 domain = None
103 if domain is not None:
104 try:
105 rcpt = Alias.objects.get(domain=domain, address=local_part)
106 if rcpt.full_address == self.cleaned_data["email"]:
107 rcpt = None
108 except Alias.DoesNotExist:
109 rcpt = None
110 if rcpt is None:
111 try:
112 rcpt = Mailbox.objects.get(domain=domain, address=local_part)
113 except Mailbox.DoesNotExist:
114 raise NotFound(
115 _("Local recipient %s not found" % v)
116 )
117 if rcpt in self.int_rcpts:
118 raise Conflict(
119 _("Recipient %s already present" % v)
120 )
121 self.int_rcpts += [rcpt]
122 total += 1
123 continue
124
125 if v in self.ext_rcpts:
126 raise Conflict(
127 _("Recipient %s already present" % v)
128 )
129 self.ext_rcpts += [v]
130 total += 1
131
132 if total == 0:
133 raise BadRequest(_("No recipient defined"))
134
135 def save(self, commit=True):
136 alias = super(AliasForm, self).save(commit=False)
137 localpart, domname = split_mailbox(self.cleaned_data["email"])
138 alias.address = localpart
139 alias.domain = Domain.objects.get(name=domname)
140 if commit:
141 alias.save(int_rcpts=self.int_rcpts, ext_rcpts=self.ext_rcpts)
142 self.save_m2m()
143 return alias
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/modoboa/extensions/admin/forms/alias.py b/modoboa/extensions/admin/forms/alias.py
--- a/modoboa/extensions/admin/forms/alias.py
+++ b/modoboa/extensions/admin/forms/alias.py
@@ -96,31 +96,36 @@
raise BadRequest(
u"%s %s" % (_("Invalid mailbox"), v)
)
- try:
- domain = Domain.objects.get(name=domname)
- except Domain.DoesNotExist:
- domain = None
+
+ # Support tag in recipient, see https://github.com/tonioo/modoboa/issues/713
+ local_part_with_tag = None
+ if '+' in local_part:
+ local_part_with_tag = local_part
+ local_part = local_part[0:local_part.find('+')]
+
+ domain = Domain.objects.filter(name=domname).first()
+
if domain is not None:
- try:
- rcpt = Alias.objects.get(domain=domain, address=local_part)
- if rcpt.full_address == self.cleaned_data["email"]:
- rcpt = None
- except Alias.DoesNotExist:
+ rcpt = Alias.objects.filter(domain=domain, address=local_part).first()
+ if rcpt and (rcpt.full_address == self.cleaned_data["email"]):
rcpt = None
+
if rcpt is None:
try:
rcpt = Mailbox.objects.get(domain=domain, address=local_part)
except Mailbox.DoesNotExist:
raise NotFound(
- _("Local recipient %s not found" % v)
+ _("Local recipient %s@%s not found" % (local_part, domname))
)
- if rcpt in self.int_rcpts:
- raise Conflict(
- _("Recipient %s already present" % v)
- )
- self.int_rcpts += [rcpt]
- total += 1
- continue
+
+ if local_part_with_tag is None:
+ if rcpt in self.int_rcpts:
+ raise Conflict(
+ _("Recipient %s already present" % v)
+ )
+ self.int_rcpts += [rcpt]
+ total += 1
+ continue
if v in self.ext_rcpts:
raise Conflict(
@@ -134,8 +139,8 @@
def save(self, commit=True):
alias = super(AliasForm, self).save(commit=False)
- localpart, domname = split_mailbox(self.cleaned_data["email"])
- alias.address = localpart
+ local_part, domname = split_mailbox(self.cleaned_data["email"])
+ alias.address = local_part
alias.domain = Domain.objects.get(name=domname)
if commit:
alias.save(int_rcpts=self.int_rcpts, ext_rcpts=self.ext_rcpts)
|
{"golden_diff": "diff --git a/modoboa/extensions/admin/forms/alias.py b/modoboa/extensions/admin/forms/alias.py\n--- a/modoboa/extensions/admin/forms/alias.py\n+++ b/modoboa/extensions/admin/forms/alias.py\n@@ -96,31 +96,36 @@\n raise BadRequest(\n u\"%s %s\" % (_(\"Invalid mailbox\"), v)\n )\n- try:\n- domain = Domain.objects.get(name=domname)\n- except Domain.DoesNotExist:\n- domain = None\n+\n+ # Support tag in recipient, see https://github.com/tonioo/modoboa/issues/713\n+ local_part_with_tag = None\n+ if '+' in local_part:\n+ local_part_with_tag = local_part\n+ local_part = local_part[0:local_part.find('+')]\n+\n+ domain = Domain.objects.filter(name=domname).first()\n+\n if domain is not None:\n- try:\n- rcpt = Alias.objects.get(domain=domain, address=local_part)\n- if rcpt.full_address == self.cleaned_data[\"email\"]:\n- rcpt = None\n- except Alias.DoesNotExist:\n+ rcpt = Alias.objects.filter(domain=domain, address=local_part).first()\n+ if rcpt and (rcpt.full_address == self.cleaned_data[\"email\"]):\n rcpt = None\n+\n if rcpt is None:\n try:\n rcpt = Mailbox.objects.get(domain=domain, address=local_part)\n except Mailbox.DoesNotExist:\n raise NotFound(\n- _(\"Local recipient %s not found\" % v)\n+ _(\"Local recipient %s@%s not found\" % (local_part, domname))\n )\n- if rcpt in self.int_rcpts:\n- raise Conflict(\n- _(\"Recipient %s already present\" % v)\n- )\n- self.int_rcpts += [rcpt]\n- total += 1\n- continue\n+\n+ if local_part_with_tag is None:\n+ if rcpt in self.int_rcpts:\n+ raise Conflict(\n+ _(\"Recipient %s already present\" % v)\n+ )\n+ self.int_rcpts += [rcpt]\n+ total += 1\n+ continue\n \n if v in self.ext_rcpts:\n raise Conflict(\n@@ -134,8 +139,8 @@\n \n def save(self, commit=True):\n alias = super(AliasForm, self).save(commit=False)\n- localpart, domname = split_mailbox(self.cleaned_data[\"email\"])\n- alias.address = localpart\n+ local_part, domname = split_mailbox(self.cleaned_data[\"email\"])\n+ alias.address = local_part\n alias.domain = Domain.objects.get(name=domname)\n if commit:\n alias.save(int_rcpts=self.int_rcpts, ext_rcpts=self.ext_rcpts)\n", "issue": "Create alias with tag (+) in recipient address with internal domain\nIt's impossible to create new alias with tag in recipient address.\n\nExample : \n- I've [email protected] mailbox\n- I would like to create [email protected] alias with [email protected] recipient\n\nI've this error \u00ab Local recipient [email protected] not found \u00bb.\n\nSolution proposition : \n- use Alias.extmboxes to record this email address with tag\n\nDo you see other solution ?\n\n", "before_files": [{"content": "from django import forms\nfrom django.utils.translation import ugettext as _, ugettext_lazy\nfrom django.http import QueryDict\nfrom modoboa.lib.exceptions import BadRequest, NotFound, Conflict\nfrom modoboa.lib.emailutils import split_mailbox\nfrom modoboa.lib.formutils import (\n DynamicForm\n)\nfrom modoboa.extensions.admin.models import (\n Domain, Mailbox, Alias\n)\n\n\nclass AliasForm(forms.ModelForm, DynamicForm):\n email = forms.EmailField(\n label=ugettext_lazy(\"Email address\"),\n help_text=ugettext_lazy(\n \"The distribution list address. Use the '*' character to create a \"\n \"'catchall' address (ex: *@domain.tld).\"\n ),\n widget=forms.TextInput(attrs={\"class\": \"form-control\"})\n )\n recipients = forms.EmailField(\n label=ugettext_lazy(\"Recipients\"), required=False,\n help_text=ugettext_lazy(\n \"Mailbox(es) this alias will point to. 
Indicate only one address \"\n \"per input, press ENTER to add a new input.\"\n ),\n widget=forms.TextInput(attrs={\"class\": \"form-control\"})\n )\n\n class Meta:\n model = Alias\n fields = (\"enabled\",)\n\n def __init__(self, user, *args, **kwargs):\n self.user = user\n super(AliasForm, self).__init__(*args, **kwargs)\n self.fields.keyOrder = ['email', 'recipients', 'enabled']\n\n if len(args) and isinstance(args[0], QueryDict):\n if \"instance\" in kwargs:\n if not kwargs[\"instance\"].domain.enabled:\n del self.fields[\"enabled\"]\n self._load_from_qdict(args[0], \"recipients\", forms.EmailField)\n elif \"instance\" in kwargs:\n dlist = kwargs[\"instance\"]\n self.fields[\"email\"].initial = dlist.full_address\n if not dlist.domain.enabled:\n self.fields[\"enabled\"].widget.attrs[\"disabled\"] = \"disabled\"\n cpt = 1\n for al in dlist.aliases.all():\n name = \"recipients_%d\" % cpt\n self._create_field(forms.EmailField, name, al.full_address, 2)\n cpt += 1\n for mb in dlist.mboxes.all():\n name = \"recipients_%d\" % (cpt)\n self._create_field(forms.EmailField, name, mb.full_address, 2)\n cpt += 1\n for addr in dlist.extmboxes.split(','):\n if addr == \"\":\n continue\n name = \"recipients_%d\" % (cpt)\n self._create_field(forms.EmailField, name, addr, 2)\n cpt += 1\n\n def clean_email(self):\n localpart, domname = split_mailbox(self.cleaned_data[\"email\"])\n try:\n domain = Domain.objects.get(name=domname)\n except Domain.DoesNotExist:\n raise forms.ValidationError(_(\"Domain does not exist\"))\n if not self.user.can_access(domain):\n raise forms.ValidationError(\n _(\"You don't have access to this domain\")\n )\n return self.cleaned_data[\"email\"].lower()\n\n def set_recipients(self):\n \"\"\"Recipients dispatching\n\n We make a difference between 'local' recipients (the ones hosted\n by Modoboa) and 'external' recipients.\n \"\"\"\n self.ext_rcpts = []\n self.int_rcpts = []\n total = 0\n\n for k, v in self.cleaned_data.items():\n if not k.startswith(\"recipients\"):\n continue\n if v == \"\":\n continue\n local_part, domname = split_mailbox(v)\n if domname is None:\n raise BadRequest(\n u\"%s %s\" % (_(\"Invalid mailbox\"), v)\n )\n try:\n domain = Domain.objects.get(name=domname)\n except Domain.DoesNotExist:\n domain = None\n if domain is not None:\n try:\n rcpt = Alias.objects.get(domain=domain, address=local_part)\n if rcpt.full_address == self.cleaned_data[\"email\"]:\n rcpt = None\n except Alias.DoesNotExist:\n rcpt = None\n if rcpt is None:\n try:\n rcpt = Mailbox.objects.get(domain=domain, address=local_part)\n except Mailbox.DoesNotExist:\n raise NotFound(\n _(\"Local recipient %s not found\" % v)\n )\n if rcpt in self.int_rcpts:\n raise Conflict(\n _(\"Recipient %s already present\" % v)\n )\n self.int_rcpts += [rcpt]\n total += 1\n continue\n\n if v in self.ext_rcpts:\n raise Conflict(\n _(\"Recipient %s already present\" % v)\n )\n self.ext_rcpts += [v]\n total += 1\n\n if total == 0:\n raise BadRequest(_(\"No recipient defined\"))\n\n def save(self, commit=True):\n alias = super(AliasForm, self).save(commit=False)\n localpart, domname = split_mailbox(self.cleaned_data[\"email\"])\n alias.address = localpart\n alias.domain = Domain.objects.get(name=domname)\n if commit:\n alias.save(int_rcpts=self.int_rcpts, ext_rcpts=self.ext_rcpts)\n self.save_m2m()\n return alias\n", "path": "modoboa/extensions/admin/forms/alias.py"}], "after_files": [{"content": "from django import forms\nfrom django.utils.translation import ugettext as _, ugettext_lazy\nfrom django.http import 
QueryDict\nfrom modoboa.lib.exceptions import BadRequest, NotFound, Conflict\nfrom modoboa.lib.emailutils import split_mailbox\nfrom modoboa.lib.formutils import (\n DynamicForm\n)\nfrom modoboa.extensions.admin.models import (\n Domain, Mailbox, Alias\n)\n\n\nclass AliasForm(forms.ModelForm, DynamicForm):\n email = forms.EmailField(\n label=ugettext_lazy(\"Email address\"),\n help_text=ugettext_lazy(\n \"The distribution list address. Use the '*' character to create a \"\n \"'catchall' address (ex: *@domain.tld).\"\n ),\n widget=forms.TextInput(attrs={\"class\": \"form-control\"})\n )\n recipients = forms.EmailField(\n label=ugettext_lazy(\"Recipients\"), required=False,\n help_text=ugettext_lazy(\n \"Mailbox(es) this alias will point to. Indicate only one address \"\n \"per input, press ENTER to add a new input.\"\n ),\n widget=forms.TextInput(attrs={\"class\": \"form-control\"})\n )\n\n class Meta:\n model = Alias\n fields = (\"enabled\",)\n\n def __init__(self, user, *args, **kwargs):\n self.user = user\n super(AliasForm, self).__init__(*args, **kwargs)\n self.fields.keyOrder = ['email', 'recipients', 'enabled']\n\n if len(args) and isinstance(args[0], QueryDict):\n if \"instance\" in kwargs:\n if not kwargs[\"instance\"].domain.enabled:\n del self.fields[\"enabled\"]\n self._load_from_qdict(args[0], \"recipients\", forms.EmailField)\n elif \"instance\" in kwargs:\n dlist = kwargs[\"instance\"]\n self.fields[\"email\"].initial = dlist.full_address\n if not dlist.domain.enabled:\n self.fields[\"enabled\"].widget.attrs[\"disabled\"] = \"disabled\"\n cpt = 1\n for al in dlist.aliases.all():\n name = \"recipients_%d\" % cpt\n self._create_field(forms.EmailField, name, al.full_address, 2)\n cpt += 1\n for mb in dlist.mboxes.all():\n name = \"recipients_%d\" % (cpt)\n self._create_field(forms.EmailField, name, mb.full_address, 2)\n cpt += 1\n for addr in dlist.extmboxes.split(','):\n if addr == \"\":\n continue\n name = \"recipients_%d\" % (cpt)\n self._create_field(forms.EmailField, name, addr, 2)\n cpt += 1\n\n def clean_email(self):\n localpart, domname = split_mailbox(self.cleaned_data[\"email\"])\n try:\n domain = Domain.objects.get(name=domname)\n except Domain.DoesNotExist:\n raise forms.ValidationError(_(\"Domain does not exist\"))\n if not self.user.can_access(domain):\n raise forms.ValidationError(\n _(\"You don't have access to this domain\")\n )\n return self.cleaned_data[\"email\"].lower()\n\n def set_recipients(self):\n \"\"\"Recipients dispatching\n\n We make a difference between 'local' recipients (the ones hosted\n by Modoboa) and 'external' recipients.\n \"\"\"\n self.ext_rcpts = []\n self.int_rcpts = []\n total = 0\n\n for k, v in self.cleaned_data.items():\n if not k.startswith(\"recipients\"):\n continue\n if v == \"\":\n continue\n local_part, domname = split_mailbox(v)\n if domname is None:\n raise BadRequest(\n u\"%s %s\" % (_(\"Invalid mailbox\"), v)\n )\n\n # Support tag in recipient, see https://github.com/tonioo/modoboa/issues/713\n local_part_with_tag = None\n if '+' in local_part:\n local_part_with_tag = local_part\n local_part = local_part[0:local_part.find('+')]\n\n domain = Domain.objects.filter(name=domname).first()\n\n if domain is not None:\n rcpt = Alias.objects.filter(domain=domain, address=local_part).first()\n if rcpt and (rcpt.full_address == self.cleaned_data[\"email\"]):\n rcpt = None\n\n if rcpt is None:\n try:\n rcpt = Mailbox.objects.get(domain=domain, address=local_part)\n except Mailbox.DoesNotExist:\n raise NotFound(\n _(\"Local recipient 
%s@%s not found\" % (local_part, domname))\n )\n\n if local_part_with_tag is None:\n if rcpt in self.int_rcpts:\n raise Conflict(\n _(\"Recipient %s already present\" % v)\n )\n self.int_rcpts += [rcpt]\n total += 1\n continue\n\n if v in self.ext_rcpts:\n raise Conflict(\n _(\"Recipient %s already present\" % v)\n )\n self.ext_rcpts += [v]\n total += 1\n\n if total == 0:\n raise BadRequest(_(\"No recipient defined\"))\n\n def save(self, commit=True):\n alias = super(AliasForm, self).save(commit=False)\n local_part, domname = split_mailbox(self.cleaned_data[\"email\"])\n alias.address = local_part\n alias.domain = Domain.objects.get(name=domname)\n if commit:\n alias.save(int_rcpts=self.int_rcpts, ext_rcpts=self.ext_rcpts)\n self.save_m2m()\n return alias\n", "path": "modoboa/extensions/admin/forms/alias.py"}]}
| 1,807 | 621 |
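The modoboa patch above accepts tagged recipients by looking the mailbox up under the base local part (everything before the `+`) and reporting the stripped address in the error message. A standalone sketch of that normalisation step — the helper names are illustrative and not part of the modoboa API:

```python
def split_tag(local_part):
    """Split 'user+tag' into ('user', 'tag'); the tag is None when absent."""
    if "+" in local_part:
        base, tag = local_part.split("+", 1)
        return base, tag
    return local_part, None


def resolve_recipient(address, local_mailboxes):
    """Validate a recipient against known (local_part, domain) pairs, ignoring any +tag."""
    local_part, domain = address.split("@", 1)
    base, tag = split_tag(local_part)
    if (base, domain) not in local_mailboxes:
        raise LookupError("Local recipient %s@%s not found" % (base, domain))
    # Delivery keeps the tagged form, so the +tag suffix survives in the alias.
    return address


if __name__ == "__main__":
    mailboxes = {("user", "example.org")}
    print(resolve_recipient("[email protected]", mailboxes))  # accepted
```

As in the golden diff, the error reports the stripped local part and domain rather than the raw tagged address.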
gh_patches_debug_1856 | rasdani/github-patches | git_diff | Kaggle__docker-python-1326 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NameError: name 'io' is not defined
## 🐛 Bug
I am trying to run my scripts on a GPU notebook, and I keep getting the following error.
```shell
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_init.py", line 1172, in init
getcaller()
File "/opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_init.py", line 846, in getcaller
src, line, func, stack = logger.findCaller(stack_info=True)
File "/root/.local/lib/python3.10/site-packages/log.py", line 42, in findCaller
sio = io.StringIO()
NameError: name 'io' is not defined
```
In addition, I found that the `io` package is not imported in [this](https://github.com/Kaggle/docker-python/blob/main/patches/log.py) code.
### To Reproduce
### Expected behavior
### Additional context
<!-- Add any other context about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `patches/log.py`
Content:
```
1 import logging
2 import os
3
4 import google.auth
5
6
7 _LOG_TO_FILE_ENV = os.getenv("KAGGLE_LOG_TO_FILE")
8
9
10 class _LogFormatter(logging.Formatter):
11 """A logging formatter which truncates long messages."""
12
13 _MAX_LOG_LENGTH = 10000 # Be generous, not to truncate long backtraces.
14
15 def format(self, record):
16 msg = super(_LogFormatter, self).format(record)
17 return msg[:_LogFormatter._MAX_LOG_LENGTH] if msg else msg
18
19 # TODO(vimota): Clean this up once we're using python 3.8 and can use
20 # (https://github.com/python/cpython/commit/dde9fdbe453925279ac3d2a6a72102f6f9ef247c)
21 # Right now, making the logging module display the intended frame's information
22 # when the logging calls (info, warn, ...) are wrapped (as is the case in our
23 # Log class) involves fragile logic.
24 class _Logger(logging.Logger):
25
26 # This is a copy of logging.Logger.findCaller with the filename ignore
27 # set expanded to include the current filename (".../log.py").
28 # Copyright 2001-2015 by Vinay Sajip. All Rights Reserved.
29 # License: https://github.com/python/cpython/blob/ce9e62544571e7ade7186697d5dd065fb4c5243f/LICENSE
30 def findCaller(self, stack_info=False, stacklevel=1):
31 f = logging.currentframe()
32 f = f.f_back
33 rv = "(unknown file)", 0, "(unknown function)", None
34 while hasattr(f, "f_code"):
35 co = f.f_code
36 filename = os.path.normcase(co.co_filename)
37 if filename in _ignore_srcfiles:
38 f = f.f_back
39 continue
40 sinfo = None
41 if stack_info:
42 sio = io.StringIO()
43 sio.write('Stack (most recent call last):\n')
44 traceback.print_stack(f, file=sio)
45 sinfo = sio.getvalue()
46 if sinfo[-1] == '\n':
47 sinfo = sinfo[:-1]
48 sio.close()
49 rv = (co.co_filename, f.f_lineno, co.co_name, sinfo)
50 break
51 return rv
52
53
54 _srcfile = os.path.normcase(_Logger.findCaller.__code__.co_filename)
55 _ignore_srcfiles = (_srcfile, logging._srcfile)
56
57 class Log:
58 """ Helper aggregate for all things related to logging activity. """
59
60 _GLOBAL_LOG = logging.getLogger("")
61 _initialized = False
62
63 # These are convenience helpers. For performance, consider saving Log.get_logger() and using that
64 @staticmethod
65 def critical(msg, *args, **kwargs):
66 Log._GLOBAL_LOG.critical(msg, *args, **kwargs)
67
68 @staticmethod
69 def fatal(msg, *args, **kwargs):
70 Log._GLOBAL_LOG.fatal(msg, *args, **kwargs)
71
72 @staticmethod
73 def exception(msg, *args, **kwargs):
74 Log._GLOBAL_LOG.exception(msg, *args, **kwargs)
75
76 @staticmethod
77 def error(msg, *args, **kwargs):
78 Log._GLOBAL_LOG.error(msg, *args, **kwargs)
79
80 @staticmethod
81 def warn(msg, *args, **kwargs):
82 Log._GLOBAL_LOG.warn(msg, *args, **kwargs)
83
84 @staticmethod
85 def warning(msg, *args, **kwargs):
86 Log._GLOBAL_LOG.warning(msg, *args, **kwargs)
87
88 @staticmethod
89 def debug(msg, *args, **kwargs):
90 Log._GLOBAL_LOG.debug(msg, *args, **kwargs)
91
92 @staticmethod
93 def info(msg, *args, **kwargs):
94 Log._GLOBAL_LOG.info(msg, *args, **kwargs)
95
96 @staticmethod
97 def set_level(loglevel):
98 if isinstance(loglevel, int):
99 Log._GLOBAL_LOG.setLevel(loglevel)
100 return
101 elif isinstance(loglevel, str):
102 # idea from https://docs.python.org/3.5/howto/logging.html#logging-to-a-file
103 numeric_level = getattr(logging, loglevel.upper(), None)
104 if isinstance(numeric_level, int):
105 Log._GLOBAL_LOG.setLevel(numeric_level)
106 return
107
108 raise ValueError('Invalid log level: %s' % loglevel)
109
110 @staticmethod
111 def _static_init():
112 if Log._initialized:
113 return
114
115 logging.setLoggerClass(_Logger)
116 # The root logger's type is unfortunately (and surprisingly) not affected by
117 # `setLoggerClass`. Monkey patch it instead. TODO(vimota): Remove this, see the TODO
118 # associated with _Logger.
119 logging.RootLogger.findCaller = _Logger.findCaller
120 log_to_file = _LOG_TO_FILE_ENV.lower() in ("yes", "true", "t", "1") if _LOG_TO_FILE_ENV is not None else True
121 if log_to_file:
122 handler = logging.FileHandler(filename='/tmp/kaggle.log', mode='w')
123 else:
124 handler = logging.StreamHandler()
125
126 # ".1s" is for the first letter: http://stackoverflow.com/a/27453084/1869.
127 format_string = "%(asctime)s %(levelname).1s %(process)d %(filename)s:%(lineno)d] %(message)s"
128 handler.setFormatter(_LogFormatter(format_string))
129 logging.basicConfig(level=logging.INFO, handlers=[handler])
130 Log._initialized = True
131
132 Log._static_init()
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/patches/log.py b/patches/log.py
--- a/patches/log.py
+++ b/patches/log.py
@@ -1,3 +1,4 @@
+import io
import logging
import os
@@ -129,4 +130,4 @@
logging.basicConfig(level=logging.INFO, handlers=[handler])
Log._initialized = True
-Log._static_init()
\ No newline at end of file
+Log._static_init()
|
{"golden_diff": "diff --git a/patches/log.py b/patches/log.py\n--- a/patches/log.py\n+++ b/patches/log.py\n@@ -1,3 +1,4 @@\n+import io\n import logging\n import os\n \n@@ -129,4 +130,4 @@\n logging.basicConfig(level=logging.INFO, handlers=[handler])\n Log._initialized = True\n \n-Log._static_init()\n\\ No newline at end of file\n+Log._static_init()\n", "issue": "NameError: name 'io' is not defined\n## \ud83d\udc1b Bug\r\n\r\nI am trying to run my scripts on GPU notebook, and I keep getting the following error.\r\n\r\n```shell\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_init.py\", line 1172, in init\r\n getcaller()\r\n File \"/opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_init.py\", line 846, in getcaller\r\n src, line, func, stack = logger.findCaller(stack_info=True)\r\n File \"/root/.local/lib/python3.10/site-packages/log.py\", line 42, in findCaller\r\n sio = io.StringIO()\r\nNameError: name 'io' is not defined\r\n```\r\n\r\nIn addition, I found that there is no import `io` package in [this](https://github.com/Kaggle/docker-python/blob/main/patches/log.py) code.\r\n\r\n### To Reproduce \r\n\r\n### Expected behavior\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "import logging\nimport os\n\nimport google.auth\n\n\n_LOG_TO_FILE_ENV = os.getenv(\"KAGGLE_LOG_TO_FILE\")\n\n\nclass _LogFormatter(logging.Formatter):\n \"\"\"A logging formatter which truncates long messages.\"\"\"\n\n _MAX_LOG_LENGTH = 10000 # Be generous, not to truncate long backtraces.\n\n def format(self, record):\n msg = super(_LogFormatter, self).format(record)\n return msg[:_LogFormatter._MAX_LOG_LENGTH] if msg else msg\n\n# TODO(vimota): Clean this up once we're using python 3.8 and can use\n# (https://github.com/python/cpython/commit/dde9fdbe453925279ac3d2a6a72102f6f9ef247c)\n# Right now, making the logging module display the intended frame's information\n# when the logging calls (info, warn, ...) are wrapped (as is the case in our\n# Log class) involves fragile logic.\nclass _Logger(logging.Logger):\n\n # This is a copy of logging.Logger.findCaller with the filename ignore\n # set expanded to include the current filename (\".../log.py\").\n # Copyright 2001-2015 by Vinay Sajip. All Rights Reserved.\n # License: https://github.com/python/cpython/blob/ce9e62544571e7ade7186697d5dd065fb4c5243f/LICENSE\n def findCaller(self, stack_info=False, stacklevel=1):\n f = logging.currentframe()\n f = f.f_back\n rv = \"(unknown file)\", 0, \"(unknown function)\", None\n while hasattr(f, \"f_code\"):\n co = f.f_code\n filename = os.path.normcase(co.co_filename)\n if filename in _ignore_srcfiles:\n f = f.f_back\n continue\n sinfo = None\n if stack_info:\n sio = io.StringIO()\n sio.write('Stack (most recent call last):\\n')\n traceback.print_stack(f, file=sio)\n sinfo = sio.getvalue()\n if sinfo[-1] == '\\n':\n sinfo = sinfo[:-1]\n sio.close()\n rv = (co.co_filename, f.f_lineno, co.co_name, sinfo)\n break\n return rv\n\n\n_srcfile = os.path.normcase(_Logger.findCaller.__code__.co_filename)\n_ignore_srcfiles = (_srcfile, logging._srcfile)\n\nclass Log:\n \"\"\" Helper aggregate for all things related to logging activity. \"\"\"\n\n _GLOBAL_LOG = logging.getLogger(\"\")\n _initialized = False\n\n # These are convenience helpers. 
For performance, consider saving Log.get_logger() and using that\n @staticmethod\n def critical(msg, *args, **kwargs):\n Log._GLOBAL_LOG.critical(msg, *args, **kwargs)\n\n @staticmethod\n def fatal(msg, *args, **kwargs):\n Log._GLOBAL_LOG.fatal(msg, *args, **kwargs)\n\n @staticmethod\n def exception(msg, *args, **kwargs):\n Log._GLOBAL_LOG.exception(msg, *args, **kwargs)\n\n @staticmethod\n def error(msg, *args, **kwargs):\n Log._GLOBAL_LOG.error(msg, *args, **kwargs)\n\n @staticmethod\n def warn(msg, *args, **kwargs):\n Log._GLOBAL_LOG.warn(msg, *args, **kwargs)\n\n @staticmethod\n def warning(msg, *args, **kwargs):\n Log._GLOBAL_LOG.warning(msg, *args, **kwargs)\n\n @staticmethod\n def debug(msg, *args, **kwargs):\n Log._GLOBAL_LOG.debug(msg, *args, **kwargs)\n\n @staticmethod\n def info(msg, *args, **kwargs):\n Log._GLOBAL_LOG.info(msg, *args, **kwargs)\n\n @staticmethod\n def set_level(loglevel):\n if isinstance(loglevel, int):\n Log._GLOBAL_LOG.setLevel(loglevel)\n return\n elif isinstance(loglevel, str):\n # idea from https://docs.python.org/3.5/howto/logging.html#logging-to-a-file\n numeric_level = getattr(logging, loglevel.upper(), None)\n if isinstance(numeric_level, int):\n Log._GLOBAL_LOG.setLevel(numeric_level)\n return\n\n raise ValueError('Invalid log level: %s' % loglevel)\n\n @staticmethod\n def _static_init():\n if Log._initialized:\n return\n\n logging.setLoggerClass(_Logger)\n # The root logger's type is unfortunately (and surprisingly) not affected by\n # `setLoggerClass`. Monkey patch it instead. TODO(vimota): Remove this, see the TODO\n # associated with _Logger.\n logging.RootLogger.findCaller = _Logger.findCaller\n log_to_file = _LOG_TO_FILE_ENV.lower() in (\"yes\", \"true\", \"t\", \"1\") if _LOG_TO_FILE_ENV is not None else True\n if log_to_file:\n handler = logging.FileHandler(filename='/tmp/kaggle.log', mode='w')\n else:\n handler = logging.StreamHandler()\n \n # \".1s\" is for the first letter: http://stackoverflow.com/a/27453084/1869.\n format_string = \"%(asctime)s %(levelname).1s %(process)d %(filename)s:%(lineno)d] %(message)s\"\n handler.setFormatter(_LogFormatter(format_string))\n logging.basicConfig(level=logging.INFO, handlers=[handler])\n Log._initialized = True\n\nLog._static_init()", "path": "patches/log.py"}], "after_files": [{"content": "import io\nimport logging\nimport os\n\nimport google.auth\n\n\n_LOG_TO_FILE_ENV = os.getenv(\"KAGGLE_LOG_TO_FILE\")\n\n\nclass _LogFormatter(logging.Formatter):\n \"\"\"A logging formatter which truncates long messages.\"\"\"\n\n _MAX_LOG_LENGTH = 10000 # Be generous, not to truncate long backtraces.\n\n def format(self, record):\n msg = super(_LogFormatter, self).format(record)\n return msg[:_LogFormatter._MAX_LOG_LENGTH] if msg else msg\n\n# TODO(vimota): Clean this up once we're using python 3.8 and can use\n# (https://github.com/python/cpython/commit/dde9fdbe453925279ac3d2a6a72102f6f9ef247c)\n# Right now, making the logging module display the intended frame's information\n# when the logging calls (info, warn, ...) are wrapped (as is the case in our\n# Log class) involves fragile logic.\nclass _Logger(logging.Logger):\n\n # This is a copy of logging.Logger.findCaller with the filename ignore\n # set expanded to include the current filename (\".../log.py\").\n # Copyright 2001-2015 by Vinay Sajip. 
All Rights Reserved.\n # License: https://github.com/python/cpython/blob/ce9e62544571e7ade7186697d5dd065fb4c5243f/LICENSE\n def findCaller(self, stack_info=False, stacklevel=1):\n f = logging.currentframe()\n f = f.f_back\n rv = \"(unknown file)\", 0, \"(unknown function)\", None\n while hasattr(f, \"f_code\"):\n co = f.f_code\n filename = os.path.normcase(co.co_filename)\n if filename in _ignore_srcfiles:\n f = f.f_back\n continue\n sinfo = None\n if stack_info:\n sio = io.StringIO()\n sio.write('Stack (most recent call last):\\n')\n traceback.print_stack(f, file=sio)\n sinfo = sio.getvalue()\n if sinfo[-1] == '\\n':\n sinfo = sinfo[:-1]\n sio.close()\n rv = (co.co_filename, f.f_lineno, co.co_name, sinfo)\n break\n return rv\n\n\n_srcfile = os.path.normcase(_Logger.findCaller.__code__.co_filename)\n_ignore_srcfiles = (_srcfile, logging._srcfile)\n\nclass Log:\n \"\"\" Helper aggregate for all things related to logging activity. \"\"\"\n\n _GLOBAL_LOG = logging.getLogger(\"\")\n _initialized = False\n\n # These are convenience helpers. For performance, consider saving Log.get_logger() and using that\n @staticmethod\n def critical(msg, *args, **kwargs):\n Log._GLOBAL_LOG.critical(msg, *args, **kwargs)\n\n @staticmethod\n def fatal(msg, *args, **kwargs):\n Log._GLOBAL_LOG.fatal(msg, *args, **kwargs)\n\n @staticmethod\n def exception(msg, *args, **kwargs):\n Log._GLOBAL_LOG.exception(msg, *args, **kwargs)\n\n @staticmethod\n def error(msg, *args, **kwargs):\n Log._GLOBAL_LOG.error(msg, *args, **kwargs)\n\n @staticmethod\n def warn(msg, *args, **kwargs):\n Log._GLOBAL_LOG.warn(msg, *args, **kwargs)\n\n @staticmethod\n def warning(msg, *args, **kwargs):\n Log._GLOBAL_LOG.warning(msg, *args, **kwargs)\n\n @staticmethod\n def debug(msg, *args, **kwargs):\n Log._GLOBAL_LOG.debug(msg, *args, **kwargs)\n\n @staticmethod\n def info(msg, *args, **kwargs):\n Log._GLOBAL_LOG.info(msg, *args, **kwargs)\n\n @staticmethod\n def set_level(loglevel):\n if isinstance(loglevel, int):\n Log._GLOBAL_LOG.setLevel(loglevel)\n return\n elif isinstance(loglevel, str):\n # idea from https://docs.python.org/3.5/howto/logging.html#logging-to-a-file\n numeric_level = getattr(logging, loglevel.upper(), None)\n if isinstance(numeric_level, int):\n Log._GLOBAL_LOG.setLevel(numeric_level)\n return\n\n raise ValueError('Invalid log level: %s' % loglevel)\n\n @staticmethod\n def _static_init():\n if Log._initialized:\n return\n\n logging.setLoggerClass(_Logger)\n # The root logger's type is unfortunately (and surprisingly) not affected by\n # `setLoggerClass`. Monkey patch it instead. TODO(vimota): Remove this, see the TODO\n # associated with _Logger.\n logging.RootLogger.findCaller = _Logger.findCaller\n log_to_file = _LOG_TO_FILE_ENV.lower() in (\"yes\", \"true\", \"t\", \"1\") if _LOG_TO_FILE_ENV is not None else True\n if log_to_file:\n handler = logging.FileHandler(filename='/tmp/kaggle.log', mode='w')\n else:\n handler = logging.StreamHandler()\n \n # \".1s\" is for the first letter: http://stackoverflow.com/a/27453084/1869.\n format_string = \"%(asctime)s %(levelname).1s %(process)d %(filename)s:%(lineno)d] %(message)s\"\n handler.setFormatter(_LogFormatter(format_string))\n logging.basicConfig(level=logging.INFO, handlers=[handler])\n Log._initialized = True\n\nLog._static_init()\n", "path": "patches/log.py"}]}
| 2,017 | 100 |
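The Kaggle fix above is a single missing import: the vendored `findCaller` in `patches/log.py` builds its stack text with `io.StringIO()`, so `io` must be imported at module level. Below is a minimal standalone sketch of that stack-formatting step (it also imports `traceback`, which the same code path relies on):

```python
import io
import traceback


def format_stack(frame=None):
    """Render the current call stack the way logging's findCaller does."""
    sio = io.StringIO()
    sio.write("Stack (most recent call last):\n")
    traceback.print_stack(frame, file=sio)
    text = sio.getvalue()
    sio.close()
    return text.rstrip("\n")


if __name__ == "__main__":
    print(format_stack())
```

Without the module-level `import io`, the first call that passes `stack_info=True` raises exactly the `NameError` shown in the issue traceback.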
gh_patches_debug_14097 | rasdani/github-patches | git_diff | pretix__pretix-1443 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Required file question breaks validation on edit
Editing an order with a product with a required file question breaks when the customer tries to edit the order. In that case, even if the customer already provided a file earlier, the validation for the question fails on edit.
Required file question breaks validation on edit
Editing an order with a product with a required file question breaks when the customer tries to edit the order. In that case, even if the customer already provided a file earlier, the validation for the question fails on edit.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pretix/base/forms/widgets.py`
Content:
```
1 import os
2
3 from django import forms
4 from django.utils.formats import get_format
5 from django.utils.functional import lazy
6 from django.utils.timezone import now
7 from django.utils.translation import ugettext_lazy as _
8
9 from pretix.base.models import OrderPosition
10 from pretix.multidomain.urlreverse import eventreverse
11
12
13 class DatePickerWidget(forms.DateInput):
14 def __init__(self, attrs=None, date_format=None):
15 attrs = attrs or {}
16 if 'placeholder' in attrs:
17 del attrs['placeholder']
18 date_attrs = dict(attrs)
19 date_attrs.setdefault('class', 'form-control')
20 date_attrs['class'] += ' datepickerfield'
21
22 df = date_format or get_format('DATE_INPUT_FORMATS')[0]
23 date_attrs['placeholder'] = now().replace(
24 year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0
25 ).strftime(df)
26
27 forms.DateInput.__init__(self, date_attrs, date_format)
28
29
30 class TimePickerWidget(forms.TimeInput):
31 def __init__(self, attrs=None, time_format=None):
32 attrs = attrs or {}
33 if 'placeholder' in attrs:
34 del attrs['placeholder']
35 time_attrs = dict(attrs)
36 time_attrs.setdefault('class', 'form-control')
37 time_attrs['class'] += ' timepickerfield'
38
39 tf = time_format or get_format('TIME_INPUT_FORMATS')[0]
40 time_attrs['placeholder'] = now().replace(
41 year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0
42 ).strftime(tf)
43
44 forms.TimeInput.__init__(self, time_attrs, time_format)
45
46
47 class UploadedFileWidget(forms.ClearableFileInput):
48 def __init__(self, *args, **kwargs):
49 self.position = kwargs.pop('position')
50 self.event = kwargs.pop('event')
51 self.answer = kwargs.pop('answer')
52 super().__init__(*args, **kwargs)
53
54 class FakeFile:
55 def __init__(self, file, position, event, answer):
56 self.file = file
57 self.position = position
58 self.event = event
59 self.answer = answer
60
61 def __str__(self):
62 return os.path.basename(self.file.name).split('.', 1)[-1]
63
64 @property
65 def url(self):
66 if isinstance(self.position, OrderPosition):
67 return eventreverse(self.event, 'presale:event.order.download.answer', kwargs={
68 'order': self.position.order.code,
69 'secret': self.position.order.secret,
70 'answer': self.answer.pk,
71 })
72 else:
73 return eventreverse(self.event, 'presale:event.cart.download.answer', kwargs={
74 'answer': self.answer.pk,
75 })
76
77 def format_value(self, value):
78 if self.is_initial(value):
79 return self.FakeFile(value, self.position, self.event, self.answer)
80
81
82 class SplitDateTimePickerWidget(forms.SplitDateTimeWidget):
83 template_name = 'pretixbase/forms/widgets/splitdatetime.html'
84
85 def __init__(self, attrs=None, date_format=None, time_format=None):
86 attrs = attrs or {}
87 if 'placeholder' in attrs:
88 del attrs['placeholder']
89 date_attrs = dict(attrs)
90 time_attrs = dict(attrs)
91 date_attrs.setdefault('class', 'form-control splitdatetimepart')
92 time_attrs.setdefault('class', 'form-control splitdatetimepart')
93 date_attrs.setdefault('autocomplete', 'off')
94 time_attrs.setdefault('autocomplete', 'off')
95 date_attrs['class'] += ' datepickerfield'
96 time_attrs['class'] += ' timepickerfield'
97
98 def date_placeholder():
99 df = date_format or get_format('DATE_INPUT_FORMATS')[0]
100 return now().replace(
101 year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0
102 ).strftime(df)
103
104 def time_placeholder():
105 tf = time_format or get_format('TIME_INPUT_FORMATS')[0]
106 return now().replace(
107 year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0
108 ).strftime(tf)
109
110 date_attrs['placeholder'] = lazy(date_placeholder, str)
111 time_attrs['placeholder'] = lazy(time_placeholder, str)
112
113 widgets = (
114 forms.DateInput(attrs=date_attrs, format=date_format),
115 forms.TimeInput(attrs=time_attrs, format=time_format),
116 )
117 # Skip one hierarchy level
118 forms.MultiWidget.__init__(self, widgets, attrs)
119
120
121 class BusinessBooleanRadio(forms.RadioSelect):
122 def __init__(self, require_business=False, attrs=None):
123 self.require_business = require_business
124 if self.require_business:
125 choices = (
126 ('business', _('Business customer')),
127 )
128 else:
129 choices = (
130 ('individual', _('Individual customer')),
131 ('business', _('Business customer')),
132 )
133 super().__init__(attrs, choices)
134
135 def format_value(self, value):
136 if self.require_business:
137 return 'business'
138 try:
139 return {True: 'business', False: 'individual'}[value]
140 except KeyError:
141 return 'individual'
142
143 def value_from_datadict(self, data, files, name):
144 value = data.get(name)
145 if self.require_business:
146 return True
147 return {
148 'business': True,
149 True: True,
150 'True': True,
151 'individual': False,
152 'False': False,
153 False: False,
154 }.get(value)
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pretix/base/forms/widgets.py b/src/pretix/base/forms/widgets.py
--- a/src/pretix/base/forms/widgets.py
+++ b/src/pretix/base/forms/widgets.py
@@ -46,6 +46,14 @@
class UploadedFileWidget(forms.ClearableFileInput):
def __init__(self, *args, **kwargs):
+ # Browsers can't recognize that the server already has a file uploaded
+ # Don't mark this input as being required if we already have an answer
+ # (this needs to be done via the attrs, otherwise we wouldn't get the "required" star on the field label)
+ attrs = kwargs.get('attrs', {})
+ if kwargs.get('required') and kwargs.get('initial'):
+ attrs.update({'required': None})
+ kwargs.update({'attrs': attrs})
+
self.position = kwargs.pop('position')
self.event = kwargs.pop('event')
self.answer = kwargs.pop('answer')
|
{"golden_diff": "diff --git a/src/pretix/base/forms/widgets.py b/src/pretix/base/forms/widgets.py\n--- a/src/pretix/base/forms/widgets.py\n+++ b/src/pretix/base/forms/widgets.py\n@@ -46,6 +46,14 @@\n \n class UploadedFileWidget(forms.ClearableFileInput):\n def __init__(self, *args, **kwargs):\n+ # Browsers can't recognize that the server already has a file uploaded\n+ # Don't mark this input as being required if we already have an answer\n+ # (this needs to be done via the attrs, otherwise we wouldn't get the \"required\" star on the field label)\n+ attrs = kwargs.get('attrs', {})\n+ if kwargs.get('required') and kwargs.get('initial'):\n+ attrs.update({'required': None})\n+ kwargs.update({'attrs': attrs})\n+\n self.position = kwargs.pop('position')\n self.event = kwargs.pop('event')\n self.answer = kwargs.pop('answer')\n", "issue": "Required file question breaks validation on edit\nEditing an order with a product with a required file question breaks when the customer tries to edit the order. In that case even if the customer already provided a file earlier the validation for the question fails on edit.\nRequired file question breaks validation on edit\nEditing an order with a product with a required file question breaks when the customer tries to edit the order. In that case even if the customer already provided a file earlier the validation for the question fails on edit.\n", "before_files": [{"content": "import os\n\nfrom django import forms\nfrom django.utils.formats import get_format\nfrom django.utils.functional import lazy\nfrom django.utils.timezone import now\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pretix.base.models import OrderPosition\nfrom pretix.multidomain.urlreverse import eventreverse\n\n\nclass DatePickerWidget(forms.DateInput):\n def __init__(self, attrs=None, date_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n date_attrs = dict(attrs)\n date_attrs.setdefault('class', 'form-control')\n date_attrs['class'] += ' datepickerfield'\n\n df = date_format or get_format('DATE_INPUT_FORMATS')[0]\n date_attrs['placeholder'] = now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(df)\n\n forms.DateInput.__init__(self, date_attrs, date_format)\n\n\nclass TimePickerWidget(forms.TimeInput):\n def __init__(self, attrs=None, time_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n time_attrs = dict(attrs)\n time_attrs.setdefault('class', 'form-control')\n time_attrs['class'] += ' timepickerfield'\n\n tf = time_format or get_format('TIME_INPUT_FORMATS')[0]\n time_attrs['placeholder'] = now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(tf)\n\n forms.TimeInput.__init__(self, time_attrs, time_format)\n\n\nclass UploadedFileWidget(forms.ClearableFileInput):\n def __init__(self, *args, **kwargs):\n self.position = kwargs.pop('position')\n self.event = kwargs.pop('event')\n self.answer = kwargs.pop('answer')\n super().__init__(*args, **kwargs)\n\n class FakeFile:\n def __init__(self, file, position, event, answer):\n self.file = file\n self.position = position\n self.event = event\n self.answer = answer\n\n def __str__(self):\n return os.path.basename(self.file.name).split('.', 1)[-1]\n\n @property\n def url(self):\n if isinstance(self.position, OrderPosition):\n return eventreverse(self.event, 'presale:event.order.download.answer', kwargs={\n 'order': self.position.order.code,\n 
'secret': self.position.order.secret,\n 'answer': self.answer.pk,\n })\n else:\n return eventreverse(self.event, 'presale:event.cart.download.answer', kwargs={\n 'answer': self.answer.pk,\n })\n\n def format_value(self, value):\n if self.is_initial(value):\n return self.FakeFile(value, self.position, self.event, self.answer)\n\n\nclass SplitDateTimePickerWidget(forms.SplitDateTimeWidget):\n template_name = 'pretixbase/forms/widgets/splitdatetime.html'\n\n def __init__(self, attrs=None, date_format=None, time_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n date_attrs = dict(attrs)\n time_attrs = dict(attrs)\n date_attrs.setdefault('class', 'form-control splitdatetimepart')\n time_attrs.setdefault('class', 'form-control splitdatetimepart')\n date_attrs.setdefault('autocomplete', 'off')\n time_attrs.setdefault('autocomplete', 'off')\n date_attrs['class'] += ' datepickerfield'\n time_attrs['class'] += ' timepickerfield'\n\n def date_placeholder():\n df = date_format or get_format('DATE_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(df)\n\n def time_placeholder():\n tf = time_format or get_format('TIME_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0\n ).strftime(tf)\n\n date_attrs['placeholder'] = lazy(date_placeholder, str)\n time_attrs['placeholder'] = lazy(time_placeholder, str)\n\n widgets = (\n forms.DateInput(attrs=date_attrs, format=date_format),\n forms.TimeInput(attrs=time_attrs, format=time_format),\n )\n # Skip one hierarchy level\n forms.MultiWidget.__init__(self, widgets, attrs)\n\n\nclass BusinessBooleanRadio(forms.RadioSelect):\n def __init__(self, require_business=False, attrs=None):\n self.require_business = require_business\n if self.require_business:\n choices = (\n ('business', _('Business customer')),\n )\n else:\n choices = (\n ('individual', _('Individual customer')),\n ('business', _('Business customer')),\n )\n super().__init__(attrs, choices)\n\n def format_value(self, value):\n if self.require_business:\n return 'business'\n try:\n return {True: 'business', False: 'individual'}[value]\n except KeyError:\n return 'individual'\n\n def value_from_datadict(self, data, files, name):\n value = data.get(name)\n if self.require_business:\n return True\n return {\n 'business': True,\n True: True,\n 'True': True,\n 'individual': False,\n 'False': False,\n False: False,\n }.get(value)\n", "path": "src/pretix/base/forms/widgets.py"}], "after_files": [{"content": "import os\n\nfrom django import forms\nfrom django.utils.formats import get_format\nfrom django.utils.functional import lazy\nfrom django.utils.timezone import now\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pretix.base.models import OrderPosition\nfrom pretix.multidomain.urlreverse import eventreverse\n\n\nclass DatePickerWidget(forms.DateInput):\n def __init__(self, attrs=None, date_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n date_attrs = dict(attrs)\n date_attrs.setdefault('class', 'form-control')\n date_attrs['class'] += ' datepickerfield'\n\n df = date_format or get_format('DATE_INPUT_FORMATS')[0]\n date_attrs['placeholder'] = now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(df)\n\n forms.DateInput.__init__(self, date_attrs, date_format)\n\n\nclass TimePickerWidget(forms.TimeInput):\n def __init__(self, 
attrs=None, time_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n time_attrs = dict(attrs)\n time_attrs.setdefault('class', 'form-control')\n time_attrs['class'] += ' timepickerfield'\n\n tf = time_format or get_format('TIME_INPUT_FORMATS')[0]\n time_attrs['placeholder'] = now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(tf)\n\n forms.TimeInput.__init__(self, time_attrs, time_format)\n\n\nclass UploadedFileWidget(forms.ClearableFileInput):\n def __init__(self, *args, **kwargs):\n # Browsers can't recognize that the server already has a file uploaded\n # Don't mark this input as being required if we already have an answer\n # (this needs to be done via the attrs, otherwise we wouldn't get the \"required\" star on the field label)\n attrs = kwargs.get('attrs', {})\n if kwargs.get('required') and kwargs.get('initial'):\n attrs.update({'required': None})\n kwargs.update({'attrs': attrs})\n\n self.position = kwargs.pop('position')\n self.event = kwargs.pop('event')\n self.answer = kwargs.pop('answer')\n super().__init__(*args, **kwargs)\n\n class FakeFile:\n def __init__(self, file, position, event, answer):\n self.file = file\n self.position = position\n self.event = event\n self.answer = answer\n\n def __str__(self):\n return os.path.basename(self.file.name).split('.', 1)[-1]\n\n @property\n def url(self):\n if isinstance(self.position, OrderPosition):\n return eventreverse(self.event, 'presale:event.order.download.answer', kwargs={\n 'order': self.position.order.code,\n 'secret': self.position.order.secret,\n 'answer': self.answer.pk,\n })\n else:\n return eventreverse(self.event, 'presale:event.cart.download.answer', kwargs={\n 'answer': self.answer.pk,\n })\n\n def format_value(self, value):\n if self.is_initial(value):\n return self.FakeFile(value, self.position, self.event, self.answer)\n\n\nclass SplitDateTimePickerWidget(forms.SplitDateTimeWidget):\n template_name = 'pretixbase/forms/widgets/splitdatetime.html'\n\n def __init__(self, attrs=None, date_format=None, time_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n date_attrs = dict(attrs)\n time_attrs = dict(attrs)\n date_attrs.setdefault('class', 'form-control splitdatetimepart')\n time_attrs.setdefault('class', 'form-control splitdatetimepart')\n date_attrs.setdefault('autocomplete', 'off')\n time_attrs.setdefault('autocomplete', 'off')\n date_attrs['class'] += ' datepickerfield'\n time_attrs['class'] += ' timepickerfield'\n\n def date_placeholder():\n df = date_format or get_format('DATE_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(df)\n\n def time_placeholder():\n tf = time_format or get_format('TIME_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0\n ).strftime(tf)\n\n date_attrs['placeholder'] = lazy(date_placeholder, str)\n time_attrs['placeholder'] = lazy(time_placeholder, str)\n\n widgets = (\n forms.DateInput(attrs=date_attrs, format=date_format),\n forms.TimeInput(attrs=time_attrs, format=time_format),\n )\n # Skip one hierarchy level\n forms.MultiWidget.__init__(self, widgets, attrs)\n\n\nclass BusinessBooleanRadio(forms.RadioSelect):\n def __init__(self, require_business=False, attrs=None):\n self.require_business = require_business\n if self.require_business:\n choices = (\n ('business', _('Business customer')),\n )\n else:\n choices 
= (\n ('individual', _('Individual customer')),\n ('business', _('Business customer')),\n )\n super().__init__(attrs, choices)\n\n def format_value(self, value):\n if self.require_business:\n return 'business'\n try:\n return {True: 'business', False: 'individual'}[value]\n except KeyError:\n return 'individual'\n\n def value_from_datadict(self, data, files, name):\n value = data.get(name)\n if self.require_business:\n return True\n return {\n 'business': True,\n True: True,\n 'True': True,\n 'individual': False,\n 'False': False,\n False: False,\n }.get(value)\n", "path": "src/pretix/base/forms/widgets.py"}]}
| 1,933 | 214 |
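The pretix change above keeps the question field required (so the label keeps its star) but suppresses the browser-side `required` attribute on the upload widget whenever a file is already stored, since the browser cannot know the server already holds one. Django exposes the same idea through `Widget.use_required_attribute`; the sketch below shows that hook rather than pretix's actual code, which routes the decision through the widget's attrs as the diff shows:

```python
from django import forms


class ExistingFileAwareInput(forms.FileInput):
    """File input that drops the HTML `required` flag once a file is on record."""

    def use_required_attribute(self, initial):
        # `initial` is the previously stored upload (or None). With a stored file,
        # an empty input on re-submit is fine, so skip the required attribute.
        return super().use_required_attribute(initial) and not initial
```

Whether the re-submitted form then validates server-side still depends on the stored file being passed as the field's initial data.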
gh_patches_debug_28274 | rasdani/github-patches | git_diff | certbot__certbot-427 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nginxparser does not recognize 'if' statements
E.g., this is unparseable by nginxparser:
```
if ($http_origin ~* ^https://www\.example\.com) {
add_header Access-Control-Allow-Origin "$http_origin";
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `letsencrypt_nginx/nginxparser.py`
Content:
```
1 """Very low-level nginx config parser based on pyparsing."""
2 import string
3
4 from pyparsing import (
5 Literal, White, Word, alphanums, CharsNotIn, Forward, Group,
6 Optional, OneOrMore, ZeroOrMore, pythonStyleComment)
7
8
9 class RawNginxParser(object):
10 # pylint: disable=expression-not-assigned
11 """A class that parses nginx configuration with pyparsing."""
12
13 # constants
14 left_bracket = Literal("{").suppress()
15 right_bracket = Literal("}").suppress()
16 semicolon = Literal(";").suppress()
17 space = White().suppress()
18 key = Word(alphanums + "_/")
19 value = CharsNotIn("{};,")
20 location = CharsNotIn("{};," + string.whitespace)
21 # modifier for location uri [ = | ~ | ~* | ^~ ]
22 modifier = Literal("=") | Literal("~*") | Literal("~") | Literal("^~")
23
24 # rules
25 assignment = (key + Optional(space + value) + semicolon)
26 block = Forward()
27
28 block << Group(
29 Group(key + Optional(space + modifier) + Optional(space + location))
30 + left_bracket
31 + Group(ZeroOrMore(Group(assignment) | block))
32 + right_bracket)
33
34 script = OneOrMore(Group(assignment) | block).ignore(pythonStyleComment)
35
36 def __init__(self, source):
37 self.source = source
38
39 def parse(self):
40 """Returns the parsed tree."""
41 return self.script.parseString(self.source)
42
43 def as_list(self):
44 """Returns the parsed tree as a list."""
45 return self.parse().asList()
46
47
48 class RawNginxDumper(object):
49 # pylint: disable=too-few-public-methods
50 """A class that dumps nginx configuration from the provided tree."""
51 def __init__(self, blocks, indentation=4):
52 self.blocks = blocks
53 self.indentation = indentation
54
55 def __iter__(self, blocks=None, current_indent=0, spacer=' '):
56 """Iterates the dumped nginx content."""
57 blocks = blocks or self.blocks
58 for key, values in blocks:
59 if current_indent:
60 yield spacer
61 indentation = spacer * current_indent
62 if isinstance(key, list):
63 yield indentation + spacer.join(key) + ' {'
64 for parameter in values:
65 if isinstance(parameter[0], list):
66 dumped = self.__iter__(
67 [parameter],
68 current_indent + self.indentation)
69 for line in dumped:
70 yield line
71 else:
72 dumped = spacer.join(parameter) + ';'
73 yield spacer * (
74 current_indent + self.indentation) + dumped
75
76 yield indentation + '}'
77 else:
78 yield spacer * current_indent + key + spacer + values + ';'
79
80 def as_string(self):
81 """Return the parsed block as a string."""
82 return '\n'.join(self)
83
84
85 # Shortcut functions to respect Python's serialization interface
86 # (like pyyaml, picker or json)
87
88 def loads(source):
89 """Parses from a string.
90
91 :param str souce: The string to parse
92 :returns: The parsed tree
93 :rtype: list
94
95 """
96 return RawNginxParser(source).as_list()
97
98
99 def load(_file):
100 """Parses from a file.
101
102 :param file _file: The file to parse
103 :returns: The parsed tree
104 :rtype: list
105
106 """
107 return loads(_file.read())
108
109
110 def dumps(blocks, indentation=4):
111 """Dump to a string.
112
113 :param list block: The parsed tree
114 :param int indentation: The number of spaces to indent
115 :rtype: str
116
117 """
118 return RawNginxDumper(blocks, indentation).as_string()
119
120
121 def dump(blocks, _file, indentation=4):
122 """Dump to a file.
123
124 :param list block: The parsed tree
125 :param file _file: The file to dump to
126 :param int indentation: The number of spaces to indent
127 :rtype: NoneType
128
129 """
130 return _file.write(dumps(blocks, indentation))
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/letsencrypt_nginx/nginxparser.py b/letsencrypt_nginx/nginxparser.py
--- a/letsencrypt_nginx/nginxparser.py
+++ b/letsencrypt_nginx/nginxparser.py
@@ -3,7 +3,7 @@
from pyparsing import (
Literal, White, Word, alphanums, CharsNotIn, Forward, Group,
- Optional, OneOrMore, ZeroOrMore, pythonStyleComment)
+ Optional, OneOrMore, Regex, ZeroOrMore, pythonStyleComment)
class RawNginxParser(object):
@@ -16,17 +16,21 @@
semicolon = Literal(";").suppress()
space = White().suppress()
key = Word(alphanums + "_/")
- value = CharsNotIn("{};,")
+ # Matches anything that is not a special character AND any chars in single
+ # or double quotes
+ value = Regex(r"((\".*\")?(\'.*\')?[^\{\};,]?)+")
location = CharsNotIn("{};," + string.whitespace)
# modifier for location uri [ = | ~ | ~* | ^~ ]
modifier = Literal("=") | Literal("~*") | Literal("~") | Literal("^~")
# rules
assignment = (key + Optional(space + value) + semicolon)
+ location_statement = Optional(space + modifier) + Optional(space + location)
+ if_statement = Literal("if") + space + Regex(r"\(.+\)") + space
block = Forward()
block << Group(
- Group(key + Optional(space + modifier) + Optional(space + location))
+ (Group(key + location_statement) ^ Group(if_statement))
+ left_bracket
+ Group(ZeroOrMore(Group(assignment) | block))
+ right_bracket)
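As a quick check of the patched grammar, the `if` block from the issue should now round-trip through the module's public `loads()` helper; a minimal sketch (it assumes the patched `letsencrypt_nginx` package is importable, and the printed tree is approximate):

```python
from letsencrypt_nginx import nginxparser

conf = r'''
if ($http_origin ~* ^https://www\.example\.com) {
    add_header Access-Control-Allow-Origin "$http_origin";
}
'''

# loads() simply wraps RawNginxParser(conf).as_list()
tree = nginxparser.loads(conf)
print(tree)
# roughly: [[['if', '($http_origin ~* ^https://www\\.example\\.com)'],
#            [['add_header', 'Access-Control-Allow-Origin "$http_origin"']]]]
```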
|
{"golden_diff": "diff --git a/letsencrypt_nginx/nginxparser.py b/letsencrypt_nginx/nginxparser.py\n--- a/letsencrypt_nginx/nginxparser.py\n+++ b/letsencrypt_nginx/nginxparser.py\n@@ -3,7 +3,7 @@\n \n from pyparsing import (\n Literal, White, Word, alphanums, CharsNotIn, Forward, Group,\n- Optional, OneOrMore, ZeroOrMore, pythonStyleComment)\n+ Optional, OneOrMore, Regex, ZeroOrMore, pythonStyleComment)\n \n \n class RawNginxParser(object):\n@@ -16,17 +16,21 @@\n semicolon = Literal(\";\").suppress()\n space = White().suppress()\n key = Word(alphanums + \"_/\")\n- value = CharsNotIn(\"{};,\")\n+ # Matches anything that is not a special character AND any chars in single\n+ # or double quotes\n+ value = Regex(r\"((\\\".*\\\")?(\\'.*\\')?[^\\{\\};,]?)+\")\n location = CharsNotIn(\"{};,\" + string.whitespace)\n # modifier for location uri [ = | ~ | ~* | ^~ ]\n modifier = Literal(\"=\") | Literal(\"~*\") | Literal(\"~\") | Literal(\"^~\")\n \n # rules\n assignment = (key + Optional(space + value) + semicolon)\n+ location_statement = Optional(space + modifier) + Optional(space + location)\n+ if_statement = Literal(\"if\") + space + Regex(r\"\\(.+\\)\") + space\n block = Forward()\n \n block << Group(\n- Group(key + Optional(space + modifier) + Optional(space + location))\n+ (Group(key + location_statement) ^ Group(if_statement))\n + left_bracket\n + Group(ZeroOrMore(Group(assignment) | block))\n + right_bracket)\n", "issue": "nginxparser does not recognize 'if' statements\nE.g., this is unparseable by nginxparser:\n\n```\nif ($http_origin ~* ^https://www\\.example\\.com) {\n add_header Access-Control-Allow-Origin \"$http_origin\";\n}\n```\n\n", "before_files": [{"content": "\"\"\"Very low-level nginx config parser based on pyparsing.\"\"\"\nimport string\n\nfrom pyparsing import (\n Literal, White, Word, alphanums, CharsNotIn, Forward, Group,\n Optional, OneOrMore, ZeroOrMore, pythonStyleComment)\n\n\nclass RawNginxParser(object):\n # pylint: disable=expression-not-assigned\n \"\"\"A class that parses nginx configuration with pyparsing.\"\"\"\n\n # constants\n left_bracket = Literal(\"{\").suppress()\n right_bracket = Literal(\"}\").suppress()\n semicolon = Literal(\";\").suppress()\n space = White().suppress()\n key = Word(alphanums + \"_/\")\n value = CharsNotIn(\"{};,\")\n location = CharsNotIn(\"{};,\" + string.whitespace)\n # modifier for location uri [ = | ~ | ~* | ^~ ]\n modifier = Literal(\"=\") | Literal(\"~*\") | Literal(\"~\") | Literal(\"^~\")\n\n # rules\n assignment = (key + Optional(space + value) + semicolon)\n block = Forward()\n\n block << Group(\n Group(key + Optional(space + modifier) + Optional(space + location))\n + left_bracket\n + Group(ZeroOrMore(Group(assignment) | block))\n + right_bracket)\n\n script = OneOrMore(Group(assignment) | block).ignore(pythonStyleComment)\n\n def __init__(self, source):\n self.source = source\n\n def parse(self):\n \"\"\"Returns the parsed tree.\"\"\"\n return self.script.parseString(self.source)\n\n def as_list(self):\n \"\"\"Returns the parsed tree as a list.\"\"\"\n return self.parse().asList()\n\n\nclass RawNginxDumper(object):\n # pylint: disable=too-few-public-methods\n \"\"\"A class that dumps nginx configuration from the provided tree.\"\"\"\n def __init__(self, blocks, indentation=4):\n self.blocks = blocks\n self.indentation = indentation\n\n def __iter__(self, blocks=None, current_indent=0, spacer=' '):\n \"\"\"Iterates the dumped nginx content.\"\"\"\n blocks = blocks or self.blocks\n for key, values in blocks:\n if 
current_indent:\n yield spacer\n indentation = spacer * current_indent\n if isinstance(key, list):\n yield indentation + spacer.join(key) + ' {'\n for parameter in values:\n if isinstance(parameter[0], list):\n dumped = self.__iter__(\n [parameter],\n current_indent + self.indentation)\n for line in dumped:\n yield line\n else:\n dumped = spacer.join(parameter) + ';'\n yield spacer * (\n current_indent + self.indentation) + dumped\n\n yield indentation + '}'\n else:\n yield spacer * current_indent + key + spacer + values + ';'\n\n def as_string(self):\n \"\"\"Return the parsed block as a string.\"\"\"\n return '\\n'.join(self)\n\n\n# Shortcut functions to respect Python's serialization interface\n# (like pyyaml, picker or json)\n\ndef loads(source):\n \"\"\"Parses from a string.\n\n :param str souce: The string to parse\n :returns: The parsed tree\n :rtype: list\n\n \"\"\"\n return RawNginxParser(source).as_list()\n\n\ndef load(_file):\n \"\"\"Parses from a file.\n\n :param file _file: The file to parse\n :returns: The parsed tree\n :rtype: list\n\n \"\"\"\n return loads(_file.read())\n\n\ndef dumps(blocks, indentation=4):\n \"\"\"Dump to a string.\n\n :param list block: The parsed tree\n :param int indentation: The number of spaces to indent\n :rtype: str\n\n \"\"\"\n return RawNginxDumper(blocks, indentation).as_string()\n\n\ndef dump(blocks, _file, indentation=4):\n \"\"\"Dump to a file.\n\n :param list block: The parsed tree\n :param file _file: The file to dump to\n :param int indentation: The number of spaces to indent\n :rtype: NoneType\n\n \"\"\"\n return _file.write(dumps(blocks, indentation))\n", "path": "letsencrypt_nginx/nginxparser.py"}], "after_files": [{"content": "\"\"\"Very low-level nginx config parser based on pyparsing.\"\"\"\nimport string\n\nfrom pyparsing import (\n Literal, White, Word, alphanums, CharsNotIn, Forward, Group,\n Optional, OneOrMore, Regex, ZeroOrMore, pythonStyleComment)\n\n\nclass RawNginxParser(object):\n # pylint: disable=expression-not-assigned\n \"\"\"A class that parses nginx configuration with pyparsing.\"\"\"\n\n # constants\n left_bracket = Literal(\"{\").suppress()\n right_bracket = Literal(\"}\").suppress()\n semicolon = Literal(\";\").suppress()\n space = White().suppress()\n key = Word(alphanums + \"_/\")\n # Matches anything that is not a special character AND any chars in single\n # or double quotes\n value = Regex(r\"((\\\".*\\\")?(\\'.*\\')?[^\\{\\};,]?)+\")\n location = CharsNotIn(\"{};,\" + string.whitespace)\n # modifier for location uri [ = | ~ | ~* | ^~ ]\n modifier = Literal(\"=\") | Literal(\"~*\") | Literal(\"~\") | Literal(\"^~\")\n\n # rules\n assignment = (key + Optional(space + value) + semicolon)\n location_statement = Optional(space + modifier) + Optional(space + location)\n if_statement = Literal(\"if\") + space + Regex(r\"\\(.+\\)\") + space\n block = Forward()\n\n block << Group(\n (Group(key + location_statement) ^ Group(if_statement))\n + left_bracket\n + Group(ZeroOrMore(Group(assignment) | block))\n + right_bracket)\n\n script = OneOrMore(Group(assignment) | block).ignore(pythonStyleComment)\n\n def __init__(self, source):\n self.source = source\n\n def parse(self):\n \"\"\"Returns the parsed tree.\"\"\"\n return self.script.parseString(self.source)\n\n def as_list(self):\n \"\"\"Returns the parsed tree as a list.\"\"\"\n return self.parse().asList()\n\n\nclass RawNginxDumper(object):\n # pylint: disable=too-few-public-methods\n \"\"\"A class that dumps nginx configuration from the provided tree.\"\"\"\n def 
__init__(self, blocks, indentation=4):\n self.blocks = blocks\n self.indentation = indentation\n\n def __iter__(self, blocks=None, current_indent=0, spacer=' '):\n \"\"\"Iterates the dumped nginx content.\"\"\"\n blocks = blocks or self.blocks\n for key, values in blocks:\n if current_indent:\n yield spacer\n indentation = spacer * current_indent\n if isinstance(key, list):\n yield indentation + spacer.join(key) + ' {'\n for parameter in values:\n if isinstance(parameter[0], list):\n dumped = self.__iter__(\n [parameter],\n current_indent + self.indentation)\n for line in dumped:\n yield line\n else:\n dumped = spacer.join(parameter) + ';'\n yield spacer * (\n current_indent + self.indentation) + dumped\n\n yield indentation + '}'\n else:\n yield spacer * current_indent + key + spacer + values + ';'\n\n def as_string(self):\n \"\"\"Return the parsed block as a string.\"\"\"\n return '\\n'.join(self)\n\n\n# Shortcut functions to respect Python's serialization interface\n# (like pyyaml, picker or json)\n\ndef loads(source):\n \"\"\"Parses from a string.\n\n :param str souce: The string to parse\n :returns: The parsed tree\n :rtype: list\n\n \"\"\"\n return RawNginxParser(source).as_list()\n\n\ndef load(_file):\n \"\"\"Parses from a file.\n\n :param file _file: The file to parse\n :returns: The parsed tree\n :rtype: list\n\n \"\"\"\n return loads(_file.read())\n\n\ndef dumps(blocks, indentation=4):\n \"\"\"Dump to a string.\n\n :param list block: The parsed tree\n :param int indentation: The number of spaces to indent\n :rtype: str\n\n \"\"\"\n return RawNginxDumper(blocks, indentation).as_string()\n\n\ndef dump(blocks, _file, indentation=4):\n \"\"\"Dump to a file.\n\n :param list block: The parsed tree\n :param file _file: The file to dump to\n :param int indentation: The number of spaces to indent\n :rtype: NoneType\n\n \"\"\"\n return _file.write(dumps(blocks, indentation))\n", "path": "letsencrypt_nginx/nginxparser.py"}]}
| 1,501 | 399 |
gh_patches_debug_238
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-6117
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Warn new users about the lazy creation of connections (when requests are expected to be served in the script fully and only)
#### Problem Description
The [example script](https://docs.mitmproxy.org/stable/addons-examples/#http-reply-from-proxy) for not sending any data to the server does not prevent mitmproxy from **establishing a connection** to the server.
For which reason is said connection established when no data has to be sent to this host right away and possibly never in the future?
I trusted mitmproxy to **not send _any_ data, as stated**, but I had to discover (the hard way) that **that's not the case**.
I used mitmproxy in an environment where it was required to stay silent, but it wasn't compliant.
Could you please consider warning new users about this behavior?
<strike>Is there an easy way to prevent establishing connections?
Is it planned to do so on default in this case?</strike>
*EDIT*: Trying to prevent connections by rerouting the connection to a closed port killed the flow for the client. Routing to a different host with invalid certificate worked though, warning me in the event log and suggesting setting connection strategy to lazy and it worked.
#### Steps to reproduce the behavior:
1. Load the example script
2. Have the client request example.com
3. View the event log
#### System Information
Mitmproxy: 9.0.1
Python: 3.10.6
OpenSSL: OpenSSL 3.0.7 1 Nov 2022
Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/addons/http-reply-from-proxy.py`
Content:
```
1 """Send a reply from the proxy without sending any data to the remote server."""
2 from mitmproxy import http
3
4
5 def request(flow: http.HTTPFlow) -> None:
6 if flow.request.pretty_url == "http://example.com/path":
7 flow.response = http.Response.make(
8 200, # (optional) status code
9 b"Hello World", # (optional) content
10 {"Content-Type": "text/html"}, # (optional) headers
11 )
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/addons/http-reply-from-proxy.py b/examples/addons/http-reply-from-proxy.py
--- a/examples/addons/http-reply-from-proxy.py
+++ b/examples/addons/http-reply-from-proxy.py
@@ -1,4 +1,4 @@
-"""Send a reply from the proxy without sending any data to the remote server."""
+"""Send a reply from the proxy without sending the request to the remote server."""
from mitmproxy import http
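Beyond the docstring fix, users who need the proxy to stay silent toward the upstream host can act on the event-log hint quoted in the issue and run the addon with the lazy connection strategy, for example `mitmdump -s http-reply-from-proxy.py --set connection_strategy=lazy` (the script filename here is assumed); with the default eager strategy mitmproxy still opens the server connection even though the response is served entirely from the script.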
|
{"golden_diff": "diff --git a/examples/addons/http-reply-from-proxy.py b/examples/addons/http-reply-from-proxy.py\n--- a/examples/addons/http-reply-from-proxy.py\n+++ b/examples/addons/http-reply-from-proxy.py\n@@ -1,4 +1,4 @@\n-\"\"\"Send a reply from the proxy without sending any data to the remote server.\"\"\"\n+\"\"\"Send a reply from the proxy without sending the request to the remote server.\"\"\"\n from mitmproxy import http\n", "issue": "Warn new users about the lazy creation of connections (when requests are expected to be served in the script fully and only)\n#### Problem Description\r\nThe [example script](https://docs.mitmproxy.org/stable/addons-examples/#http-reply-from-proxy) for not sending any data to the server does not prevent mitmproxy from **establishing a connection** to the server.\r\nFor which reason is said connection established when no data has to be sent to this host right away and possibly never in the future?\r\nI trusted mitmproxy to **not send _any_ data, as stated**, but I had to discover (the hard way) that **that's not the case**.\r\nI used mitmproxy in an environment where it required to stay silent, but it wasn't compliant.\r\n\r\nCould you please consider warning new users about this behavior?\r\n<strike>Is there an easy way to prevent establishing connections?\r\nIs it planned to do so on default in this case?</strike>\r\n*EDIT*: Trying to prevent connections by rerouting the connection to a closed port killed the flow for the client. Routing to a different host with invalid certificate worked though, warning me in the event log and suggesting setting connection strategy to lazy and it worked.\r\n\r\n#### Steps to reproduce the behavior:\r\n1. Load the example script\r\n2. Have the client request examle.com\r\n3. View the event log\r\n\r\n#### System Information\r\nMitmproxy: 9.0.1\r\nPython: 3.10.6\r\nOpenSSL: OpenSSL 3.0.7 1 Nov 2022\r\nPlatform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35\r\n\r\n\n", "before_files": [{"content": "\"\"\"Send a reply from the proxy without sending any data to the remote server.\"\"\"\nfrom mitmproxy import http\n\n\ndef request(flow: http.HTTPFlow) -> None:\n if flow.request.pretty_url == \"http://example.com/path\":\n flow.response = http.Response.make(\n 200, # (optional) status code\n b\"Hello World\", # (optional) content\n {\"Content-Type\": \"text/html\"}, # (optional) headers\n )\n", "path": "examples/addons/http-reply-from-proxy.py"}], "after_files": [{"content": "\"\"\"Send a reply from the proxy without sending the request to the remote server.\"\"\"\nfrom mitmproxy import http\n\n\ndef request(flow: http.HTTPFlow) -> None:\n if flow.request.pretty_url == \"http://example.com/path\":\n flow.response = http.Response.make(\n 200, # (optional) status code\n b\"Hello World\", # (optional) content\n {\"Content-Type\": \"text/html\"}, # (optional) headers\n )\n", "path": "examples/addons/http-reply-from-proxy.py"}]}
| 738 | 95 |
gh_patches_debug_65366
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__models-399
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when running train with generate_sequence_by_rnn_lm
When running train.py under the generate_sequence_by_rnn_lm model, an error is raised when the path to the test file does not exist. The cause of the error is that `conf` was written as `config`. The error is at line 112 of train.py.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `generate_sequence_by_rnn_lm/train.py`
Content:
```
1 import os
2 import sys
3 import gzip
4
5 import paddle.v2 as paddle
6 import config as conf
7 import reader
8 from network_conf import rnn_lm
9 from utils import logger, build_dict, load_dict
10
11
12 def train(topology,
13 train_reader,
14 test_reader,
15 model_save_dir="models",
16 num_passes=10):
17 """
18 train model.
19
20 :param topology: cost layer of the model to train.
21 :type topology: LayerOuput
22 :param train_reader: train data reader.
23 :type trainer_reader: collections.Iterable
24 :param test_reader: test data reader.
25 :type test_reader: collections.Iterable
26 :param model_save_dir: path to save the trained model
27 :type model_save_dir: str
28 :param num_passes: number of epoch
29 :type num_passes: int
30 """
31 if not os.path.exists(model_save_dir):
32 os.mkdir(model_save_dir)
33
34 # initialize PaddlePaddle
35 paddle.init(use_gpu=conf.use_gpu, trainer_count=conf.trainer_count)
36
37 # create optimizer
38 adam_optimizer = paddle.optimizer.Adam(
39 learning_rate=1e-3,
40 regularization=paddle.optimizer.L2Regularization(rate=1e-3),
41 model_average=paddle.optimizer.ModelAverage(
42 average_window=0.5, max_average_window=10000))
43
44 # create parameters
45 parameters = paddle.parameters.create(topology)
46 # create trainer
47 trainer = paddle.trainer.SGD(
48 cost=topology, parameters=parameters, update_equation=adam_optimizer)
49
50 # define the event_handler callback
51 def event_handler(event):
52 if isinstance(event, paddle.event.EndIteration):
53 if not event.batch_id % conf.log_period:
54 logger.info("Pass %d, Batch %d, Cost %f, %s" % (
55 event.pass_id, event.batch_id, event.cost, event.metrics))
56
57 if (not event.batch_id %
58 conf.save_period_by_batches) and event.batch_id:
59 save_name = os.path.join(model_save_dir,
60 "rnn_lm_pass_%05d_batch_%03d.tar.gz" %
61 (event.pass_id, event.batch_id))
62 with gzip.open(save_name, "w") as f:
63 trainer.save_parameter_to_tar(f)
64
65 if isinstance(event, paddle.event.EndPass):
66 if test_reader is not None:
67 result = trainer.test(reader=test_reader)
68 logger.info("Test with Pass %d, %s" %
69 (event.pass_id, result.metrics))
70 save_name = os.path.join(model_save_dir, "rnn_lm_pass_%05d.tar.gz" %
71 (event.pass_id))
72 with gzip.open(save_name, "w") as f:
73 trainer.save_parameter_to_tar(f)
74
75 logger.info("start training...")
76 trainer.train(
77 reader=train_reader, event_handler=event_handler, num_passes=num_passes)
78
79 logger.info("Training is finished.")
80
81
82 def main():
83 # prepare vocab
84 if not (os.path.exists(conf.vocab_file) and
85 os.path.getsize(conf.vocab_file)):
86 logger.info(("word dictionary does not exist, "
87 "build it from the training data"))
88 build_dict(conf.train_file, conf.vocab_file, conf.max_word_num,
89 conf.cutoff_word_fre)
90 logger.info("load word dictionary.")
91 word_dict = load_dict(conf.vocab_file)
92 logger.info("dictionay size = %d" % (len(word_dict)))
93
94 cost = rnn_lm(
95 len(word_dict), conf.emb_dim, conf.hidden_size, conf.stacked_rnn_num,
96 conf.rnn_type)
97
98 # define reader
99 reader_args = {
100 "file_name": conf.train_file,
101 "word_dict": word_dict,
102 }
103 train_reader = paddle.batch(
104 paddle.reader.shuffle(
105 reader.rnn_reader(**reader_args), buf_size=102400),
106 batch_size=conf.batch_size)
107 test_reader = None
108 if os.path.exists(conf.test_file) and os.path.getsize(conf.test_file):
109 test_reader = paddle.batch(
110 paddle.reader.shuffle(
111 reader.rnn_reader(**reader_args), buf_size=65536),
112 batch_size=config.batch_size)
113
114 train(
115 topology=cost,
116 train_reader=train_reader,
117 test_reader=test_reader,
118 model_save_dir=conf.model_save_dir,
119 num_passes=conf.num_passes)
120
121
122 if __name__ == "__main__":
123 main()
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/generate_sequence_by_rnn_lm/train.py b/generate_sequence_by_rnn_lm/train.py
--- a/generate_sequence_by_rnn_lm/train.py
+++ b/generate_sequence_by_rnn_lm/train.py
@@ -109,7 +109,7 @@
test_reader = paddle.batch(
paddle.reader.shuffle(
reader.rnn_reader(**reader_args), buf_size=65536),
- batch_size=config.batch_size)
+ batch_size=conf.batch_size)
train(
topology=cost,
|
{"golden_diff": "diff --git a/generate_sequence_by_rnn_lm/train.py b/generate_sequence_by_rnn_lm/train.py\n--- a/generate_sequence_by_rnn_lm/train.py\n+++ b/generate_sequence_by_rnn_lm/train.py\n@@ -109,7 +109,7 @@\n test_reader = paddle.batch(\n paddle.reader.shuffle(\n reader.rnn_reader(**reader_args), buf_size=65536),\n- batch_size=config.batch_size)\n+ batch_size=conf.batch_size)\n \n train(\n topology=cost,\n", "issue": "\u4f7f\u7528 generate_sequence_by_rnn_lm \u8fdb\u884ctrain\u7684\u65f6\u5019\u62a5\u9519\n\u5728 generate_sequence_by_rnn_lm \u8fd9\u4e2a\u6a21\u578b\u4e0b\u8fd0\u884c train.py \u7684\u65f6\u5019\uff0c\u5f53\u6d4b\u8bd5\u6587\u4ef6\u7684\u8def\u5f84\u4e0d\u5b58\u5728\u7684\u65f6\u5019\u4f1a\u62a5\u9519\u3002\u9519\u8bef\u7684\u539f\u56e0\u662f\u628aconf\u5199\u6210\u4e86config\u3002\u9519\u8bef\u884c\u6570\u662ftrain.py \u7684112\u884c\n", "before_files": [{"content": "import os\nimport sys\nimport gzip\n\nimport paddle.v2 as paddle\nimport config as conf\nimport reader\nfrom network_conf import rnn_lm\nfrom utils import logger, build_dict, load_dict\n\n\ndef train(topology,\n train_reader,\n test_reader,\n model_save_dir=\"models\",\n num_passes=10):\n \"\"\"\n train model.\n\n :param topology: cost layer of the model to train.\n :type topology: LayerOuput\n :param train_reader: train data reader.\n :type trainer_reader: collections.Iterable\n :param test_reader: test data reader.\n :type test_reader: collections.Iterable\n :param model_save_dir: path to save the trained model\n :type model_save_dir: str\n :param num_passes: number of epoch\n :type num_passes: int\n \"\"\"\n if not os.path.exists(model_save_dir):\n os.mkdir(model_save_dir)\n\n # initialize PaddlePaddle\n paddle.init(use_gpu=conf.use_gpu, trainer_count=conf.trainer_count)\n\n # create optimizer\n adam_optimizer = paddle.optimizer.Adam(\n learning_rate=1e-3,\n regularization=paddle.optimizer.L2Regularization(rate=1e-3),\n model_average=paddle.optimizer.ModelAverage(\n average_window=0.5, max_average_window=10000))\n\n # create parameters\n parameters = paddle.parameters.create(topology)\n # create trainer\n trainer = paddle.trainer.SGD(\n cost=topology, parameters=parameters, update_equation=adam_optimizer)\n\n # define the event_handler callback\n def event_handler(event):\n if isinstance(event, paddle.event.EndIteration):\n if not event.batch_id % conf.log_period:\n logger.info(\"Pass %d, Batch %d, Cost %f, %s\" % (\n event.pass_id, event.batch_id, event.cost, event.metrics))\n\n if (not event.batch_id %\n conf.save_period_by_batches) and event.batch_id:\n save_name = os.path.join(model_save_dir,\n \"rnn_lm_pass_%05d_batch_%03d.tar.gz\" %\n (event.pass_id, event.batch_id))\n with gzip.open(save_name, \"w\") as f:\n trainer.save_parameter_to_tar(f)\n\n if isinstance(event, paddle.event.EndPass):\n if test_reader is not None:\n result = trainer.test(reader=test_reader)\n logger.info(\"Test with Pass %d, %s\" %\n (event.pass_id, result.metrics))\n save_name = os.path.join(model_save_dir, \"rnn_lm_pass_%05d.tar.gz\" %\n (event.pass_id))\n with gzip.open(save_name, \"w\") as f:\n trainer.save_parameter_to_tar(f)\n\n logger.info(\"start training...\")\n trainer.train(\n reader=train_reader, event_handler=event_handler, num_passes=num_passes)\n\n logger.info(\"Training is finished.\")\n\n\ndef main():\n # prepare vocab\n if not (os.path.exists(conf.vocab_file) and\n os.path.getsize(conf.vocab_file)):\n logger.info((\"word dictionary does not exist, \"\n \"build it from the training data\"))\n 
build_dict(conf.train_file, conf.vocab_file, conf.max_word_num,\n conf.cutoff_word_fre)\n logger.info(\"load word dictionary.\")\n word_dict = load_dict(conf.vocab_file)\n logger.info(\"dictionay size = %d\" % (len(word_dict)))\n\n cost = rnn_lm(\n len(word_dict), conf.emb_dim, conf.hidden_size, conf.stacked_rnn_num,\n conf.rnn_type)\n\n # define reader\n reader_args = {\n \"file_name\": conf.train_file,\n \"word_dict\": word_dict,\n }\n train_reader = paddle.batch(\n paddle.reader.shuffle(\n reader.rnn_reader(**reader_args), buf_size=102400),\n batch_size=conf.batch_size)\n test_reader = None\n if os.path.exists(conf.test_file) and os.path.getsize(conf.test_file):\n test_reader = paddle.batch(\n paddle.reader.shuffle(\n reader.rnn_reader(**reader_args), buf_size=65536),\n batch_size=config.batch_size)\n\n train(\n topology=cost,\n train_reader=train_reader,\n test_reader=test_reader,\n model_save_dir=conf.model_save_dir,\n num_passes=conf.num_passes)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "generate_sequence_by_rnn_lm/train.py"}], "after_files": [{"content": "import os\nimport sys\nimport gzip\n\nimport paddle.v2 as paddle\nimport config as conf\nimport reader\nfrom network_conf import rnn_lm\nfrom utils import logger, build_dict, load_dict\n\n\ndef train(topology,\n train_reader,\n test_reader,\n model_save_dir=\"models\",\n num_passes=10):\n \"\"\"\n train model.\n\n :param topology: cost layer of the model to train.\n :type topology: LayerOuput\n :param train_reader: train data reader.\n :type trainer_reader: collections.Iterable\n :param test_reader: test data reader.\n :type test_reader: collections.Iterable\n :param model_save_dir: path to save the trained model\n :type model_save_dir: str\n :param num_passes: number of epoch\n :type num_passes: int\n \"\"\"\n if not os.path.exists(model_save_dir):\n os.mkdir(model_save_dir)\n\n # initialize PaddlePaddle\n paddle.init(use_gpu=conf.use_gpu, trainer_count=conf.trainer_count)\n\n # create optimizer\n adam_optimizer = paddle.optimizer.Adam(\n learning_rate=1e-3,\n regularization=paddle.optimizer.L2Regularization(rate=1e-3),\n model_average=paddle.optimizer.ModelAverage(\n average_window=0.5, max_average_window=10000))\n\n # create parameters\n parameters = paddle.parameters.create(topology)\n # create trainer\n trainer = paddle.trainer.SGD(\n cost=topology, parameters=parameters, update_equation=adam_optimizer)\n\n # define the event_handler callback\n def event_handler(event):\n if isinstance(event, paddle.event.EndIteration):\n if not event.batch_id % conf.log_period:\n logger.info(\"Pass %d, Batch %d, Cost %f, %s\" % (\n event.pass_id, event.batch_id, event.cost, event.metrics))\n\n if (not event.batch_id %\n conf.save_period_by_batches) and event.batch_id:\n save_name = os.path.join(model_save_dir,\n \"rnn_lm_pass_%05d_batch_%03d.tar.gz\" %\n (event.pass_id, event.batch_id))\n with gzip.open(save_name, \"w\") as f:\n trainer.save_parameter_to_tar(f)\n\n if isinstance(event, paddle.event.EndPass):\n if test_reader is not None:\n result = trainer.test(reader=test_reader)\n logger.info(\"Test with Pass %d, %s\" %\n (event.pass_id, result.metrics))\n save_name = os.path.join(model_save_dir, \"rnn_lm_pass_%05d.tar.gz\" %\n (event.pass_id))\n with gzip.open(save_name, \"w\") as f:\n trainer.save_parameter_to_tar(f)\n\n logger.info(\"start training...\")\n trainer.train(\n reader=train_reader, event_handler=event_handler, num_passes=num_passes)\n\n logger.info(\"Training is finished.\")\n\n\ndef main():\n # prepare vocab\n 
if not (os.path.exists(conf.vocab_file) and\n os.path.getsize(conf.vocab_file)):\n logger.info((\"word dictionary does not exist, \"\n \"build it from the training data\"))\n build_dict(conf.train_file, conf.vocab_file, conf.max_word_num,\n conf.cutoff_word_fre)\n logger.info(\"load word dictionary.\")\n word_dict = load_dict(conf.vocab_file)\n logger.info(\"dictionay size = %d\" % (len(word_dict)))\n\n cost = rnn_lm(\n len(word_dict), conf.emb_dim, conf.hidden_size, conf.stacked_rnn_num,\n conf.rnn_type)\n\n # define reader\n reader_args = {\n \"file_name\": conf.train_file,\n \"word_dict\": word_dict,\n }\n train_reader = paddle.batch(\n paddle.reader.shuffle(\n reader.rnn_reader(**reader_args), buf_size=102400),\n batch_size=conf.batch_size)\n test_reader = None\n if os.path.exists(conf.test_file) and os.path.getsize(conf.test_file):\n test_reader = paddle.batch(\n paddle.reader.shuffle(\n reader.rnn_reader(**reader_args), buf_size=65536),\n batch_size=conf.batch_size)\n\n train(\n topology=cost,\n train_reader=train_reader,\n test_reader=test_reader,\n model_save_dir=conf.model_save_dir,\n num_passes=conf.num_passes)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "generate_sequence_by_rnn_lm/train.py"}]}
| 1,550 | 113 |
gh_patches_debug_2394
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-1530
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release Automation Fixes for Seventh Release
The release script is not properly waiting for the wheel job it starts to finish before downloading. This causes it to download previous releases and attempt to upload them.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tasks.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import getpass
8 import os
9 import time
10
11 import invoke
12
13 import requests
14
15
16 JENKINS_URL = "https://jenkins.cryptography.io/job/cryptography-wheel-builder"
17
18
19 def wait_for_build_completed(session):
20 while True:
21 response = session.get(
22 "{0}/lastBuild/api/json/".format(JENKINS_URL),
23 headers={
24 "Accept": "application/json",
25 }
26 )
27 response.raise_for_status()
28 if not response.json()["building"]:
29 assert response.json()["result"] == "SUCCESS"
30 break
31 time.sleep(0.1)
32
33
34 def download_artifacts(session):
35 response = session.get(
36 "{0}/lastBuild/api/json/".format(JENKINS_URL),
37 headers={
38 "Accept": "application/json"
39 }
40 )
41 response.raise_for_status()
42 assert not response.json()["building"]
43 assert response.json()["result"] == "SUCCESS"
44
45 paths = []
46
47 for run in response.json()["runs"]:
48 response = session.get(
49 run["url"] + "api/json/",
50 headers={
51 "Accept": "application/json",
52 }
53 )
54 response.raise_for_status()
55 for artifact in response.json()["artifacts"]:
56 response = session.get(
57 "{0}artifact/{1}".format(run["url"], artifact["relativePath"])
58 )
59 out_path = os.path.join(
60 os.path.dirname(__file__),
61 "dist",
62 artifact["fileName"],
63 )
64 with open(out_path, "wb") as f:
65 f.write(response.content)
66 paths.append(out_path)
67 return paths
68
69
70 @invoke.task
71 def release(version):
72 """
73 ``version`` should be a string like '0.4' or '1.0'.
74 """
75 invoke.run("git tag -s {0} -m '{0} release'".format(version))
76 invoke.run("git push --tags")
77
78 invoke.run("python setup.py sdist")
79 invoke.run("cd vectors/ && python setup.py sdist bdist_wheel")
80
81 invoke.run(
82 "twine upload -s dist/cryptography-{0}* "
83 "vectors/dist/cryptography_vectors-{0}*".format(version)
84 )
85
86 session = requests.Session()
87
88 # This tells the CDN to delete the cached response for the URL. We do this
89 # so that the Jenkins builders will see the new sdist immediately when they
90 # go to build the wheels.
91 response = session.request(
92 "PURGE", "https://pypi.python.org/simple/cryptography/"
93 )
94 response.raise_for_status()
95
96 username = getpass.getpass("Input the GitHub/Jenkins username: ")
97 token = getpass.getpass("Input the Jenkins token: ")
98 response = session.post(
99 "{0}/build".format(JENKINS_URL),
100 auth=requests.auth.HTTPBasicAuth(
101 username, token
102 ),
103 params={
104 "cause": "Building wheels for {0}".format(version)
105 }
106 )
107 response.raise_for_status()
108 wait_for_build_completed(session)
109 paths = download_artifacts(session)
110 invoke.run("twine upload {0}".format(" ".join(paths)))
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tasks.py b/tasks.py
--- a/tasks.py
+++ b/tasks.py
@@ -17,6 +17,9 @@
def wait_for_build_completed(session):
+ # Wait 3 seconds before actually checking if the build is complete, to
+ # ensure that it had time to really start.
+ time.sleep(3)
while True:
response = session.get(
"{0}/lastBuild/api/json/".format(JENKINS_URL),
|
{"golden_diff": "diff --git a/tasks.py b/tasks.py\n--- a/tasks.py\n+++ b/tasks.py\n@@ -17,6 +17,9 @@\n \n \n def wait_for_build_completed(session):\n+ # Wait 3 seconds before actually checking if the build is complete, to\n+ # ensure that it had time to really start.\n+ time.sleep(3)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n", "issue": "Release Automation Fixes for Seventh Release\nThe release script is not properly waiting for the wheel job it starts to finish before downloading. This causes it to download previous releases and attempt to upload them.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport os\nimport time\n\nimport invoke\n\nimport requests\n\n\nJENKINS_URL = \"https://jenkins.cryptography.io/job/cryptography-wheel-builder\"\n\n\ndef wait_for_build_completed(session):\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n assert not response.json()[\"building\"]\n assert response.json()[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n for run in response.json()[\"runs\"]:\n response = session.get(\n run[\"url\"] + \"api/json/\",\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n for artifact in response.json()[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(run[\"url\"], artifact[\"relativePath\"])\n )\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(response.content)\n paths.append(out_path)\n return paths\n\n\[email protected]\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n invoke.run(\"git tag -s {0} -m '{0} release'\".format(version))\n invoke.run(\"git push --tags\")\n\n invoke.run(\"python setup.py sdist\")\n invoke.run(\"cd vectors/ && python setup.py sdist bdist_wheel\")\n\n invoke.run(\n \"twine upload -s dist/cryptography-{0}* \"\n \"vectors/dist/cryptography_vectors-{0}*\".format(version)\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. 
We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n username = getpass.getpass(\"Input the GitHub/Jenkins username: \")\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.post(\n \"{0}/build\".format(JENKINS_URL),\n auth=requests.auth.HTTPBasicAuth(\n username, token\n ),\n params={\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n invoke.run(\"twine upload {0}\".format(\" \".join(paths)))\n", "path": "tasks.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport os\nimport time\n\nimport invoke\n\nimport requests\n\n\nJENKINS_URL = \"https://jenkins.cryptography.io/job/cryptography-wheel-builder\"\n\n\ndef wait_for_build_completed(session):\n # Wait 3 seconds before actually checking if the build is complete, to\n # ensure that it had time to really start.\n time.sleep(3)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n assert not response.json()[\"building\"]\n assert response.json()[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n for run in response.json()[\"runs\"]:\n response = session.get(\n run[\"url\"] + \"api/json/\",\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n for artifact in response.json()[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(run[\"url\"], artifact[\"relativePath\"])\n )\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(response.content)\n paths.append(out_path)\n return paths\n\n\[email protected]\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n invoke.run(\"git tag -s {0} -m '{0} release'\".format(version))\n invoke.run(\"git push --tags\")\n\n invoke.run(\"python setup.py sdist\")\n invoke.run(\"cd vectors/ && python setup.py sdist bdist_wheel\")\n\n invoke.run(\n \"twine upload -s dist/cryptography-{0}* \"\n \"vectors/dist/cryptography_vectors-{0}*\".format(version)\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. 
We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n username = getpass.getpass(\"Input the GitHub/Jenkins username: \")\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.post(\n \"{0}/build\".format(JENKINS_URL),\n auth=requests.auth.HTTPBasicAuth(\n username, token\n ),\n params={\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n invoke.run(\"twine upload {0}\".format(\" \".join(paths)))\n", "path": "tasks.py"}]}
| 1,253 | 104 |
gh_patches_debug_59731
|
rasdani/github-patches
|
git_diff
|
Textualize__textual-772
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Grid cell margin issue
Adding margin: 1 to the cells within this grid causes unexpected output:
<img width="752" alt="image" src="https://user-images.githubusercontent.com/5740731/190180955-3b10bd1f-60ca-4fda-9473-bc7d9a325b9d.png">
## `grid_buttons.py`
```python
from textual.app import App, ComposeResult
from textual.widgets import Button
class GridButtons(App):
def compose(self) -> ComposeResult:
yield Button.success("A")
yield Button.success("B")
yield Button.success("C")
yield Button.success("D")
yield Button.success("E")
yield Button.success("F")
app = GridButtons(css_path="grid_buttons.css")
if __name__ == '__main__':
app.run()
```
## `grid_buttons.css`
```scss
Screen {
layout: grid;
grid-size: 3;
grid-rows: 1fr;
grid-columns: 1fr;
}
Button {
margin: 1;
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/layouts/grid.py`
Content:
```
1 from __future__ import annotations
2
3 from fractions import Fraction
4 from typing import TYPE_CHECKING, Iterable
5
6 from .._layout import ArrangeResult, Layout, WidgetPlacement
7 from .._resolve import resolve
8 from ..css.scalar import Scalar
9 from ..geometry import Region, Size, Spacing
10
11 if TYPE_CHECKING:
12 from ..widget import Widget
13
14
15 class GridLayout(Layout):
16 """Used to layout Widgets in to a grid."""
17
18 name = "grid"
19
20 def arrange(
21 self, parent: Widget, children: list[Widget], size: Size
22 ) -> ArrangeResult:
23 styles = parent.styles
24 row_scalars = styles.grid_rows or [Scalar.parse("1fr")]
25 column_scalars = styles.grid_columns or [Scalar.parse("1fr")]
26 gutter_horizontal = styles.grid_gutter_horizontal
27 gutter_vertical = styles.grid_gutter_vertical
28 table_size_columns = max(1, styles.grid_size_columns)
29 table_size_rows = styles.grid_size_rows
30 viewport = parent.screen.size
31
32 def cell_coords(column_count: int) -> Iterable[tuple[int, int]]:
33 """Iterate over table coordinates ad infinitum.
34
35 Args:
36 column_count (int): Number of columns
37
38 """
39 row = 0
40 while True:
41 for column in range(column_count):
42 yield (column, row)
43 row += 1
44
45 def widget_coords(
46 column_start: int, row_start: int, columns: int, rows: int
47 ) -> set[tuple[int, int]]:
48 """Get coords occupied by a cell.
49
50 Args:
51 column_start (int): Start column.
52 row_start (int): Start_row.
53 columns (int): Number of columns.
54 rows (int): Number of rows.
55
56 Returns:
57 set[tuple[int, int]]: Set of coords.
58 """
59 return {
60 (column, row)
61 for column in range(column_start, column_start + columns)
62 for row in range(row_start, row_start + rows)
63 }
64
65 def repeat_scalars(scalars: Iterable[Scalar], count: int) -> list[Scalar]:
66 """Repeat an iterable of scalars as many times as required to return
67 a list of `count` values.
68
69 Args:
70 scalars (Iterable[T]): Iterable of values.
71 count (int): Number of values to return.
72
73 Returns:
74 list[T]: A list of values.
75 """
76 limited_values = list(scalars)[:]
77 while len(limited_values) < count:
78 limited_values.extend(scalars)
79 return limited_values[:count]
80
81 cell_map: dict[tuple[int, int], tuple[Widget, bool]] = {}
82 cell_size_map: dict[Widget, tuple[int, int, int, int]] = {}
83
84 column_count = table_size_columns
85 next_coord = iter(cell_coords(column_count)).__next__
86 cell_coord = (0, 0)
87 column = row = 0
88
89 for child in children:
90 child_styles = child.styles
91 column_span = child_styles.column_span or 1
92 row_span = child_styles.row_span or 1
93 # Find a slot where this cell fits
94 # A cell on a previous row may have a row span
95 while True:
96 column, row = cell_coord
97 coords = widget_coords(column, row, column_span, row_span)
98 if cell_map.keys().isdisjoint(coords):
99 for coord in coords:
100 cell_map[coord] = (child, coord == cell_coord)
101 cell_size_map[child] = (
102 column,
103 row,
104 column_span - 1,
105 row_span - 1,
106 )
107 break
108 else:
109 cell_coord = next_coord()
110 continue
111 cell_coord = next_coord()
112
113 # Resolve columns / rows
114 columns = resolve(
115 repeat_scalars(column_scalars, table_size_columns),
116 size.width,
117 gutter_vertical,
118 size,
119 viewport,
120 )
121 rows = resolve(
122 repeat_scalars(
123 row_scalars, table_size_rows if table_size_rows else row + 1
124 ),
125 size.height,
126 gutter_horizontal,
127 size,
128 viewport,
129 )
130
131 placements: list[WidgetPlacement] = []
132 add_placement = placements.append
133 fraction_unit = Fraction(1)
134 widgets: list[Widget] = []
135 add_widget = widgets.append
136 max_column = len(columns) - 1
137 max_row = len(rows) - 1
138 margin = Spacing()
139 for widget, (column, row, column_span, row_span) in cell_size_map.items():
140 x = columns[column][0]
141 if row > max_row:
142 break
143 y = rows[row][0]
144 x2, cell_width = columns[min(max_column, column + column_span)]
145 y2, cell_height = rows[min(max_row, row + row_span)]
146 cell_size = Size(cell_width + x2 - x, cell_height + y2 - y)
147 width, height, margin = widget._get_box_model(
148 cell_size,
149 viewport,
150 fraction_unit,
151 )
152 region = (
153 Region(x, y, int(width), int(height))
154 .shrink(margin)
155 .clip_size(cell_size)
156 )
157 add_placement(WidgetPlacement(region, margin, widget))
158 add_widget(widget)
159
160 return (placements, set(widgets))
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/textual/layouts/grid.py b/src/textual/layouts/grid.py
--- a/src/textual/layouts/grid.py
+++ b/src/textual/layouts/grid.py
@@ -150,7 +150,7 @@
fraction_unit,
)
region = (
- Region(x, y, int(width), int(height))
+ Region(x, y, int(width + margin.width), int(height + margin.height))
.shrink(margin)
.clip_size(cell_size)
)
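One way to read the fix: `widget._get_box_model()` returns the content size with the margin already taken out, so the placement region has to be re-inflated by `margin.width`/`margin.height` before `shrink(margin)` carves the margin back out; otherwise the margin is deducted twice and the buttons collapse as in the screenshot. A rough numeric check with assumed numbers (not taken from the issue):

```python
from textual.geometry import Region, Spacing

margin = Spacing(1, 1, 1, 1)   # "margin: 1" on every side
width, height = 20, 3          # content size as returned by the box model

cell = Region(0, 0, width + margin.width, height + margin.height)
print(cell.shrink(margin))     # roughly Region(x=1, y=1, width=20, height=3)
```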
|
{"golden_diff": "diff --git a/src/textual/layouts/grid.py b/src/textual/layouts/grid.py\n--- a/src/textual/layouts/grid.py\n+++ b/src/textual/layouts/grid.py\n@@ -150,7 +150,7 @@\n fraction_unit,\n )\n region = (\n- Region(x, y, int(width), int(height))\n+ Region(x, y, int(width + margin.width), int(height + margin.height))\n .shrink(margin)\n .clip_size(cell_size)\n )\n", "issue": "Grid cell margin issue\nAdding margin: 1 to the cells within this grid causes unexpected output:\r\n\r\n<img width=\"752\" alt=\"image\" src=\"https://user-images.githubusercontent.com/5740731/190180955-3b10bd1f-60ca-4fda-9473-bc7d9a325b9d.png\">\r\n\r\n## `grid_buttons.py`\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.widgets import Button\r\n\r\n\r\nclass GridButtons(App):\r\n def compose(self) -> ComposeResult:\r\n yield Button.success(\"A\")\r\n yield Button.success(\"B\")\r\n yield Button.success(\"C\")\r\n yield Button.success(\"D\")\r\n yield Button.success(\"E\")\r\n yield Button.success(\"F\")\r\n\r\n\r\napp = GridButtons(css_path=\"grid_buttons.css\")\r\nif __name__ == '__main__':\r\n app.run()\r\n```\r\n\r\n## `grid_buttons.css`\r\n\r\n```scss\r\nScreen {\r\n layout: grid;\r\n grid-size: 3;\r\n grid-rows: 1fr;\r\n grid-columns: 1fr;\r\n}\r\n\r\nButton {\r\n margin: 1;\r\n}\r\n```\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom fractions import Fraction\nfrom typing import TYPE_CHECKING, Iterable\n\nfrom .._layout import ArrangeResult, Layout, WidgetPlacement\nfrom .._resolve import resolve\nfrom ..css.scalar import Scalar\nfrom ..geometry import Region, Size, Spacing\n\nif TYPE_CHECKING:\n from ..widget import Widget\n\n\nclass GridLayout(Layout):\n \"\"\"Used to layout Widgets in to a grid.\"\"\"\n\n name = \"grid\"\n\n def arrange(\n self, parent: Widget, children: list[Widget], size: Size\n ) -> ArrangeResult:\n styles = parent.styles\n row_scalars = styles.grid_rows or [Scalar.parse(\"1fr\")]\n column_scalars = styles.grid_columns or [Scalar.parse(\"1fr\")]\n gutter_horizontal = styles.grid_gutter_horizontal\n gutter_vertical = styles.grid_gutter_vertical\n table_size_columns = max(1, styles.grid_size_columns)\n table_size_rows = styles.grid_size_rows\n viewport = parent.screen.size\n\n def cell_coords(column_count: int) -> Iterable[tuple[int, int]]:\n \"\"\"Iterate over table coordinates ad infinitum.\n\n Args:\n column_count (int): Number of columns\n\n \"\"\"\n row = 0\n while True:\n for column in range(column_count):\n yield (column, row)\n row += 1\n\n def widget_coords(\n column_start: int, row_start: int, columns: int, rows: int\n ) -> set[tuple[int, int]]:\n \"\"\"Get coords occupied by a cell.\n\n Args:\n column_start (int): Start column.\n row_start (int): Start_row.\n columns (int): Number of columns.\n rows (int): Number of rows.\n\n Returns:\n set[tuple[int, int]]: Set of coords.\n \"\"\"\n return {\n (column, row)\n for column in range(column_start, column_start + columns)\n for row in range(row_start, row_start + rows)\n }\n\n def repeat_scalars(scalars: Iterable[Scalar], count: int) -> list[Scalar]:\n \"\"\"Repeat an iterable of scalars as many times as required to return\n a list of `count` values.\n\n Args:\n scalars (Iterable[T]): Iterable of values.\n count (int): Number of values to return.\n\n Returns:\n list[T]: A list of values.\n \"\"\"\n limited_values = list(scalars)[:]\n while len(limited_values) < count:\n limited_values.extend(scalars)\n return limited_values[:count]\n\n cell_map: dict[tuple[int, int], 
tuple[Widget, bool]] = {}\n cell_size_map: dict[Widget, tuple[int, int, int, int]] = {}\n\n column_count = table_size_columns\n next_coord = iter(cell_coords(column_count)).__next__\n cell_coord = (0, 0)\n column = row = 0\n\n for child in children:\n child_styles = child.styles\n column_span = child_styles.column_span or 1\n row_span = child_styles.row_span or 1\n # Find a slot where this cell fits\n # A cell on a previous row may have a row span\n while True:\n column, row = cell_coord\n coords = widget_coords(column, row, column_span, row_span)\n if cell_map.keys().isdisjoint(coords):\n for coord in coords:\n cell_map[coord] = (child, coord == cell_coord)\n cell_size_map[child] = (\n column,\n row,\n column_span - 1,\n row_span - 1,\n )\n break\n else:\n cell_coord = next_coord()\n continue\n cell_coord = next_coord()\n\n # Resolve columns / rows\n columns = resolve(\n repeat_scalars(column_scalars, table_size_columns),\n size.width,\n gutter_vertical,\n size,\n viewport,\n )\n rows = resolve(\n repeat_scalars(\n row_scalars, table_size_rows if table_size_rows else row + 1\n ),\n size.height,\n gutter_horizontal,\n size,\n viewport,\n )\n\n placements: list[WidgetPlacement] = []\n add_placement = placements.append\n fraction_unit = Fraction(1)\n widgets: list[Widget] = []\n add_widget = widgets.append\n max_column = len(columns) - 1\n max_row = len(rows) - 1\n margin = Spacing()\n for widget, (column, row, column_span, row_span) in cell_size_map.items():\n x = columns[column][0]\n if row > max_row:\n break\n y = rows[row][0]\n x2, cell_width = columns[min(max_column, column + column_span)]\n y2, cell_height = rows[min(max_row, row + row_span)]\n cell_size = Size(cell_width + x2 - x, cell_height + y2 - y)\n width, height, margin = widget._get_box_model(\n cell_size,\n viewport,\n fraction_unit,\n )\n region = (\n Region(x, y, int(width), int(height))\n .shrink(margin)\n .clip_size(cell_size)\n )\n add_placement(WidgetPlacement(region, margin, widget))\n add_widget(widget)\n\n return (placements, set(widgets))\n", "path": "src/textual/layouts/grid.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom fractions import Fraction\nfrom typing import TYPE_CHECKING, Iterable\n\nfrom .._layout import ArrangeResult, Layout, WidgetPlacement\nfrom .._resolve import resolve\nfrom ..css.scalar import Scalar\nfrom ..geometry import Region, Size, Spacing\n\nif TYPE_CHECKING:\n from ..widget import Widget\n\n\nclass GridLayout(Layout):\n \"\"\"Used to layout Widgets in to a grid.\"\"\"\n\n name = \"grid\"\n\n def arrange(\n self, parent: Widget, children: list[Widget], size: Size\n ) -> ArrangeResult:\n styles = parent.styles\n row_scalars = styles.grid_rows or [Scalar.parse(\"1fr\")]\n column_scalars = styles.grid_columns or [Scalar.parse(\"1fr\")]\n gutter_horizontal = styles.grid_gutter_horizontal\n gutter_vertical = styles.grid_gutter_vertical\n table_size_columns = max(1, styles.grid_size_columns)\n table_size_rows = styles.grid_size_rows\n viewport = parent.screen.size\n\n def cell_coords(column_count: int) -> Iterable[tuple[int, int]]:\n \"\"\"Iterate over table coordinates ad infinitum.\n\n Args:\n column_count (int): Number of columns\n\n \"\"\"\n row = 0\n while True:\n for column in range(column_count):\n yield (column, row)\n row += 1\n\n def widget_coords(\n column_start: int, row_start: int, columns: int, rows: int\n ) -> set[tuple[int, int]]:\n \"\"\"Get coords occupied by a cell.\n\n Args:\n column_start (int): Start column.\n row_start (int): Start_row.\n 
columns (int): Number of columns.\n rows (int): Number of rows.\n\n Returns:\n set[tuple[int, int]]: Set of coords.\n \"\"\"\n return {\n (column, row)\n for column in range(column_start, column_start + columns)\n for row in range(row_start, row_start + rows)\n }\n\n def repeat_scalars(scalars: Iterable[Scalar], count: int) -> list[Scalar]:\n \"\"\"Repeat an iterable of scalars as many times as required to return\n a list of `count` values.\n\n Args:\n scalars (Iterable[T]): Iterable of values.\n count (int): Number of values to return.\n\n Returns:\n list[T]: A list of values.\n \"\"\"\n limited_values = list(scalars)[:]\n while len(limited_values) < count:\n limited_values.extend(scalars)\n return limited_values[:count]\n\n cell_map: dict[tuple[int, int], tuple[Widget, bool]] = {}\n cell_size_map: dict[Widget, tuple[int, int, int, int]] = {}\n\n column_count = table_size_columns\n next_coord = iter(cell_coords(column_count)).__next__\n cell_coord = (0, 0)\n column = row = 0\n\n for child in children:\n child_styles = child.styles\n column_span = child_styles.column_span or 1\n row_span = child_styles.row_span or 1\n # Find a slot where this cell fits\n # A cell on a previous row may have a row span\n while True:\n column, row = cell_coord\n coords = widget_coords(column, row, column_span, row_span)\n if cell_map.keys().isdisjoint(coords):\n for coord in coords:\n cell_map[coord] = (child, coord == cell_coord)\n cell_size_map[child] = (\n column,\n row,\n column_span - 1,\n row_span - 1,\n )\n break\n else:\n cell_coord = next_coord()\n continue\n cell_coord = next_coord()\n\n # Resolve columns / rows\n columns = resolve(\n repeat_scalars(column_scalars, table_size_columns),\n size.width,\n gutter_vertical,\n size,\n viewport,\n )\n rows = resolve(\n repeat_scalars(\n row_scalars, table_size_rows if table_size_rows else row + 1\n ),\n size.height,\n gutter_horizontal,\n size,\n viewport,\n )\n\n placements: list[WidgetPlacement] = []\n add_placement = placements.append\n fraction_unit = Fraction(1)\n widgets: list[Widget] = []\n add_widget = widgets.append\n max_column = len(columns) - 1\n max_row = len(rows) - 1\n margin = Spacing()\n for widget, (column, row, column_span, row_span) in cell_size_map.items():\n x = columns[column][0]\n if row > max_row:\n break\n y = rows[row][0]\n x2, cell_width = columns[min(max_column, column + column_span)]\n y2, cell_height = rows[min(max_row, row + row_span)]\n cell_size = Size(cell_width + x2 - x, cell_height + y2 - y)\n width, height, margin = widget._get_box_model(\n cell_size,\n viewport,\n fraction_unit,\n )\n region = (\n Region(x, y, int(width + margin.width), int(height + margin.height))\n .shrink(margin)\n .clip_size(cell_size)\n )\n add_placement(WidgetPlacement(region, margin, widget))\n add_widget(widget)\n\n return (placements, set(widgets))\n", "path": "src/textual/layouts/grid.py"}]}
| 2,043 | 108 |
gh_patches_debug_698
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-6034
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing '**kwargs' parameters passing to imshow_bboxes() in show_result() of rpn.py
https://github.com/open-mmlab/mmdetection/blob/bde7b4b7eea9dd6ee91a486c6996b2d68662366d/mmdet/models/detectors/rpn.py#L155
The '**kwargs' parameters haven't been passed through to mmcv.imshow_bboxes() in show_result() of mmdetection/mmdet/models/detectors/rpn.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet/models/detectors/rpn.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import warnings
3
4 import mmcv
5 import torch
6 from mmcv.image import tensor2imgs
7
8 from mmdet.core import bbox_mapping
9 from ..builder import DETECTORS, build_backbone, build_head, build_neck
10 from .base import BaseDetector
11
12
13 @DETECTORS.register_module()
14 class RPN(BaseDetector):
15 """Implementation of Region Proposal Network."""
16
17 def __init__(self,
18 backbone,
19 neck,
20 rpn_head,
21 train_cfg,
22 test_cfg,
23 pretrained=None,
24 init_cfg=None):
25 super(RPN, self).__init__(init_cfg)
26 if pretrained:
27 warnings.warn('DeprecationWarning: pretrained is deprecated, '
28 'please use "init_cfg" instead')
29 backbone.pretrained = pretrained
30 self.backbone = build_backbone(backbone)
31 self.neck = build_neck(neck) if neck is not None else None
32 rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None
33 rpn_head.update(train_cfg=rpn_train_cfg)
34 rpn_head.update(test_cfg=test_cfg.rpn)
35 self.rpn_head = build_head(rpn_head)
36 self.train_cfg = train_cfg
37 self.test_cfg = test_cfg
38
39 def extract_feat(self, img):
40 """Extract features.
41
42 Args:
43 img (torch.Tensor): Image tensor with shape (n, c, h ,w).
44
45 Returns:
46 list[torch.Tensor]: Multi-level features that may have
47 different resolutions.
48 """
49 x = self.backbone(img)
50 if self.with_neck:
51 x = self.neck(x)
52 return x
53
54 def forward_dummy(self, img):
55 """Dummy forward function."""
56 x = self.extract_feat(img)
57 rpn_outs = self.rpn_head(x)
58 return rpn_outs
59
60 def forward_train(self,
61 img,
62 img_metas,
63 gt_bboxes=None,
64 gt_bboxes_ignore=None):
65 """
66 Args:
67 img (Tensor): Input images of shape (N, C, H, W).
68 Typically these should be mean centered and std scaled.
69 img_metas (list[dict]): A List of image info dict where each dict
70 has: 'img_shape', 'scale_factor', 'flip', and may also contain
71 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
72 For details on the values of these keys see
73 :class:`mmdet.datasets.pipelines.Collect`.
74 gt_bboxes (list[Tensor]): Each item are the truth boxes for each
75 image in [tl_x, tl_y, br_x, br_y] format.
76 gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
77 boxes can be ignored when computing the loss.
78
79 Returns:
80 dict[str, Tensor]: A dictionary of loss components.
81 """
82 if (isinstance(self.train_cfg.rpn, dict)
83 and self.train_cfg.rpn.get('debug', False)):
84 self.rpn_head.debug_imgs = tensor2imgs(img)
85
86 x = self.extract_feat(img)
87 losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None,
88 gt_bboxes_ignore)
89 return losses
90
91 def simple_test(self, img, img_metas, rescale=False):
92 """Test function without test time augmentation.
93
94 Args:
95 imgs (list[torch.Tensor]): List of multiple images
96 img_metas (list[dict]): List of image information.
97 rescale (bool, optional): Whether to rescale the results.
98 Defaults to False.
99
100 Returns:
101 list[np.ndarray]: proposals
102 """
103 x = self.extract_feat(img)
104 # get origin input shape to onnx dynamic input shape
105 if torch.onnx.is_in_onnx_export():
106 img_shape = torch._shape_as_tensor(img)[2:]
107 img_metas[0]['img_shape_for_onnx'] = img_shape
108 proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)
109 if rescale:
110 for proposals, meta in zip(proposal_list, img_metas):
111 proposals[:, :4] /= proposals.new_tensor(meta['scale_factor'])
112 if torch.onnx.is_in_onnx_export():
113 return proposal_list
114
115 return [proposal.cpu().numpy() for proposal in proposal_list]
116
117 def aug_test(self, imgs, img_metas, rescale=False):
118 """Test function with test time augmentation.
119
120 Args:
121 imgs (list[torch.Tensor]): List of multiple images
122 img_metas (list[dict]): List of image information.
123 rescale (bool, optional): Whether to rescale the results.
124 Defaults to False.
125
126 Returns:
127 list[np.ndarray]: proposals
128 """
129 proposal_list = self.rpn_head.aug_test_rpn(
130 self.extract_feats(imgs), img_metas)
131 if not rescale:
132 for proposals, img_meta in zip(proposal_list, img_metas[0]):
133 img_shape = img_meta['img_shape']
134 scale_factor = img_meta['scale_factor']
135 flip = img_meta['flip']
136 flip_direction = img_meta['flip_direction']
137 proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape,
138 scale_factor, flip,
139 flip_direction)
140 return [proposal.cpu().numpy() for proposal in proposal_list]
141
142 def show_result(self, data, result, top_k=20, **kwargs):
143 """Show RPN proposals on the image.
144
145 Args:
146 data (str or np.ndarray): Image filename or loaded image.
147 result (Tensor or tuple): The results to draw over `img`
148 bbox_result or (bbox_result, segm_result).
149 top_k (int): Plot the first k bboxes only
150 if set positive. Default: 20
151
152 Returns:
153 np.ndarray: The image with bboxes drawn on it.
154 """
155 mmcv.imshow_bboxes(data, result, top_k=top_k)
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmdet/models/detectors/rpn.py b/mmdet/models/detectors/rpn.py
--- a/mmdet/models/detectors/rpn.py
+++ b/mmdet/models/detectors/rpn.py
@@ -152,4 +152,4 @@
Returns:
np.ndarray: The image with bboxes drawn on it.
"""
- mmcv.imshow_bboxes(data, result, top_k=top_k)
+ mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)
|
{"golden_diff": "diff --git a/mmdet/models/detectors/rpn.py b/mmdet/models/detectors/rpn.py\n--- a/mmdet/models/detectors/rpn.py\n+++ b/mmdet/models/detectors/rpn.py\n@@ -152,4 +152,4 @@\n Returns:\n np.ndarray: The image with bboxes drawn on it.\n \"\"\"\n- mmcv.imshow_bboxes(data, result, top_k=top_k)\n+ mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)\n", "issue": "Missing '**kwargs' parameters passing to imshow_bboxes() in show_result() of rpn.py\nhttps://github.com/open-mmlab/mmdetection/blob/bde7b4b7eea9dd6ee91a486c6996b2d68662366d/mmdet/models/detectors/rpn.py#L155\r\n\r\n'**kwargs' parameters haven't passed to mmcv.imshow_bboxes() in show_result() of mmdetection/mmdet/models/detectors/rpn.py\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport warnings\n\nimport mmcv\nimport torch\nfrom mmcv.image import tensor2imgs\n\nfrom mmdet.core import bbox_mapping\nfrom ..builder import DETECTORS, build_backbone, build_head, build_neck\nfrom .base import BaseDetector\n\n\[email protected]_module()\nclass RPN(BaseDetector):\n \"\"\"Implementation of Region Proposal Network.\"\"\"\n\n def __init__(self,\n backbone,\n neck,\n rpn_head,\n train_cfg,\n test_cfg,\n pretrained=None,\n init_cfg=None):\n super(RPN, self).__init__(init_cfg)\n if pretrained:\n warnings.warn('DeprecationWarning: pretrained is deprecated, '\n 'please use \"init_cfg\" instead')\n backbone.pretrained = pretrained\n self.backbone = build_backbone(backbone)\n self.neck = build_neck(neck) if neck is not None else None\n rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None\n rpn_head.update(train_cfg=rpn_train_cfg)\n rpn_head.update(test_cfg=test_cfg.rpn)\n self.rpn_head = build_head(rpn_head)\n self.train_cfg = train_cfg\n self.test_cfg = test_cfg\n\n def extract_feat(self, img):\n \"\"\"Extract features.\n\n Args:\n img (torch.Tensor): Image tensor with shape (n, c, h ,w).\n\n Returns:\n list[torch.Tensor]: Multi-level features that may have\n different resolutions.\n \"\"\"\n x = self.backbone(img)\n if self.with_neck:\n x = self.neck(x)\n return x\n\n def forward_dummy(self, img):\n \"\"\"Dummy forward function.\"\"\"\n x = self.extract_feat(img)\n rpn_outs = self.rpn_head(x)\n return rpn_outs\n\n def forward_train(self,\n img,\n img_metas,\n gt_bboxes=None,\n gt_bboxes_ignore=None):\n \"\"\"\n Args:\n img (Tensor): Input images of shape (N, C, H, W).\n Typically these should be mean centered and std scaled.\n img_metas (list[dict]): A List of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n :class:`mmdet.datasets.pipelines.Collect`.\n gt_bboxes (list[Tensor]): Each item are the truth boxes for each\n image in [tl_x, tl_y, br_x, br_y] format.\n gt_bboxes_ignore (None | list[Tensor]): Specify which bounding\n boxes can be ignored when computing the loss.\n\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n if (isinstance(self.train_cfg.rpn, dict)\n and self.train_cfg.rpn.get('debug', False)):\n self.rpn_head.debug_imgs = tensor2imgs(img)\n\n x = self.extract_feat(img)\n losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None,\n gt_bboxes_ignore)\n return losses\n\n def simple_test(self, img, img_metas, rescale=False):\n \"\"\"Test function without test time augmentation.\n\n Args:\n imgs (list[torch.Tensor]): List of multiple images\n img_metas (list[dict]): List of image 
information.\n rescale (bool, optional): Whether to rescale the results.\n Defaults to False.\n\n Returns:\n list[np.ndarray]: proposals\n \"\"\"\n x = self.extract_feat(img)\n # get origin input shape to onnx dynamic input shape\n if torch.onnx.is_in_onnx_export():\n img_shape = torch._shape_as_tensor(img)[2:]\n img_metas[0]['img_shape_for_onnx'] = img_shape\n proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)\n if rescale:\n for proposals, meta in zip(proposal_list, img_metas):\n proposals[:, :4] /= proposals.new_tensor(meta['scale_factor'])\n if torch.onnx.is_in_onnx_export():\n return proposal_list\n\n return [proposal.cpu().numpy() for proposal in proposal_list]\n\n def aug_test(self, imgs, img_metas, rescale=False):\n \"\"\"Test function with test time augmentation.\n\n Args:\n imgs (list[torch.Tensor]): List of multiple images\n img_metas (list[dict]): List of image information.\n rescale (bool, optional): Whether to rescale the results.\n Defaults to False.\n\n Returns:\n list[np.ndarray]: proposals\n \"\"\"\n proposal_list = self.rpn_head.aug_test_rpn(\n self.extract_feats(imgs), img_metas)\n if not rescale:\n for proposals, img_meta in zip(proposal_list, img_metas[0]):\n img_shape = img_meta['img_shape']\n scale_factor = img_meta['scale_factor']\n flip = img_meta['flip']\n flip_direction = img_meta['flip_direction']\n proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape,\n scale_factor, flip,\n flip_direction)\n return [proposal.cpu().numpy() for proposal in proposal_list]\n\n def show_result(self, data, result, top_k=20, **kwargs):\n \"\"\"Show RPN proposals on the image.\n\n Args:\n data (str or np.ndarray): Image filename or loaded image.\n result (Tensor or tuple): The results to draw over `img`\n bbox_result or (bbox_result, segm_result).\n top_k (int): Plot the first k bboxes only\n if set positive. Default: 20\n\n Returns:\n np.ndarray: The image with bboxes drawn on it.\n \"\"\"\n mmcv.imshow_bboxes(data, result, top_k=top_k)\n", "path": "mmdet/models/detectors/rpn.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport warnings\n\nimport mmcv\nimport torch\nfrom mmcv.image import tensor2imgs\n\nfrom mmdet.core import bbox_mapping\nfrom ..builder import DETECTORS, build_backbone, build_head, build_neck\nfrom .base import BaseDetector\n\n\[email protected]_module()\nclass RPN(BaseDetector):\n \"\"\"Implementation of Region Proposal Network.\"\"\"\n\n def __init__(self,\n backbone,\n neck,\n rpn_head,\n train_cfg,\n test_cfg,\n pretrained=None,\n init_cfg=None):\n super(RPN, self).__init__(init_cfg)\n if pretrained:\n warnings.warn('DeprecationWarning: pretrained is deprecated, '\n 'please use \"init_cfg\" instead')\n backbone.pretrained = pretrained\n self.backbone = build_backbone(backbone)\n self.neck = build_neck(neck) if neck is not None else None\n rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None\n rpn_head.update(train_cfg=rpn_train_cfg)\n rpn_head.update(test_cfg=test_cfg.rpn)\n self.rpn_head = build_head(rpn_head)\n self.train_cfg = train_cfg\n self.test_cfg = test_cfg\n\n def extract_feat(self, img):\n \"\"\"Extract features.\n\n Args:\n img (torch.Tensor): Image tensor with shape (n, c, h ,w).\n\n Returns:\n list[torch.Tensor]: Multi-level features that may have\n different resolutions.\n \"\"\"\n x = self.backbone(img)\n if self.with_neck:\n x = self.neck(x)\n return x\n\n def forward_dummy(self, img):\n \"\"\"Dummy forward function.\"\"\"\n x = self.extract_feat(img)\n rpn_outs = self.rpn_head(x)\n return rpn_outs\n\n def forward_train(self,\n img,\n img_metas,\n gt_bboxes=None,\n gt_bboxes_ignore=None):\n \"\"\"\n Args:\n img (Tensor): Input images of shape (N, C, H, W).\n Typically these should be mean centered and std scaled.\n img_metas (list[dict]): A List of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n :class:`mmdet.datasets.pipelines.Collect`.\n gt_bboxes (list[Tensor]): Each item are the truth boxes for each\n image in [tl_x, tl_y, br_x, br_y] format.\n gt_bboxes_ignore (None | list[Tensor]): Specify which bounding\n boxes can be ignored when computing the loss.\n\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n if (isinstance(self.train_cfg.rpn, dict)\n and self.train_cfg.rpn.get('debug', False)):\n self.rpn_head.debug_imgs = tensor2imgs(img)\n\n x = self.extract_feat(img)\n losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None,\n gt_bboxes_ignore)\n return losses\n\n def simple_test(self, img, img_metas, rescale=False):\n \"\"\"Test function without test time augmentation.\n\n Args:\n imgs (list[torch.Tensor]): List of multiple images\n img_metas (list[dict]): List of image information.\n rescale (bool, optional): Whether to rescale the results.\n Defaults to False.\n\n Returns:\n list[np.ndarray]: proposals\n \"\"\"\n x = self.extract_feat(img)\n # get origin input shape to onnx dynamic input shape\n if torch.onnx.is_in_onnx_export():\n img_shape = torch._shape_as_tensor(img)[2:]\n img_metas[0]['img_shape_for_onnx'] = img_shape\n proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)\n if rescale:\n for proposals, meta in zip(proposal_list, img_metas):\n proposals[:, :4] /= proposals.new_tensor(meta['scale_factor'])\n if torch.onnx.is_in_onnx_export():\n return proposal_list\n\n return [proposal.cpu().numpy() for proposal in proposal_list]\n\n def aug_test(self, imgs, img_metas, rescale=False):\n \"\"\"Test function with test time 
augmentation.\n\n Args:\n imgs (list[torch.Tensor]): List of multiple images\n img_metas (list[dict]): List of image information.\n rescale (bool, optional): Whether to rescale the results.\n Defaults to False.\n\n Returns:\n list[np.ndarray]: proposals\n \"\"\"\n proposal_list = self.rpn_head.aug_test_rpn(\n self.extract_feats(imgs), img_metas)\n if not rescale:\n for proposals, img_meta in zip(proposal_list, img_metas[0]):\n img_shape = img_meta['img_shape']\n scale_factor = img_meta['scale_factor']\n flip = img_meta['flip']\n flip_direction = img_meta['flip_direction']\n proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape,\n scale_factor, flip,\n flip_direction)\n return [proposal.cpu().numpy() for proposal in proposal_list]\n\n def show_result(self, data, result, top_k=20, **kwargs):\n \"\"\"Show RPN proposals on the image.\n\n Args:\n data (str or np.ndarray): Image filename or loaded image.\n result (Tensor or tuple): The results to draw over `img`\n bbox_result or (bbox_result, segm_result).\n top_k (int): Plot the first k bboxes only\n if set positive. Default: 20\n\n Returns:\n np.ndarray: The image with bboxes drawn on it.\n \"\"\"\n mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)\n", "path": "mmdet/models/detectors/rpn.py"}]}
| 2,039 | 119 |
gh_patches_debug_5207
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-3219
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Python 3.12 to CI
## 🚀 Feature
Add Python 3.12 to CI: https://github.com/pytorch/ignite/blob/master/.github/workflows/unit-tests.yml
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/mnist/mnist.py`
Content:
```
1 from argparse import ArgumentParser
2
3 import torch
4 import torch.nn.functional as F
5 from torch import nn
6 from torch.optim import SGD
7 from torch.utils.data import DataLoader
8 from torchvision.datasets import MNIST
9 from torchvision.transforms import Compose, Normalize, ToTensor
10 from tqdm import tqdm
11
12 from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
13 from ignite.metrics import Accuracy, Loss
14 from ignite.utils import setup_logger
15
16
17 class Net(nn.Module):
18 def __init__(self):
19 super(Net, self).__init__()
20 self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
21 self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
22 self.conv2_drop = nn.Dropout2d()
23 self.fc1 = nn.Linear(320, 50)
24 self.fc2 = nn.Linear(50, 10)
25
26 def forward(self, x):
27 x = F.relu(F.max_pool2d(self.conv1(x), 2))
28 x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
29 x = x.view(-1, 320)
30 x = F.relu(self.fc1(x))
31 x = F.dropout(x, training=self.training)
32 x = self.fc2(x)
33 return F.log_softmax(x, dim=-1)
34
35
36 def get_data_loaders(train_batch_size, val_batch_size):
37 data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])
38
39 train_loader = DataLoader(
40 MNIST(download=True, root=".", transform=data_transform, train=True), batch_size=train_batch_size, shuffle=True
41 )
42
43 val_loader = DataLoader(
44 MNIST(download=False, root=".", transform=data_transform, train=False), batch_size=val_batch_size, shuffle=False
45 )
46 return train_loader, val_loader
47
48
49 def run(train_batch_size, val_batch_size, epochs, lr, momentum, log_interval):
50 train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)
51 model = Net()
52 device = "cpu"
53
54 if torch.cuda.is_available():
55 device = "cuda"
56
57 model.to(device) # Move model before creating optimizer
58 optimizer = SGD(model.parameters(), lr=lr, momentum=momentum)
59 criterion = nn.NLLLoss()
60 trainer = create_supervised_trainer(model, optimizer, criterion, device=device)
61 trainer.logger = setup_logger("trainer")
62
63 val_metrics = {"accuracy": Accuracy(), "nll": Loss(criterion)}
64 evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)
65 evaluator.logger = setup_logger("evaluator")
66
67 pbar = tqdm(initial=0, leave=False, total=len(train_loader), desc=f"ITERATION - loss: {0:.2f}")
68
69 @trainer.on(Events.ITERATION_COMPLETED(every=log_interval))
70 def log_training_loss(engine):
71 pbar.desc = f"ITERATION - loss: {engine.state.output:.2f}"
72 pbar.update(log_interval)
73
74 @trainer.on(Events.EPOCH_COMPLETED)
75 def log_training_results(engine):
76 pbar.refresh()
77 evaluator.run(train_loader)
78 metrics = evaluator.state.metrics
79 avg_accuracy = metrics["accuracy"]
80 avg_nll = metrics["nll"]
81 tqdm.write(
82 f"Training Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}"
83 )
84
85 @trainer.on(Events.EPOCH_COMPLETED)
86 def log_validation_results(engine):
87 evaluator.run(val_loader)
88 metrics = evaluator.state.metrics
89 avg_accuracy = metrics["accuracy"]
90 avg_nll = metrics["nll"]
91 tqdm.write(
92 f"Validation Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}"
93 )
94
95 pbar.n = pbar.last_print_n = 0
96
97 @trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)
98 def log_time(engine):
99 tqdm.write(f"{trainer.last_event_name.name} took { trainer.state.times[trainer.last_event_name.name]} seconds")
100
101 trainer.run(train_loader, max_epochs=epochs)
102 pbar.close()
103
104
105 if __name__ == "__main__":
106 parser = ArgumentParser()
107 parser.add_argument("--batch_size", type=int, default=64, help="input batch size for training (default: 64)")
108 parser.add_argument(
109 "--val_batch_size", type=int, default=1000, help="input batch size for validation (default: 1000)"
110 )
111 parser.add_argument("--epochs", type=int, default=10, help="number of epochs to train (default: 10)")
112 parser.add_argument("--lr", type=float, default=0.01, help="learning rate (default: 0.01)")
113 parser.add_argument("--momentum", type=float, default=0.5, help="SGD momentum (default: 0.5)")
114 parser.add_argument(
115 "--log_interval", type=int, default=10, help="how many batches to wait before logging training status"
116 )
117
118 args = parser.parse_args()
119
120 run(args.batch_size, args.val_batch_size, args.epochs, args.lr, args.momentum, args.log_interval)
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/mnist/mnist.py b/examples/mnist/mnist.py
--- a/examples/mnist/mnist.py
+++ b/examples/mnist/mnist.py
@@ -96,7 +96,7 @@
@trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)
def log_time(engine):
- tqdm.write(f"{trainer.last_event_name.name} took { trainer.state.times[trainer.last_event_name.name]} seconds")
+ tqdm.write(f"{trainer.last_event_name.name} took {trainer.state.times[trainer.last_event_name.name]} seconds")
trainer.run(train_loader, max_epochs=epochs)
pbar.close()
|
{"golden_diff": "diff --git a/examples/mnist/mnist.py b/examples/mnist/mnist.py\n--- a/examples/mnist/mnist.py\n+++ b/examples/mnist/mnist.py\n@@ -96,7 +96,7 @@\n \n @trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)\n def log_time(engine):\n- tqdm.write(f\"{trainer.last_event_name.name} took { trainer.state.times[trainer.last_event_name.name]} seconds\")\n+ tqdm.write(f\"{trainer.last_event_name.name} took {trainer.state.times[trainer.last_event_name.name]} seconds\")\n \n trainer.run(train_loader, max_epochs=epochs)\n pbar.close()\n", "issue": "Add python 3.12 to CI\n## \ud83d\ude80 Feature\r\n\r\nAdd python 3.12 to CI: https://github.com/pytorch/ignite/blob/master/.github/workflows/unit-tests.yml\r\n\n", "before_files": [{"content": "from argparse import ArgumentParser\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\nfrom torch.optim import SGD\nfrom torch.utils.data import DataLoader\nfrom torchvision.datasets import MNIST\nfrom torchvision.transforms import Compose, Normalize, ToTensor\nfrom tqdm import tqdm\n\nfrom ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events\nfrom ignite.metrics import Accuracy, Loss\nfrom ignite.utils import setup_logger\n\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x, dim=-1)\n\n\ndef get_data_loaders(train_batch_size, val_batch_size):\n data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])\n\n train_loader = DataLoader(\n MNIST(download=True, root=\".\", transform=data_transform, train=True), batch_size=train_batch_size, shuffle=True\n )\n\n val_loader = DataLoader(\n MNIST(download=False, root=\".\", transform=data_transform, train=False), batch_size=val_batch_size, shuffle=False\n )\n return train_loader, val_loader\n\n\ndef run(train_batch_size, val_batch_size, epochs, lr, momentum, log_interval):\n train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)\n model = Net()\n device = \"cpu\"\n\n if torch.cuda.is_available():\n device = \"cuda\"\n\n model.to(device) # Move model before creating optimizer\n optimizer = SGD(model.parameters(), lr=lr, momentum=momentum)\n criterion = nn.NLLLoss()\n trainer = create_supervised_trainer(model, optimizer, criterion, device=device)\n trainer.logger = setup_logger(\"trainer\")\n\n val_metrics = {\"accuracy\": Accuracy(), \"nll\": Loss(criterion)}\n evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)\n evaluator.logger = setup_logger(\"evaluator\")\n\n pbar = tqdm(initial=0, leave=False, total=len(train_loader), desc=f\"ITERATION - loss: {0:.2f}\")\n\n @trainer.on(Events.ITERATION_COMPLETED(every=log_interval))\n def log_training_loss(engine):\n pbar.desc = f\"ITERATION - loss: {engine.state.output:.2f}\"\n pbar.update(log_interval)\n\n @trainer.on(Events.EPOCH_COMPLETED)\n def log_training_results(engine):\n pbar.refresh()\n evaluator.run(train_loader)\n metrics = evaluator.state.metrics\n avg_accuracy = metrics[\"accuracy\"]\n avg_nll = metrics[\"nll\"]\n tqdm.write(\n f\"Training Results - Epoch: 
{engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}\"\n )\n\n @trainer.on(Events.EPOCH_COMPLETED)\n def log_validation_results(engine):\n evaluator.run(val_loader)\n metrics = evaluator.state.metrics\n avg_accuracy = metrics[\"accuracy\"]\n avg_nll = metrics[\"nll\"]\n tqdm.write(\n f\"Validation Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}\"\n )\n\n pbar.n = pbar.last_print_n = 0\n\n @trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)\n def log_time(engine):\n tqdm.write(f\"{trainer.last_event_name.name} took { trainer.state.times[trainer.last_event_name.name]} seconds\")\n\n trainer.run(train_loader, max_epochs=epochs)\n pbar.close()\n\n\nif __name__ == \"__main__\":\n parser = ArgumentParser()\n parser.add_argument(\"--batch_size\", type=int, default=64, help=\"input batch size for training (default: 64)\")\n parser.add_argument(\n \"--val_batch_size\", type=int, default=1000, help=\"input batch size for validation (default: 1000)\"\n )\n parser.add_argument(\"--epochs\", type=int, default=10, help=\"number of epochs to train (default: 10)\")\n parser.add_argument(\"--lr\", type=float, default=0.01, help=\"learning rate (default: 0.01)\")\n parser.add_argument(\"--momentum\", type=float, default=0.5, help=\"SGD momentum (default: 0.5)\")\n parser.add_argument(\n \"--log_interval\", type=int, default=10, help=\"how many batches to wait before logging training status\"\n )\n\n args = parser.parse_args()\n\n run(args.batch_size, args.val_batch_size, args.epochs, args.lr, args.momentum, args.log_interval)\n", "path": "examples/mnist/mnist.py"}], "after_files": [{"content": "from argparse import ArgumentParser\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\nfrom torch.optim import SGD\nfrom torch.utils.data import DataLoader\nfrom torchvision.datasets import MNIST\nfrom torchvision.transforms import Compose, Normalize, ToTensor\nfrom tqdm import tqdm\n\nfrom ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events\nfrom ignite.metrics import Accuracy, Loss\nfrom ignite.utils import setup_logger\n\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x, dim=-1)\n\n\ndef get_data_loaders(train_batch_size, val_batch_size):\n data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])\n\n train_loader = DataLoader(\n MNIST(download=True, root=\".\", transform=data_transform, train=True), batch_size=train_batch_size, shuffle=True\n )\n\n val_loader = DataLoader(\n MNIST(download=False, root=\".\", transform=data_transform, train=False), batch_size=val_batch_size, shuffle=False\n )\n return train_loader, val_loader\n\n\ndef run(train_batch_size, val_batch_size, epochs, lr, momentum, log_interval):\n train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)\n model = Net()\n device = \"cpu\"\n\n if torch.cuda.is_available():\n device = \"cuda\"\n\n model.to(device) # Move model before creating optimizer\n optimizer = SGD(model.parameters(), lr=lr, momentum=momentum)\n 
criterion = nn.NLLLoss()\n trainer = create_supervised_trainer(model, optimizer, criterion, device=device)\n trainer.logger = setup_logger(\"trainer\")\n\n val_metrics = {\"accuracy\": Accuracy(), \"nll\": Loss(criterion)}\n evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)\n evaluator.logger = setup_logger(\"evaluator\")\n\n pbar = tqdm(initial=0, leave=False, total=len(train_loader), desc=f\"ITERATION - loss: {0:.2f}\")\n\n @trainer.on(Events.ITERATION_COMPLETED(every=log_interval))\n def log_training_loss(engine):\n pbar.desc = f\"ITERATION - loss: {engine.state.output:.2f}\"\n pbar.update(log_interval)\n\n @trainer.on(Events.EPOCH_COMPLETED)\n def log_training_results(engine):\n pbar.refresh()\n evaluator.run(train_loader)\n metrics = evaluator.state.metrics\n avg_accuracy = metrics[\"accuracy\"]\n avg_nll = metrics[\"nll\"]\n tqdm.write(\n f\"Training Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}\"\n )\n\n @trainer.on(Events.EPOCH_COMPLETED)\n def log_validation_results(engine):\n evaluator.run(val_loader)\n metrics = evaluator.state.metrics\n avg_accuracy = metrics[\"accuracy\"]\n avg_nll = metrics[\"nll\"]\n tqdm.write(\n f\"Validation Results - Epoch: {engine.state.epoch} Avg accuracy: {avg_accuracy:.2f} Avg loss: {avg_nll:.2f}\"\n )\n\n pbar.n = pbar.last_print_n = 0\n\n @trainer.on(Events.EPOCH_COMPLETED | Events.COMPLETED)\n def log_time(engine):\n tqdm.write(f\"{trainer.last_event_name.name} took {trainer.state.times[trainer.last_event_name.name]} seconds\")\n\n trainer.run(train_loader, max_epochs=epochs)\n pbar.close()\n\n\nif __name__ == \"__main__\":\n parser = ArgumentParser()\n parser.add_argument(\"--batch_size\", type=int, default=64, help=\"input batch size for training (default: 64)\")\n parser.add_argument(\n \"--val_batch_size\", type=int, default=1000, help=\"input batch size for validation (default: 1000)\"\n )\n parser.add_argument(\"--epochs\", type=int, default=10, help=\"number of epochs to train (default: 10)\")\n parser.add_argument(\"--lr\", type=float, default=0.01, help=\"learning rate (default: 0.01)\")\n parser.add_argument(\"--momentum\", type=float, default=0.5, help=\"SGD momentum (default: 0.5)\")\n parser.add_argument(\n \"--log_interval\", type=int, default=10, help=\"how many batches to wait before logging training status\"\n )\n\n args = parser.parse_args()\n\n run(args.batch_size, args.val_batch_size, args.epochs, args.lr, args.momentum, args.log_interval)\n", "path": "examples/mnist/mnist.py"}]}
| 1,739 | 138 |
gh_patches_debug_61331
|
rasdani/github-patches
|
git_diff
|
nerfstudio-project__nerfstudio-913
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tanh instead of tan bug
Hi,
please change tanh (the hyperbolic tangent) to tan
https://github.com/nerfstudio-project/nerfstudio/blob/1a24f3e58c544bc0211563e770d425426284256c/nerfstudio/data/dataparsers/instant_ngp_dataparser.py#L133
thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nerfstudio/data/dataparsers/instant_ngp_dataparser.py`
Content:
```
1 # Copyright 2022 The Nerfstudio Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Data parser for instant ngp data"""
16
17 from __future__ import annotations
18
19 from dataclasses import dataclass, field
20 from pathlib import Path
21 from typing import Dict, Tuple, Type
22
23 import numpy as np
24 import torch
25 from rich.console import Console
26
27 from nerfstudio.cameras import camera_utils
28 from nerfstudio.cameras.cameras import Cameras, CameraType
29 from nerfstudio.data.dataparsers.base_dataparser import (
30 DataParser,
31 DataParserConfig,
32 DataparserOutputs,
33 )
34 from nerfstudio.data.scene_box import SceneBox
35 from nerfstudio.utils.io import load_from_json
36
37 CONSOLE = Console(width=120)
38
39
40 @dataclass
41 class InstantNGPDataParserConfig(DataParserConfig):
42 """Instant-NGP dataset parser config"""
43
44 _target: Type = field(default_factory=lambda: InstantNGP)
45 """target class to instantiate"""
46 data: Path = Path("data/ours/posterv2")
47 """Directory specifying location of data."""
48 scale_factor: float = 1.0
49 """How much to scale the camera origins by."""
50 scene_scale: float = 0.33
51 """How much to scale the scene."""
52
53
54 @dataclass
55 class InstantNGP(DataParser):
56 """Instant NGP Dataset"""
57
58 config: InstantNGPDataParserConfig
59
60 def _generate_dataparser_outputs(self, split="train"):
61
62 meta = load_from_json(self.config.data / "transforms.json")
63 image_filenames = []
64 poses = []
65 num_skipped_image_filenames = 0
66 for frame in meta["frames"]:
67 fname = self.config.data / Path(frame["file_path"])
68 if not fname:
69 num_skipped_image_filenames += 1
70 else:
71 image_filenames.append(fname)
72 poses.append(np.array(frame["transform_matrix"]))
73 if num_skipped_image_filenames >= 0:
74 CONSOLE.print(f"Skipping {num_skipped_image_filenames} files in dataset split {split}.")
75 assert (
76 len(image_filenames) != 0
77 ), """
78 No image files found.
79 You should check the file_paths in the transforms.json file to make sure they are correct.
80 """
81 poses = np.array(poses).astype(np.float32)
82 poses[:, :3, 3] *= self.config.scene_scale
83
84 camera_to_world = torch.from_numpy(poses[:, :3]) # camera to world transform
85
86 distortion_params = camera_utils.get_distortion_params(
87 k1=float(meta["k1"]), k2=float(meta["k2"]), p1=float(meta["p1"]), p2=float(meta["p2"])
88 )
89
90 # in x,y,z order
91 # assumes that the scene is centered at the origin
92 aabb_scale = meta["aabb_scale"]
93 scene_box = SceneBox(
94 aabb=torch.tensor(
95 [[-aabb_scale, -aabb_scale, -aabb_scale], [aabb_scale, aabb_scale, aabb_scale]], dtype=torch.float32
96 )
97 )
98
99 fl_x, fl_y = InstantNGP.get_focal_lengths(meta)
100
101 cameras = Cameras(
102 fx=float(fl_x),
103 fy=float(fl_y),
104 cx=float(meta["cx"]),
105 cy=float(meta["cy"]),
106 distortion_params=distortion_params,
107 height=int(meta["h"]),
108 width=int(meta["w"]),
109 camera_to_worlds=camera_to_world,
110 camera_type=CameraType.PERSPECTIVE,
111 )
112
113 # TODO(ethan): add alpha background color
114 dataparser_outputs = DataparserOutputs(
115 image_filenames=image_filenames,
116 cameras=cameras,
117 scene_box=scene_box,
118 )
119
120 return dataparser_outputs
121
122 @classmethod
123 def get_focal_lengths(cls, meta: Dict) -> Tuple[float, float]:
124 """Reads or computes the focal length from transforms dict.
125 Args:
126 meta: metadata from transforms.json file.
127 Returns:
128 Focal lengths in the x and y directions. Error is raised if these cannot be calculated.
129 """
130 fl_x, fl_y = 0, 0
131
132 def fov_to_focal_length(rad, res):
133 return 0.5 * res / np.tanh(0.5 * rad)
134
135 if "fl_x" in meta:
136 fl_x = meta["fl_x"]
137 elif "x_fov" in meta:
138 fl_x = fov_to_focal_length(np.deg2rad(meta["x_fov"]), meta["w"])
139 elif "camera_angle_x" in meta:
140 fl_x = fov_to_focal_length(meta["camera_angle_x"], meta["w"])
141
142 if "fl_y" in meta:
143 fl_y = meta["fl_y"]
144 elif "y_fov" in meta:
145 fl_y = fov_to_focal_length(np.deg2rad(meta["y_fov"]), meta["h"])
146 elif "camera_angle_y" in meta:
147 fl_y = fov_to_focal_length(meta["camera_angle_y"], meta["h"])
148
149 if fl_x == 0 or fl_y == 0:
150 raise AttributeError("Focal length cannot be calculated from transforms.json (missing fields).")
151
152 return (fl_x, fl_y)
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nerfstudio/data/dataparsers/instant_ngp_dataparser.py b/nerfstudio/data/dataparsers/instant_ngp_dataparser.py
--- a/nerfstudio/data/dataparsers/instant_ngp_dataparser.py
+++ b/nerfstudio/data/dataparsers/instant_ngp_dataparser.py
@@ -130,7 +130,7 @@
fl_x, fl_y = 0, 0
def fov_to_focal_length(rad, res):
- return 0.5 * res / np.tanh(0.5 * rad)
+ return 0.5 * res / np.tan(0.5 * rad)
if "fl_x" in meta:
fl_x = meta["fl_x"]
|
{"golden_diff": "diff --git a/nerfstudio/data/dataparsers/instant_ngp_dataparser.py b/nerfstudio/data/dataparsers/instant_ngp_dataparser.py\n--- a/nerfstudio/data/dataparsers/instant_ngp_dataparser.py\n+++ b/nerfstudio/data/dataparsers/instant_ngp_dataparser.py\n@@ -130,7 +130,7 @@\n fl_x, fl_y = 0, 0\n \n def fov_to_focal_length(rad, res):\n- return 0.5 * res / np.tanh(0.5 * rad)\n+ return 0.5 * res / np.tan(0.5 * rad)\n \n if \"fl_x\" in meta:\n fl_x = meta[\"fl_x\"]\n", "issue": "tanh instead of tan bug\nHi,\r\nplease change tanh (hyperbolic tan) to tan \r\n\r\nhttps://github.com/nerfstudio-project/nerfstudio/blob/1a24f3e58c544bc0211563e770d425426284256c/nerfstudio/data/dataparsers/instant_ngp_dataparser.py#L133\r\n\r\nthanks\n", "before_files": [{"content": "# Copyright 2022 The Nerfstudio Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Data parser for instant ngp data\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass, field\nfrom pathlib import Path\nfrom typing import Dict, Tuple, Type\n\nimport numpy as np\nimport torch\nfrom rich.console import Console\n\nfrom nerfstudio.cameras import camera_utils\nfrom nerfstudio.cameras.cameras import Cameras, CameraType\nfrom nerfstudio.data.dataparsers.base_dataparser import (\n DataParser,\n DataParserConfig,\n DataparserOutputs,\n)\nfrom nerfstudio.data.scene_box import SceneBox\nfrom nerfstudio.utils.io import load_from_json\n\nCONSOLE = Console(width=120)\n\n\n@dataclass\nclass InstantNGPDataParserConfig(DataParserConfig):\n \"\"\"Instant-NGP dataset parser config\"\"\"\n\n _target: Type = field(default_factory=lambda: InstantNGP)\n \"\"\"target class to instantiate\"\"\"\n data: Path = Path(\"data/ours/posterv2\")\n \"\"\"Directory specifying location of data.\"\"\"\n scale_factor: float = 1.0\n \"\"\"How much to scale the camera origins by.\"\"\"\n scene_scale: float = 0.33\n \"\"\"How much to scale the scene.\"\"\"\n\n\n@dataclass\nclass InstantNGP(DataParser):\n \"\"\"Instant NGP Dataset\"\"\"\n\n config: InstantNGPDataParserConfig\n\n def _generate_dataparser_outputs(self, split=\"train\"):\n\n meta = load_from_json(self.config.data / \"transforms.json\")\n image_filenames = []\n poses = []\n num_skipped_image_filenames = 0\n for frame in meta[\"frames\"]:\n fname = self.config.data / Path(frame[\"file_path\"])\n if not fname:\n num_skipped_image_filenames += 1\n else:\n image_filenames.append(fname)\n poses.append(np.array(frame[\"transform_matrix\"]))\n if num_skipped_image_filenames >= 0:\n CONSOLE.print(f\"Skipping {num_skipped_image_filenames} files in dataset split {split}.\")\n assert (\n len(image_filenames) != 0\n ), \"\"\"\n No image files found. 
\n You should check the file_paths in the transforms.json file to make sure they are correct.\n \"\"\"\n poses = np.array(poses).astype(np.float32)\n poses[:, :3, 3] *= self.config.scene_scale\n\n camera_to_world = torch.from_numpy(poses[:, :3]) # camera to world transform\n\n distortion_params = camera_utils.get_distortion_params(\n k1=float(meta[\"k1\"]), k2=float(meta[\"k2\"]), p1=float(meta[\"p1\"]), p2=float(meta[\"p2\"])\n )\n\n # in x,y,z order\n # assumes that the scene is centered at the origin\n aabb_scale = meta[\"aabb_scale\"]\n scene_box = SceneBox(\n aabb=torch.tensor(\n [[-aabb_scale, -aabb_scale, -aabb_scale], [aabb_scale, aabb_scale, aabb_scale]], dtype=torch.float32\n )\n )\n\n fl_x, fl_y = InstantNGP.get_focal_lengths(meta)\n\n cameras = Cameras(\n fx=float(fl_x),\n fy=float(fl_y),\n cx=float(meta[\"cx\"]),\n cy=float(meta[\"cy\"]),\n distortion_params=distortion_params,\n height=int(meta[\"h\"]),\n width=int(meta[\"w\"]),\n camera_to_worlds=camera_to_world,\n camera_type=CameraType.PERSPECTIVE,\n )\n\n # TODO(ethan): add alpha background color\n dataparser_outputs = DataparserOutputs(\n image_filenames=image_filenames,\n cameras=cameras,\n scene_box=scene_box,\n )\n\n return dataparser_outputs\n\n @classmethod\n def get_focal_lengths(cls, meta: Dict) -> Tuple[float, float]:\n \"\"\"Reads or computes the focal length from transforms dict.\n Args:\n meta: metadata from transforms.json file.\n Returns:\n Focal lengths in the x and y directions. Error is raised if these cannot be calculated.\n \"\"\"\n fl_x, fl_y = 0, 0\n\n def fov_to_focal_length(rad, res):\n return 0.5 * res / np.tanh(0.5 * rad)\n\n if \"fl_x\" in meta:\n fl_x = meta[\"fl_x\"]\n elif \"x_fov\" in meta:\n fl_x = fov_to_focal_length(np.deg2rad(meta[\"x_fov\"]), meta[\"w\"])\n elif \"camera_angle_x\" in meta:\n fl_x = fov_to_focal_length(meta[\"camera_angle_x\"], meta[\"w\"])\n\n if \"fl_y\" in meta:\n fl_y = meta[\"fl_y\"]\n elif \"y_fov\" in meta:\n fl_y = fov_to_focal_length(np.deg2rad(meta[\"y_fov\"]), meta[\"h\"])\n elif \"camera_angle_y\" in meta:\n fl_y = fov_to_focal_length(meta[\"camera_angle_y\"], meta[\"h\"])\n\n if fl_x == 0 or fl_y == 0:\n raise AttributeError(\"Focal length cannot be calculated from transforms.json (missing fields).\")\n\n return (fl_x, fl_y)\n", "path": "nerfstudio/data/dataparsers/instant_ngp_dataparser.py"}], "after_files": [{"content": "# Copyright 2022 The Nerfstudio Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Data parser for instant ngp data\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass, field\nfrom pathlib import Path\nfrom typing import Dict, Tuple, Type\n\nimport numpy as np\nimport torch\nfrom rich.console import Console\n\nfrom nerfstudio.cameras import camera_utils\nfrom nerfstudio.cameras.cameras import Cameras, CameraType\nfrom nerfstudio.data.dataparsers.base_dataparser import (\n DataParser,\n DataParserConfig,\n DataparserOutputs,\n)\nfrom nerfstudio.data.scene_box import SceneBox\nfrom nerfstudio.utils.io import load_from_json\n\nCONSOLE = Console(width=120)\n\n\n@dataclass\nclass InstantNGPDataParserConfig(DataParserConfig):\n \"\"\"Instant-NGP dataset parser config\"\"\"\n\n _target: Type = field(default_factory=lambda: InstantNGP)\n \"\"\"target class to instantiate\"\"\"\n data: Path = Path(\"data/ours/posterv2\")\n \"\"\"Directory specifying location of data.\"\"\"\n scale_factor: float = 1.0\n \"\"\"How much to scale the camera origins by.\"\"\"\n scene_scale: float = 0.33\n \"\"\"How much to scale the scene.\"\"\"\n\n\n@dataclass\nclass InstantNGP(DataParser):\n \"\"\"Instant NGP Dataset\"\"\"\n\n config: InstantNGPDataParserConfig\n\n def _generate_dataparser_outputs(self, split=\"train\"):\n\n meta = load_from_json(self.config.data / \"transforms.json\")\n image_filenames = []\n poses = []\n num_skipped_image_filenames = 0\n for frame in meta[\"frames\"]:\n fname = self.config.data / Path(frame[\"file_path\"])\n if not fname:\n num_skipped_image_filenames += 1\n else:\n image_filenames.append(fname)\n poses.append(np.array(frame[\"transform_matrix\"]))\n if num_skipped_image_filenames >= 0:\n CONSOLE.print(f\"Skipping {num_skipped_image_filenames} files in dataset split {split}.\")\n assert (\n len(image_filenames) != 0\n ), \"\"\"\n No image files found. 
\n You should check the file_paths in the transforms.json file to make sure they are correct.\n \"\"\"\n poses = np.array(poses).astype(np.float32)\n poses[:, :3, 3] *= self.config.scene_scale\n\n camera_to_world = torch.from_numpy(poses[:, :3]) # camera to world transform\n\n distortion_params = camera_utils.get_distortion_params(\n k1=float(meta[\"k1\"]), k2=float(meta[\"k2\"]), p1=float(meta[\"p1\"]), p2=float(meta[\"p2\"])\n )\n\n # in x,y,z order\n # assumes that the scene is centered at the origin\n aabb_scale = meta[\"aabb_scale\"]\n scene_box = SceneBox(\n aabb=torch.tensor(\n [[-aabb_scale, -aabb_scale, -aabb_scale], [aabb_scale, aabb_scale, aabb_scale]], dtype=torch.float32\n )\n )\n\n fl_x, fl_y = InstantNGP.get_focal_lengths(meta)\n\n cameras = Cameras(\n fx=float(fl_x),\n fy=float(fl_y),\n cx=float(meta[\"cx\"]),\n cy=float(meta[\"cy\"]),\n distortion_params=distortion_params,\n height=int(meta[\"h\"]),\n width=int(meta[\"w\"]),\n camera_to_worlds=camera_to_world,\n camera_type=CameraType.PERSPECTIVE,\n )\n\n # TODO(ethan): add alpha background color\n dataparser_outputs = DataparserOutputs(\n image_filenames=image_filenames,\n cameras=cameras,\n scene_box=scene_box,\n )\n\n return dataparser_outputs\n\n @classmethod\n def get_focal_lengths(cls, meta: Dict) -> Tuple[float, float]:\n \"\"\"Reads or computes the focal length from transforms dict.\n Args:\n meta: metadata from transforms.json file.\n Returns:\n Focal lengths in the x and y directions. Error is raised if these cannot be calculated.\n \"\"\"\n fl_x, fl_y = 0, 0\n\n def fov_to_focal_length(rad, res):\n return 0.5 * res / np.tan(0.5 * rad)\n\n if \"fl_x\" in meta:\n fl_x = meta[\"fl_x\"]\n elif \"x_fov\" in meta:\n fl_x = fov_to_focal_length(np.deg2rad(meta[\"x_fov\"]), meta[\"w\"])\n elif \"camera_angle_x\" in meta:\n fl_x = fov_to_focal_length(meta[\"camera_angle_x\"], meta[\"w\"])\n\n if \"fl_y\" in meta:\n fl_y = meta[\"fl_y\"]\n elif \"y_fov\" in meta:\n fl_y = fov_to_focal_length(np.deg2rad(meta[\"y_fov\"]), meta[\"h\"])\n elif \"camera_angle_y\" in meta:\n fl_y = fov_to_focal_length(meta[\"camera_angle_y\"], meta[\"h\"])\n\n if fl_x == 0 or fl_y == 0:\n raise AttributeError(\"Focal length cannot be calculated from transforms.json (missing fields).\")\n\n return (fl_x, fl_y)\n", "path": "nerfstudio/data/dataparsers/instant_ngp_dataparser.py"}]}
| 1,978 | 173 |
gh_patches_debug_20678
|
rasdani/github-patches
|
git_diff
|
freqtrade__freqtrade-7571
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow freqai to pull prediction_models from user_data
Currently, only classes present in `freqai/prediction_models` are available for backtesting/trading.
Allowing the user to define a custom model to be used with `--freqaimodel` would allow more flexibility.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/configuration/directory_operations.py`
Content:
```
1 import logging
2 import shutil
3 from pathlib import Path
4 from typing import Optional
5
6 from freqtrade.constants import USER_DATA_FILES, Config
7 from freqtrade.exceptions import OperationalException
8
9
10 logger = logging.getLogger(__name__)
11
12
13 def create_datadir(config: Config, datadir: Optional[str] = None) -> Path:
14
15 folder = Path(datadir) if datadir else Path(f"{config['user_data_dir']}/data")
16 if not datadir:
17 # set datadir
18 exchange_name = config.get('exchange', {}).get('name', '').lower()
19 folder = folder.joinpath(exchange_name)
20
21 if not folder.is_dir():
22 folder.mkdir(parents=True)
23 logger.info(f'Created data directory: {datadir}')
24 return folder
25
26
27 def chown_user_directory(directory: Path) -> None:
28 """
29 Use Sudo to change permissions of the home-directory if necessary
30 Only applies when running in docker!
31 """
32 import os
33 if os.environ.get('FT_APP_ENV') == 'docker':
34 try:
35 import subprocess
36 subprocess.check_output(
37 ['sudo', 'chown', '-R', 'ftuser:', str(directory.resolve())])
38 except Exception:
39 logger.warning(f"Could not chown {directory}")
40
41
42 def create_userdata_dir(directory: str, create_dir: bool = False) -> Path:
43 """
44 Create userdata directory structure.
45 if create_dir is True, then the parent-directory will be created if it does not exist.
46 Sub-directories will always be created if the parent directory exists.
47 Raises OperationalException if given a non-existing directory.
48 :param directory: Directory to check
49 :param create_dir: Create directory if it does not exist.
50 :return: Path object containing the directory
51 """
52 sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "logs",
53 "notebooks", "plot", "strategies", ]
54 folder = Path(directory)
55 chown_user_directory(folder)
56 if not folder.is_dir():
57 if create_dir:
58 folder.mkdir(parents=True)
59 logger.info(f'Created user-data directory: {folder}')
60 else:
61 raise OperationalException(
62 f"Directory `{folder}` does not exist. "
63 "Please use `freqtrade create-userdir` to create a user directory")
64
65 # Create required subdirectories
66 for f in sub_dirs:
67 subfolder = folder / f
68 if not subfolder.is_dir():
69 subfolder.mkdir(parents=False)
70 return folder
71
72
73 def copy_sample_files(directory: Path, overwrite: bool = False) -> None:
74 """
75 Copy files from templates to User data directory.
76 :param directory: Directory to copy data to
77 :param overwrite: Overwrite existing sample files
78 """
79 if not directory.is_dir():
80 raise OperationalException(f"Directory `{directory}` does not exist.")
81 sourcedir = Path(__file__).parents[1] / "templates"
82 for source, target in USER_DATA_FILES.items():
83 targetdir = directory / target
84 if not targetdir.is_dir():
85 raise OperationalException(f"Directory `{targetdir}` does not exist.")
86 targetfile = targetdir / source
87 if targetfile.exists():
88 if not overwrite:
89 logger.warning(f"File `{targetfile}` exists already, not deploying sample file.")
90 continue
91 logger.warning(f"File `{targetfile}` exists already, overwriting.")
92 shutil.copy(str(sourcedir / source), str(targetfile))
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/freqtrade/configuration/directory_operations.py b/freqtrade/configuration/directory_operations.py
--- a/freqtrade/configuration/directory_operations.py
+++ b/freqtrade/configuration/directory_operations.py
@@ -3,7 +3,8 @@
from pathlib import Path
from typing import Optional
-from freqtrade.constants import USER_DATA_FILES, Config
+from freqtrade.constants import (USER_DATA_FILES, USERPATH_FREQAIMODELS, USERPATH_HYPEROPTS,
+ USERPATH_NOTEBOOKS, USERPATH_STRATEGIES, Config)
from freqtrade.exceptions import OperationalException
@@ -49,8 +50,8 @@
:param create_dir: Create directory if it does not exist.
:return: Path object containing the directory
"""
- sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "logs",
- "notebooks", "plot", "strategies", ]
+ sub_dirs = ["backtest_results", "data", USERPATH_HYPEROPTS, "hyperopt_results", "logs",
+ USERPATH_NOTEBOOKS, "plot", USERPATH_STRATEGIES, USERPATH_FREQAIMODELS]
folder = Path(directory)
chown_user_directory(folder)
if not folder.is_dir():
|
{"golden_diff": "diff --git a/freqtrade/configuration/directory_operations.py b/freqtrade/configuration/directory_operations.py\n--- a/freqtrade/configuration/directory_operations.py\n+++ b/freqtrade/configuration/directory_operations.py\n@@ -3,7 +3,8 @@\n from pathlib import Path\n from typing import Optional\n \n-from freqtrade.constants import USER_DATA_FILES, Config\n+from freqtrade.constants import (USER_DATA_FILES, USERPATH_FREQAIMODELS, USERPATH_HYPEROPTS,\n+ USERPATH_NOTEBOOKS, USERPATH_STRATEGIES, Config)\n from freqtrade.exceptions import OperationalException\n \n \n@@ -49,8 +50,8 @@\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n- sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"logs\",\n- \"notebooks\", \"plot\", \"strategies\", ]\n+ sub_dirs = [\"backtest_results\", \"data\", USERPATH_HYPEROPTS, \"hyperopt_results\", \"logs\",\n+ USERPATH_NOTEBOOKS, \"plot\", USERPATH_STRATEGIES, USERPATH_FREQAIMODELS]\n folder = Path(directory)\n chown_user_directory(folder)\n if not folder.is_dir():\n", "issue": "Allow freqai to pull prediction_models from user_data\nCurrently, only classes present in `freqai/prediction_models` are available for backtesting/trading.\r\nAllowing the user to define a custom model to be used with `--freqaimodel` would allow more flexibility.\n", "before_files": [{"content": "import logging\nimport shutil\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom freqtrade.constants import USER_DATA_FILES, Config\nfrom freqtrade.exceptions import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_datadir(config: Config, datadir: Optional[str] = None) -> Path:\n\n folder = Path(datadir) if datadir else Path(f\"{config['user_data_dir']}/data\")\n if not datadir:\n # set datadir\n exchange_name = config.get('exchange', {}).get('name', '').lower()\n folder = folder.joinpath(exchange_name)\n\n if not folder.is_dir():\n folder.mkdir(parents=True)\n logger.info(f'Created data directory: {datadir}')\n return folder\n\n\ndef chown_user_directory(directory: Path) -> None:\n \"\"\"\n Use Sudo to change permissions of the home-directory if necessary\n Only applies when running in docker!\n \"\"\"\n import os\n if os.environ.get('FT_APP_ENV') == 'docker':\n try:\n import subprocess\n subprocess.check_output(\n ['sudo', 'chown', '-R', 'ftuser:', str(directory.resolve())])\n except Exception:\n logger.warning(f\"Could not chown {directory}\")\n\n\ndef create_userdata_dir(directory: str, create_dir: bool = False) -> Path:\n \"\"\"\n Create userdata directory structure.\n if create_dir is True, then the parent-directory will be created if it does not exist.\n Sub-directories will always be created if the parent directory exists.\n Raises OperationalException if given a non-existing directory.\n :param directory: Directory to check\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"logs\",\n \"notebooks\", \"plot\", \"strategies\", ]\n folder = Path(directory)\n chown_user_directory(folder)\n if not folder.is_dir():\n if create_dir:\n folder.mkdir(parents=True)\n logger.info(f'Created user-data directory: {folder}')\n else:\n raise OperationalException(\n f\"Directory `{folder}` does not exist. 
\"\n \"Please use `freqtrade create-userdir` to create a user directory\")\n\n # Create required subdirectories\n for f in sub_dirs:\n subfolder = folder / f\n if not subfolder.is_dir():\n subfolder.mkdir(parents=False)\n return folder\n\n\ndef copy_sample_files(directory: Path, overwrite: bool = False) -> None:\n \"\"\"\n Copy files from templates to User data directory.\n :param directory: Directory to copy data to\n :param overwrite: Overwrite existing sample files\n \"\"\"\n if not directory.is_dir():\n raise OperationalException(f\"Directory `{directory}` does not exist.\")\n sourcedir = Path(__file__).parents[1] / \"templates\"\n for source, target in USER_DATA_FILES.items():\n targetdir = directory / target\n if not targetdir.is_dir():\n raise OperationalException(f\"Directory `{targetdir}` does not exist.\")\n targetfile = targetdir / source\n if targetfile.exists():\n if not overwrite:\n logger.warning(f\"File `{targetfile}` exists already, not deploying sample file.\")\n continue\n logger.warning(f\"File `{targetfile}` exists already, overwriting.\")\n shutil.copy(str(sourcedir / source), str(targetfile))\n", "path": "freqtrade/configuration/directory_operations.py"}], "after_files": [{"content": "import logging\nimport shutil\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom freqtrade.constants import (USER_DATA_FILES, USERPATH_FREQAIMODELS, USERPATH_HYPEROPTS,\n USERPATH_NOTEBOOKS, USERPATH_STRATEGIES, Config)\nfrom freqtrade.exceptions import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_datadir(config: Config, datadir: Optional[str] = None) -> Path:\n\n folder = Path(datadir) if datadir else Path(f\"{config['user_data_dir']}/data\")\n if not datadir:\n # set datadir\n exchange_name = config.get('exchange', {}).get('name', '').lower()\n folder = folder.joinpath(exchange_name)\n\n if not folder.is_dir():\n folder.mkdir(parents=True)\n logger.info(f'Created data directory: {datadir}')\n return folder\n\n\ndef chown_user_directory(directory: Path) -> None:\n \"\"\"\n Use Sudo to change permissions of the home-directory if necessary\n Only applies when running in docker!\n \"\"\"\n import os\n if os.environ.get('FT_APP_ENV') == 'docker':\n try:\n import subprocess\n subprocess.check_output(\n ['sudo', 'chown', '-R', 'ftuser:', str(directory.resolve())])\n except Exception:\n logger.warning(f\"Could not chown {directory}\")\n\n\ndef create_userdata_dir(directory: str, create_dir: bool = False) -> Path:\n \"\"\"\n Create userdata directory structure.\n if create_dir is True, then the parent-directory will be created if it does not exist.\n Sub-directories will always be created if the parent directory exists.\n Raises OperationalException if given a non-existing directory.\n :param directory: Directory to check\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n sub_dirs = [\"backtest_results\", \"data\", USERPATH_HYPEROPTS, \"hyperopt_results\", \"logs\",\n USERPATH_NOTEBOOKS, \"plot\", USERPATH_STRATEGIES, USERPATH_FREQAIMODELS]\n folder = Path(directory)\n chown_user_directory(folder)\n if not folder.is_dir():\n if create_dir:\n folder.mkdir(parents=True)\n logger.info(f'Created user-data directory: {folder}')\n else:\n raise OperationalException(\n f\"Directory `{folder}` does not exist. 
\"\n \"Please use `freqtrade create-userdir` to create a user directory\")\n\n # Create required subdirectories\n for f in sub_dirs:\n subfolder = folder / f\n if not subfolder.is_dir():\n subfolder.mkdir(parents=False)\n return folder\n\n\ndef copy_sample_files(directory: Path, overwrite: bool = False) -> None:\n \"\"\"\n Copy files from templates to User data directory.\n :param directory: Directory to copy data to\n :param overwrite: Overwrite existing sample files\n \"\"\"\n if not directory.is_dir():\n raise OperationalException(f\"Directory `{directory}` does not exist.\")\n sourcedir = Path(__file__).parents[1] / \"templates\"\n for source, target in USER_DATA_FILES.items():\n targetdir = directory / target\n if not targetdir.is_dir():\n raise OperationalException(f\"Directory `{targetdir}` does not exist.\")\n targetfile = targetdir / source\n if targetfile.exists():\n if not overwrite:\n logger.warning(f\"File `{targetfile}` exists already, not deploying sample file.\")\n continue\n logger.warning(f\"File `{targetfile}` exists already, overwriting.\")\n shutil.copy(str(sourcedir / source), str(targetfile))\n", "path": "freqtrade/configuration/directory_operations.py"}]}
| 1,238 | 276 |
gh_patches_debug_1797
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-5346
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove all warnings from pytest
When running `tox` we see these warnings in the summary.
We should use the `request` fixture and access `request.config` instead.
Docs: https://docs.pytest.org/en/latest/fixture.html#request-context
Change log: https://docs.pytest.org/en/latest/deprecations.html#pytest-config-global
```
====================================================================================== warnings summary ======================================================================================
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_index
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_index_no_directory_urls
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_no_directory_urls
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page_signlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_singlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only_singlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_restructured_text
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_restructured_text_invalid
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page_singlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_singlehtml
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only_htmldir
readthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only_singlehtml
/home/humitos/rtfd/code/readthedocs-corporate/.tox/py36/readthedocs.org/readthedocs/rtd_tests/tests/test_core_tags.py:19: PytestDeprecationWarning: the `pytest.config` global is deprecated. Please use `request.config` or `pytest_configure` (if you're a pytest plugin) instead.
scheme=pytest.config.option.url_scheme,
-- Docs: https://docs.pytest.org/en/latest/warnings.html
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conftest.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import pytest
3 from django.conf import settings
4 from rest_framework.test import APIClient
5
6 try:
7 # TODO: this file is read/executed even when called from ``readthedocsinc``,
8 # so it's overriding the options that we are defining in the ``conftest.py``
9 # from the corporate site. We need to find a better way to avoid this.
10 import readthedocsinc
11 PYTEST_OPTIONS = ()
12 except ImportError:
13 PYTEST_OPTIONS = (
14 # Options to set test environment
15 ('community', True),
16 ('corporate', False),
17 ('environment', 'readthedocs'),
18
19 ('url_scheme', 'http'),
20 )
21
22
23 def pytest_addoption(parser):
24 parser.addoption(
25 '--including-search',
26 action='store_true',
27 dest='searchtests',
28 default=False, help='enable search tests',
29 )
30
31
32 def pytest_configure(config):
33 if not config.option.searchtests:
34 # Include ``not search``` to parameters so search tests do not perform
35 markexpr = getattr(config.option, 'markexpr')
36 if markexpr:
37 markexpr += ' and not search'
38 else:
39 markexpr = 'not search'
40 setattr(config.option, 'markexpr', markexpr.strip())
41
42 for option, value in PYTEST_OPTIONS:
43 setattr(config.option, option, value)
44
45
46 @pytest.fixture(autouse=True)
47 def settings_modification(settings):
48 settings.CELERY_ALWAYS_EAGER = True
49
50 @pytest.fixture
51 def api_client():
52 return APIClient()
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -47,6 +47,12 @@
def settings_modification(settings):
settings.CELERY_ALWAYS_EAGER = True
+
@pytest.fixture
def api_client():
return APIClient()
+
+
[email protected](scope="class")
+def url_scheme(request):
+ request.cls.url_scheme = request.config.option.url_scheme
|
{"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -47,6 +47,12 @@\n def settings_modification(settings):\n settings.CELERY_ALWAYS_EAGER = True\n \n+\n @pytest.fixture\n def api_client():\n return APIClient()\n+\n+\[email protected](scope=\"class\")\n+def url_scheme(request):\n+ request.cls.url_scheme = request.config.option.url_scheme\n", "issue": "Remove all warnings from pytest\nWhen running `tox` we see these warnings in the summary.\r\n\r\nWe should use `request` fixture and access to `request.config` instead.\r\n\r\nDocs: https://docs.pytest.org/en/latest/fixture.html#request-context\r\nChange log: https://docs.pytest.org/en/latest/deprecations.html#pytest-config-global\r\n\r\n\r\n```\r\n====================================================================================== warnings summary ======================================================================================\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_index\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_index_no_directory_urls\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_mkdocs_no_directory_urls\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_and_page_signlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_and_version_singlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_project_only_singlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_restructured_text\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_restructured_text_invalid\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_and_page_singlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_and_version_singlehtml\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only_htmldir\r\nreadthedocs/rtd_tests/tests/test_core_tags.py::CoreTagsTests::test_translation_project_only_singlehtml\r\n /home/humitos/rtfd/code/readthedocs-corporate/.tox/py36/readthedocs.org/readthedocs/rtd_tests/tests/test_core_tags.py:19: PytestDeprecationWarning: the `pytest.config` global is deprecated. 
Please use `request.config` or `pytest_configure` (if you're a pytest plugin) instead.\r\n scheme=pytest.config.option.url_scheme,\r\n\r\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport pytest\nfrom django.conf import settings\nfrom rest_framework.test import APIClient\n\ntry:\n # TODO: this file is read/executed even when called from ``readthedocsinc``,\n # so it's overriding the options that we are defining in the ``conftest.py``\n # from the corporate site. We need to find a better way to avoid this.\n import readthedocsinc\n PYTEST_OPTIONS = ()\nexcept ImportError:\n PYTEST_OPTIONS = (\n # Options to set test environment\n ('community', True),\n ('corporate', False),\n ('environment', 'readthedocs'),\n\n ('url_scheme', 'http'),\n )\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n '--including-search',\n action='store_true',\n dest='searchtests',\n default=False, help='enable search tests',\n )\n\n\ndef pytest_configure(config):\n if not config.option.searchtests:\n # Include ``not search``` to parameters so search tests do not perform\n markexpr = getattr(config.option, 'markexpr')\n if markexpr:\n markexpr += ' and not search'\n else:\n markexpr = 'not search'\n setattr(config.option, 'markexpr', markexpr.strip())\n\n for option, value in PYTEST_OPTIONS:\n setattr(config.option, option, value)\n\n\[email protected](autouse=True)\ndef settings_modification(settings):\n settings.CELERY_ALWAYS_EAGER = True\n\[email protected]\ndef api_client():\n return APIClient()\n", "path": "conftest.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport pytest\nfrom django.conf import settings\nfrom rest_framework.test import APIClient\n\ntry:\n # TODO: this file is read/executed even when called from ``readthedocsinc``,\n # so it's overriding the options that we are defining in the ``conftest.py``\n # from the corporate site. We need to find a better way to avoid this.\n import readthedocsinc\n PYTEST_OPTIONS = ()\nexcept ImportError:\n PYTEST_OPTIONS = (\n # Options to set test environment\n ('community', True),\n ('corporate', False),\n ('environment', 'readthedocs'),\n\n ('url_scheme', 'http'),\n )\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n '--including-search',\n action='store_true',\n dest='searchtests',\n default=False, help='enable search tests',\n )\n\n\ndef pytest_configure(config):\n if not config.option.searchtests:\n # Include ``not search``` to parameters so search tests do not perform\n markexpr = getattr(config.option, 'markexpr')\n if markexpr:\n markexpr += ' and not search'\n else:\n markexpr = 'not search'\n setattr(config.option, 'markexpr', markexpr.strip())\n\n for option, value in PYTEST_OPTIONS:\n setattr(config.option, option, value)\n\n\[email protected](autouse=True)\ndef settings_modification(settings):\n settings.CELERY_ALWAYS_EAGER = True\n\n\[email protected]\ndef api_client():\n return APIClient()\n\n\[email protected](scope=\"class\")\ndef url_scheme(request):\n request.cls.url_scheme = request.config.option.url_scheme\n", "path": "conftest.py"}]}
| 1,437 | 101 |
gh_patches_debug_22245
|
rasdani/github-patches
|
git_diff
|
saulpw__visidata-543
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TSV file with column name "length" causes TypeError
**Small description**
Files in TSV format containing a column named `length` cannot be loaded.
**Expected result**
See content of TSV file.
**Actual result with screenshot**
An empty file is shown. In the footer line it says:
```
TypeError: 'property' object is not callable
```
**Steps to reproduce with sample data and a .vd**
Create a file named `test.tsv` with this content:
```
length
1
```
Then, try to open it:
```
vd test.tsv
```
**Additional context**
version 1.5.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `visidata/utils.py`
Content:
```
1 import operator
2
3 'Various helper classes and functions.'
4
5 __all__ = ['AttrDict', 'joinSheetnames', 'moveListItem', 'namedlist', 'classproperty']
6
7
8 class AttrDict(dict):
9 'Augment a dict with more convenient .attr syntax. not-present keys return None.'
10 def __getattr__(self, k):
11 try:
12 return self[k]
13 except KeyError:
14 return None
15
16 def __setattr__(self, k, v):
17 self[k] = v
18
19 def __dir__(self):
20 return self.keys()
21
22
23 class classproperty(property):
24 def __get__(self, cls, obj):
25 return classmethod(self.fget).__get__(None, obj or cls)()
26
27
28 def joinSheetnames(*sheetnames):
29 'Concatenate sheet names in a standard way'
30 return '_'.join(str(x) for x in sheetnames)
31
32
33 def moveListItem(L, fromidx, toidx):
34 "Move element within list `L` and return element's new index."
35 toidx = min(max(toidx, 0), len(L)-1)
36 fromidx = min(max(fromidx, 0), len(L)-1)
37 r = L.pop(fromidx)
38 L.insert(toidx, r)
39 return toidx
40
41
42 class OnExit:
43 '"with OnExit(func, ...):" calls func(...) when the context is exited'
44 def __init__(self, func, *args, **kwargs):
45 self.func = func
46 self.args = args
47 self.kwargs = kwargs
48
49 def __enter__(self):
50 return self
51
52 def __exit__(self, exc_type, exc_value, exc_traceback):
53 try:
54 self.func(*self.args, **self.kwargs)
55 except Exception as e:
56 vd.exceptionCaught(e)
57
58
59 def itemsetter(i):
60 def g(obj, v):
61 obj[i] = v
62 return g
63
64
65 def namedlist(objname, fieldnames):
66 'like namedtuple but editable'
67 class NamedListTemplate(list):
68 __name__ = objname
69 _fields = fieldnames
70
71 def __init__(self, L=None, **kwargs):
72 if L is None:
73 L = [None]*self.length()
74 elif len(L) < self.length():
75 L.extend([None]*(self.length() - len(L)))
76 super().__init__(L)
77 for k, v in kwargs.items():
78 setattr(self, k, v)
79
80 @classmethod
81 def length(cls):
82 return len(cls._fields)
83
84 def __getattr__(self, k):
85 'to enable .fieldname'
86 try:
87 return self[self._fields.index(k)]
88 except ValueError:
89 raise AttributeError
90
91 def __setattr__(self, k, v):
92 'to enable .fieldname ='
93 try:
94 self[self._fields.index(k)] = v
95 except ValueError:
96 super().__setattr__(k, v)
97
98 for i, attrname in enumerate(fieldnames):
99 # create property getter/setter for each field
100 setattr(NamedListTemplate, attrname, property(operator.itemgetter(i), itemsetter(i)))
101
102 return NamedListTemplate
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/visidata/utils.py b/visidata/utils.py
--- a/visidata/utils.py
+++ b/visidata/utils.py
@@ -70,17 +70,13 @@
def __init__(self, L=None, **kwargs):
if L is None:
- L = [None]*self.length()
- elif len(L) < self.length():
- L.extend([None]*(self.length() - len(L)))
+ L = [None]*len(self._fields)
+ elif len(L) < len(self._fields):
+ L.extend([None]*(len(self._fields) - len(L)))
super().__init__(L)
for k, v in kwargs.items():
setattr(self, k, v)
- @classmethod
- def length(cls):
- return len(cls._fields)
-
def __getattr__(self, k):
'to enable .fieldname'
try:
@@ -95,8 +91,4 @@
except ValueError:
super().__setattr__(k, v)
- for i, attrname in enumerate(fieldnames):
- # create property getter/setter for each field
- setattr(NamedListTemplate, attrname, property(operator.itemgetter(i), itemsetter(i)))
-
return NamedListTemplate
|
{"golden_diff": "diff --git a/visidata/utils.py b/visidata/utils.py\n--- a/visidata/utils.py\n+++ b/visidata/utils.py\n@@ -70,17 +70,13 @@\n \n def __init__(self, L=None, **kwargs):\n if L is None:\n- L = [None]*self.length()\n- elif len(L) < self.length():\n- L.extend([None]*(self.length() - len(L)))\n+ L = [None]*len(self._fields)\n+ elif len(L) < len(self._fields):\n+ L.extend([None]*(len(self._fields) - len(L)))\n super().__init__(L)\n for k, v in kwargs.items():\n setattr(self, k, v)\n \n- @classmethod\n- def length(cls):\n- return len(cls._fields)\n-\n def __getattr__(self, k):\n 'to enable .fieldname'\n try:\n@@ -95,8 +91,4 @@\n except ValueError:\n super().__setattr__(k, v)\n \n- for i, attrname in enumerate(fieldnames):\n- # create property getter/setter for each field\n- setattr(NamedListTemplate, attrname, property(operator.itemgetter(i), itemsetter(i)))\n-\n return NamedListTemplate\n", "issue": "TSV file with column name \"length\" causes TypeError\n**Small description**\r\nFiles in TSV format containing a column named `length` cannot be loaded.\r\n\r\n**Expected result**\r\nSee content of TSV file.\r\n\r\n**Actual result with screenshot**\r\nAn empty file is shown. In the footer line it says:\r\n```\r\nTypeError: 'property' object is not callable\r\n```\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\nCreate a file named `test.tsv` with this content:\r\n```\r\nlength\r\n1\r\n```\r\nThen, try to open it:\r\n```\r\nvd test.tsv\r\n```\r\n\r\n**Additional context**\r\nversion 1.5.2\r\n\n", "before_files": [{"content": "import operator\n\n'Various helper classes and functions.'\n\n__all__ = ['AttrDict', 'joinSheetnames', 'moveListItem', 'namedlist', 'classproperty']\n\n\nclass AttrDict(dict):\n 'Augment a dict with more convenient .attr syntax. not-present keys return None.'\n def __getattr__(self, k):\n try:\n return self[k]\n except KeyError:\n return None\n\n def __setattr__(self, k, v):\n self[k] = v\n\n def __dir__(self):\n return self.keys()\n\n\nclass classproperty(property):\n def __get__(self, cls, obj):\n return classmethod(self.fget).__get__(None, obj or cls)()\n\n\ndef joinSheetnames(*sheetnames):\n 'Concatenate sheet names in a standard way'\n return '_'.join(str(x) for x in sheetnames)\n\n\ndef moveListItem(L, fromidx, toidx):\n \"Move element within list `L` and return element's new index.\"\n toidx = min(max(toidx, 0), len(L)-1)\n fromidx = min(max(fromidx, 0), len(L)-1)\n r = L.pop(fromidx)\n L.insert(toidx, r)\n return toidx\n\n\nclass OnExit:\n '\"with OnExit(func, ...):\" calls func(...) 
when the context is exited'\n def __init__(self, func, *args, **kwargs):\n self.func = func\n self.args = args\n self.kwargs = kwargs\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_value, exc_traceback):\n try:\n self.func(*self.args, **self.kwargs)\n except Exception as e:\n vd.exceptionCaught(e)\n\n\ndef itemsetter(i):\n def g(obj, v):\n obj[i] = v\n return g\n\n\ndef namedlist(objname, fieldnames):\n 'like namedtuple but editable'\n class NamedListTemplate(list):\n __name__ = objname\n _fields = fieldnames\n\n def __init__(self, L=None, **kwargs):\n if L is None:\n L = [None]*self.length()\n elif len(L) < self.length():\n L.extend([None]*(self.length() - len(L)))\n super().__init__(L)\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n @classmethod\n def length(cls):\n return len(cls._fields)\n\n def __getattr__(self, k):\n 'to enable .fieldname'\n try:\n return self[self._fields.index(k)]\n except ValueError:\n raise AttributeError\n\n def __setattr__(self, k, v):\n 'to enable .fieldname ='\n try:\n self[self._fields.index(k)] = v\n except ValueError:\n super().__setattr__(k, v)\n\n for i, attrname in enumerate(fieldnames):\n # create property getter/setter for each field\n setattr(NamedListTemplate, attrname, property(operator.itemgetter(i), itemsetter(i)))\n\n return NamedListTemplate\n", "path": "visidata/utils.py"}], "after_files": [{"content": "import operator\n\n'Various helper classes and functions.'\n\n__all__ = ['AttrDict', 'joinSheetnames', 'moveListItem', 'namedlist', 'classproperty']\n\n\nclass AttrDict(dict):\n 'Augment a dict with more convenient .attr syntax. not-present keys return None.'\n def __getattr__(self, k):\n try:\n return self[k]\n except KeyError:\n return None\n\n def __setattr__(self, k, v):\n self[k] = v\n\n def __dir__(self):\n return self.keys()\n\n\nclass classproperty(property):\n def __get__(self, cls, obj):\n return classmethod(self.fget).__get__(None, obj or cls)()\n\n\ndef joinSheetnames(*sheetnames):\n 'Concatenate sheet names in a standard way'\n return '_'.join(str(x) for x in sheetnames)\n\n\ndef moveListItem(L, fromidx, toidx):\n \"Move element within list `L` and return element's new index.\"\n toidx = min(max(toidx, 0), len(L)-1)\n fromidx = min(max(fromidx, 0), len(L)-1)\n r = L.pop(fromidx)\n L.insert(toidx, r)\n return toidx\n\n\nclass OnExit:\n '\"with OnExit(func, ...):\" calls func(...) 
when the context is exited'\n def __init__(self, func, *args, **kwargs):\n self.func = func\n self.args = args\n self.kwargs = kwargs\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_value, exc_traceback):\n try:\n self.func(*self.args, **self.kwargs)\n except Exception as e:\n vd.exceptionCaught(e)\n\n\ndef itemsetter(i):\n def g(obj, v):\n obj[i] = v\n return g\n\n\ndef namedlist(objname, fieldnames):\n 'like namedtuple but editable'\n class NamedListTemplate(list):\n __name__ = objname\n _fields = fieldnames\n\n def __init__(self, L=None, **kwargs):\n if L is None:\n L = [None]*len(self._fields)\n elif len(L) < len(self._fields):\n L.extend([None]*(len(self._fields) - len(L)))\n super().__init__(L)\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n def __getattr__(self, k):\n 'to enable .fieldname'\n try:\n return self[self._fields.index(k)]\n except ValueError:\n raise AttributeError\n\n def __setattr__(self, k, v):\n 'to enable .fieldname ='\n try:\n self[self._fields.index(k)] = v\n except ValueError:\n super().__setattr__(k, v)\n\n return NamedListTemplate\n", "path": "visidata/utils.py"}]}
| 1,277 | 282 |
gh_patches_debug_8342
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__models-799
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
policy_gradient: formatting problems in the theory introduction section
https://github.com/PaddlePaddle/models/tree/develop/fluid/policy_gradient
In the introduction section of the policy_gradient demo, the formatting appears to be broken. Could you please adjust it? Alternatively, is there a way to view the original document? @wanghaoshuang @lcy-seso
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fluid/policy_gradient/brain.py`
Content:
```
1 import numpy as np
2 import paddle.v2 as paddle
3 import paddle.fluid as fluid
4 # reproducible
5 np.random.seed(1)
6
7
8 class PolicyGradient:
9 def __init__(
10 self,
11 n_actions,
12 n_features,
13 learning_rate=0.01,
14 reward_decay=0.95,
15 output_graph=False, ):
16 self.n_actions = n_actions
17 self.n_features = n_features
18 self.lr = learning_rate
19 self.gamma = reward_decay
20
21 self.ep_obs, self.ep_as, self.ep_rs = [], [], []
22
23 self.place = fluid.CPUPlace()
24 self.exe = fluid.Executor(self.place)
25
26 def build_net(self):
27
28 obs = fluid.layers.data(
29 name='obs', shape=[self.n_features], dtype='float32')
30 acts = fluid.layers.data(name='acts', shape=[1], dtype='int64')
31 vt = fluid.layers.data(name='vt', shape=[1], dtype='float32')
32 # fc1
33 fc1 = fluid.layers.fc(
34 input=obs,
35 size=10,
36 act="tanh" # tanh activation
37 )
38 # fc2
39 self.all_act_prob = fluid.layers.fc(input=fc1,
40 size=self.n_actions,
41 act="softmax")
42 # to maximize total reward (log_p * R) is to minimize -(log_p * R)
43 neg_log_prob = fluid.layers.cross_entropy(
44 input=self.all_act_prob,
45 label=acts) # this is negative log of chosen action
46 neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)
47 loss = fluid.layers.reduce_mean(
48 x=neg_log_prob_weight) # reward guided loss
49
50 sgd_optimizer = fluid.optimizer.SGD(self.lr)
51 sgd_optimizer.minimize(loss)
52 self.exe.run(fluid.default_startup_program())
53
54 def choose_action(self, observation):
55 prob_weights = self.exe.run(
56 fluid.default_main_program().prune(self.all_act_prob),
57 feed={"obs": observation[np.newaxis, :]},
58 fetch_list=[self.all_act_prob])
59 prob_weights = np.array(prob_weights[0])
60 action = np.random.choice(
61 range(prob_weights.shape[1]),
62 p=prob_weights.ravel()) # select action w.r.t the actions prob
63 return action
64
65 def store_transition(self, s, a, r):
66 self.ep_obs.append(s)
67 self.ep_as.append(a)
68 self.ep_rs.append(r)
69
70 def learn(self):
71 # discount and normalize episode reward
72 discounted_ep_rs_norm = self._discount_and_norm_rewards()
73 tensor_obs = np.vstack(self.ep_obs).astype("float32")
74 tensor_as = np.array(self.ep_as).astype("int64")
75 tensor_as = tensor_as.reshape([tensor_as.shape[0], 1])
76 tensor_vt = discounted_ep_rs_norm.astype("float32")[:, np.newaxis]
77 # train on episode
78 self.exe.run(
79 fluid.default_main_program(),
80 feed={
81 "obs": tensor_obs, # shape=[None, n_obs]
82 "acts": tensor_as, # shape=[None, ]
83 "vt": tensor_vt # shape=[None, ]
84 })
85 self.ep_obs, self.ep_as, self.ep_rs = [], [], [] # empty episode data
86 return discounted_ep_rs_norm
87
88 def _discount_and_norm_rewards(self):
89 # discount episode rewards
90 discounted_ep_rs = np.zeros_like(self.ep_rs)
91 running_add = 0
92 for t in reversed(range(0, len(self.ep_rs))):
93 running_add = running_add * self.gamma + self.ep_rs[t]
94 discounted_ep_rs[t] = running_add
95
96 # normalize episode rewards
97 discounted_ep_rs -= np.mean(discounted_ep_rs)
98 discounted_ep_rs /= np.std(discounted_ep_rs)
99 return discounted_ep_rs
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/fluid/policy_gradient/brain.py b/fluid/policy_gradient/brain.py
--- a/fluid/policy_gradient/brain.py
+++ b/fluid/policy_gradient/brain.py
@@ -45,7 +45,7 @@
label=acts) # this is negative log of chosen action
neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)
loss = fluid.layers.reduce_mean(
- x=neg_log_prob_weight) # reward guided loss
+ neg_log_prob_weight) # reward guided loss
sgd_optimizer = fluid.optimizer.SGD(self.lr)
sgd_optimizer.minimize(loss)
|
{"golden_diff": "diff --git a/fluid/policy_gradient/brain.py b/fluid/policy_gradient/brain.py\n--- a/fluid/policy_gradient/brain.py\n+++ b/fluid/policy_gradient/brain.py\n@@ -45,7 +45,7 @@\n label=acts) # this is negative log of chosen action\n neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)\n loss = fluid.layers.reduce_mean(\n- x=neg_log_prob_weight) # reward guided loss\n+ neg_log_prob_weight) # reward guided loss\n \n sgd_optimizer = fluid.optimizer.SGD(self.lr)\n sgd_optimizer.minimize(loss)\n", "issue": "policy_gradient \u539f\u7406\u4ecb\u7ecd\u90e8\u5206\u5185\u5bb9\u683c\u5f0f\u5b58\u5728\u95ee\u9898\nhttps://github.com/PaddlePaddle/models/tree/develop/fluid/policy_gradient \r\npolicy_gradient demo\u4ecb\u7ecd\u90e8\u5206\uff0c\u770b\u8d77\u6765\u683c\u5f0f\u5b58\u5728\u95ee\u9898\uff0c\u80fd\u8f9b\u82e6\u8c03\u6574\u4e0b\u5417\uff1f\u6216\u8005\u4ee5\u4ec0\u4e48\u6837\u7684\u65b9\u5f0f\u53ef\u4ee5\u770b\u5230\u539f\u59cb\u7684\u6587\u6863\u5462\uff1f @wanghaoshuang @lcy-seso \n", "before_files": [{"content": "import numpy as np\nimport paddle.v2 as paddle\nimport paddle.fluid as fluid\n# reproducible\nnp.random.seed(1)\n\n\nclass PolicyGradient:\n def __init__(\n self,\n n_actions,\n n_features,\n learning_rate=0.01,\n reward_decay=0.95,\n output_graph=False, ):\n self.n_actions = n_actions\n self.n_features = n_features\n self.lr = learning_rate\n self.gamma = reward_decay\n\n self.ep_obs, self.ep_as, self.ep_rs = [], [], []\n\n self.place = fluid.CPUPlace()\n self.exe = fluid.Executor(self.place)\n\n def build_net(self):\n\n obs = fluid.layers.data(\n name='obs', shape=[self.n_features], dtype='float32')\n acts = fluid.layers.data(name='acts', shape=[1], dtype='int64')\n vt = fluid.layers.data(name='vt', shape=[1], dtype='float32')\n # fc1\n fc1 = fluid.layers.fc(\n input=obs,\n size=10,\n act=\"tanh\" # tanh activation\n )\n # fc2\n self.all_act_prob = fluid.layers.fc(input=fc1,\n size=self.n_actions,\n act=\"softmax\")\n # to maximize total reward (log_p * R) is to minimize -(log_p * R)\n neg_log_prob = fluid.layers.cross_entropy(\n input=self.all_act_prob,\n label=acts) # this is negative log of chosen action\n neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)\n loss = fluid.layers.reduce_mean(\n x=neg_log_prob_weight) # reward guided loss\n\n sgd_optimizer = fluid.optimizer.SGD(self.lr)\n sgd_optimizer.minimize(loss)\n self.exe.run(fluid.default_startup_program())\n\n def choose_action(self, observation):\n prob_weights = self.exe.run(\n fluid.default_main_program().prune(self.all_act_prob),\n feed={\"obs\": observation[np.newaxis, :]},\n fetch_list=[self.all_act_prob])\n prob_weights = np.array(prob_weights[0])\n action = np.random.choice(\n range(prob_weights.shape[1]),\n p=prob_weights.ravel()) # select action w.r.t the actions prob\n return action\n\n def store_transition(self, s, a, r):\n self.ep_obs.append(s)\n self.ep_as.append(a)\n self.ep_rs.append(r)\n\n def learn(self):\n # discount and normalize episode reward\n discounted_ep_rs_norm = self._discount_and_norm_rewards()\n tensor_obs = np.vstack(self.ep_obs).astype(\"float32\")\n tensor_as = np.array(self.ep_as).astype(\"int64\")\n tensor_as = tensor_as.reshape([tensor_as.shape[0], 1])\n tensor_vt = discounted_ep_rs_norm.astype(\"float32\")[:, np.newaxis]\n # train on episode\n self.exe.run(\n fluid.default_main_program(),\n feed={\n \"obs\": tensor_obs, # shape=[None, n_obs]\n \"acts\": tensor_as, # shape=[None, ]\n \"vt\": tensor_vt # shape=[None, ]\n })\n 
self.ep_obs, self.ep_as, self.ep_rs = [], [], [] # empty episode data\n return discounted_ep_rs_norm\n\n def _discount_and_norm_rewards(self):\n # discount episode rewards\n discounted_ep_rs = np.zeros_like(self.ep_rs)\n running_add = 0\n for t in reversed(range(0, len(self.ep_rs))):\n running_add = running_add * self.gamma + self.ep_rs[t]\n discounted_ep_rs[t] = running_add\n\n # normalize episode rewards\n discounted_ep_rs -= np.mean(discounted_ep_rs)\n discounted_ep_rs /= np.std(discounted_ep_rs)\n return discounted_ep_rs\n", "path": "fluid/policy_gradient/brain.py"}], "after_files": [{"content": "import numpy as np\nimport paddle.v2 as paddle\nimport paddle.fluid as fluid\n# reproducible\nnp.random.seed(1)\n\n\nclass PolicyGradient:\n def __init__(\n self,\n n_actions,\n n_features,\n learning_rate=0.01,\n reward_decay=0.95,\n output_graph=False, ):\n self.n_actions = n_actions\n self.n_features = n_features\n self.lr = learning_rate\n self.gamma = reward_decay\n\n self.ep_obs, self.ep_as, self.ep_rs = [], [], []\n\n self.place = fluid.CPUPlace()\n self.exe = fluid.Executor(self.place)\n\n def build_net(self):\n\n obs = fluid.layers.data(\n name='obs', shape=[self.n_features], dtype='float32')\n acts = fluid.layers.data(name='acts', shape=[1], dtype='int64')\n vt = fluid.layers.data(name='vt', shape=[1], dtype='float32')\n # fc1\n fc1 = fluid.layers.fc(\n input=obs,\n size=10,\n act=\"tanh\" # tanh activation\n )\n # fc2\n self.all_act_prob = fluid.layers.fc(input=fc1,\n size=self.n_actions,\n act=\"softmax\")\n # to maximize total reward (log_p * R) is to minimize -(log_p * R)\n neg_log_prob = fluid.layers.cross_entropy(\n input=self.all_act_prob,\n label=acts) # this is negative log of chosen action\n neg_log_prob_weight = fluid.layers.elementwise_mul(x=neg_log_prob, y=vt)\n loss = fluid.layers.reduce_mean(\n neg_log_prob_weight) # reward guided loss\n\n sgd_optimizer = fluid.optimizer.SGD(self.lr)\n sgd_optimizer.minimize(loss)\n self.exe.run(fluid.default_startup_program())\n\n def choose_action(self, observation):\n prob_weights = self.exe.run(\n fluid.default_main_program().prune(self.all_act_prob),\n feed={\"obs\": observation[np.newaxis, :]},\n fetch_list=[self.all_act_prob])\n prob_weights = np.array(prob_weights[0])\n action = np.random.choice(\n range(prob_weights.shape[1]),\n p=prob_weights.ravel()) # select action w.r.t the actions prob\n return action\n\n def store_transition(self, s, a, r):\n self.ep_obs.append(s)\n self.ep_as.append(a)\n self.ep_rs.append(r)\n\n def learn(self):\n # discount and normalize episode reward\n discounted_ep_rs_norm = self._discount_and_norm_rewards()\n tensor_obs = np.vstack(self.ep_obs).astype(\"float32\")\n tensor_as = np.array(self.ep_as).astype(\"int64\")\n tensor_as = tensor_as.reshape([tensor_as.shape[0], 1])\n tensor_vt = discounted_ep_rs_norm.astype(\"float32\")[:, np.newaxis]\n # train on episode\n self.exe.run(\n fluid.default_main_program(),\n feed={\n \"obs\": tensor_obs, # shape=[None, n_obs]\n \"acts\": tensor_as, # shape=[None, ]\n \"vt\": tensor_vt # shape=[None, ]\n })\n self.ep_obs, self.ep_as, self.ep_rs = [], [], [] # empty episode data\n return discounted_ep_rs_norm\n\n def _discount_and_norm_rewards(self):\n # discount episode rewards\n discounted_ep_rs = np.zeros_like(self.ep_rs)\n running_add = 0\n for t in reversed(range(0, len(self.ep_rs))):\n running_add = running_add * self.gamma + self.ep_rs[t]\n discounted_ep_rs[t] = running_add\n\n # normalize episode rewards\n discounted_ep_rs -= np.mean(discounted_ep_rs)\n 
discounted_ep_rs /= np.std(discounted_ep_rs)\n return discounted_ep_rs\n", "path": "fluid/policy_gradient/brain.py"}]}
| 1,364 | 148 |
gh_patches_debug_22549
|
rasdani/github-patches
|
git_diff
|
psf__black-3543
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GitHub Action: Use action version as default Black version, instead of latest
> I'm alright with making the default Black version tied to the action version being used. For context, `version` was introduced because the action didn't exist for a long time, so tying the Black version to the action version wouldn't work for version 19.10b0, for example. In hindsight, having the default be the action version while keeping the `version` configuration option around as an escape hatch is the better solution. This will involve some complexity since commit SHAs aren't supported by the version code (but are by GHA), but there might be some pre-existing logic in scripts/diff_shades_gha_helper.py we could reuse.
_Originally posted by @ichard26 in https://github.com/psf/black/issues/1140#issuecomment-1026379455_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `action/main.py`
Content:
```
1 import os
2 import shlex
3 import sys
4 from pathlib import Path
5 from subprocess import PIPE, STDOUT, run
6
7 ACTION_PATH = Path(os.environ["GITHUB_ACTION_PATH"])
8 ENV_PATH = ACTION_PATH / ".black-env"
9 ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
10 OPTIONS = os.getenv("INPUT_OPTIONS", default="")
11 SRC = os.getenv("INPUT_SRC", default="")
12 JUPYTER = os.getenv("INPUT_JUPYTER") == "true"
13 BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
14 VERSION = os.getenv("INPUT_VERSION", default="")
15
16 run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
17
18 version_specifier = VERSION
19 if VERSION and VERSION[0] in "0123456789":
20 version_specifier = f"=={VERSION}"
21 if JUPYTER:
22 extra_deps = "[colorama,jupyter]"
23 else:
24 extra_deps = "[colorama]"
25 req = f"black{extra_deps}{version_specifier}"
26 pip_proc = run(
27 [str(ENV_BIN / "python"), "-m", "pip", "install", req],
28 stdout=PIPE,
29 stderr=STDOUT,
30 encoding="utf-8",
31 )
32 if pip_proc.returncode:
33 print(pip_proc.stdout)
34 print("::error::Failed to install Black.", flush=True)
35 sys.exit(pip_proc.returncode)
36
37
38 base_cmd = [str(ENV_BIN / "black")]
39 if BLACK_ARGS:
40 # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.
41 proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])
42 else:
43 proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])
44
45 sys.exit(proc.returncode)
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/action/main.py b/action/main.py
--- a/action/main.py
+++ b/action/main.py
@@ -22,12 +22,34 @@
extra_deps = "[colorama,jupyter]"
else:
extra_deps = "[colorama]"
-req = f"black{extra_deps}{version_specifier}"
+if version_specifier:
+ req = f"black{extra_deps}{version_specifier}"
+else:
+ describe_name = ""
+ with open(ACTION_PATH / ".git_archival.txt", encoding="utf-8") as fp:
+ for line in fp:
+ if line.startswith("describe-name: "):
+ describe_name = line[len("describe-name: ") :].rstrip()
+ break
+ if not describe_name:
+ print("::error::Failed to detect action version.", flush=True)
+ sys.exit(1)
+ # expected format is one of:
+ # - 23.1.0
+ # - 23.1.0-51-g448bba7
+ if describe_name.count("-") < 2:
+ # the action's commit matches a tag exactly, install exact version from PyPI
+ req = f"black{extra_deps}=={describe_name}"
+ else:
+ # the action's commit does not match any tag, install from the local git repo
+ req = f".{extra_deps}"
+print(f"Installing {req}...", flush=True)
pip_proc = run(
[str(ENV_BIN / "python"), "-m", "pip", "install", req],
stdout=PIPE,
stderr=STDOUT,
encoding="utf-8",
+ cwd=ACTION_PATH,
)
if pip_proc.returncode:
print(pip_proc.stdout)
|
{"golden_diff": "diff --git a/action/main.py b/action/main.py\n--- a/action/main.py\n+++ b/action/main.py\n@@ -22,12 +22,34 @@\n extra_deps = \"[colorama,jupyter]\"\n else:\n extra_deps = \"[colorama]\"\n-req = f\"black{extra_deps}{version_specifier}\"\n+if version_specifier:\n+ req = f\"black{extra_deps}{version_specifier}\"\n+else:\n+ describe_name = \"\"\n+ with open(ACTION_PATH / \".git_archival.txt\", encoding=\"utf-8\") as fp:\n+ for line in fp:\n+ if line.startswith(\"describe-name: \"):\n+ describe_name = line[len(\"describe-name: \") :].rstrip()\n+ break\n+ if not describe_name:\n+ print(\"::error::Failed to detect action version.\", flush=True)\n+ sys.exit(1)\n+ # expected format is one of:\n+ # - 23.1.0\n+ # - 23.1.0-51-g448bba7\n+ if describe_name.count(\"-\") < 2:\n+ # the action's commit matches a tag exactly, install exact version from PyPI\n+ req = f\"black{extra_deps}=={describe_name}\"\n+ else:\n+ # the action's commit does not match any tag, install from the local git repo\n+ req = f\".{extra_deps}\"\n+print(f\"Installing {req}...\", flush=True)\n pip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n+ cwd=ACTION_PATH,\n )\n if pip_proc.returncode:\n print(pip_proc.stdout)\n", "issue": "GitHub Action: Use action version as default Black version, instead of latest\n> I'm alright with making the default Black version tied to the action version being used. For context `version` was introduced because the action didn't exist for a long time so tying black version to action version wouldn't work for version 19.10b0 for example. In hidesight, having the default being the action version keeping the `version` configuration option around as an escape hatch is the better solution. 
This will involve some complexity since commit SHAs aren't supported by the version code (but are by GHA) but there might be some pre-existing logic in scripts/diff_shades_gha_helper.py we could reuse.\r\n\r\n_Originally posted by @ichard26 in https://github.com/psf/black/issues/1140#issuecomment-1026379455_\r\n \n", "before_files": [{"content": "import os\nimport shlex\nimport sys\nfrom pathlib import Path\nfrom subprocess import PIPE, STDOUT, run\n\nACTION_PATH = Path(os.environ[\"GITHUB_ACTION_PATH\"])\nENV_PATH = ACTION_PATH / \".black-env\"\nENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\nOPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\nSRC = os.getenv(\"INPUT_SRC\", default=\"\")\nJUPYTER = os.getenv(\"INPUT_JUPYTER\") == \"true\"\nBLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\nVERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n\nrun([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\n\nversion_specifier = VERSION\nif VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\nif JUPYTER:\n extra_deps = \"[colorama,jupyter]\"\nelse:\n extra_deps = \"[colorama]\"\nreq = f\"black{extra_deps}{version_specifier}\"\npip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n)\nif pip_proc.returncode:\n print(pip_proc.stdout)\n print(\"::error::Failed to install Black.\", flush=True)\n sys.exit(pip_proc.returncode)\n\n\nbase_cmd = [str(ENV_BIN / \"black\")]\nif BLACK_ARGS:\n # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.\n proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])\nelse:\n proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])\n\nsys.exit(proc.returncode)\n", "path": "action/main.py"}], "after_files": [{"content": "import os\nimport shlex\nimport sys\nfrom pathlib import Path\nfrom subprocess import PIPE, STDOUT, run\n\nACTION_PATH = Path(os.environ[\"GITHUB_ACTION_PATH\"])\nENV_PATH = ACTION_PATH / \".black-env\"\nENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\nOPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\nSRC = os.getenv(\"INPUT_SRC\", default=\"\")\nJUPYTER = os.getenv(\"INPUT_JUPYTER\") == \"true\"\nBLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\nVERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n\nrun([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\n\nversion_specifier = VERSION\nif VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\nif JUPYTER:\n extra_deps = \"[colorama,jupyter]\"\nelse:\n extra_deps = \"[colorama]\"\nif version_specifier:\n req = f\"black{extra_deps}{version_specifier}\"\nelse:\n describe_name = \"\"\n with open(ACTION_PATH / \".git_archival.txt\", encoding=\"utf-8\") as fp:\n for line in fp:\n if line.startswith(\"describe-name: \"):\n describe_name = line[len(\"describe-name: \") :].rstrip()\n break\n if not describe_name:\n print(\"::error::Failed to detect action version.\", flush=True)\n sys.exit(1)\n # expected format is one of:\n # - 23.1.0\n # - 23.1.0-51-g448bba7\n if describe_name.count(\"-\") < 2:\n # the action's commit matches a tag exactly, install exact version from PyPI\n req = f\"black{extra_deps}=={describe_name}\"\n else:\n # the action's commit does not match any tag, install from the local git repo\n req = f\".{extra_deps}\"\nprint(f\"Installing {req}...\", flush=True)\npip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", 
\"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n cwd=ACTION_PATH,\n)\nif pip_proc.returncode:\n print(pip_proc.stdout)\n print(\"::error::Failed to install Black.\", flush=True)\n sys.exit(pip_proc.returncode)\n\n\nbase_cmd = [str(ENV_BIN / \"black\")]\nif BLACK_ARGS:\n # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.\n proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])\nelse:\n proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])\n\nsys.exit(proc.returncode)\n", "path": "action/main.py"}]}
| 923 | 393 |
gh_patches_debug_27942
|
rasdani/github-patches
|
git_diff
|
pyjanitor-devs__pyjanitor-461
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC] API Documentation for Biology functions is not standardized
# Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, the docs do not have a standard docstring format for functions.
### Example of inconsistent docstring
> (no `Returns`, no `Functional usage example`, no `Method chaining example`)
<img width="690" alt="incorrect_func_doc" src="https://user-images.githubusercontent.com/24984410/61177779-6936e100-a5a2-11e9-8332-0c02bad7b5bf.png">
### Example of a correct/desired docstring
<img width="716" alt="correct_func_doc" src="https://user-images.githubusercontent.com/24984410/61177795-c5016a00-a5a2-11e9-8cd9-415f440d95c2.png">
I would like to propose a change, such that now the docs contain a **standardized** docstring suite. All functions should contain (at a minimum) the following:
- `Parameters`
- ` Returns`
- `Functional usage example`
- `Method chaining example`
**NOTE**: This can be done for all functions within the `janitor` directory. For ease of review, this will focus on the `biology.py` file and move to other files/functions as time permits.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/biology.html)
- [Link to exact file to be edited](https://github.com/loganthomas/pyjanitor/blob/dev/janitor/biology.py)
[DOC] API Documentation for Biology functions is not standardized
# Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, the docs do not have a standard docstring format for functions.
### Example of inconsistent docstring
> (no `Returns`, no `Functional usage example`, no `Method chaining example`)
<img width="690" alt="incorrect_func_doc" src="https://user-images.githubusercontent.com/24984410/61177779-6936e100-a5a2-11e9-8332-0c02bad7b5bf.png">
### Example of a correct/desired docstring
<img width="716" alt="correct_func_doc" src="https://user-images.githubusercontent.com/24984410/61177795-c5016a00-a5a2-11e9-8cd9-415f440d95c2.png">
I would like to propose a change, such that now the docs contain a **standardized** docstring suite. All functions should contain (at a minimum) the following:
- `Parameters`
- ` Returns`
- `Functional usage example`
- `Method chaining example`
**NOTE**: This can be done for all functions within the `janitor` directory. For ease of review, this will focus on the `biology.py` file and move to other files/functions as time permits.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/biology.html)
- [Link to exact file to be edited](https://github.com/loganthomas/pyjanitor/blob/dev/janitor/biology.py)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `janitor/biology.py`
Content:
```
1 """
2 Biology and bioinformatics-oriented data cleaning functions.
3 """
4
5 import pandas as pd
6 import pandas_flavor as pf
7
8 from .utils import deprecated_alias, import_message
9
10 try:
11 from Bio import SeqIO
12 except ImportError:
13 import_message(
14 "biology", "biopython", "conda install -c conda-forge biopython"
15 )
16
17
18 @pf.register_dataframe_method
19 @deprecated_alias(col_name="column_name")
20 def join_fasta(
21 df: pd.DataFrame, filename: str, id_col: str, column_name
22 ) -> pd.DataFrame:
23 """
24 Convenience method to join in a FASTA file as a column.
25
26 This allows us to add the string sequence of a FASTA file as a new column
27 of data in the dataframe.
28
29 This method only attaches the string representation of the SeqRecord.Seq
30 object from Biopython. Does not attach the full SeqRecord. Alphabet is
31 also not stored, under the assumption that the data scientist has domain
32 knowledge of what kind of sequence is being read in (nucleotide vs. amino
33 acid.)
34
35 This method mutates the original DataFrame.
36
37 For more advanced functions, please use phylopandas.
38
39 :param df: A pandas DataFrame.
40 :param filename: Path to the FASTA file.
41 :param id_col: The column in the DataFrame that houses sequence IDs.
42 :param column_name: The name of the new column.
43 """
44 seqrecords = {
45 x.id: x.seq.__str__() for x in SeqIO.parse(filename, "fasta")
46 }
47 seq_col = [seqrecords[i] for i in df[id_col]]
48 df[column_name] = seq_col
49 return df
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/janitor/biology.py b/janitor/biology.py
--- a/janitor/biology.py
+++ b/janitor/biology.py
@@ -18,7 +18,7 @@
@pf.register_dataframe_method
@deprecated_alias(col_name="column_name")
def join_fasta(
- df: pd.DataFrame, filename: str, id_col: str, column_name
+ df: pd.DataFrame, filename: str, id_col: str, column_name: str
) -> pd.DataFrame:
"""
Convenience method to join in a FASTA file as a column.
@@ -36,10 +36,37 @@
For more advanced functions, please use phylopandas.
+ Functional usage example:
+
+ .. code-block:: python
+
+ import janitor.biology
+
+ df = janitor.biology.join_fasta(
+ df=df,
+ filename='fasta_file.fasta',
+ id_col='sequence_accession',
+ column_name='sequence',
+ )
+
+ Method chaining example:
+
+ .. code-block:: python
+
+ import pandas as pd
+ import janitor.biology
+
+ df = pd.DataFrame(...).join_fasta(
+ filename='fasta_file.fasta',
+ id_col='sequence_accession',
+ column_name='sequence',
+ )
+
:param df: A pandas DataFrame.
:param filename: Path to the FASTA file.
:param id_col: The column in the DataFrame that houses sequence IDs.
:param column_name: The name of the new column.
+ :returns: A pandas DataFrame with new FASTA string sequence column.
"""
seqrecords = {
x.id: x.seq.__str__() for x in SeqIO.parse(filename, "fasta")
|
{"golden_diff": "diff --git a/janitor/biology.py b/janitor/biology.py\n--- a/janitor/biology.py\n+++ b/janitor/biology.py\n@@ -18,7 +18,7 @@\n @pf.register_dataframe_method\n @deprecated_alias(col_name=\"column_name\")\n def join_fasta(\n- df: pd.DataFrame, filename: str, id_col: str, column_name\n+ df: pd.DataFrame, filename: str, id_col: str, column_name: str\n ) -> pd.DataFrame:\n \"\"\"\n Convenience method to join in a FASTA file as a column.\n@@ -36,10 +36,37 @@\n \n For more advanced functions, please use phylopandas.\n \n+ Functional usage example:\n+\n+ .. code-block:: python\n+\n+ import janitor.biology\n+\n+ df = janitor.biology.join_fasta(\n+ df=df,\n+ filename='fasta_file.fasta',\n+ id_col='sequence_accession',\n+ column_name='sequence',\n+ )\n+\n+ Method chaining example:\n+\n+ .. code-block:: python\n+\n+ import pandas as pd\n+ import janitor.biology\n+\n+ df = pd.DataFrame(...).join_fasta(\n+ filename='fasta_file.fasta',\n+ id_col='sequence_accession',\n+ column_name='sequence',\n+ )\n+\n :param df: A pandas DataFrame.\n :param filename: Path to the FASTA file.\n :param id_col: The column in the DataFrame that houses sequence IDs.\n :param column_name: The name of the new column.\n+ :returns: A pandas DataFrame with new FASTA string sequence column.\n \"\"\"\n seqrecords = {\n x.id: x.seq.__str__() for x in SeqIO.parse(filename, \"fasta\")\n", "issue": "[DOC] API Documentation for Biology functions is not standardized\n# Brief Description of Fix\r\n\r\n<!-- Please describe the fix in terms of a \"before\" and \"after\". In other words, what's not so good about the current docs\r\npage, and what you would like to see it become. \r\n\r\nExample starter wording is provided. -->\r\n\r\nCurrently, the docs do not have a standard docstring format for functions.\r\n### Example of inconsistent docstring\r\n> (no `Returns`, no `Functional usage example`, no `Method chaining example`)\r\n<img width=\"690\" alt=\"incorrect_func_doc\" src=\"https://user-images.githubusercontent.com/24984410/61177779-6936e100-a5a2-11e9-8332-0c02bad7b5bf.png\">\r\n\r\n### Example of a correct/desired docstring\r\n<img width=\"716\" alt=\"correct_func_doc\" src=\"https://user-images.githubusercontent.com/24984410/61177795-c5016a00-a5a2-11e9-8cd9-415f440d95c2.png\">\r\n\r\nI would like to propose a change, such that now the docs contain a **standardized** docstring suite. All functions should contain (at a minimum) the following:\r\n - `Parameters`\r\n - ` Returns`\r\n - `Functional usage example`\r\n - `Method chaining example`\r\n\r\n**NOTE**: This can be done for all functions within the `janitor` directory. For ease of review, this will focus on the `biology.py` file and move to other files/functions as time permits.\r\n\r\n# Relevant Context\r\n\r\n<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available\r\nto get you started. -->\r\n\r\n- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/biology.html)\r\n- [Link to exact file to be edited](https://github.com/loganthomas/pyjanitor/blob/dev/janitor/biology.py)\r\n\n[DOC] API Documentation for Biology functions is not standardized\n# Brief Description of Fix\r\n\r\n<!-- Please describe the fix in terms of a \"before\" and \"after\". In other words, what's not so good about the current docs\r\npage, and what you would like to see it become. \r\n\r\nExample starter wording is provided. 
-->\r\n\r\nCurrently, the docs do not have a standard docstring format for functions.\r\n### Example of inconsistent docstring\r\n> (no `Returns`, no `Functional usage example`, no `Method chaining example`)\r\n<img width=\"690\" alt=\"incorrect_func_doc\" src=\"https://user-images.githubusercontent.com/24984410/61177779-6936e100-a5a2-11e9-8332-0c02bad7b5bf.png\">\r\n\r\n### Example of a correct/desired docstring\r\n<img width=\"716\" alt=\"correct_func_doc\" src=\"https://user-images.githubusercontent.com/24984410/61177795-c5016a00-a5a2-11e9-8cd9-415f440d95c2.png\">\r\n\r\nI would like to propose a change, such that now the docs contain a **standardized** docstring suite. All functions should contain (at a minimum) the following:\r\n - `Parameters`\r\n - ` Returns`\r\n - `Functional usage example`\r\n - `Method chaining example`\r\n\r\n**NOTE**: This can be done for all functions within the `janitor` directory. For ease of review, this will focus on the `biology.py` file and move to other files/functions as time permits.\r\n\r\n# Relevant Context\r\n\r\n<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available\r\nto get you started. -->\r\n\r\n- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/biology.html)\r\n- [Link to exact file to be edited](https://github.com/loganthomas/pyjanitor/blob/dev/janitor/biology.py)\r\n\n", "before_files": [{"content": "\"\"\"\nBiology and bioinformatics-oriented data cleaning functions.\n\"\"\"\n\nimport pandas as pd\nimport pandas_flavor as pf\n\nfrom .utils import deprecated_alias, import_message\n\ntry:\n from Bio import SeqIO\nexcept ImportError:\n import_message(\n \"biology\", \"biopython\", \"conda install -c conda-forge biopython\"\n )\n\n\[email protected]_dataframe_method\n@deprecated_alias(col_name=\"column_name\")\ndef join_fasta(\n df: pd.DataFrame, filename: str, id_col: str, column_name\n) -> pd.DataFrame:\n \"\"\"\n Convenience method to join in a FASTA file as a column.\n\n This allows us to add the string sequence of a FASTA file as a new column\n of data in the dataframe.\n\n This method only attaches the string representation of the SeqRecord.Seq\n object from Biopython. Does not attach the full SeqRecord. Alphabet is\n also not stored, under the assumption that the data scientist has domain\n knowledge of what kind of sequence is being read in (nucleotide vs. 
amino\n acid.)\n\n This method mutates the original DataFrame.\n\n For more advanced functions, please use phylopandas.\n\n :param df: A pandas DataFrame.\n :param filename: Path to the FASTA file.\n :param id_col: The column in the DataFrame that houses sequence IDs.\n :param column_name: The name of the new column.\n \"\"\"\n seqrecords = {\n x.id: x.seq.__str__() for x in SeqIO.parse(filename, \"fasta\")\n }\n seq_col = [seqrecords[i] for i in df[id_col]]\n df[column_name] = seq_col\n return df\n", "path": "janitor/biology.py"}], "after_files": [{"content": "\"\"\"\nBiology and bioinformatics-oriented data cleaning functions.\n\"\"\"\n\nimport pandas as pd\nimport pandas_flavor as pf\n\nfrom .utils import deprecated_alias, import_message\n\ntry:\n from Bio import SeqIO\nexcept ImportError:\n import_message(\n \"biology\", \"biopython\", \"conda install -c conda-forge biopython\"\n )\n\n\[email protected]_dataframe_method\n@deprecated_alias(col_name=\"column_name\")\ndef join_fasta(\n df: pd.DataFrame, filename: str, id_col: str, column_name: str\n) -> pd.DataFrame:\n \"\"\"\n Convenience method to join in a FASTA file as a column.\n\n This allows us to add the string sequence of a FASTA file as a new column\n of data in the dataframe.\n\n This method only attaches the string representation of the SeqRecord.Seq\n object from Biopython. Does not attach the full SeqRecord. Alphabet is\n also not stored, under the assumption that the data scientist has domain\n knowledge of what kind of sequence is being read in (nucleotide vs. amino\n acid.)\n\n This method mutates the original DataFrame.\n\n For more advanced functions, please use phylopandas.\n\n Functional usage example:\n\n .. code-block:: python\n\n import janitor.biology\n\n df = janitor.biology.join_fasta(\n df=df,\n filename='fasta_file.fasta',\n id_col='sequence_accession',\n column_name='sequence',\n )\n\n Method chaining example:\n\n .. code-block:: python\n\n import pandas as pd\n import janitor.biology\n\n df = pd.DataFrame(...).join_fasta(\n filename='fasta_file.fasta',\n id_col='sequence_accession',\n column_name='sequence',\n )\n\n :param df: A pandas DataFrame.\n :param filename: Path to the FASTA file.\n :param id_col: The column in the DataFrame that houses sequence IDs.\n :param column_name: The name of the new column.\n :returns: A pandas DataFrame with new FASTA string sequence column.\n \"\"\"\n seqrecords = {\n x.id: x.seq.__str__() for x in SeqIO.parse(filename, \"fasta\")\n }\n seq_col = [seqrecords[i] for i in df[id_col]]\n df[column_name] = seq_col\n return df\n", "path": "janitor/biology.py"}]}
| 1,632 | 392 |
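The record above standardizes docstrings so that every function documents its parameters, its return value, a functional usage example, and a method chaining example. As a rough template of that layout (a sketch only: `add_constant` and its arguments are placeholders, not part of pyjanitor), the shape looks like:

```python
import pandas as pd
import pandas_flavor as pf


@pf.register_dataframe_method
def add_constant(df: pd.DataFrame, column_name: str, value) -> pd.DataFrame:
    """
    Add a constant-valued column to a DataFrame.

    Functional usage example:

    .. code-block:: python

        df = add_constant(df, column_name="flag", value=1)

    Method chaining example:

    .. code-block:: python

        df = pd.DataFrame(...).add_constant(column_name="flag", value=1)

    :param df: A pandas DataFrame.
    :param column_name: Name of the new column.
    :param value: Constant value to broadcast into the new column.
    :returns: A pandas DataFrame with the new column added.
    """
    df[column_name] = value
    return df
```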
gh_patches_debug_13931
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-987
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError trying to reload a system module name
`KeyError: 'time' (file "/opt/rh/python33/root/usr/lib/python3.3/site-packages/sopel/modules/reload.py", line 62, in f_reload)` should be a sane error message
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/reload.py`
Content:
```
1 # coding=utf-8
2 """
3 reload.py - Sopel Module Reloader Module
4 Copyright 2008, Sean B. Palmer, inamidst.com
5 Licensed under the Eiffel Forum License 2.
6
7 http://sopel.chat
8 """
9 from __future__ import unicode_literals, absolute_import, print_function, division
10
11 import collections
12 import sys
13 import time
14 from sopel.tools import iteritems
15 import sopel.loader
16 import sopel.module
17 import subprocess
18
19
20 @sopel.module.nickname_commands("reload")
21 @sopel.module.priority("low")
22 @sopel.module.thread(False)
23 def f_reload(bot, trigger):
24 """Reloads a module, for use by admins only."""
25 if not trigger.admin:
26 return
27
28 name = trigger.group(2)
29 if name == bot.config.core.owner:
30 return bot.reply('What?')
31
32 if not name or name == '*' or name.upper() == 'ALL THE THINGS':
33 bot._callables = {
34 'high': collections.defaultdict(list),
35 'medium': collections.defaultdict(list),
36 'low': collections.defaultdict(list)
37 }
38 bot.command_groups = collections.defaultdict(list)
39 bot.setup()
40 return bot.reply('done')
41
42 if name not in sys.modules:
43 return bot.reply('%s: not loaded, try the `load` command' % name)
44
45 old_module = sys.modules[name]
46
47 old_callables = {}
48 for obj_name, obj in iteritems(vars(old_module)):
49 bot.unregister(obj)
50
51 # Also remove all references to sopel callables from top level of the
52 # module, so that they will not get loaded again if reloading the
53 # module does not override them.
54 for obj_name in old_callables.keys():
55 delattr(old_module, obj_name)
56
57 # Also delete the setup function
58 if hasattr(old_module, "setup"):
59 delattr(old_module, "setup")
60
61 modules = sopel.loader.enumerate_modules(bot.config)
62 path, type_ = modules[name]
63 load_module(bot, name, path, type_)
64
65
66 def load_module(bot, name, path, type_):
67 module, mtime = sopel.loader.load_module(name, path, type_)
68 relevant_parts = sopel.loader.clean_module(module, bot.config)
69
70 bot.register(*relevant_parts)
71
72 # TODO sys.modules[name] = module
73 if hasattr(module, 'setup'):
74 module.setup(bot)
75
76 modified = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(mtime))
77
78 bot.reply('%r (version: %s)' % (module, modified))
79
80
81 @sopel.module.nickname_commands('update')
82 def f_update(bot, trigger):
83 if not trigger.admin:
84 return
85
86 """Pulls the latest versions of all modules from Git"""
87 proc = subprocess.Popen('/usr/bin/git pull',
88 stdout=subprocess.PIPE,
89 stderr=subprocess.PIPE, shell=True)
90 bot.reply(proc.communicate()[0])
91
92 f_reload(bot, trigger)
93
94
95 @sopel.module.nickname_commands("load")
96 @sopel.module.priority("low")
97 @sopel.module.thread(False)
98 def f_load(bot, trigger):
99 """Loads a module, for use by admins only."""
100 if not trigger.admin:
101 return
102
103 name = trigger.group(2)
104 path = ''
105 if name == bot.config.core.owner:
106 return bot.reply('What?')
107
108 if name in sys.modules:
109 return bot.reply('Module already loaded, use reload')
110
111 mods = sopel.loader.enumerate_modules(bot.config)
112 if name not in mods:
113 return bot.reply('Module %s not found' % name)
114 path, type_ = mods[name]
115 load_module(bot, name, path, type_)
116
117
118 # Catch PM based messages
119 @sopel.module.commands("reload")
120 @sopel.module.priority("low")
121 @sopel.module.thread(False)
122 def pm_f_reload(bot, trigger):
123 """Wrapper for allowing delivery of .reload command via PM"""
124 if trigger.is_privmsg:
125 f_reload(bot, trigger)
126
127
128 @sopel.module.commands('update')
129 def pm_f_update(bot, trigger):
130 """Wrapper for allowing delivery of .update command via PM"""
131 if trigger.is_privmsg:
132 f_update(bot, trigger)
133
134
135 @sopel.module.commands("load")
136 @sopel.module.priority("low")
137 @sopel.module.thread(False)
138 def pm_f_load(bot, trigger):
139 """Wrapper for allowing delivery of .load command via PM"""
140 if trigger.is_privmsg:
141 f_load(bot, trigger)
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/reload.py b/sopel/modules/reload.py
--- a/sopel/modules/reload.py
+++ b/sopel/modules/reload.py
@@ -40,7 +40,7 @@
return bot.reply('done')
if name not in sys.modules:
- return bot.reply('%s: not loaded, try the `load` command' % name)
+ return bot.reply('"%s" not loaded, try the `load` command' % name)
old_module = sys.modules[name]
@@ -59,6 +59,8 @@
delattr(old_module, "setup")
modules = sopel.loader.enumerate_modules(bot.config)
+ if name not in modules:
+ return bot.reply('"%s" not loaded, try the `load` command' % name)
path, type_ = modules[name]
load_module(bot, name, path, type_)
|
{"golden_diff": "diff --git a/sopel/modules/reload.py b/sopel/modules/reload.py\n--- a/sopel/modules/reload.py\n+++ b/sopel/modules/reload.py\n@@ -40,7 +40,7 @@\n return bot.reply('done')\n \n if name not in sys.modules:\n- return bot.reply('%s: not loaded, try the `load` command' % name)\n+ return bot.reply('\"%s\" not loaded, try the `load` command' % name)\n \n old_module = sys.modules[name]\n \n@@ -59,6 +59,8 @@\n delattr(old_module, \"setup\")\n \n modules = sopel.loader.enumerate_modules(bot.config)\n+ if name not in modules:\n+ return bot.reply('\"%s\" not loaded, try the `load` command' % name)\n path, type_ = modules[name]\n load_module(bot, name, path, type_)\n", "issue": "KeyError trying to reload a system module name\n`KeyError: 'time' (file \"/opt/rh/python33/root/usr/lib/python3.3/site-packages/sopel/modules/reload.py\", line 62, in f_reload)` should be a sane error message\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nreload.py - Sopel Module Reloader Module\nCopyright 2008, Sean B. Palmer, inamidst.com\nLicensed under the Eiffel Forum License 2.\n\nhttp://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport collections\nimport sys\nimport time\nfrom sopel.tools import iteritems\nimport sopel.loader\nimport sopel.module\nimport subprocess\n\n\[email protected]_commands(\"reload\")\[email protected](\"low\")\[email protected](False)\ndef f_reload(bot, trigger):\n \"\"\"Reloads a module, for use by admins only.\"\"\"\n if not trigger.admin:\n return\n\n name = trigger.group(2)\n if name == bot.config.core.owner:\n return bot.reply('What?')\n\n if not name or name == '*' or name.upper() == 'ALL THE THINGS':\n bot._callables = {\n 'high': collections.defaultdict(list),\n 'medium': collections.defaultdict(list),\n 'low': collections.defaultdict(list)\n }\n bot.command_groups = collections.defaultdict(list)\n bot.setup()\n return bot.reply('done')\n\n if name not in sys.modules:\n return bot.reply('%s: not loaded, try the `load` command' % name)\n\n old_module = sys.modules[name]\n\n old_callables = {}\n for obj_name, obj in iteritems(vars(old_module)):\n bot.unregister(obj)\n\n # Also remove all references to sopel callables from top level of the\n # module, so that they will not get loaded again if reloading the\n # module does not override them.\n for obj_name in old_callables.keys():\n delattr(old_module, obj_name)\n\n # Also delete the setup function\n if hasattr(old_module, \"setup\"):\n delattr(old_module, \"setup\")\n\n modules = sopel.loader.enumerate_modules(bot.config)\n path, type_ = modules[name]\n load_module(bot, name, path, type_)\n\n\ndef load_module(bot, name, path, type_):\n module, mtime = sopel.loader.load_module(name, path, type_)\n relevant_parts = sopel.loader.clean_module(module, bot.config)\n\n bot.register(*relevant_parts)\n\n # TODO sys.modules[name] = module\n if hasattr(module, 'setup'):\n module.setup(bot)\n\n modified = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(mtime))\n\n bot.reply('%r (version: %s)' % (module, modified))\n\n\[email protected]_commands('update')\ndef f_update(bot, trigger):\n if not trigger.admin:\n return\n\n \"\"\"Pulls the latest versions of all modules from Git\"\"\"\n proc = subprocess.Popen('/usr/bin/git pull',\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE, shell=True)\n bot.reply(proc.communicate()[0])\n\n f_reload(bot, trigger)\n\n\[email protected]_commands(\"load\")\[email protected](\"low\")\[email protected](False)\ndef f_load(bot, 
trigger):\n \"\"\"Loads a module, for use by admins only.\"\"\"\n if not trigger.admin:\n return\n\n name = trigger.group(2)\n path = ''\n if name == bot.config.core.owner:\n return bot.reply('What?')\n\n if name in sys.modules:\n return bot.reply('Module already loaded, use reload')\n\n mods = sopel.loader.enumerate_modules(bot.config)\n if name not in mods:\n return bot.reply('Module %s not found' % name)\n path, type_ = mods[name]\n load_module(bot, name, path, type_)\n\n\n# Catch PM based messages\[email protected](\"reload\")\[email protected](\"low\")\[email protected](False)\ndef pm_f_reload(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .reload command via PM\"\"\"\n if trigger.is_privmsg:\n f_reload(bot, trigger)\n\n\[email protected]('update')\ndef pm_f_update(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .update command via PM\"\"\"\n if trigger.is_privmsg:\n f_update(bot, trigger)\n\n\[email protected](\"load\")\[email protected](\"low\")\[email protected](False)\ndef pm_f_load(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .load command via PM\"\"\"\n if trigger.is_privmsg:\n f_load(bot, trigger)\n", "path": "sopel/modules/reload.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nreload.py - Sopel Module Reloader Module\nCopyright 2008, Sean B. Palmer, inamidst.com\nLicensed under the Eiffel Forum License 2.\n\nhttp://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport collections\nimport sys\nimport time\nfrom sopel.tools import iteritems\nimport sopel.loader\nimport sopel.module\nimport subprocess\n\n\[email protected]_commands(\"reload\")\[email protected](\"low\")\[email protected](False)\ndef f_reload(bot, trigger):\n \"\"\"Reloads a module, for use by admins only.\"\"\"\n if not trigger.admin:\n return\n\n name = trigger.group(2)\n if name == bot.config.core.owner:\n return bot.reply('What?')\n\n if not name or name == '*' or name.upper() == 'ALL THE THINGS':\n bot._callables = {\n 'high': collections.defaultdict(list),\n 'medium': collections.defaultdict(list),\n 'low': collections.defaultdict(list)\n }\n bot.command_groups = collections.defaultdict(list)\n bot.setup()\n return bot.reply('done')\n\n if name not in sys.modules:\n return bot.reply('\"%s\" not loaded, try the `load` command' % name)\n\n old_module = sys.modules[name]\n\n old_callables = {}\n for obj_name, obj in iteritems(vars(old_module)):\n bot.unregister(obj)\n\n # Also remove all references to sopel callables from top level of the\n # module, so that they will not get loaded again if reloading the\n # module does not override them.\n for obj_name in old_callables.keys():\n delattr(old_module, obj_name)\n\n # Also delete the setup function\n if hasattr(old_module, \"setup\"):\n delattr(old_module, \"setup\")\n\n modules = sopel.loader.enumerate_modules(bot.config)\n if name not in modules:\n return bot.reply('\"%s\" not loaded, try the `load` command' % name)\n path, type_ = modules[name]\n load_module(bot, name, path, type_)\n\n\ndef load_module(bot, name, path, type_):\n module, mtime = sopel.loader.load_module(name, path, type_)\n relevant_parts = sopel.loader.clean_module(module, bot.config)\n\n bot.register(*relevant_parts)\n\n # TODO sys.modules[name] = module\n if hasattr(module, 'setup'):\n module.setup(bot)\n\n modified = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(mtime))\n\n bot.reply('%r (version: %s)' % (module, modified))\n\n\[email protected]_commands('update')\ndef f_update(bot, 
trigger):\n if not trigger.admin:\n return\n\n \"\"\"Pulls the latest versions of all modules from Git\"\"\"\n proc = subprocess.Popen('/usr/bin/git pull',\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE, shell=True)\n bot.reply(proc.communicate()[0])\n\n f_reload(bot, trigger)\n\n\[email protected]_commands(\"load\")\[email protected](\"low\")\[email protected](False)\ndef f_load(bot, trigger):\n \"\"\"Loads a module, for use by admins only.\"\"\"\n if not trigger.admin:\n return\n\n name = trigger.group(2)\n path = ''\n if name == bot.config.core.owner:\n return bot.reply('What?')\n\n if name in sys.modules:\n return bot.reply('Module already loaded, use reload')\n\n mods = sopel.loader.enumerate_modules(bot.config)\n if name not in mods:\n return bot.reply('Module %s not found' % name)\n path, type_ = mods[name]\n load_module(bot, name, path, type_)\n\n\n# Catch PM based messages\[email protected](\"reload\")\[email protected](\"low\")\[email protected](False)\ndef pm_f_reload(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .reload command via PM\"\"\"\n if trigger.is_privmsg:\n f_reload(bot, trigger)\n\n\[email protected]('update')\ndef pm_f_update(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .update command via PM\"\"\"\n if trigger.is_privmsg:\n f_update(bot, trigger)\n\n\[email protected](\"load\")\[email protected](\"low\")\[email protected](False)\ndef pm_f_load(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .load command via PM\"\"\"\n if trigger.is_privmsg:\n f_load(bot, trigger)\n", "path": "sopel/modules/reload.py"}]}
| 1,628 | 205 |
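The fix in the record above is essentially a guarded mapping lookup: check membership before subscripting, and reply with a readable message when the name is importable (so it appears in `sys.modules`) but is not one of the bot's own modules. A self-contained sketch of that pattern, where `registry` is a hypothetical stand-in for the mapping returned by `sopel.loader.enumerate_modules`:

```python
def resolve_module(name, registry):
    """Return (entry, error) instead of raising KeyError on unknown names."""
    if name not in registry:
        return None, '"%s" not loaded, try the `load` command' % name
    return registry[name], None


# "time" is importable, but it is not a bot module, so it is absent here.
registry = {"reload": ("/path/to/reload.py", "module")}
print(resolve_module("time", registry))
print(resolve_module("reload", registry))
```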
gh_patches_debug_33914
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-853
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement equality for CircuitDag
Implement `__eq__` for CircuitDag using `networkx.is_isomorphic()`. Use the node_match argument of is_isomorphic: `node_match=lambda n: n.val`.
This may be useful for #830.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/circuits/circuit_dag.py`
Content:
```
1 # Copyright 2018 The ops Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Any, Callable, Generic, Iterator, TypeVar
16
17 import networkx
18
19 from cirq import ops, devices
20 from cirq.circuits import circuit
21
22
23 T = TypeVar('T')
24
25 class Unique(Generic[T]):
26 """A wrapper for a value that doesn't compare equal to other instances.
27
28 For example: 5 == 5 but Unique(5) != Unique(5).
29
30 Unique is used by CircuitDag to wrap operations because nodes in a graph
31 are considered the same node if they compare equal to each other. X(q0)
32 in one moment of a Circuit and X(q0) in another moment of the Circuit are
33 wrapped by Unique(X(q0)) so they are distinct nodes in the graph.
34 """
35 def __init__(self, val: T) -> None:
36 self.val = val
37
38 def __repr__(self):
39 return 'Unique({}, {!r})'.format(id(self), self.val)
40
41
42 def _disjoint_qubits(op1: ops.Operation, op2: ops.Operation) -> bool:
43 """Returns true only if the operations have qubits in common."""
44 return not set(op1.qubits) & set(op2.qubits)
45
46
47 class CircuitDag(networkx.DiGraph):
48 """A representation of a Circuit as a directed acyclic graph.
49
50 Nodes of the graph are instances of Unique containing each operation of a
51 circuit.
52
53 Edges of the graph are tuples of nodes. Each edge specifies a required
54 application order between two operations. The first must be applied before
55 the second.
56
57 The graph is maximalist (transitive completion).
58 """
59
60 disjoint_qubits = staticmethod(_disjoint_qubits)
61
62 def __init__(self,
63 can_reorder: Callable[[ops.Operation, ops.Operation],
64 bool] = _disjoint_qubits,
65 incoming_graph_data: Any = None,
66 device: devices.Device = devices.UnconstrainedDevice
67 ) -> None:
68 """Initializes a CircuitDag.
69
70 Args:
71 can_reorder: A predicate that determines if two operations may be
72 reordered. Graph edges are created for pairs of operations
73 where this returns False.
74
75 The default predicate allows reordering only when the operations
76 don't share common qubits.
77 incoming_graph_data: Data in initialize the graph. This can be any
78 value supported by networkx.DiGraph() e.g. an edge list or
79 another graph.
80 device: Hardware that the circuit should be able to run on.
81 """
82 super().__init__(incoming_graph_data)
83 self.can_reorder = can_reorder
84 self.device = device
85
86 @staticmethod
87 def make_node(op: ops.Operation) -> Unique:
88 return Unique(op)
89
90 @staticmethod
91 def from_circuit(circuit: circuit.Circuit,
92 can_reorder: Callable[[ops.Operation, ops.Operation],
93 bool] = _disjoint_qubits
94 ) -> 'CircuitDag':
95 return CircuitDag.from_ops(circuit.all_operations(),
96 can_reorder=can_reorder,
97 device=circuit.device)
98
99 @staticmethod
100 def from_ops(*operations: ops.OP_TREE,
101 can_reorder: Callable[[ops.Operation, ops.Operation],
102 bool] = _disjoint_qubits,
103 device: devices.Device = devices.UnconstrainedDevice
104 ) -> 'CircuitDag':
105 dag = CircuitDag(can_reorder=can_reorder, device=device)
106 for op in ops.flatten_op_tree(operations):
107 dag.append(op)
108 return dag
109
110 def append(self, op: ops.Operation) -> None:
111 new_node = self.make_node(op)
112 self.add_edges_from([(node, new_node)
113 for node in self.nodes
114 if not self.can_reorder(node.val, new_node.val)])
115 self.add_node(new_node)
116
117 def ordered_nodes(self) -> Iterator[Unique[ops.Operation]]:
118 if not self.nodes:
119 return
120 g = self.copy()
121
122 def get_root_node(some_node: Unique[ops.Operation]
123 ) -> Unique[ops.Operation]:
124 pred = g.pred
125 while pred[some_node]:
126 some_node = next(iter(pred[some_node]))
127 return some_node
128
129 def get_first_node() -> Unique[ops.Operation]:
130 return get_root_node(next(iter(g.nodes)))
131
132 def get_next_node(succ: networkx.classes.coreviews.AtlasView
133 ) -> Unique[ops.Operation]:
134 if succ:
135 return get_root_node(next(iter(succ)))
136 else:
137 return get_first_node()
138
139 node = get_first_node()
140 while True:
141 yield node
142 succ = g.succ[node]
143 g.remove_node(node)
144
145 if not g.nodes:
146 return
147
148 node = get_next_node(succ)
149
150 def all_operations(self) -> Iterator[ops.Operation]:
151 return (node.val for node in self.ordered_nodes())
152
153 def to_circuit(self) -> circuit.Circuit:
154 return circuit.Circuit.from_ops(
155 self.all_operations(),
156 strategy=circuit.InsertStrategy.EARLIEST,
157 device=self.device)
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cirq/circuits/circuit_dag.py b/cirq/circuits/circuit_dag.py
--- a/cirq/circuits/circuit_dag.py
+++ b/cirq/circuits/circuit_dag.py
@@ -12,8 +12,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from typing import Any, Callable, Generic, Iterator, TypeVar
+from typing import Any, Callable, Dict, Generic, Iterator, TypeVar
+import functools
import networkx
from cirq import ops, devices
@@ -22,6 +23,7 @@
T = TypeVar('T')
[email protected]_ordering
class Unique(Generic[T]):
"""A wrapper for a value that doesn't compare equal to other instances.
@@ -38,6 +40,11 @@
def __repr__(self):
return 'Unique({}, {!r})'.format(id(self), self.val)
+ def __lt__(self, other):
+ if not isinstance(other, type(self)):
+ return NotImplemented
+ return id(self) < id(other)
+
def _disjoint_qubits(op1: ops.Operation, op2: ops.Operation) -> bool:
"""Returns true only if the operations have qubits in common."""
@@ -114,6 +121,24 @@
if not self.can_reorder(node.val, new_node.val)])
self.add_node(new_node)
+ def __eq__(self, other):
+ if not isinstance(other, type(self)):
+ return NotImplemented
+ g1 = self.copy()
+ g2 = other.copy()
+ for node, attr in g1.nodes.items():
+ attr['val'] = node.val
+ for node, attr in g2.nodes.items():
+ attr['val'] = node.val
+ def node_match(attr1: Dict[Any, Any], attr2: Dict[Any, Any]) -> bool:
+ return attr1['val'] == attr2['val']
+ return networkx.is_isomorphic(g1, g2, node_match=node_match)
+
+ def __ne__(self, other):
+ return not self == other
+
+ __hash__ = None # type: ignore
+
def ordered_nodes(self) -> Iterator[Unique[ops.Operation]]:
if not self.nodes:
return
|
{"golden_diff": "diff --git a/cirq/circuits/circuit_dag.py b/cirq/circuits/circuit_dag.py\n--- a/cirq/circuits/circuit_dag.py\n+++ b/cirq/circuits/circuit_dag.py\n@@ -12,8 +12,9 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-from typing import Any, Callable, Generic, Iterator, TypeVar\n+from typing import Any, Callable, Dict, Generic, Iterator, TypeVar\n \n+import functools\n import networkx\n \n from cirq import ops, devices\n@@ -22,6 +23,7 @@\n \n T = TypeVar('T')\n \[email protected]_ordering\n class Unique(Generic[T]):\n \"\"\"A wrapper for a value that doesn't compare equal to other instances.\n \n@@ -38,6 +40,11 @@\n def __repr__(self):\n return 'Unique({}, {!r})'.format(id(self), self.val)\n \n+ def __lt__(self, other):\n+ if not isinstance(other, type(self)):\n+ return NotImplemented\n+ return id(self) < id(other)\n+\n \n def _disjoint_qubits(op1: ops.Operation, op2: ops.Operation) -> bool:\n \"\"\"Returns true only if the operations have qubits in common.\"\"\"\n@@ -114,6 +121,24 @@\n if not self.can_reorder(node.val, new_node.val)])\n self.add_node(new_node)\n \n+ def __eq__(self, other):\n+ if not isinstance(other, type(self)):\n+ return NotImplemented\n+ g1 = self.copy()\n+ g2 = other.copy()\n+ for node, attr in g1.nodes.items():\n+ attr['val'] = node.val\n+ for node, attr in g2.nodes.items():\n+ attr['val'] = node.val\n+ def node_match(attr1: Dict[Any, Any], attr2: Dict[Any, Any]) -> bool:\n+ return attr1['val'] == attr2['val']\n+ return networkx.is_isomorphic(g1, g2, node_match=node_match)\n+\n+ def __ne__(self, other):\n+ return not self == other\n+\n+ __hash__ = None # type: ignore\n+\n def ordered_nodes(self) -> Iterator[Unique[ops.Operation]]:\n if not self.nodes:\n return\n", "issue": "Implement equality for CircuitDag\nImplement `__eq__` for CircuitDag using `networkx.is_isomorphic()`. Use the node_match argument of is_isomorphic: `node_match=lambda n: n.val`.\r\n\r\nThis may be useful for #830.\n", "before_files": [{"content": "# Copyright 2018 The ops Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Callable, Generic, Iterator, TypeVar\n\nimport networkx\n\nfrom cirq import ops, devices\nfrom cirq.circuits import circuit\n\n\nT = TypeVar('T')\n\nclass Unique(Generic[T]):\n \"\"\"A wrapper for a value that doesn't compare equal to other instances.\n\n For example: 5 == 5 but Unique(5) != Unique(5).\n\n Unique is used by CircuitDag to wrap operations because nodes in a graph\n are considered the same node if they compare equal to each other. 
X(q0)\n in one moment of a Circuit and X(q0) in another moment of the Circuit are\n wrapped by Unique(X(q0)) so they are distinct nodes in the graph.\n \"\"\"\n def __init__(self, val: T) -> None:\n self.val = val\n\n def __repr__(self):\n return 'Unique({}, {!r})'.format(id(self), self.val)\n\n\ndef _disjoint_qubits(op1: ops.Operation, op2: ops.Operation) -> bool:\n \"\"\"Returns true only if the operations have qubits in common.\"\"\"\n return not set(op1.qubits) & set(op2.qubits)\n\n\nclass CircuitDag(networkx.DiGraph):\n \"\"\"A representation of a Circuit as a directed acyclic graph.\n\n Nodes of the graph are instances of Unique containing each operation of a\n circuit.\n\n Edges of the graph are tuples of nodes. Each edge specifies a required\n application order between two operations. The first must be applied before\n the second.\n\n The graph is maximalist (transitive completion).\n \"\"\"\n\n disjoint_qubits = staticmethod(_disjoint_qubits)\n\n def __init__(self,\n can_reorder: Callable[[ops.Operation, ops.Operation],\n bool] = _disjoint_qubits,\n incoming_graph_data: Any = None,\n device: devices.Device = devices.UnconstrainedDevice\n ) -> None:\n \"\"\"Initializes a CircuitDag.\n\n Args:\n can_reorder: A predicate that determines if two operations may be\n reordered. Graph edges are created for pairs of operations\n where this returns False.\n\n The default predicate allows reordering only when the operations\n don't share common qubits.\n incoming_graph_data: Data in initialize the graph. This can be any\n value supported by networkx.DiGraph() e.g. an edge list or\n another graph.\n device: Hardware that the circuit should be able to run on.\n \"\"\"\n super().__init__(incoming_graph_data)\n self.can_reorder = can_reorder\n self.device = device\n\n @staticmethod\n def make_node(op: ops.Operation) -> Unique:\n return Unique(op)\n\n @staticmethod\n def from_circuit(circuit: circuit.Circuit,\n can_reorder: Callable[[ops.Operation, ops.Operation],\n bool] = _disjoint_qubits\n ) -> 'CircuitDag':\n return CircuitDag.from_ops(circuit.all_operations(),\n can_reorder=can_reorder,\n device=circuit.device)\n\n @staticmethod\n def from_ops(*operations: ops.OP_TREE,\n can_reorder: Callable[[ops.Operation, ops.Operation],\n bool] = _disjoint_qubits,\n device: devices.Device = devices.UnconstrainedDevice\n ) -> 'CircuitDag':\n dag = CircuitDag(can_reorder=can_reorder, device=device)\n for op in ops.flatten_op_tree(operations):\n dag.append(op)\n return dag\n\n def append(self, op: ops.Operation) -> None:\n new_node = self.make_node(op)\n self.add_edges_from([(node, new_node)\n for node in self.nodes\n if not self.can_reorder(node.val, new_node.val)])\n self.add_node(new_node)\n\n def ordered_nodes(self) -> Iterator[Unique[ops.Operation]]:\n if not self.nodes:\n return\n g = self.copy()\n\n def get_root_node(some_node: Unique[ops.Operation]\n ) -> Unique[ops.Operation]:\n pred = g.pred\n while pred[some_node]:\n some_node = next(iter(pred[some_node]))\n return some_node\n\n def get_first_node() -> Unique[ops.Operation]:\n return get_root_node(next(iter(g.nodes)))\n\n def get_next_node(succ: networkx.classes.coreviews.AtlasView\n ) -> Unique[ops.Operation]:\n if succ:\n return get_root_node(next(iter(succ)))\n else:\n return get_first_node()\n\n node = get_first_node()\n while True:\n yield node\n succ = g.succ[node]\n g.remove_node(node)\n\n if not g.nodes:\n return\n\n node = get_next_node(succ)\n\n def all_operations(self) -> Iterator[ops.Operation]:\n return (node.val for node in 
self.ordered_nodes())\n\n def to_circuit(self) -> circuit.Circuit:\n return circuit.Circuit.from_ops(\n self.all_operations(),\n strategy=circuit.InsertStrategy.EARLIEST,\n device=self.device)\n", "path": "cirq/circuits/circuit_dag.py"}], "after_files": [{"content": "# Copyright 2018 The ops Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Callable, Dict, Generic, Iterator, TypeVar\n\nimport functools\nimport networkx\n\nfrom cirq import ops, devices\nfrom cirq.circuits import circuit\n\n\nT = TypeVar('T')\n\[email protected]_ordering\nclass Unique(Generic[T]):\n \"\"\"A wrapper for a value that doesn't compare equal to other instances.\n\n For example: 5 == 5 but Unique(5) != Unique(5).\n\n Unique is used by CircuitDag to wrap operations because nodes in a graph\n are considered the same node if they compare equal to each other. X(q0)\n in one moment of a Circuit and X(q0) in another moment of the Circuit are\n wrapped by Unique(X(q0)) so they are distinct nodes in the graph.\n \"\"\"\n def __init__(self, val: T) -> None:\n self.val = val\n\n def __repr__(self):\n return 'Unique({}, {!r})'.format(id(self), self.val)\n\n def __lt__(self, other):\n if not isinstance(other, type(self)):\n return NotImplemented\n return id(self) < id(other)\n\n\ndef _disjoint_qubits(op1: ops.Operation, op2: ops.Operation) -> bool:\n \"\"\"Returns true only if the operations have qubits in common.\"\"\"\n return not set(op1.qubits) & set(op2.qubits)\n\n\nclass CircuitDag(networkx.DiGraph):\n \"\"\"A representation of a Circuit as a directed acyclic graph.\n\n Nodes of the graph are instances of Unique containing each operation of a\n circuit.\n\n Edges of the graph are tuples of nodes. Each edge specifies a required\n application order between two operations. The first must be applied before\n the second.\n\n The graph is maximalist (transitive completion).\n \"\"\"\n\n disjoint_qubits = staticmethod(_disjoint_qubits)\n\n def __init__(self,\n can_reorder: Callable[[ops.Operation, ops.Operation],\n bool] = _disjoint_qubits,\n incoming_graph_data: Any = None,\n device: devices.Device = devices.UnconstrainedDevice\n ) -> None:\n \"\"\"Initializes a CircuitDag.\n\n Args:\n can_reorder: A predicate that determines if two operations may be\n reordered. Graph edges are created for pairs of operations\n where this returns False.\n\n The default predicate allows reordering only when the operations\n don't share common qubits.\n incoming_graph_data: Data in initialize the graph. This can be any\n value supported by networkx.DiGraph() e.g. 
an edge list or\n another graph.\n device: Hardware that the circuit should be able to run on.\n \"\"\"\n super().__init__(incoming_graph_data)\n self.can_reorder = can_reorder\n self.device = device\n\n @staticmethod\n def make_node(op: ops.Operation) -> Unique:\n return Unique(op)\n\n @staticmethod\n def from_circuit(circuit: circuit.Circuit,\n can_reorder: Callable[[ops.Operation, ops.Operation],\n bool] = _disjoint_qubits\n ) -> 'CircuitDag':\n return CircuitDag.from_ops(circuit.all_operations(),\n can_reorder=can_reorder,\n device=circuit.device)\n\n @staticmethod\n def from_ops(*operations: ops.OP_TREE,\n can_reorder: Callable[[ops.Operation, ops.Operation],\n bool] = _disjoint_qubits,\n device: devices.Device = devices.UnconstrainedDevice\n ) -> 'CircuitDag':\n dag = CircuitDag(can_reorder=can_reorder, device=device)\n for op in ops.flatten_op_tree(operations):\n dag.append(op)\n return dag\n\n def append(self, op: ops.Operation) -> None:\n new_node = self.make_node(op)\n self.add_edges_from([(node, new_node)\n for node in self.nodes\n if not self.can_reorder(node.val, new_node.val)])\n self.add_node(new_node)\n\n def __eq__(self, other):\n if not isinstance(other, type(self)):\n return NotImplemented\n g1 = self.copy()\n g2 = other.copy()\n for node, attr in g1.nodes.items():\n attr['val'] = node.val\n for node, attr in g2.nodes.items():\n attr['val'] = node.val\n def node_match(attr1: Dict[Any, Any], attr2: Dict[Any, Any]) -> bool:\n return attr1['val'] == attr2['val']\n return networkx.is_isomorphic(g1, g2, node_match=node_match)\n\n def __ne__(self, other):\n return not self == other\n\n __hash__ = None # type: ignore\n\n def ordered_nodes(self) -> Iterator[Unique[ops.Operation]]:\n if not self.nodes:\n return\n g = self.copy()\n\n def get_root_node(some_node: Unique[ops.Operation]\n ) -> Unique[ops.Operation]:\n pred = g.pred\n while pred[some_node]:\n some_node = next(iter(pred[some_node]))\n return some_node\n\n def get_first_node() -> Unique[ops.Operation]:\n return get_root_node(next(iter(g.nodes)))\n\n def get_next_node(succ: networkx.classes.coreviews.AtlasView\n ) -> Unique[ops.Operation]:\n if succ:\n return get_root_node(next(iter(succ)))\n else:\n return get_first_node()\n\n node = get_first_node()\n while True:\n yield node\n succ = g.succ[node]\n g.remove_node(node)\n\n if not g.nodes:\n return\n\n node = get_next_node(succ)\n\n def all_operations(self) -> Iterator[ops.Operation]:\n return (node.val for node in self.ordered_nodes())\n\n def to_circuit(self) -> circuit.Circuit:\n return circuit.Circuit.from_ops(\n self.all_operations(),\n strategy=circuit.InsertStrategy.EARLIEST,\n device=self.device)\n", "path": "cirq/circuits/circuit_dag.py"}]}
| 1,925 | 528 |
gh_patches_debug_10721
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-3979
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
use CSP: sandbox on /simple/ pages
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/sandbox + https://www.youtube.com/watch?v=fbhW37JZtSA&feature=youtu.be
I believe this is a great fit for /simple/, which don't need any ability to do anything but have a simple HTML structure.
I _think_ we can replace the whole current header with `Content-Security-Policy: sandbox allow-top-navigations`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/csp.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections
14 import copy
15
16
17 SELF = "'self'"
18 NONE = "'none'"
19
20
21 def _serialize(policy):
22 return "; ".join([
23 " ".join([k] + [v2 for v2 in v if v2 is not None])
24 for k, v in sorted(policy.items())
25 ])
26
27
28 def content_security_policy_tween_factory(handler, registry):
29 def content_security_policy_tween(request):
30 resp = handler(request)
31
32 try:
33 policy = request.find_service(name="csp")
34 except ValueError:
35 policy = collections.defaultdict(list)
36
37 # We don't want to apply our Content Security Policy to the debug
38 # toolbar, that's not part of our application and it doesn't work with
39 # our restrictive CSP.
40 policy = _serialize(policy).format(request=request)
41 if not request.path.startswith("/_debug_toolbar/") and policy:
42 resp.headers["Content-Security-Policy"] = policy
43
44 return resp
45
46 return content_security_policy_tween
47
48
49 class CSPPolicy(collections.defaultdict):
50 def __init__(self, policy=None):
51 super().__init__(list, policy or {})
52
53 def merge(self, policy):
54 for key, attrs in policy.items():
55 self[key].extend(attrs)
56
57
58 def csp_factory(_, request):
59 try:
60 return CSPPolicy(copy.deepcopy(request.registry.settings["csp"]))
61 except KeyError:
62 return CSPPolicy({})
63
64
65 def includeme(config):
66 config.register_service_factory(csp_factory, name="csp")
67 # Enable a Content Security Policy
68 config.add_settings({
69 "csp": {
70 "base-uri": [SELF],
71 "block-all-mixed-content": [],
72 "connect-src": [
73 item for item in [
74 SELF,
75 config.registry.settings.get("statuspage.url"),
76 "https://api.github.com/repos/",
77 ]
78 if item
79 ],
80 "default-src": [NONE],
81 "font-src": [SELF, "fonts.gstatic.com"],
82 "form-action": [SELF],
83 "frame-ancestors": [NONE],
84 "frame-src": [NONE],
85 "img-src": [
86 SELF,
87 config.registry.settings["camo.url"],
88 "www.google-analytics.com",
89 ],
90 "script-src": [
91 SELF,
92 "www.googletagmanager.com",
93 "www.google-analytics.com",
94 ],
95 "style-src": [SELF, "fonts.googleapis.com"],
96 },
97 })
98 config.add_tween("warehouse.csp.content_security_policy_tween_factory")
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/warehouse/csp.py b/warehouse/csp.py
--- a/warehouse/csp.py
+++ b/warehouse/csp.py
@@ -34,6 +34,12 @@
except ValueError:
policy = collections.defaultdict(list)
+ # Replace CSP headers on /simple/ pages.
+ if request.path.startswith("/simple/"):
+ policy = collections.defaultdict(list)
+ policy["sandbox"] = ["allow-top-navigation"]
+ policy["default-src"] = [NONE]
+
# We don't want to apply our Content Security Policy to the debug
# toolbar, that's not part of our application and it doesn't work with
# our restrictive CSP.
|
{"golden_diff": "diff --git a/warehouse/csp.py b/warehouse/csp.py\n--- a/warehouse/csp.py\n+++ b/warehouse/csp.py\n@@ -34,6 +34,12 @@\n except ValueError:\n policy = collections.defaultdict(list)\n \n+ # Replace CSP headers on /simple/ pages.\n+ if request.path.startswith(\"/simple/\"):\n+ policy = collections.defaultdict(list)\n+ policy[\"sandbox\"] = [\"allow-top-navigation\"]\n+ policy[\"default-src\"] = [NONE]\n+\n # We don't want to apply our Content Security Policy to the debug\n # toolbar, that's not part of our application and it doesn't work with\n # our restrictive CSP.\n", "issue": "use CSP: sandbox on /simple/ pages\nhttps://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/sandbox + https://www.youtube.com/watch?v=fbhW37JZtSA&feature=youtu.be\r\n\r\nI believe this is a great fit for /simple/, which don't need any ability to do anthing but have a simple HTML structure.\r\n\r\nI _think_ we can replace the whole current header with `Content-Security-Policy: sandbox allow-top-navigations`.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\nimport copy\n\n\nSELF = \"'self'\"\nNONE = \"'none'\"\n\n\ndef _serialize(policy):\n return \"; \".join([\n \" \".join([k] + [v2 for v2 in v if v2 is not None])\n for k, v in sorted(policy.items())\n ])\n\n\ndef content_security_policy_tween_factory(handler, registry):\n def content_security_policy_tween(request):\n resp = handler(request)\n\n try:\n policy = request.find_service(name=\"csp\")\n except ValueError:\n policy = collections.defaultdict(list)\n\n # We don't want to apply our Content Security Policy to the debug\n # toolbar, that's not part of our application and it doesn't work with\n # our restrictive CSP.\n policy = _serialize(policy).format(request=request)\n if not request.path.startswith(\"/_debug_toolbar/\") and policy:\n resp.headers[\"Content-Security-Policy\"] = policy\n\n return resp\n\n return content_security_policy_tween\n\n\nclass CSPPolicy(collections.defaultdict):\n def __init__(self, policy=None):\n super().__init__(list, policy or {})\n\n def merge(self, policy):\n for key, attrs in policy.items():\n self[key].extend(attrs)\n\n\ndef csp_factory(_, request):\n try:\n return CSPPolicy(copy.deepcopy(request.registry.settings[\"csp\"]))\n except KeyError:\n return CSPPolicy({})\n\n\ndef includeme(config):\n config.register_service_factory(csp_factory, name=\"csp\")\n # Enable a Content Security Policy\n config.add_settings({\n \"csp\": {\n \"base-uri\": [SELF],\n \"block-all-mixed-content\": [],\n \"connect-src\": [\n item for item in [\n SELF,\n config.registry.settings.get(\"statuspage.url\"),\n \"https://api.github.com/repos/\",\n ]\n if item\n ],\n \"default-src\": [NONE],\n \"font-src\": [SELF, \"fonts.gstatic.com\"],\n \"form-action\": [SELF],\n \"frame-ancestors\": [NONE],\n \"frame-src\": [NONE],\n \"img-src\": [\n SELF,\n config.registry.settings[\"camo.url\"],\n \"www.google-analytics.com\",\n ],\n \"script-src\": [\n SELF,\n 
\"www.googletagmanager.com\",\n \"www.google-analytics.com\",\n ],\n \"style-src\": [SELF, \"fonts.googleapis.com\"],\n },\n })\n config.add_tween(\"warehouse.csp.content_security_policy_tween_factory\")\n", "path": "warehouse/csp.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\nimport copy\n\n\nSELF = \"'self'\"\nNONE = \"'none'\"\n\n\ndef _serialize(policy):\n return \"; \".join([\n \" \".join([k] + [v2 for v2 in v if v2 is not None])\n for k, v in sorted(policy.items())\n ])\n\n\ndef content_security_policy_tween_factory(handler, registry):\n def content_security_policy_tween(request):\n resp = handler(request)\n\n try:\n policy = request.find_service(name=\"csp\")\n except ValueError:\n policy = collections.defaultdict(list)\n\n # Replace CSP headers on /simple/ pages.\n if request.path.startswith(\"/simple/\"):\n policy = collections.defaultdict(list)\n policy[\"sandbox\"] = [\"allow-top-navigation\"]\n policy[\"default-src\"] = [NONE]\n\n # We don't want to apply our Content Security Policy to the debug\n # toolbar, that's not part of our application and it doesn't work with\n # our restrictive CSP.\n policy = _serialize(policy).format(request=request)\n if not request.path.startswith(\"/_debug_toolbar/\") and policy:\n resp.headers[\"Content-Security-Policy\"] = policy\n\n return resp\n\n return content_security_policy_tween\n\n\nclass CSPPolicy(collections.defaultdict):\n def __init__(self, policy=None):\n super().__init__(list, policy or {})\n\n def merge(self, policy):\n for key, attrs in policy.items():\n self[key].extend(attrs)\n\n\ndef csp_factory(_, request):\n try:\n return CSPPolicy(copy.deepcopy(request.registry.settings[\"csp\"]))\n except KeyError:\n return CSPPolicy({})\n\n\ndef includeme(config):\n config.register_service_factory(csp_factory, name=\"csp\")\n # Enable a Content Security Policy\n config.add_settings({\n \"csp\": {\n \"base-uri\": [SELF],\n \"block-all-mixed-content\": [],\n \"connect-src\": [\n item for item in [\n SELF,\n config.registry.settings.get(\"statuspage.url\"),\n \"https://api.github.com/repos/\",\n ]\n if item\n ],\n \"default-src\": [NONE],\n \"font-src\": [SELF, \"fonts.gstatic.com\"],\n \"form-action\": [SELF],\n \"frame-ancestors\": [NONE],\n \"frame-src\": [NONE],\n \"img-src\": [\n SELF,\n config.registry.settings[\"camo.url\"],\n \"www.google-analytics.com\",\n ],\n \"script-src\": [\n SELF,\n \"www.googletagmanager.com\",\n \"www.google-analytics.com\",\n ],\n \"style-src\": [SELF, \"fonts.googleapis.com\"],\n },\n })\n config.add_tween(\"warehouse.csp.content_security_policy_tween_factory\")\n", "path": "warehouse/csp.py"}]}
| 1,215 | 153 |
gh_patches_debug_16614
|
rasdani/github-patches
|
git_diff
|
inventree__InvenTree-4492
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Email settings not configured
### Deployment Method
- [ ] Installer
- [ ] Docker Development
- [X] Docker Production
- [ ] Bare metal Development
- [ ] Bare metal Production
- [ ] Digital Ocean image
- [ ] Other (please provide a link `Steps to Reproduce`)
### Describe the problem*
Hello,
I am having issues configuring the email settings to connect to our on-prem exchange server.
I have configured the .env file with the following variables (Note - exchange does not require authentication internally)
INVENTREE_EMAIL_HOST
INVENTREE_EMAIL_PORT
INVENTREE_EMAIL_SENDER
However, when running the docker container, the 'System Information' screen reports that 'Email settings not configured'. Under 'Global Settings' > 'Login Settings' a prompt shows at the top saying 'Outgoing email has not been configured. Some login and sign-up features may not work correctly!'
Apologies if I have missed something obvious in the documentation, but I seem to be going round in circles currently. Any help that you could offer would be greatly appreciated. Thank you
### Steps to Reproduce
1. Stop the container with `docker compose down`
2. Edit the .env file to include the email variables (as per https://docs.inventree.org/en/0.10.0/start/config/#email-settings)
3. Start the container with `docker compose up -d`
Further steps (Apologies, these might be all guesses at this stage)
1. I have tried setting these within the configuration file (I believe saved here: `volume/inventree-data/config.yaml`) to include host, port, sender, tls and ssl variables, however with no change.
2. Feeling like I am missing a step, I am running `docker compose run inventree-server invoke update`
3. I am now running 0.10.1 but with the same issue.
### Relevant log output
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/InvenTree/status.py`
Content:
```
1 """Provides system status functionality checks."""
2 # -*- coding: utf-8 -*-
3
4 import logging
5 from datetime import timedelta
6
7 from django.conf import settings
8 from django.utils import timezone
9 from django.utils.translation import gettext_lazy as _
10
11 from django_q.models import Success
12 from django_q.monitor import Stat
13
14 import InvenTree.ready
15
16 logger = logging.getLogger("inventree")
17
18
19 def is_worker_running(**kwargs):
20 """Return True if the background worker process is oprational."""
21 clusters = Stat.get_all()
22
23 if len(clusters) > 0:
24 # TODO - Introspect on any cluster information
25 return True
26
27 """
28 Sometimes Stat.get_all() returns [].
29 In this case we have the 'heartbeat' task running every 5 minutes.
30 Check to see if we have any successful result within the last 10 minutes
31 """
32
33 now = timezone.now()
34 past = now - timedelta(minutes=10)
35
36 results = Success.objects.filter(
37 started__gte=past
38 )
39
40 # If any results are returned, then the background worker is running!
41 return results.exists()
42
43
44 def is_email_configured():
45 """Check if email backend is configured.
46
47 NOTE: This does not check if the configuration is valid!
48 """
49 configured = True
50
51 if InvenTree.ready.isInTestMode():
52 return False
53
54 if InvenTree.ready.isImportingData():
55 return False
56
57 if not settings.EMAIL_HOST:
58 configured = False
59
60 # Display warning unless in test mode
61 if not settings.TESTING: # pragma: no cover
62 logger.debug("EMAIL_HOST is not configured")
63
64 if not settings.EMAIL_HOST_USER:
65 configured = False
66
67 # Display warning unless in test mode
68 if not settings.TESTING: # pragma: no cover
69 logger.debug("EMAIL_HOST_USER is not configured")
70
71 if not settings.EMAIL_HOST_PASSWORD:
72 configured = False
73
74 # Display warning unless in test mode
75 if not settings.TESTING: # pragma: no cover
76 logger.debug("EMAIL_HOST_PASSWORD is not configured")
77
78 return configured
79
80
81 def check_system_health(**kwargs):
82 """Check that the InvenTree system is running OK.
83
84 Returns True if all system checks pass.
85 """
86 result = True
87
88 if InvenTree.ready.isInTestMode():
89 # Do not perform further checks if we are running unit tests
90 return False
91
92 if InvenTree.ready.isImportingData():
93 # Do not perform further checks if we are importing data
94 return False
95
96 if not is_worker_running(**kwargs): # pragma: no cover
97 result = False
98 logger.warning(_("Background worker check failed"))
99
100 if not is_email_configured(): # pragma: no cover
101 result = False
102 logger.warning(_("Email backend not configured"))
103
104 if not result: # pragma: no cover
105 logger.warning(_("InvenTree system health checks failed"))
106
107 return result
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/InvenTree/InvenTree/status.py b/InvenTree/InvenTree/status.py
--- a/InvenTree/InvenTree/status.py
+++ b/InvenTree/InvenTree/status.py
@@ -61,19 +61,13 @@
if not settings.TESTING: # pragma: no cover
logger.debug("EMAIL_HOST is not configured")
- if not settings.EMAIL_HOST_USER:
- configured = False
-
- # Display warning unless in test mode
- if not settings.TESTING: # pragma: no cover
- logger.debug("EMAIL_HOST_USER is not configured")
+ # Display warning unless in test mode
+ if not settings.TESTING: # pragma: no cover
+ logger.debug("EMAIL_HOST_USER is not configured")
- if not settings.EMAIL_HOST_PASSWORD:
- configured = False
-
- # Display warning unless in test mode
- if not settings.TESTING: # pragma: no cover
- logger.debug("EMAIL_HOST_PASSWORD is not configured")
+ # Display warning unless in test mode
+ if not settings.TESTING: # pragma: no cover
+ logger.debug("EMAIL_HOST_PASSWORD is not configured")
return configured
|
{"golden_diff": "diff --git a/InvenTree/InvenTree/status.py b/InvenTree/InvenTree/status.py\n--- a/InvenTree/InvenTree/status.py\n+++ b/InvenTree/InvenTree/status.py\n@@ -61,19 +61,13 @@\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST is not configured\")\n \n- if not settings.EMAIL_HOST_USER:\n- configured = False\n-\n- # Display warning unless in test mode\n- if not settings.TESTING: # pragma: no cover\n- logger.debug(\"EMAIL_HOST_USER is not configured\")\n+ # Display warning unless in test mode\n+ if not settings.TESTING: # pragma: no cover\n+ logger.debug(\"EMAIL_HOST_USER is not configured\")\n \n- if not settings.EMAIL_HOST_PASSWORD:\n- configured = False\n-\n- # Display warning unless in test mode\n- if not settings.TESTING: # pragma: no cover\n- logger.debug(\"EMAIL_HOST_PASSWORD is not configured\")\n+ # Display warning unless in test mode\n+ if not settings.TESTING: # pragma: no cover\n+ logger.debug(\"EMAIL_HOST_PASSWORD is not configured\")\n \n return configured\n", "issue": "Email settings not configured\n### Deployment Method\n\n- [ ] Installer\n- [ ] Docker Development\n- [X] Docker Production\n- [ ] Bare metal Development\n- [ ] Bare metal Production\n- [ ] Digital Ocean image\n- [ ] Other (please provide a link `Steps to Reproduce`\n\n### Describe the problem*\n\nHello,\r\n\r\nI am having issues configuring the email settings to connect to our on-prem exchange server. \r\n\r\nI have configured the .env file with the following variables (Note - exchange does not require authentication internally)\r\n\r\nINVENTREE_EMAIL_HOST\r\nINVENTREE_EMAIL_PORT\r\nINVENTREE_EMAIL_SENDER\r\n\r\nHowever, when running the docker container, the 'System Information' screen reports that 'Email settings not configured. Under 'Global Settings' > 'Login Settings' a prompt shows at the top saying 'Outgoing email has not been configured. Some login and sign-up features may not work correctly!'\r\n\r\nApologies if I have missed something obvious in the documentation but I seem to be going round in circles currently. Any help that you could offer would be greatly appreciated. Thank you\n\n### Steps to Reproduce\n\n1. Stop the contain with `docker compose down` \r\n2. Edit .env file to include the email variables (as per https://docs.inventree.org/en/0.10.0/start/config/#email-settings)\r\n3. Starting the container with `docker compose up -d`\r\n\r\nFurther steps (Apologies, these might be all guesses at this stage)\r\n\r\n1. I have tried setting these within the configuration file (I believe saved here: `volume/inventree-data/config.yaml` to include host, port, sender, tls and ssl variables, however with no change.\r\n2. Feeling like I am missing a step, I am running `docker compose run inventree-server invoke update`\r\n3. 
I am now running 0.10.1 but with the same issue.\r\n\r\n\n\n### Relevant log output\n\n_No response_\n", "before_files": [{"content": "\"\"\"Provides system status functionality checks.\"\"\"\n# -*- coding: utf-8 -*-\n\nimport logging\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom django_q.models import Success\nfrom django_q.monitor import Stat\n\nimport InvenTree.ready\n\nlogger = logging.getLogger(\"inventree\")\n\n\ndef is_worker_running(**kwargs):\n \"\"\"Return True if the background worker process is oprational.\"\"\"\n clusters = Stat.get_all()\n\n if len(clusters) > 0:\n # TODO - Introspect on any cluster information\n return True\n\n \"\"\"\n Sometimes Stat.get_all() returns [].\n In this case we have the 'heartbeat' task running every 5 minutes.\n Check to see if we have any successful result within the last 10 minutes\n \"\"\"\n\n now = timezone.now()\n past = now - timedelta(minutes=10)\n\n results = Success.objects.filter(\n started__gte=past\n )\n\n # If any results are returned, then the background worker is running!\n return results.exists()\n\n\ndef is_email_configured():\n \"\"\"Check if email backend is configured.\n\n NOTE: This does not check if the configuration is valid!\n \"\"\"\n configured = True\n\n if InvenTree.ready.isInTestMode():\n return False\n\n if InvenTree.ready.isImportingData():\n return False\n\n if not settings.EMAIL_HOST:\n configured = False\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST is not configured\")\n\n if not settings.EMAIL_HOST_USER:\n configured = False\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST_USER is not configured\")\n\n if not settings.EMAIL_HOST_PASSWORD:\n configured = False\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST_PASSWORD is not configured\")\n\n return configured\n\n\ndef check_system_health(**kwargs):\n \"\"\"Check that the InvenTree system is running OK.\n\n Returns True if all system checks pass.\n \"\"\"\n result = True\n\n if InvenTree.ready.isInTestMode():\n # Do not perform further checks if we are running unit tests\n return False\n\n if InvenTree.ready.isImportingData():\n # Do not perform further checks if we are importing data\n return False\n\n if not is_worker_running(**kwargs): # pragma: no cover\n result = False\n logger.warning(_(\"Background worker check failed\"))\n\n if not is_email_configured(): # pragma: no cover\n result = False\n logger.warning(_(\"Email backend not configured\"))\n\n if not result: # pragma: no cover\n logger.warning(_(\"InvenTree system health checks failed\"))\n\n return result\n", "path": "InvenTree/InvenTree/status.py"}], "after_files": [{"content": "\"\"\"Provides system status functionality checks.\"\"\"\n# -*- coding: utf-8 -*-\n\nimport logging\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom django_q.models import Success\nfrom django_q.monitor import Stat\n\nimport InvenTree.ready\n\nlogger = logging.getLogger(\"inventree\")\n\n\ndef is_worker_running(**kwargs):\n \"\"\"Return True if the background worker process is oprational.\"\"\"\n clusters = Stat.get_all()\n\n if len(clusters) > 0:\n # TODO - Introspect on any cluster 
information\n return True\n\n \"\"\"\n Sometimes Stat.get_all() returns [].\n In this case we have the 'heartbeat' task running every 5 minutes.\n Check to see if we have any successful result within the last 10 minutes\n \"\"\"\n\n now = timezone.now()\n past = now - timedelta(minutes=10)\n\n results = Success.objects.filter(\n started__gte=past\n )\n\n # If any results are returned, then the background worker is running!\n return results.exists()\n\n\ndef is_email_configured():\n \"\"\"Check if email backend is configured.\n\n NOTE: This does not check if the configuration is valid!\n \"\"\"\n configured = True\n\n if InvenTree.ready.isInTestMode():\n return False\n\n if InvenTree.ready.isImportingData():\n return False\n\n if not settings.EMAIL_HOST:\n configured = False\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST is not configured\")\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST_USER is not configured\")\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST_PASSWORD is not configured\")\n\n return configured\n\n\ndef check_system_health(**kwargs):\n \"\"\"Check that the InvenTree system is running OK.\n\n Returns True if all system checks pass.\n \"\"\"\n result = True\n\n if InvenTree.ready.isInTestMode():\n # Do not perform further checks if we are running unit tests\n return False\n\n if InvenTree.ready.isImportingData():\n # Do not perform further checks if we are importing data\n return False\n\n if not is_worker_running(**kwargs): # pragma: no cover\n result = False\n logger.warning(_(\"Background worker check failed\"))\n\n if not is_email_configured(): # pragma: no cover\n result = False\n logger.warning(_(\"Email backend not configured\"))\n\n if not result: # pragma: no cover\n logger.warning(_(\"InvenTree system health checks failed\"))\n\n return result\n", "path": "InvenTree/InvenTree/status.py"}]}
| 1,531 | 277 |
gh_patches_debug_4837
|
rasdani/github-patches
|
git_diff
|
kivy__python-for-android-2800
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3.10 cffi build fails
<!--
The issue tracker is a tool to address bugs NOT a support platform.
Please use the Discord community or Stack Overflow for support questions,
more information at https://github.com/kivy/python-for-android#support
-->
### Checklist
- [x] the issue is indeed a bug and not a support request
- [x] issue doesn't already exist: https://github.com/kivy/python-for-android/issues
- [x] I have a short, runnable example that reproduces the issue
- [x] I reproduced the problem with the latest development version (`p4a.branch = develop`)
- [x] I used the grave accent (aka backticks) to format code or logs when appropriate
### Versions
- Python: 2.10.6 (OS)
- OS: Ubuntu 22.04.2
- Kivy: 2.2.0
- Cython: 0.29.33
- OpenJDK:
### Description
p4a v2023.5.21 build of cffi fails
### buildozer.spec
Spec file:
```
requirements = python3,kivy, cffi
```
### Logs
```
/home/bobf/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi/include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/python3/arm64-v8a__ndk_target_21/python3/Include -DANDROID -I/home/bobf/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/python-installs/apptest/arm64-v8a/include/python3.1 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/hostpython3/desktop/hostpython3/Include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/hostpython3/desktop/hostpython3/native-build -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.10/c/_cffi_backend.o
c/_cffi_backend.c:407:23: error: expression is not assignable
Py_REFCNT(ct) = 43;
~~~~~~~~~~~~~ ^
c/_cffi_backend.c:410:23: error: expression is not assignable
Py_REFCNT(ct) = 0;
~~~~~~~~~~~~~ ^
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/cffi/__init__.py`
Content:
```
1 import os
2 from pythonforandroid.recipe import CompiledComponentsPythonRecipe
3
4
5 class CffiRecipe(CompiledComponentsPythonRecipe):
6 """
7 Extra system dependencies: autoconf, automake and libtool.
8 """
9 name = 'cffi'
10 version = '1.13.2'
11 url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'
12
13 depends = ['setuptools', 'pycparser', 'libffi']
14
15 patches = ['disable-pkg-config.patch']
16
17 # call_hostpython_via_targetpython = False
18 install_in_hostpython = True
19
20 def get_hostrecipe_env(self, arch=None):
21 # fixes missing ffi.h on some host systems (e.g. gentoo)
22 env = super().get_hostrecipe_env(arch)
23 libffi = self.get_recipe('libffi', self.ctx)
24 includes = libffi.get_include_dirs(arch)
25 env['FFI_INC'] = ",".join(includes)
26 return env
27
28 def get_recipe_env(self, arch=None):
29 env = super().get_recipe_env(arch)
30 libffi = self.get_recipe('libffi', self.ctx)
31 includes = libffi.get_include_dirs(arch)
32 env['CFLAGS'] = ' -I'.join([env.get('CFLAGS', '')] + includes)
33 env['CFLAGS'] += ' -I{}'.format(self.ctx.python_recipe.include_root(arch.arch))
34 env['LDFLAGS'] = (env.get('CFLAGS', '') + ' -L' +
35 self.ctx.get_libs_dir(arch.arch))
36 env['LDFLAGS'] += ' -L{}'.format(os.path.join(self.ctx.bootstrap.build_dir, 'libs', arch.arch))
37 # required for libc and libdl
38 env['LDFLAGS'] += ' -L{}'.format(arch.ndk_lib_dir_versioned)
39 env['PYTHONPATH'] = ':'.join([
40 self.ctx.get_site_packages_dir(arch),
41 env['BUILDLIB_PATH'],
42 ])
43 env['LDFLAGS'] += ' -L{}'.format(self.ctx.python_recipe.link_root(arch.arch))
44 env['LDFLAGS'] += ' -lpython{}'.format(self.ctx.python_recipe.link_version)
45 return env
46
47
48 recipe = CffiRecipe()
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pythonforandroid/recipes/cffi/__init__.py b/pythonforandroid/recipes/cffi/__init__.py
--- a/pythonforandroid/recipes/cffi/__init__.py
+++ b/pythonforandroid/recipes/cffi/__init__.py
@@ -7,7 +7,7 @@
Extra system dependencies: autoconf, automake and libtool.
"""
name = 'cffi'
- version = '1.13.2'
+ version = '1.15.1'
url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'
depends = ['setuptools', 'pycparser', 'libffi']
|
{"golden_diff": "diff --git a/pythonforandroid/recipes/cffi/__init__.py b/pythonforandroid/recipes/cffi/__init__.py\n--- a/pythonforandroid/recipes/cffi/__init__.py\n+++ b/pythonforandroid/recipes/cffi/__init__.py\n@@ -7,7 +7,7 @@\n Extra system dependencies: autoconf, automake and libtool.\n \"\"\"\n name = 'cffi'\n- version = '1.13.2'\n+ version = '1.15.1'\n url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'\n \n depends = ['setuptools', 'pycparser', 'libffi']\n", "issue": "Python 3.10 cffi build fails\n<!--\r\nThe issue tracker is a tool to address bugs NOT a support platform.\r\nPlease use the Discord community or Stack Overflow for support questions,\r\nmore information at https://github.com/kivy/python-for-android#support\r\n-->\r\n\r\n### Checklist\r\n\r\n- [ x] the issue is indeed a bug and not a support request\r\n- [ x] issue doesn't already exist: https://github.com/kivy/python-for-android/issues\r\n- [ x] I have a short, runnable example that reproduces the issue\r\n- [x ] I reproduced the problem with the latest development version (`p4a.branch = develop`)\r\n- [ x] I used the grave accent (aka backticks) to format code or logs when appropriated\r\n\r\n### Versions\r\n\r\n- Python: 2.10.6 (OS)\r\n- OS: Ubuntu 22.04.2\r\n- Kivy: 2.2.0\r\n- Cython: 0.29.33\r\n- OpenJDK:\r\n\r\n### Description\r\n\r\np4a v2023.5.21 build of cffi fails\r\n\r\n### buildozer.spec\r\n\r\n\r\nSpec file:\r\n```\r\nrequirements = python3,kivy, cffi\r\n```\r\n\r\n### Logs\r\n\r\n```\r\n/home/bobf/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi/include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/python3/arm64-v8a__ndk_target_21/python3/Include -DANDROID -I/home/bobf/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/python-installs/apptest/arm64-v8a/include/python3.1 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/hostpython3/desktop/hostpython3/Include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/hostpython3/desktop/hostpython3/native-build -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.10/c/_cffi_backend.o\r\nc/_cffi_backend.c:407:23: error: expression is not assignable\r\n Py_REFCNT(ct) = 43;\r\n ~~~~~~~~~~~~~ ^\r\nc/_cffi_backend.c:410:23: error: expression is not assignable\r\n Py_REFCNT(ct) = 0;\r\n ~~~~~~~~~~~~~ ^\r\n\r\n```\r\n\n", "before_files": [{"content": "import os\nfrom pythonforandroid.recipe import CompiledComponentsPythonRecipe\n\n\nclass CffiRecipe(CompiledComponentsPythonRecipe):\n \"\"\"\n Extra system dependencies: autoconf, automake and libtool.\n \"\"\"\n name = 'cffi'\n version = '1.13.2'\n url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'\n\n depends = ['setuptools', 'pycparser', 'libffi']\n\n patches = ['disable-pkg-config.patch']\n\n # call_hostpython_via_targetpython = False\n install_in_hostpython = True\n\n def get_hostrecipe_env(self, arch=None):\n # fixes missing 
ffi.h on some host systems (e.g. gentoo)\n env = super().get_hostrecipe_env(arch)\n libffi = self.get_recipe('libffi', self.ctx)\n includes = libffi.get_include_dirs(arch)\n env['FFI_INC'] = \",\".join(includes)\n return env\n\n def get_recipe_env(self, arch=None):\n env = super().get_recipe_env(arch)\n libffi = self.get_recipe('libffi', self.ctx)\n includes = libffi.get_include_dirs(arch)\n env['CFLAGS'] = ' -I'.join([env.get('CFLAGS', '')] + includes)\n env['CFLAGS'] += ' -I{}'.format(self.ctx.python_recipe.include_root(arch.arch))\n env['LDFLAGS'] = (env.get('CFLAGS', '') + ' -L' +\n self.ctx.get_libs_dir(arch.arch))\n env['LDFLAGS'] += ' -L{}'.format(os.path.join(self.ctx.bootstrap.build_dir, 'libs', arch.arch))\n # required for libc and libdl\n env['LDFLAGS'] += ' -L{}'.format(arch.ndk_lib_dir_versioned)\n env['PYTHONPATH'] = ':'.join([\n self.ctx.get_site_packages_dir(arch),\n env['BUILDLIB_PATH'],\n ])\n env['LDFLAGS'] += ' -L{}'.format(self.ctx.python_recipe.link_root(arch.arch))\n env['LDFLAGS'] += ' -lpython{}'.format(self.ctx.python_recipe.link_version)\n return env\n\n\nrecipe = CffiRecipe()\n", "path": "pythonforandroid/recipes/cffi/__init__.py"}], "after_files": [{"content": "import os\nfrom pythonforandroid.recipe import CompiledComponentsPythonRecipe\n\n\nclass CffiRecipe(CompiledComponentsPythonRecipe):\n \"\"\"\n Extra system dependencies: autoconf, automake and libtool.\n \"\"\"\n name = 'cffi'\n version = '1.15.1'\n url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'\n\n depends = ['setuptools', 'pycparser', 'libffi']\n\n patches = ['disable-pkg-config.patch']\n\n # call_hostpython_via_targetpython = False\n install_in_hostpython = True\n\n def get_hostrecipe_env(self, arch=None):\n # fixes missing ffi.h on some host systems (e.g. gentoo)\n env = super().get_hostrecipe_env(arch)\n libffi = self.get_recipe('libffi', self.ctx)\n includes = libffi.get_include_dirs(arch)\n env['FFI_INC'] = \",\".join(includes)\n return env\n\n def get_recipe_env(self, arch=None):\n env = super().get_recipe_env(arch)\n libffi = self.get_recipe('libffi', self.ctx)\n includes = libffi.get_include_dirs(arch)\n env['CFLAGS'] = ' -I'.join([env.get('CFLAGS', '')] + includes)\n env['CFLAGS'] += ' -I{}'.format(self.ctx.python_recipe.include_root(arch.arch))\n env['LDFLAGS'] = (env.get('CFLAGS', '') + ' -L' +\n self.ctx.get_libs_dir(arch.arch))\n env['LDFLAGS'] += ' -L{}'.format(os.path.join(self.ctx.bootstrap.build_dir, 'libs', arch.arch))\n # required for libc and libdl\n env['LDFLAGS'] += ' -L{}'.format(arch.ndk_lib_dir_versioned)\n env['PYTHONPATH'] = ':'.join([\n self.ctx.get_site_packages_dir(arch),\n env['BUILDLIB_PATH'],\n ])\n env['LDFLAGS'] += ' -L{}'.format(self.ctx.python_recipe.link_root(arch.arch))\n env['LDFLAGS'] += ' -lpython{}'.format(self.ctx.python_recipe.link_version)\n return env\n\n\nrecipe = CffiRecipe()\n", "path": "pythonforandroid/recipes/cffi/__init__.py"}]}
| 1,595 | 153 |
gh_patches_debug_25476
|
rasdani/github-patches
|
git_diff
|
AnalogJ__lexicon-476
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TLDExtract Private Domains for Dynamic DNS Providers
Hello all,
I'm currently putting together the plugin for dynu.com (listed in NYI providers).
As Dynu also acts as a Dynamic DNS provider with several toplevel domains as the base for the dynamic domain (`yourhost.dynu.net`, `yourhost.freeddns.org`..., also wildcards), I had some trouble putting together the plugin.
As an example, I'm making up `mydomain.dynu.net` as my target dynamic dns hostname.
Now, the `tldextract` package used to determine the part of the domain that belongs to the toplevel will spit out `net` as the toplevel, `dynu` as the TLD and then drops `mydomain` in further processing as seen [in client.py](../blob/master/lexicon/client.py#L43).
In turn, finding the right domain from the list of dns entries in `_authenticate` is not possible by default (as `self.domain` is set to `dynu.net`).
I discovered two workarounds for this:
1. use `--delegated "mydomain.dynu.net"` to explicitly target the subdomain
2. change the code [in client.py](../blob/master/lexicon/client.py#L41) to this:
```python
extract = tldextract.TLDExtract(include_psl_private_domains=True)
# Process domain, strip subdomain
domain_parts = extract(
self.config.resolve('lexicon:domain'))
runtime_config['domain'] = '{0}.{1}'.format(
domain_parts.domain, domain_parts.suffix)
```
The latter is taken from [the tldextract README](https://github.com/john-kurkowski/tldextract#public-vs-private-domains).
And because Dynu probably isn't the only Dynamic DNS provider using subdomains for their users, I guess this should be the default solution.
There's a catch, however, that is still in ongoing development [tldextract#144](https://github.com/john-kurkowski/tldextract/pull/144):
The list of TLDs is cached on first load of the extension, so if the config is not set to `include_psl_private_domains` before the package is first initialized, it won't work. So either an update has to be triggered manually, or lexicon should be installed and used from a virtualenv in the first place.
Since I'm already making use of method 2 in my dev environment, I could open a PR right away, but I'm not 100% sure on side effects for other plugins, hence my hesitation.
Thanks and best,
Chris
edit// whitespace in codeblock, typos, grammar
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lexicon/client.py`
Content:
```
1 """Main module of Lexicon. Defines the Client class, that holds all Lexicon logic."""
2 from __future__ import absolute_import
3 import importlib
4
5 import tldextract
6
7 from lexicon import discovery
8 from lexicon.config import (
9 ConfigResolver, DictConfigSource,
10 legacy_config_resolver, non_interactive_config_resolver,
11 )
12
13
14 class ProviderNotAvailableError(Exception):
15 """
16 Custom exception to raise when a provider is not available,
17 typically because some optional dependencies are missing
18 """
19
20
21 class Client(object): # pylint: disable=useless-object-inheritance,too-few-public-methods
22 """This is the Lexicon client, that will execute all the logic."""
23
24 def __init__(self, config=None):
25 if not config:
26 # If there is not config specified, we load a non-interactive configuration.
27 self.config = non_interactive_config_resolver()
28 elif not isinstance(config, ConfigResolver):
29 # If config is not a ConfigResolver, we are in a legacy situation.
30 # We protect this part of the Client API.
31 self.config = legacy_config_resolver(config)
32 else:
33 self.config = config
34
35 # Validate configuration
36 self._validate_config()
37
38 runtime_config = {}
39
40 # Process domain, strip subdomain
41 domain_parts = tldextract.extract(
42 self.config.resolve('lexicon:domain'))
43 runtime_config['domain'] = '{0}.{1}'.format(
44 domain_parts.domain, domain_parts.suffix)
45
46 if self.config.resolve('lexicon:delegated'):
47 # handle delegated domain
48 delegated = self.config.resolve('lexicon:delegated').rstrip('.')
49 if delegated != runtime_config.get('domain'):
50 # convert to relative name
51 if delegated.endswith(runtime_config.get('domain')):
52 delegated = delegated[:-len(runtime_config.get('domain'))]
53 delegated = delegated.rstrip('.')
54 # update domain
55 runtime_config['domain'] = '{0}.{1}'.format(
56 delegated, runtime_config.get('domain'))
57
58 self.action = self.config.resolve('lexicon:action')
59 self.provider_name = (self.config.resolve('lexicon:provider_name')
60 or self.config.resolve('lexicon:provider'))
61
62 self.config.add_config_source(DictConfigSource(runtime_config), 0)
63
64 provider_module = importlib.import_module(
65 'lexicon.providers.' + self.provider_name)
66 provider_class = getattr(provider_module, 'Provider')
67 self.provider = provider_class(self.config)
68
69 def execute(self):
70 """Execute provided configuration in class constructor to the DNS records"""
71 self.provider.authenticate()
72 identifier = self.config.resolve('lexicon:identifier')
73 record_type = self.config.resolve('lexicon:type')
74 name = self.config.resolve('lexicon:name')
75 content = self.config.resolve('lexicon:content')
76
77 if self.action == 'create':
78 return self.provider.create_record(record_type, name, content)
79
80 if self.action == 'list':
81 return self.provider.list_records(record_type, name, content)
82
83 if self.action == 'update':
84 return self.provider.update_record(identifier, record_type, name, content)
85
86 if self.action == 'delete':
87 return self.provider.delete_record(identifier, record_type, name, content)
88
89 raise ValueError('Invalid action statement: {0}'.format(self.action))
90
91 def _validate_config(self):
92 provider_name = self.config.resolve('lexicon:provider_name')
93 if not self.config.resolve('lexicon:provider_name'):
94 raise AttributeError('provider_name')
95
96 try:
97 available = discovery.find_providers()[self.config.resolve('lexicon:provider_name')]
98 except KeyError:
99 raise ProviderNotAvailableError('This provider ({0}) is not supported by Lexicon.'
100 .format(provider_name))
101 else:
102 if not available:
103 raise ProviderNotAvailableError(
104 'This provider ({0}) has required dependencies that are missing. '
105 'Please install lexicon[{0}] first.'.format(provider_name))
106
107 if not self.config.resolve('lexicon:action'):
108 raise AttributeError('action')
109 if not self.config.resolve('lexicon:domain'):
110 raise AttributeError('domain')
111 if not self.config.resolve('lexicon:type'):
112 raise AttributeError('type')
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lexicon/client.py b/lexicon/client.py
--- a/lexicon/client.py
+++ b/lexicon/client.py
@@ -1,6 +1,7 @@
"""Main module of Lexicon. Defines the Client class, that holds all Lexicon logic."""
from __future__ import absolute_import
import importlib
+import os
import tldextract
@@ -10,6 +11,9 @@
legacy_config_resolver, non_interactive_config_resolver,
)
+TLDEXTRACT_CACHE_FILE_DEFAULT = os.path.join('~', '.lexicon_tld_set')
+TLDEXTRACT_CACHE_FILE = os.path.expanduser(os.environ.get("LEXICON_TLDEXTRACT_CACHE",
+ TLDEXTRACT_CACHE_FILE_DEFAULT))
class ProviderNotAvailableError(Exception):
"""
@@ -38,7 +42,9 @@
runtime_config = {}
# Process domain, strip subdomain
- domain_parts = tldextract.extract(
+ domain_extractor = tldextract.TLDExtract(cache_file=TLDEXTRACT_CACHE_FILE,
+ include_psl_private_domains=True)
+ domain_parts = domain_extractor(
self.config.resolve('lexicon:domain'))
runtime_config['domain'] = '{0}.{1}'.format(
domain_parts.domain, domain_parts.suffix)
|
{"golden_diff": "diff --git a/lexicon/client.py b/lexicon/client.py\n--- a/lexicon/client.py\n+++ b/lexicon/client.py\n@@ -1,6 +1,7 @@\n \"\"\"Main module of Lexicon. Defines the Client class, that holds all Lexicon logic.\"\"\"\n from __future__ import absolute_import\n import importlib\n+import os\n \n import tldextract\n \n@@ -10,6 +11,9 @@\n legacy_config_resolver, non_interactive_config_resolver,\n )\n \n+TLDEXTRACT_CACHE_FILE_DEFAULT = os.path.join('~', '.lexicon_tld_set')\n+TLDEXTRACT_CACHE_FILE = os.path.expanduser(os.environ.get(\"LEXICON_TLDEXTRACT_CACHE\",\n+ TLDEXTRACT_CACHE_FILE_DEFAULT))\n \n class ProviderNotAvailableError(Exception):\n \"\"\"\n@@ -38,7 +42,9 @@\n runtime_config = {}\n \n # Process domain, strip subdomain\n- domain_parts = tldextract.extract(\n+ domain_extractor = tldextract.TLDExtract(cache_file=TLDEXTRACT_CACHE_FILE,\n+ include_psl_private_domains=True)\n+ domain_parts = domain_extractor(\n self.config.resolve('lexicon:domain'))\n runtime_config['domain'] = '{0}.{1}'.format(\n domain_parts.domain, domain_parts.suffix)\n", "issue": "TLDExtract Private Domains for Dynamic DNS Providers\nHello all,\r\n\r\nI'm currently putting together the plugin for dynu.com (listed in NYI proiders).\r\nAs Dynu also acts as Dynamic DNS provider with several toplevel domains as base for the dynamic domain (`yourhost.dynu.net`, `yourhost.freeddns.org`..., also, wildcards), I had some trouble putting together the plugin.\r\n\r\nAs an example, I'm making up `mydomain.dynu.net` as my target dynamic dns hostname.\r\n\r\nNow, the `tldextract` package used to determine the part of the domain that belongs to the toplevel will spit out `net` as the toplevel, `dynu` as the TLD and then drops `mydomain` in further processing as seen [in client.py](../blob/master/lexicon/client.py#L43).\r\n\r\nIn turn, finding the right domain from the list of dns entries in `_authenticate` is not possible by default (as `self.domain` is set to `dynu.net`).\r\n\r\nI discovered two workarounds for this:\r\n\r\n1. use `--delegated \"mydomain.dynu.net\"` to explicitly target the subdomain\r\n2. change the code [in client.py](../blob/master/lexicon/client.py#L41) to this:\r\n\r\n```python\r\nextract = tldextract.TLDExtract(include_psl_private_domains=True)\r\n\r\n# Process domain, strip subdomain\r\ndomain_parts = extract(\r\n self.config.resolve('lexicon:domain'))\r\nruntime_config['domain'] = '{0}.{1}'.format(\r\n domain_parts.domain, domain_parts.suffix)\r\n```\r\n\r\nThe latter is taken from [the tldextract README](https://github.com/john-kurkowski/tldextract#public-vs-private-domains).\r\nAnd because Dynu probably isn't the only Dynamic DNS provider using subdomains for their users, I guess this should be the default solution.\r\nThere's a catch however that is still in ongoing development [tldextract#144](https://github.com/john-kurkowski/tldextract/pull/144):\r\nThe list of TLDs is cached on first load of the extension, so if the config is not set to `include_psl_private_domains` before the package is first initialized, it won't work. So either an update has to be triggered manually, or, lexicon should be installed and used from a virtualenv in the first place.\r\n\r\nSince I'm already making use of method 2 in my dev enviroment, I could open a PR right away, but I'm not 100% sure on side effects for other plugins, hence my hesitation.\r\n\r\nThanks and best,\r\nChris\r\n\r\nedit// whitespace in codeblock, typos, grammar\n", "before_files": [{"content": "\"\"\"Main module of Lexicon. 
Defines the Client class, that holds all Lexicon logic.\"\"\"\nfrom __future__ import absolute_import\nimport importlib\n\nimport tldextract\n\nfrom lexicon import discovery\nfrom lexicon.config import (\n ConfigResolver, DictConfigSource,\n legacy_config_resolver, non_interactive_config_resolver,\n)\n\n\nclass ProviderNotAvailableError(Exception):\n \"\"\"\n Custom exception to raise when a provider is not available,\n typically because some optional dependencies are missing\n \"\"\"\n\n\nclass Client(object): # pylint: disable=useless-object-inheritance,too-few-public-methods\n \"\"\"This is the Lexicon client, that will execute all the logic.\"\"\"\n\n def __init__(self, config=None):\n if not config:\n # If there is not config specified, we load a non-interactive configuration.\n self.config = non_interactive_config_resolver()\n elif not isinstance(config, ConfigResolver):\n # If config is not a ConfigResolver, we are in a legacy situation.\n # We protect this part of the Client API.\n self.config = legacy_config_resolver(config)\n else:\n self.config = config\n\n # Validate configuration\n self._validate_config()\n\n runtime_config = {}\n\n # Process domain, strip subdomain\n domain_parts = tldextract.extract(\n self.config.resolve('lexicon:domain'))\n runtime_config['domain'] = '{0}.{1}'.format(\n domain_parts.domain, domain_parts.suffix)\n\n if self.config.resolve('lexicon:delegated'):\n # handle delegated domain\n delegated = self.config.resolve('lexicon:delegated').rstrip('.')\n if delegated != runtime_config.get('domain'):\n # convert to relative name\n if delegated.endswith(runtime_config.get('domain')):\n delegated = delegated[:-len(runtime_config.get('domain'))]\n delegated = delegated.rstrip('.')\n # update domain\n runtime_config['domain'] = '{0}.{1}'.format(\n delegated, runtime_config.get('domain'))\n\n self.action = self.config.resolve('lexicon:action')\n self.provider_name = (self.config.resolve('lexicon:provider_name')\n or self.config.resolve('lexicon:provider'))\n\n self.config.add_config_source(DictConfigSource(runtime_config), 0)\n\n provider_module = importlib.import_module(\n 'lexicon.providers.' 
+ self.provider_name)\n provider_class = getattr(provider_module, 'Provider')\n self.provider = provider_class(self.config)\n\n def execute(self):\n \"\"\"Execute provided configuration in class constructor to the DNS records\"\"\"\n self.provider.authenticate()\n identifier = self.config.resolve('lexicon:identifier')\n record_type = self.config.resolve('lexicon:type')\n name = self.config.resolve('lexicon:name')\n content = self.config.resolve('lexicon:content')\n\n if self.action == 'create':\n return self.provider.create_record(record_type, name, content)\n\n if self.action == 'list':\n return self.provider.list_records(record_type, name, content)\n\n if self.action == 'update':\n return self.provider.update_record(identifier, record_type, name, content)\n\n if self.action == 'delete':\n return self.provider.delete_record(identifier, record_type, name, content)\n\n raise ValueError('Invalid action statement: {0}'.format(self.action))\n\n def _validate_config(self):\n provider_name = self.config.resolve('lexicon:provider_name')\n if not self.config.resolve('lexicon:provider_name'):\n raise AttributeError('provider_name')\n\n try:\n available = discovery.find_providers()[self.config.resolve('lexicon:provider_name')]\n except KeyError:\n raise ProviderNotAvailableError('This provider ({0}) is not supported by Lexicon.'\n .format(provider_name))\n else:\n if not available:\n raise ProviderNotAvailableError(\n 'This provider ({0}) has required dependencies that are missing. '\n 'Please install lexicon[{0}] first.'.format(provider_name))\n\n if not self.config.resolve('lexicon:action'):\n raise AttributeError('action')\n if not self.config.resolve('lexicon:domain'):\n raise AttributeError('domain')\n if not self.config.resolve('lexicon:type'):\n raise AttributeError('type')\n", "path": "lexicon/client.py"}], "after_files": [{"content": "\"\"\"Main module of Lexicon. 
Defines the Client class, that holds all Lexicon logic.\"\"\"\nfrom __future__ import absolute_import\nimport importlib\nimport os\n\nimport tldextract\n\nfrom lexicon import discovery\nfrom lexicon.config import (\n ConfigResolver, DictConfigSource,\n legacy_config_resolver, non_interactive_config_resolver,\n)\n\nTLDEXTRACT_CACHE_FILE_DEFAULT = os.path.join('~', '.lexicon_tld_set')\nTLDEXTRACT_CACHE_FILE = os.path.expanduser(os.environ.get(\"LEXICON_TLDEXTRACT_CACHE\",\n TLDEXTRACT_CACHE_FILE_DEFAULT))\n\nclass ProviderNotAvailableError(Exception):\n \"\"\"\n Custom exception to raise when a provider is not available,\n typically because some optional dependencies are missing\n \"\"\"\n\n\nclass Client(object): # pylint: disable=useless-object-inheritance,too-few-public-methods\n \"\"\"This is the Lexicon client, that will execute all the logic.\"\"\"\n\n def __init__(self, config=None):\n if not config:\n # If there is not config specified, we load a non-interactive configuration.\n self.config = non_interactive_config_resolver()\n elif not isinstance(config, ConfigResolver):\n # If config is not a ConfigResolver, we are in a legacy situation.\n # We protect this part of the Client API.\n self.config = legacy_config_resolver(config)\n else:\n self.config = config\n\n # Validate configuration\n self._validate_config()\n\n runtime_config = {}\n\n # Process domain, strip subdomain\n domain_extractor = tldextract.TLDExtract(cache_file=TLDEXTRACT_CACHE_FILE,\n include_psl_private_domains=True)\n domain_parts = domain_extractor(\n self.config.resolve('lexicon:domain'))\n runtime_config['domain'] = '{0}.{1}'.format(\n domain_parts.domain, domain_parts.suffix)\n\n if self.config.resolve('lexicon:delegated'):\n # handle delegated domain\n delegated = self.config.resolve('lexicon:delegated').rstrip('.')\n if delegated != runtime_config.get('domain'):\n # convert to relative name\n if delegated.endswith(runtime_config.get('domain')):\n delegated = delegated[:-len(runtime_config.get('domain'))]\n delegated = delegated.rstrip('.')\n # update domain\n runtime_config['domain'] = '{0}.{1}'.format(\n delegated, runtime_config.get('domain'))\n\n self.action = self.config.resolve('lexicon:action')\n self.provider_name = (self.config.resolve('lexicon:provider_name')\n or self.config.resolve('lexicon:provider'))\n\n self.config.add_config_source(DictConfigSource(runtime_config), 0)\n\n provider_module = importlib.import_module(\n 'lexicon.providers.' 
+ self.provider_name)\n provider_class = getattr(provider_module, 'Provider')\n self.provider = provider_class(self.config)\n\n def execute(self):\n \"\"\"Execute provided configuration in class constructor to the DNS records\"\"\"\n self.provider.authenticate()\n identifier = self.config.resolve('lexicon:identifier')\n record_type = self.config.resolve('lexicon:type')\n name = self.config.resolve('lexicon:name')\n content = self.config.resolve('lexicon:content')\n\n if self.action == 'create':\n return self.provider.create_record(record_type, name, content)\n\n if self.action == 'list':\n return self.provider.list_records(record_type, name, content)\n\n if self.action == 'update':\n return self.provider.update_record(identifier, record_type, name, content)\n\n if self.action == 'delete':\n return self.provider.delete_record(identifier, record_type, name, content)\n\n raise ValueError('Invalid action statement: {0}'.format(self.action))\n\n def _validate_config(self):\n provider_name = self.config.resolve('lexicon:provider_name')\n if not self.config.resolve('lexicon:provider_name'):\n raise AttributeError('provider_name')\n\n try:\n available = discovery.find_providers()[self.config.resolve('lexicon:provider_name')]\n except KeyError:\n raise ProviderNotAvailableError('This provider ({0}) is not supported by Lexicon.'\n .format(provider_name))\n else:\n if not available:\n raise ProviderNotAvailableError(\n 'This provider ({0}) has required dependencies that are missing. '\n 'Please install lexicon[{0}] first.'.format(provider_name))\n\n if not self.config.resolve('lexicon:action'):\n raise AttributeError('action')\n if not self.config.resolve('lexicon:domain'):\n raise AttributeError('domain')\n if not self.config.resolve('lexicon:type'):\n raise AttributeError('type')\n", "path": "lexicon/client.py"}]}
| 1,956 | 277 |
gh_patches_debug_3931
|
rasdani/github-patches
|
git_diff
|
google__clusterfuzz-995
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Initialize issue_body_footer when reading issue tracker config
It seems like we forgot to add initialization of `issue_body_footer` in https://github.com/google/clusterfuzz/blob/dfe686cde7429ed0ba482b0b0b4f27106b0a0d5f/src/appengine/libs/issue_management/issue_tracker_policy.py#L114, which is why the issues filed recently do not have the text explaining that the OSS-Fuzz team doesn't read bug tracker comments and can be contacted via GitHub only.
Bug example: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=17216
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/appengine/libs/issue_management/issue_tracker_policy.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13 """Issue tracker policy."""
14
15 from builtins import object
16 from collections import namedtuple
17
18 from config import local_config
19
20 Status = namedtuple('Status',
21 ['assigned', 'duplicate', 'wontfix', 'fixed', 'verified'])
22
23 EXPECTED_STATUSES = [
24 'assigned',
25 'duplicate',
26 'wontfix',
27 'fixed',
28 'verified',
29 'new',
30 ]
31
32
33 class ConfigurationError(Exception):
34 """Base configuration error class."""
35
36
37 class NewIssuePolicy(object):
38 """New issue policy."""
39
40 def __init__(self):
41 self.status = ''
42 self.ccs = []
43 self.labels = []
44 self.issue_body_footer = ''
45
46
47 def _to_str_list(values):
48 """Convert a list to a list of strs."""
49 return [str(value) for value in values]
50
51
52 class IssueTrackerPolicy(object):
53 """Represents an issue tracker policy."""
54
55 def __init__(self, data):
56 self._data = data
57 if 'status' not in self._data:
58 raise ConfigurationError('Status not set in policies.')
59
60 if 'labels' not in self._data:
61 raise ConfigurationError('Labels not set in policies.')
62
63 for status in EXPECTED_STATUSES:
64 if status not in self._data['status']:
65 raise ConfigurationError(
66 'Expected status {} is not set.'.format(status))
67
68 def status(self, status_type):
69 """Get the actual status string for the given type."""
70 return self._data['status'][status_type]
71
72 def label(self, label_type):
73 """Get the actual label string for the given type."""
74 label = self._data['labels'].get(label_type)
75 if label is None:
76 return None
77
78 return str(label)
79
80 def substitution_mapping(self, label):
81 """Get an explicit substitution mapping."""
82 if 'substitutions' not in self._data:
83 return label
84
85 mapped = self._data['substitutions'].get(label)
86 if not mapped:
87 return label
88
89 return str(mapped)
90
91 @property
92 def deadline_policy_message(self):
93 """Get the deadline policy message, if if exists."""
94 return self._data.get('deadline_policy_message')
95
96 def get_new_issue_properties(self, is_security, is_crash):
97 """Get the properties to apply to a new issue."""
98 policy = NewIssuePolicy()
99
100 if 'all' in self._data:
101 self._apply_new_issue_properties(policy, self._data['all'], is_crash)
102
103 if is_security:
104 if 'security' in self._data:
105 self._apply_new_issue_properties(policy, self._data['security'],
106 is_crash)
107 else:
108 if 'non_security' in self._data:
109 self._apply_new_issue_properties(policy, self._data['non_security'],
110 is_crash)
111
112 return policy
113
114 def _apply_new_issue_properties(self, policy, issue_type, is_crash):
115 """Apply issue policies."""
116 if not issue_type:
117 return
118
119 if 'status' in issue_type:
120 policy.status = self._data['status'][issue_type['status']]
121
122 if 'ccs' in issue_type:
123 policy.labels.extend(issue_type['ccs'])
124
125 labels = issue_type.get('labels')
126 if labels:
127 policy.labels.extend(_to_str_list(labels))
128
129 if is_crash:
130 crash_labels = issue_type.get('crash_labels')
131 if crash_labels:
132 policy.labels.extend(_to_str_list(crash_labels))
133 else:
134 non_crash_labels = issue_type.get('non_crash_labels')
135 if non_crash_labels:
136 policy.labels.extend(_to_str_list(non_crash_labels))
137
138 def get_existing_issue_properties(self):
139 """Get the properties to apply to a new issue."""
140 policy = NewIssuePolicy()
141
142 if 'existing' in self._data:
143 self._apply_new_issue_properties(policy, self._data['existing'], False)
144
145 return policy
146
147
148 def get(project_name):
149 """Get policy."""
150 issue_tracker_config = local_config.IssueTrackerConfig()
151 project_config = issue_tracker_config.get(project_name)
152 if not project_config:
153 raise ConfigurationError(
154 'Issue tracker for {} does not exist'.format(project_name))
155
156 if not 'policies' in project_config:
157 raise ConfigurationError(
158 'Policies for {} do not exist'.format(project_name))
159
160 return IssueTrackerPolicy(project_config['policies'])
161
162
163 def get_empty():
164 """Get an empty policy."""
165 return IssueTrackerPolicy({
166 'status': {
167 'assigned': 'unused',
168 'duplicate': 'unused',
169 'wontfix': 'unused',
170 'fixed': 'unused',
171 'verified': 'unused',
172 'new': 'unused',
173 },
174 'labels': {},
175 })
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/appengine/libs/issue_management/issue_tracker_policy.py b/src/appengine/libs/issue_management/issue_tracker_policy.py
--- a/src/appengine/libs/issue_management/issue_tracker_policy.py
+++ b/src/appengine/libs/issue_management/issue_tracker_policy.py
@@ -126,6 +126,10 @@
if labels:
policy.labels.extend(_to_str_list(labels))
+ issue_body_footer = issue_type.get('issue_body_footer')
+ if issue_body_footer:
+ policy.issue_body_footer = issue_body_footer
+
if is_crash:
crash_labels = issue_type.get('crash_labels')
if crash_labels:
|
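For illustration, a minimal self-contained sketch of the patched behaviour above — the `NewIssuePolicy` fields and the `issue_body_footer` config key mirror the file shown earlier, while the sample config dict is hypothetical:

```python
# Reduced stand-in for NewIssuePolicy; only the fields relevant here.
class NewIssuePolicy:
    def __init__(self):
        self.labels = []
        self.issue_body_footer = ''


def apply_new_issue_properties(policy, issue_type):
    """Mirror of the patched _apply_new_issue_properties logic (labels + footer only)."""
    if not issue_type:
        return
    labels = issue_type.get('labels')
    if labels:
        policy.labels.extend(str(label) for label in labels)
    # The fix: copy issue_body_footer from the policy config onto the new-issue policy.
    issue_body_footer = issue_type.get('issue_body_footer')
    if issue_body_footer:
        policy.issue_body_footer = issue_body_footer


# Hypothetical policy config for a project.
config = {
    'labels': ['Reproducible'],
    'issue_body_footer': 'Comments on this tracker are not monitored; please use GitHub.',
}
policy = NewIssuePolicy()
apply_new_issue_properties(policy, config)
print(policy.labels, policy.issue_body_footer)
```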
{"golden_diff": "diff --git a/src/appengine/libs/issue_management/issue_tracker_policy.py b/src/appengine/libs/issue_management/issue_tracker_policy.py\n--- a/src/appengine/libs/issue_management/issue_tracker_policy.py\n+++ b/src/appengine/libs/issue_management/issue_tracker_policy.py\n@@ -126,6 +126,10 @@\n if labels:\n policy.labels.extend(_to_str_list(labels))\n \n+ issue_body_footer = issue_type.get('issue_body_footer')\n+ if issue_body_footer:\n+ policy.issue_body_footer = issue_body_footer\n+\n if is_crash:\n crash_labels = issue_type.get('crash_labels')\n if crash_labels:\n", "issue": "Initialize issue_body_footer when reading issue tracker config\nIt seems like we forgot to add initialization of `issue_body_footer` in https://github.com/google/clusterfuzz/blob/dfe686cde7429ed0ba482b0b0b4f27106b0a0d5f/src/appengine/libs/issue_management/issue_tracker_policy.py#L114, that's why the issues filed recently do not have the text explaining that OSS-Fuzz team doesn't read bug tracked comments and can be contacted via GitHub only.\r\n\r\nBug example: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=17216\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Issue tracker policy.\"\"\"\n\nfrom builtins import object\nfrom collections import namedtuple\n\nfrom config import local_config\n\nStatus = namedtuple('Status',\n ['assigned', 'duplicate', 'wontfix', 'fixed', 'verified'])\n\nEXPECTED_STATUSES = [\n 'assigned',\n 'duplicate',\n 'wontfix',\n 'fixed',\n 'verified',\n 'new',\n]\n\n\nclass ConfigurationError(Exception):\n \"\"\"Base configuration error class.\"\"\"\n\n\nclass NewIssuePolicy(object):\n \"\"\"New issue policy.\"\"\"\n\n def __init__(self):\n self.status = ''\n self.ccs = []\n self.labels = []\n self.issue_body_footer = ''\n\n\ndef _to_str_list(values):\n \"\"\"Convert a list to a list of strs.\"\"\"\n return [str(value) for value in values]\n\n\nclass IssueTrackerPolicy(object):\n \"\"\"Represents an issue tracker policy.\"\"\"\n\n def __init__(self, data):\n self._data = data\n if 'status' not in self._data:\n raise ConfigurationError('Status not set in policies.')\n\n if 'labels' not in self._data:\n raise ConfigurationError('Labels not set in policies.')\n\n for status in EXPECTED_STATUSES:\n if status not in self._data['status']:\n raise ConfigurationError(\n 'Expected status {} is not set.'.format(status))\n\n def status(self, status_type):\n \"\"\"Get the actual status string for the given type.\"\"\"\n return self._data['status'][status_type]\n\n def label(self, label_type):\n \"\"\"Get the actual label string for the given type.\"\"\"\n label = self._data['labels'].get(label_type)\n if label is None:\n return None\n\n return str(label)\n\n def substitution_mapping(self, label):\n \"\"\"Get an explicit substitution mapping.\"\"\"\n if 'substitutions' not in self._data:\n return label\n\n mapped = self._data['substitutions'].get(label)\n if not mapped:\n return label\n\n return str(mapped)\n\n @property\n 
def deadline_policy_message(self):\n \"\"\"Get the deadline policy message, if if exists.\"\"\"\n return self._data.get('deadline_policy_message')\n\n def get_new_issue_properties(self, is_security, is_crash):\n \"\"\"Get the properties to apply to a new issue.\"\"\"\n policy = NewIssuePolicy()\n\n if 'all' in self._data:\n self._apply_new_issue_properties(policy, self._data['all'], is_crash)\n\n if is_security:\n if 'security' in self._data:\n self._apply_new_issue_properties(policy, self._data['security'],\n is_crash)\n else:\n if 'non_security' in self._data:\n self._apply_new_issue_properties(policy, self._data['non_security'],\n is_crash)\n\n return policy\n\n def _apply_new_issue_properties(self, policy, issue_type, is_crash):\n \"\"\"Apply issue policies.\"\"\"\n if not issue_type:\n return\n\n if 'status' in issue_type:\n policy.status = self._data['status'][issue_type['status']]\n\n if 'ccs' in issue_type:\n policy.labels.extend(issue_type['ccs'])\n\n labels = issue_type.get('labels')\n if labels:\n policy.labels.extend(_to_str_list(labels))\n\n if is_crash:\n crash_labels = issue_type.get('crash_labels')\n if crash_labels:\n policy.labels.extend(_to_str_list(crash_labels))\n else:\n non_crash_labels = issue_type.get('non_crash_labels')\n if non_crash_labels:\n policy.labels.extend(_to_str_list(non_crash_labels))\n\n def get_existing_issue_properties(self):\n \"\"\"Get the properties to apply to a new issue.\"\"\"\n policy = NewIssuePolicy()\n\n if 'existing' in self._data:\n self._apply_new_issue_properties(policy, self._data['existing'], False)\n\n return policy\n\n\ndef get(project_name):\n \"\"\"Get policy.\"\"\"\n issue_tracker_config = local_config.IssueTrackerConfig()\n project_config = issue_tracker_config.get(project_name)\n if not project_config:\n raise ConfigurationError(\n 'Issue tracker for {} does not exist'.format(project_name))\n\n if not 'policies' in project_config:\n raise ConfigurationError(\n 'Policies for {} do not exist'.format(project_name))\n\n return IssueTrackerPolicy(project_config['policies'])\n\n\ndef get_empty():\n \"\"\"Get an empty policy.\"\"\"\n return IssueTrackerPolicy({\n 'status': {\n 'assigned': 'unused',\n 'duplicate': 'unused',\n 'wontfix': 'unused',\n 'fixed': 'unused',\n 'verified': 'unused',\n 'new': 'unused',\n },\n 'labels': {},\n })\n", "path": "src/appengine/libs/issue_management/issue_tracker_policy.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Issue tracker policy.\"\"\"\n\nfrom builtins import object\nfrom collections import namedtuple\n\nfrom config import local_config\n\nStatus = namedtuple('Status',\n ['assigned', 'duplicate', 'wontfix', 'fixed', 'verified'])\n\nEXPECTED_STATUSES = [\n 'assigned',\n 'duplicate',\n 'wontfix',\n 'fixed',\n 'verified',\n 'new',\n]\n\n\nclass ConfigurationError(Exception):\n \"\"\"Base configuration error class.\"\"\"\n\n\nclass NewIssuePolicy(object):\n \"\"\"New issue policy.\"\"\"\n\n def __init__(self):\n self.status = ''\n 
self.ccs = []\n self.labels = []\n self.issue_body_footer = ''\n\n\ndef _to_str_list(values):\n \"\"\"Convert a list to a list of strs.\"\"\"\n return [str(value) for value in values]\n\n\nclass IssueTrackerPolicy(object):\n \"\"\"Represents an issue tracker policy.\"\"\"\n\n def __init__(self, data):\n self._data = data\n if 'status' not in self._data:\n raise ConfigurationError('Status not set in policies.')\n\n if 'labels' not in self._data:\n raise ConfigurationError('Labels not set in policies.')\n\n for status in EXPECTED_STATUSES:\n if status not in self._data['status']:\n raise ConfigurationError(\n 'Expected status {} is not set.'.format(status))\n\n def status(self, status_type):\n \"\"\"Get the actual status string for the given type.\"\"\"\n return self._data['status'][status_type]\n\n def label(self, label_type):\n \"\"\"Get the actual label string for the given type.\"\"\"\n label = self._data['labels'].get(label_type)\n if label is None:\n return None\n\n return str(label)\n\n def substitution_mapping(self, label):\n \"\"\"Get an explicit substitution mapping.\"\"\"\n if 'substitutions' not in self._data:\n return label\n\n mapped = self._data['substitutions'].get(label)\n if not mapped:\n return label\n\n return str(mapped)\n\n @property\n def deadline_policy_message(self):\n \"\"\"Get the deadline policy message, if if exists.\"\"\"\n return self._data.get('deadline_policy_message')\n\n def get_new_issue_properties(self, is_security, is_crash):\n \"\"\"Get the properties to apply to a new issue.\"\"\"\n policy = NewIssuePolicy()\n\n if 'all' in self._data:\n self._apply_new_issue_properties(policy, self._data['all'], is_crash)\n\n if is_security:\n if 'security' in self._data:\n self._apply_new_issue_properties(policy, self._data['security'],\n is_crash)\n else:\n if 'non_security' in self._data:\n self._apply_new_issue_properties(policy, self._data['non_security'],\n is_crash)\n\n return policy\n\n def _apply_new_issue_properties(self, policy, issue_type, is_crash):\n \"\"\"Apply issue policies.\"\"\"\n if not issue_type:\n return\n\n if 'status' in issue_type:\n policy.status = self._data['status'][issue_type['status']]\n\n if 'ccs' in issue_type:\n policy.labels.extend(issue_type['ccs'])\n\n labels = issue_type.get('labels')\n if labels:\n policy.labels.extend(_to_str_list(labels))\n\n issue_body_footer = issue_type.get('issue_body_footer')\n if issue_body_footer:\n policy.issue_body_footer = issue_body_footer\n\n if is_crash:\n crash_labels = issue_type.get('crash_labels')\n if crash_labels:\n policy.labels.extend(_to_str_list(crash_labels))\n else:\n non_crash_labels = issue_type.get('non_crash_labels')\n if non_crash_labels:\n policy.labels.extend(_to_str_list(non_crash_labels))\n\n def get_existing_issue_properties(self):\n \"\"\"Get the properties to apply to a new issue.\"\"\"\n policy = NewIssuePolicy()\n\n if 'existing' in self._data:\n self._apply_new_issue_properties(policy, self._data['existing'], False)\n\n return policy\n\n\ndef get(project_name):\n \"\"\"Get policy.\"\"\"\n issue_tracker_config = local_config.IssueTrackerConfig()\n project_config = issue_tracker_config.get(project_name)\n if not project_config:\n raise ConfigurationError(\n 'Issue tracker for {} does not exist'.format(project_name))\n\n if not 'policies' in project_config:\n raise ConfigurationError(\n 'Policies for {} do not exist'.format(project_name))\n\n return IssueTrackerPolicy(project_config['policies'])\n\n\ndef get_empty():\n \"\"\"Get an empty policy.\"\"\"\n return 
IssueTrackerPolicy({\n 'status': {\n 'assigned': 'unused',\n 'duplicate': 'unused',\n 'wontfix': 'unused',\n 'fixed': 'unused',\n 'verified': 'unused',\n 'new': 'unused',\n },\n 'labels': {},\n })\n", "path": "src/appengine/libs/issue_management/issue_tracker_policy.py"}]}
| 1,989 | 146 |
gh_patches_debug_3903
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-1674
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
archinstall won't start due to TypeError
# Error
When I execute `archinstall` I get the following error:
```
...
File "/usr/lib/python3.10/site-packages/archinstall/lib/packages/packages.py", line 115, in installed_package
return LocalPackage(**package_info)
TypeError: LocalPackage.__init__() got an unexpected keyword argument 'warning'
```
# What I tried
I looked at the code of `installed_package` and found it executes `pacman -Q --info {package}`, so I tried to execute that with archlinux-keyring to see what the output was like.
I executed `pacman -Q --info archlinux-keyring` and got the following output
```
warning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized
Name : archlinux-keyring
Version : 20230225-1
...
```
# Why this seems to be happening
## Code
https://github.com/archlinux/archinstall/blob/8f6cc07062968b259bebd346521ef685c16f89dc/archinstall/lib/packages/packages.py#L105-L115
## Explanation
Because the line `warning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized` that I get as part of the output of `pacman -Q --info {package}` has a colon, it is being interpreted as a key-value pair.
# Possible fix
Ignore all lines that start with 'warning'
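A rough sketch of that idea, assuming the same parsing as `installed_package` above; the sample output lines are made up:

```python
# Skip pacman's "warning:" lines before splitting on ':' into key/value pairs.
sample_lines = [
    b"warning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized",
    b"Name            : archlinux-keyring",
    b"Version         : 20230225-1",
]

package_info = {}
for line in sample_lines:
    if line.startswith(b"warning:"):
        continue  # ignore warnings instead of treating them as package fields
    if b":" in line:
        key, value = line.decode().split(":", 1)
        package_info[key.strip().lower().replace(" ", "_")] = value.strip()

print(package_info)  # {'name': 'archlinux-keyring', 'version': '20230225-1'}
```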
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/packages/packages.py`
Content:
```
1 import json
2 import ssl
3 from typing import Dict, Any, Tuple, List
4 from urllib.error import HTTPError
5 from urllib.parse import urlencode
6 from urllib.request import urlopen
7
8 from ..exceptions import PackageError, SysCallError
9 from ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage
10 from ..pacman import run_pacman
11
12 BASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/'
13 # BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'
14 BASE_GROUP_URL = 'https://archlinux.org/groups/search/json/'
15
16
17 def _make_request(url: str, params: Dict) -> Any:
18 ssl_context = ssl.create_default_context()
19 ssl_context.check_hostname = False
20 ssl_context.verify_mode = ssl.CERT_NONE
21
22 encoded = urlencode(params)
23 full_url = f'{url}?{encoded}'
24
25 return urlopen(full_url, context=ssl_context)
26
27
28 def group_search(name :str) -> List[PackageSearchResult]:
29 # TODO UPSTREAM: Implement /json/ for the groups search
30 try:
31 response = _make_request(BASE_GROUP_URL, {'name': name})
32 except HTTPError as err:
33 if err.code == 404:
34 return []
35 else:
36 raise err
37
38 # Just to be sure some code didn't slip through the exception
39 data = response.read().decode('UTF-8')
40
41 return [PackageSearchResult(**package) for package in json.loads(data)['results']]
42
43
44 def package_search(package :str) -> PackageSearch:
45 """
46 Finds a specific package via the package database.
47 It makes a simple web-request, which might be a bit slow.
48 """
49 # TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or ' ')
50 # TODO: utilize pacman cache first, upstream second.
51 response = _make_request(BASE_URL_PKG_SEARCH, {'name': package})
52
53 if response.code != 200:
54 raise PackageError(f"Could not locate package: [{response.code}] {response}")
55
56 data = response.read().decode('UTF-8')
57
58 return PackageSearch(**json.loads(data))
59
60
61 def find_package(package :str) -> List[PackageSearchResult]:
62 data = package_search(package)
63 results = []
64
65 for result in data.results:
66 if result.pkgname == package:
67 results.append(result)
68
69 # If we didn't find the package in the search results,
70 # odds are it's a group package
71 if not results:
72 # Check if the package is actually a group
73 for result in group_search(package):
74 results.append(result)
75
76 return results
77
78
79 def find_packages(*names :str) -> Dict[str, Any]:
80 """
81 This function returns the search results for many packages.
82 The function itself is rather slow, so consider not sending to
83 many packages to the search query.
84 """
85 result = {}
86 for package in names:
87 for found_package in find_package(package):
88 result[package] = found_package
89
90 return result
91
92
93 def validate_package_list(packages :list) -> Tuple[list, list]:
94 """
95 Validates a list of given packages.
96 return: Tuple of lists containing valid packavges in the first and invalid
97 packages in the second entry
98 """
99 valid_packages = {package for package in packages if find_package(package)}
100 invalid_packages = set(packages) - valid_packages
101
102 return list(valid_packages), list(invalid_packages)
103
104
105 def installed_package(package :str) -> LocalPackage:
106 package_info = {}
107 try:
108 for line in run_pacman(f"-Q --info {package}"):
109 if b':' in line:
110 key, value = line.decode().split(':', 1)
111 package_info[key.strip().lower().replace(' ', '_')] = value.strip()
112 except SysCallError:
113 pass
114
115 return LocalPackage(**package_info)
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/archinstall/lib/packages/packages.py b/archinstall/lib/packages/packages.py
--- a/archinstall/lib/packages/packages.py
+++ b/archinstall/lib/packages/packages.py
@@ -1,3 +1,4 @@
+import dataclasses
import json
import ssl
from typing import Dict, Any, Tuple, List
@@ -112,4 +113,4 @@
except SysCallError:
pass
- return LocalPackage(**package_info)
+ return LocalPackage({field.name: package_info.get(field.name) for field in dataclasses.fields(LocalPackage)})
|
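The patch above takes a different route than the issue suggests: instead of skipping warning lines, it keeps only keys that are real `LocalPackage` fields. A standalone sketch of that pattern, with a reduced, hypothetical `LocalPackage`:

```python
import dataclasses


@dataclasses.dataclass
class LocalPackage:  # reduced, hypothetical subset of the real dataclass
    name: str = None
    version: str = None


package_info = {
    "name": "archlinux-keyring",
    "version": "20230225-1",
    # A stray key like this would make LocalPackage(**package_info) raise TypeError.
    "warning": "config file /etc/pacman.conf, line 19: directive 'SyncFirst' not recognized",
}

# Keep only known dataclass fields; unknown keys are silently dropped.
filtered = {field.name: package_info.get(field.name) for field in dataclasses.fields(LocalPackage)}
print(LocalPackage(**filtered))  # LocalPackage(name='archlinux-keyring', version='20230225-1')
```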
{"golden_diff": "diff --git a/archinstall/lib/packages/packages.py b/archinstall/lib/packages/packages.py\n--- a/archinstall/lib/packages/packages.py\n+++ b/archinstall/lib/packages/packages.py\n@@ -1,3 +1,4 @@\n+import dataclasses\n import json\n import ssl\n from typing import Dict, Any, Tuple, List\n@@ -112,4 +113,4 @@\n \texcept SysCallError:\n \t\tpass\n \n-\treturn LocalPackage(**package_info)\n+\treturn LocalPackage({field.name: package_info.get(field.name) for field in dataclasses.fields(LocalPackage)})\n", "issue": "archinstall won't start due to TypeError\n# Error\r\nWhen I execute `archinstall` I get the following error:\r\n\r\n```\r\n...\r\nFile \"/usr/lib/python3.10/site-packages/archinstall/lib/packages/packages.py\", line 115, in installed_package\r\n return LocalPackage(**package_info)\r\nTypeError: LocalPackage.__init__() got an unexpected keyword argument 'warning'\r\n```\r\n# What I tried\r\nI looked at the code of `installed_package` and found it executes `pacman -Q --info {package}`, so I tried to execute that with archlinux-keyring to see what the output was like.\r\n\r\nI executed `pacman -Q --info archlinux-keyring` and got the following output\r\n```\r\nwarning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized\r\nName \t: archlinux-keyring\r\nVersion : 20230225-1\r\n...\r\n```\r\n# Why this seems to be happening\r\n## Code\r\nhttps://github.com/archlinux/archinstall/blob/8f6cc07062968b259bebd346521ef685c16f89dc/archinstall/lib/packages/packages.py#L105-L115\r\n## Explanation \r\nBecause the line `warning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized` I get as part of the output of `pacman -Q --info {package}` has a colon it is being interpreted as a key value pair.\r\n\r\n# Possible fix\r\nIgnore all lines that start with 'warning'\r\n\r\n\n", "before_files": [{"content": "import json\nimport ssl\nfrom typing import Dict, Any, Tuple, List\nfrom urllib.error import HTTPError\nfrom urllib.parse import urlencode\nfrom urllib.request import urlopen\n\nfrom ..exceptions import PackageError, SysCallError\nfrom ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage\nfrom ..pacman import run_pacman\n\nBASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/'\n# BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'\nBASE_GROUP_URL = 'https://archlinux.org/groups/search/json/'\n\n\ndef _make_request(url: str, params: Dict) -> Any:\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\n\tencoded = urlencode(params)\n\tfull_url = f'{url}?{encoded}'\n\n\treturn urlopen(full_url, context=ssl_context)\n\n\ndef group_search(name :str) -> List[PackageSearchResult]:\n\t# TODO UPSTREAM: Implement /json/ for the groups search\n\ttry:\n\t\tresponse = _make_request(BASE_GROUP_URL, {'name': name})\n\texcept HTTPError as err:\n\t\tif err.code == 404:\n\t\t\treturn []\n\t\telse:\n\t\t\traise err\n\n\t# Just to be sure some code didn't slip through the exception\n\tdata = response.read().decode('UTF-8')\n\n\treturn [PackageSearchResult(**package) for package in json.loads(data)['results']]\n\n\ndef package_search(package :str) -> PackageSearch:\n\t\"\"\"\n\tFinds a specific package via the package database.\n\tIt makes a simple web-request, which might be a bit slow.\n\t\"\"\"\n\t# TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or 
' ')\n\t# TODO: utilize pacman cache first, upstream second.\n\tresponse = _make_request(BASE_URL_PKG_SEARCH, {'name': package})\n\n\tif response.code != 200:\n\t\traise PackageError(f\"Could not locate package: [{response.code}] {response}\")\n\n\tdata = response.read().decode('UTF-8')\n\n\treturn PackageSearch(**json.loads(data))\n\n\ndef find_package(package :str) -> List[PackageSearchResult]:\n\tdata = package_search(package)\n\tresults = []\n\n\tfor result in data.results:\n\t\tif result.pkgname == package:\n\t\t\tresults.append(result)\n\n\t# If we didn't find the package in the search results,\n\t# odds are it's a group package\n\tif not results:\n\t\t# Check if the package is actually a group\n\t\tfor result in group_search(package):\n\t\t\tresults.append(result)\n\n\treturn results\n\n\ndef find_packages(*names :str) -> Dict[str, Any]:\n\t\"\"\"\n\tThis function returns the search results for many packages.\n\tThe function itself is rather slow, so consider not sending to\n\tmany packages to the search query.\n\t\"\"\"\n\tresult = {}\n\tfor package in names:\n\t\tfor found_package in find_package(package):\n\t\t\tresult[package] = found_package\n\n\treturn result\n\n\ndef validate_package_list(packages :list) -> Tuple[list, list]:\n\t\"\"\"\n\tValidates a list of given packages.\n\treturn: Tuple of lists containing valid packavges in the first and invalid\n\tpackages in the second entry\n\t\"\"\"\n\tvalid_packages = {package for package in packages if find_package(package)}\n\tinvalid_packages = set(packages) - valid_packages\n\n\treturn list(valid_packages), list(invalid_packages)\n\n\ndef installed_package(package :str) -> LocalPackage:\n\tpackage_info = {}\n\ttry:\n\t\tfor line in run_pacman(f\"-Q --info {package}\"):\n\t\t\tif b':' in line:\n\t\t\t\tkey, value = line.decode().split(':', 1)\n\t\t\t\tpackage_info[key.strip().lower().replace(' ', '_')] = value.strip()\n\texcept SysCallError:\n\t\tpass\n\n\treturn LocalPackage(**package_info)\n", "path": "archinstall/lib/packages/packages.py"}], "after_files": [{"content": "import dataclasses\nimport json\nimport ssl\nfrom typing import Dict, Any, Tuple, List\nfrom urllib.error import HTTPError\nfrom urllib.parse import urlencode\nfrom urllib.request import urlopen\n\nfrom ..exceptions import PackageError, SysCallError\nfrom ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage\nfrom ..pacman import run_pacman\n\nBASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/'\n# BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'\nBASE_GROUP_URL = 'https://archlinux.org/groups/search/json/'\n\n\ndef _make_request(url: str, params: Dict) -> Any:\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\n\tencoded = urlencode(params)\n\tfull_url = f'{url}?{encoded}'\n\n\treturn urlopen(full_url, context=ssl_context)\n\n\ndef group_search(name :str) -> List[PackageSearchResult]:\n\t# TODO UPSTREAM: Implement /json/ for the groups search\n\ttry:\n\t\tresponse = _make_request(BASE_GROUP_URL, {'name': name})\n\texcept HTTPError as err:\n\t\tif err.code == 404:\n\t\t\treturn []\n\t\telse:\n\t\t\traise err\n\n\t# Just to be sure some code didn't slip through the exception\n\tdata = response.read().decode('UTF-8')\n\n\treturn [PackageSearchResult(**package) for package in json.loads(data)['results']]\n\n\ndef package_search(package :str) -> PackageSearch:\n\t\"\"\"\n\tFinds a specific package via the package database.\n\tIt 
makes a simple web-request, which might be a bit slow.\n\t\"\"\"\n\t# TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or ' ')\n\t# TODO: utilize pacman cache first, upstream second.\n\tresponse = _make_request(BASE_URL_PKG_SEARCH, {'name': package})\n\n\tif response.code != 200:\n\t\traise PackageError(f\"Could not locate package: [{response.code}] {response}\")\n\n\tdata = response.read().decode('UTF-8')\n\n\treturn PackageSearch(**json.loads(data))\n\n\ndef find_package(package :str) -> List[PackageSearchResult]:\n\tdata = package_search(package)\n\tresults = []\n\n\tfor result in data.results:\n\t\tif result.pkgname == package:\n\t\t\tresults.append(result)\n\n\t# If we didn't find the package in the search results,\n\t# odds are it's a group package\n\tif not results:\n\t\t# Check if the package is actually a group\n\t\tfor result in group_search(package):\n\t\t\tresults.append(result)\n\n\treturn results\n\n\ndef find_packages(*names :str) -> Dict[str, Any]:\n\t\"\"\"\n\tThis function returns the search results for many packages.\n\tThe function itself is rather slow, so consider not sending to\n\tmany packages to the search query.\n\t\"\"\"\n\tresult = {}\n\tfor package in names:\n\t\tfor found_package in find_package(package):\n\t\t\tresult[package] = found_package\n\n\treturn result\n\n\ndef validate_package_list(packages :list) -> Tuple[list, list]:\n\t\"\"\"\n\tValidates a list of given packages.\n\treturn: Tuple of lists containing valid packavges in the first and invalid\n\tpackages in the second entry\n\t\"\"\"\n\tvalid_packages = {package for package in packages if find_package(package)}\n\tinvalid_packages = set(packages) - valid_packages\n\n\treturn list(valid_packages), list(invalid_packages)\n\n\ndef installed_package(package :str) -> LocalPackage:\n\tpackage_info = {}\n\ttry:\n\t\tfor line in run_pacman(f\"-Q --info {package}\"):\n\t\t\tif b':' in line:\n\t\t\t\tkey, value = line.decode().split(':', 1)\n\t\t\t\tpackage_info[key.strip().lower().replace(' ', '_')] = value.strip()\n\texcept SysCallError:\n\t\tpass\n\n\treturn LocalPackage({field.name: package_info.get(field.name) for field in dataclasses.fields(LocalPackage)})\n", "path": "archinstall/lib/packages/packages.py"}]}
| 1,728 | 122 |
gh_patches_debug_29273
|
rasdani/github-patches
|
git_diff
|
digitalfabrik__integreat-cms-577
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Old permalinks not mapped in single page API endpoint
### Describe the Bug
<!-- A clear and concise description of what the bug is. -->
The permalink is checked for correctness in the single page API endpoint, but in #119 we decided not to keep track of old permalinks. Instead, we will just map permalinks based on the last url parameter (the slug) and ignore the rest.
### Steps to Reproduce
1. Copy one page permalink
2. Move that page to another parent to change the permalink
3. Request this page in the single page API endpoint with the old permalink from step 1
### Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
The page should be found even if the permalink is old
### Actual Behavior
<!-- A clear and concise description of what actually happened. -->
The error `No Page matches the given url or id` is returned.
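For reference, a pure-Python sketch of the slug-only matching described above; the permalinks are made up:

```python
# Both the old and the new permalink end in the same translation slug, so
# matching on the last path component alone still finds the moved page.
def translation_slug(url: str) -> str:
    return url.strip("/").split("/")[-1]


old_permalink = "/willkommen/anmeldung"      # permalink copied in step 1
new_permalink = "/erste-schritte/anmeldung"  # permalink after moving the page in step 2

assert translation_slug(old_permalink) == translation_slug(new_permalink) == "anmeldung"
```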
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/api/v3/single_page.py`
Content:
```
1 """
2 View to return a JSON representation of a single page. The page can
3 be selected via the id or the permalink.
4 """
5 from django.http import JsonResponse, Http404
6 from django.shortcuts import get_object_or_404
7
8 from cms.models import Region, Page
9 from .pages import transform_page
10
11
12 # pylint: disable=unused-argument
13 def single_page(request, region_slug, language_code):
14 """
15 View function returning the desired page as a JSON or a 404 if the
16 requested page does not exist.
17
18 :param request: The request that has been sent to the Django server
19 :type request: ~django.http.HttpRequest
20
21 :param region_slug: Slug defining the region
22 :type region_slug: str
23
24 :param language_code: Code to identify the desired language
25 :type language_code: str
26
27 :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.
28
29 :return: Return a JSON with the requested page and a HTTP status 200.
30 :rtype: ~django.http.JsonResponse
31 """
32 region = Region.get_current_region(request)
33
34 if request.GET.get("id"):
35 page = get_object_or_404(region.pages, id=request.GET.get("id"))
36 page_translation = page.get_public_translation(language_code)
37 if page_translation:
38 return JsonResponse(transform_page(page_translation), safe=False)
39
40 elif request.GET.get("url"):
41 # Strip leading and trailing slashes to avoid ambiguous urls
42 url = request.GET.get("url").strip("/")
43 # Get potential page candidate by only filtering for the translation slug
44 page = get_object_or_404(
45 Page, region=region, translations__slug=url.split("/")[-1]
46 )
47 # Get most recent public revision of the page
48 page_translation = page.get_public_translation(language_code)
49 # Check if the whole path is correct, not only the slug
50 # TODO: Once we have a permalink mapping of old versions, we also have to check whether the permalink was valid in the past
51 if page_translation.permalink == url:
52 return JsonResponse(transform_page(page_translation), safe=False)
53
54 raise Http404("No Page matches the given url or id.")
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/api/v3/single_page.py b/src/api/v3/single_page.py
--- a/src/api/v3/single_page.py
+++ b/src/api/v3/single_page.py
@@ -5,7 +5,7 @@
from django.http import JsonResponse, Http404
from django.shortcuts import get_object_or_404
-from cms.models import Region, Page
+from cms.models import Region
from .pages import transform_page
@@ -40,15 +40,17 @@
elif request.GET.get("url"):
# Strip leading and trailing slashes to avoid ambiguous urls
url = request.GET.get("url").strip("/")
- # Get potential page candidate by only filtering for the translation slug
+ # The last path component of the url is the page translation slug
+ page_translation_slug = url.split("/")[-1]
+ # Get page by filtering for translation slug and translation language code
page = get_object_or_404(
- Page, region=region, translations__slug=url.split("/")[-1]
+ region.pages,
+ translations__slug=page_translation_slug,
+ translations__language__code=language_code,
)
# Get most recent public revision of the page
page_translation = page.get_public_translation(language_code)
- # Check if the whole path is correct, not only the slug
- # TODO: Once we have a permalink mapping of old versions, we also have to check whether the permalink was valid in the past
- if page_translation.permalink == url:
+ if page_translation:
return JsonResponse(transform_page(page_translation), safe=False)
raise Http404("No Page matches the given url or id.")
|
{"golden_diff": "diff --git a/src/api/v3/single_page.py b/src/api/v3/single_page.py\n--- a/src/api/v3/single_page.py\n+++ b/src/api/v3/single_page.py\n@@ -5,7 +5,7 @@\n from django.http import JsonResponse, Http404\n from django.shortcuts import get_object_or_404\n \n-from cms.models import Region, Page\n+from cms.models import Region\n from .pages import transform_page\n \n \n@@ -40,15 +40,17 @@\n elif request.GET.get(\"url\"):\n # Strip leading and trailing slashes to avoid ambiguous urls\n url = request.GET.get(\"url\").strip(\"/\")\n- # Get potential page candidate by only filtering for the translation slug\n+ # The last path component of the url is the page translation slug\n+ page_translation_slug = url.split(\"/\")[-1]\n+ # Get page by filtering for translation slug and translation language code\n page = get_object_or_404(\n- Page, region=region, translations__slug=url.split(\"/\")[-1]\n+ region.pages,\n+ translations__slug=page_translation_slug,\n+ translations__language__code=language_code,\n )\n # Get most recent public revision of the page\n page_translation = page.get_public_translation(language_code)\n- # Check if the whole path is correct, not only the slug\n- # TODO: Once we have a permalink mapping of old versions, we also have to check whether the permalink was valid in the past\n- if page_translation.permalink == url:\n+ if page_translation:\n return JsonResponse(transform_page(page_translation), safe=False)\n \n raise Http404(\"No Page matches the given url or id.\")\n", "issue": "Old permalinks not mapped in single page API endpoint\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nThe permalink is checked for correctness in the single page API endpoint, but in #119 we decided not to keep track of old permalinks. Instead, we will just map permalinks based on the last url parameter (the slug) and ignore the rest.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Copy one page permalink\r\n2. Move that page to another parent to change the permalink\r\n3. Request this page in the single page API endpoint with the old permalink from step 1\r\n\r\n### Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nThe page should be found even if the permalink is old\r\n\r\n### Actual Behavior\r\n<!-- A clear and concise description of what actually happened. -->\r\nThe error `No Page matches the given url or id` is returned.\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nView to return a JSON representation of a single page. 
The page can\nbe selected via the id or the permalink.\n\"\"\"\nfrom django.http import JsonResponse, Http404\nfrom django.shortcuts import get_object_or_404\n\nfrom cms.models import Region, Page\nfrom .pages import transform_page\n\n\n# pylint: disable=unused-argument\ndef single_page(request, region_slug, language_code):\n \"\"\"\n View function returning the desired page as a JSON or a 404 if the\n requested page does not exist.\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param region_slug: Slug defining the region\n :type region_slug: str\n\n :param language_code: Code to identify the desired language\n :type language_code: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :return: Return a JSON with the requested page and a HTTP status 200.\n :rtype: ~django.http.JsonResponse\n \"\"\"\n region = Region.get_current_region(request)\n\n if request.GET.get(\"id\"):\n page = get_object_or_404(region.pages, id=request.GET.get(\"id\"))\n page_translation = page.get_public_translation(language_code)\n if page_translation:\n return JsonResponse(transform_page(page_translation), safe=False)\n\n elif request.GET.get(\"url\"):\n # Strip leading and trailing slashes to avoid ambiguous urls\n url = request.GET.get(\"url\").strip(\"/\")\n # Get potential page candidate by only filtering for the translation slug\n page = get_object_or_404(\n Page, region=region, translations__slug=url.split(\"/\")[-1]\n )\n # Get most recent public revision of the page\n page_translation = page.get_public_translation(language_code)\n # Check if the whole path is correct, not only the slug\n # TODO: Once we have a permalink mapping of old versions, we also have to check whether the permalink was valid in the past\n if page_translation.permalink == url:\n return JsonResponse(transform_page(page_translation), safe=False)\n\n raise Http404(\"No Page matches the given url or id.\")\n", "path": "src/api/v3/single_page.py"}], "after_files": [{"content": "\"\"\"\nView to return a JSON representation of a single page. 
The page can\nbe selected via the id or the permalink.\n\"\"\"\nfrom django.http import JsonResponse, Http404\nfrom django.shortcuts import get_object_or_404\n\nfrom cms.models import Region\nfrom .pages import transform_page\n\n\n# pylint: disable=unused-argument\ndef single_page(request, region_slug, language_code):\n \"\"\"\n View function returning the desired page as a JSON or a 404 if the\n requested page does not exist.\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param region_slug: Slug defining the region\n :type region_slug: str\n\n :param language_code: Code to identify the desired language\n :type language_code: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :return: Return a JSON with the requested page and a HTTP status 200.\n :rtype: ~django.http.JsonResponse\n \"\"\"\n region = Region.get_current_region(request)\n\n if request.GET.get(\"id\"):\n page = get_object_or_404(region.pages, id=request.GET.get(\"id\"))\n page_translation = page.get_public_translation(language_code)\n if page_translation:\n return JsonResponse(transform_page(page_translation), safe=False)\n\n elif request.GET.get(\"url\"):\n # Strip leading and trailing slashes to avoid ambiguous urls\n url = request.GET.get(\"url\").strip(\"/\")\n # The last path component of the url is the page translation slug\n page_translation_slug = url.split(\"/\")[-1]\n # Get page by filtering for translation slug and translation language code\n page = get_object_or_404(\n region.pages,\n translations__slug=page_translation_slug,\n translations__language__code=language_code,\n )\n # Get most recent public revision of the page\n page_translation = page.get_public_translation(language_code)\n if page_translation:\n return JsonResponse(transform_page(page_translation), safe=False)\n\n raise Http404(\"No Page matches the given url or id.\")\n", "path": "src/api/v3/single_page.py"}]}
| 1,050 | 366 |
gh_patches_debug_24049
|
rasdani/github-patches
|
git_diff
|
mozilla__bugbug-140
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Print progress bar while downloading bugs
Similar to what we're doing for commits data: 28b83c12c29185c52afb58d94a533a9448969a8a.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bugbug/bugzilla.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import json
7 import os
8
9 import requests
10 from libmozdata import bugzilla
11
12 from bugbug import db
13
14 BUGS_DB = 'data/bugs.json'
15 db.register(BUGS_DB, 'https://www.dropbox.com/s/xm6wzac9jl81irz/bugs.json.xz?dl=1')
16
17 ATTACHMENT_INCLUDE_FIELDS = [
18 'id', 'is_obsolete', 'flags', 'is_patch', 'creator', 'content_type', 'creation_time',
19 ]
20
21 COMMENT_INCLUDE_FIELDS = [
22 'id', 'text', 'author', 'creation_time',
23 ]
24
25
26 def get_bug_fields():
27 os.makedirs('data', exist_ok=True)
28
29 try:
30 with open('data/bug_fields.json', 'r') as f:
31 return json.load(f)
32 except IOError:
33 pass
34
35 r = requests.get('https://bugzilla.mozilla.org/rest/field/bug')
36 r.raise_for_status()
37 return r.json()['fields']
38
39
40 def get_bugs():
41 return db.read(BUGS_DB)
42
43
44 def set_token(token):
45 bugzilla.Bugzilla.TOKEN = token
46
47
48 def _download(ids_or_query):
49 new_bugs = {}
50
51 def bughandler(bug):
52 bug_id = int(bug['id'])
53
54 if bug_id not in new_bugs:
55 new_bugs[bug_id] = dict()
56
57 new_bugs[bug_id].update(bug)
58
59 def commenthandler(bug, bug_id):
60 bug_id = int(bug_id)
61
62 if bug_id not in new_bugs:
63 new_bugs[bug_id] = dict()
64
65 new_bugs[bug_id]['comments'] = bug['comments']
66
67 def attachmenthandler(bug, bug_id):
68 bug_id = int(bug_id)
69
70 if bug_id not in new_bugs:
71 new_bugs[bug_id] = dict()
72
73 new_bugs[bug_id]['attachments'] = bug
74
75 def historyhandler(bug):
76 bug_id = int(bug['id'])
77
78 if bug_id not in new_bugs:
79 new_bugs[bug_id] = dict()
80
81 new_bugs[bug_id]['history'] = bug['history']
82
83 bugzilla.Bugzilla(ids_or_query, bughandler=bughandler, commenthandler=commenthandler, comment_include_fields=COMMENT_INCLUDE_FIELDS, attachmenthandler=attachmenthandler, attachment_include_fields=ATTACHMENT_INCLUDE_FIELDS, historyhandler=historyhandler).get_data().wait()
84
85 return new_bugs
86
87
88 def download_bugs_between(date_from, date_to, security=False):
89 products = set([
90 'Add-on SDK',
91 'Android Background Services',
92 'Core',
93 'DevTools',
94 'External Software Affecting Firefox',
95 'Firefox',
96 'Firefox for Android',
97 # 'Firefox for iOS',
98 'Firefox Graveyard',
99 'Firefox Health Report',
100 # 'Focus',
101 # 'Hello (Loop)',
102 'NSPR',
103 'NSS',
104 'Toolkit',
105 'WebExtensions',
106 ])
107
108 r = requests.get(f'https://bugzilla.mozilla.org/rest/bug?include_fields=id&f1=creation_ts&o1=greaterthan&v1={date_from.strftime("%Y-%m-%d")}&limit=1&order=bug_id')
109 r.raise_for_status()
110 first_id = r.json()['bugs'][0]['id']
111
112 r = requests.get(f'https://bugzilla.mozilla.org/rest/bug?include_fields=id&f1=creation_ts&o1=lessthan&v1={date_to.strftime("%Y-%m-%d")}&limit=1&order=bug_id%20desc')
113 r.raise_for_status()
114 last_id = r.json()['bugs'][0]['id']
115
116 assert first_id < last_id
117
118 all_ids = range(first_id, last_id + 1)
119
120 download_bugs(all_ids, security=security, products=products)
121
122 return all_ids
123
124
125 def download_bugs(bug_ids, products=None, security=False):
126 old_bug_count = 0
127 old_bugs = []
128 new_bug_ids = set(int(bug_id) for bug_id in bug_ids)
129 for bug in get_bugs():
130 old_bug_count += 1
131 if int(bug['id']) in new_bug_ids:
132 old_bugs.append(bug)
133 new_bug_ids.remove(bug['id'])
134
135 print(f'Loaded {old_bug_count} bugs.')
136
137 new_bug_ids = sorted(list(new_bug_ids))
138
139 total_downloaded = 0
140 chunks = (new_bug_ids[i:(i + 500)] for i in range(0, len(new_bug_ids), 500))
141 for chunk in chunks:
142 new_bugs = _download(chunk)
143
144 total_downloaded += len(new_bugs)
145
146 print(f'Downloaded {total_downloaded} out of {len(new_bug_ids)} bugs')
147
148 if not security:
149 new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if len(bug['groups']) == 0}
150
151 if products is not None:
152 new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if bug['product'] in products}
153
154 db.append(BUGS_DB, new_bugs.values())
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bugbug/bugzilla.py b/bugbug/bugzilla.py
--- a/bugbug/bugzilla.py
+++ b/bugbug/bugzilla.py
@@ -8,6 +8,7 @@
import requests
from libmozdata import bugzilla
+from tqdm import tqdm
from bugbug import db
@@ -136,19 +137,17 @@
new_bug_ids = sorted(list(new_bug_ids))
- total_downloaded = 0
chunks = (new_bug_ids[i:(i + 500)] for i in range(0, len(new_bug_ids), 500))
- for chunk in chunks:
- new_bugs = _download(chunk)
+ with tqdm(total=len(new_bug_ids)) as progress_bar:
+ for chunk in chunks:
+ new_bugs = _download(chunk)
- total_downloaded += len(new_bugs)
+ progress_bar.update(len(chunk))
- print(f'Downloaded {total_downloaded} out of {len(new_bug_ids)} bugs')
+ if not security:
+ new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if len(bug['groups']) == 0}
- if not security:
- new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if len(bug['groups']) == 0}
+ if products is not None:
+ new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if bug['product'] in products}
- if products is not None:
- new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if bug['product'] in products}
-
- db.append(BUGS_DB, new_bugs.values())
+ db.append(BUGS_DB, new_bugs.values())
|
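The tqdm pattern used in the patch above, shown standalone with a simulated download step (requires the `tqdm` package):

```python
from tqdm import tqdm

bug_ids = list(range(1, 1501))
chunks = (bug_ids[i:(i + 500)] for i in range(0, len(bug_ids), 500))

with tqdm(total=len(bug_ids)) as progress_bar:
    for chunk in chunks:
        new_bugs = {bug_id: {} for bug_id in chunk}  # stand-in for _download(chunk)
        progress_bar.update(len(chunk))
```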
{"golden_diff": "diff --git a/bugbug/bugzilla.py b/bugbug/bugzilla.py\n--- a/bugbug/bugzilla.py\n+++ b/bugbug/bugzilla.py\n@@ -8,6 +8,7 @@\n \n import requests\n from libmozdata import bugzilla\n+from tqdm import tqdm\n \n from bugbug import db\n \n@@ -136,19 +137,17 @@\n \n new_bug_ids = sorted(list(new_bug_ids))\n \n- total_downloaded = 0\n chunks = (new_bug_ids[i:(i + 500)] for i in range(0, len(new_bug_ids), 500))\n- for chunk in chunks:\n- new_bugs = _download(chunk)\n+ with tqdm(total=len(new_bug_ids)) as progress_bar:\n+ for chunk in chunks:\n+ new_bugs = _download(chunk)\n \n- total_downloaded += len(new_bugs)\n+ progress_bar.update(len(chunk))\n \n- print(f'Downloaded {total_downloaded} out of {len(new_bug_ids)} bugs')\n+ if not security:\n+ new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if len(bug['groups']) == 0}\n \n- if not security:\n- new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if len(bug['groups']) == 0}\n+ if products is not None:\n+ new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if bug['product'] in products}\n \n- if products is not None:\n- new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if bug['product'] in products}\n-\n- db.append(BUGS_DB, new_bugs.values())\n+ db.append(BUGS_DB, new_bugs.values())\n", "issue": "Print progress bar while downloading bugs\nSimilar to what we're doing for commits data: 28b83c12c29185c52afb58d94a533a9448969a8a.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport json\nimport os\n\nimport requests\nfrom libmozdata import bugzilla\n\nfrom bugbug import db\n\nBUGS_DB = 'data/bugs.json'\ndb.register(BUGS_DB, 'https://www.dropbox.com/s/xm6wzac9jl81irz/bugs.json.xz?dl=1')\n\nATTACHMENT_INCLUDE_FIELDS = [\n 'id', 'is_obsolete', 'flags', 'is_patch', 'creator', 'content_type', 'creation_time',\n]\n\nCOMMENT_INCLUDE_FIELDS = [\n 'id', 'text', 'author', 'creation_time',\n]\n\n\ndef get_bug_fields():\n os.makedirs('data', exist_ok=True)\n\n try:\n with open('data/bug_fields.json', 'r') as f:\n return json.load(f)\n except IOError:\n pass\n\n r = requests.get('https://bugzilla.mozilla.org/rest/field/bug')\n r.raise_for_status()\n return r.json()['fields']\n\n\ndef get_bugs():\n return db.read(BUGS_DB)\n\n\ndef set_token(token):\n bugzilla.Bugzilla.TOKEN = token\n\n\ndef _download(ids_or_query):\n new_bugs = {}\n\n def bughandler(bug):\n bug_id = int(bug['id'])\n\n if bug_id not in new_bugs:\n new_bugs[bug_id] = dict()\n\n new_bugs[bug_id].update(bug)\n\n def commenthandler(bug, bug_id):\n bug_id = int(bug_id)\n\n if bug_id not in new_bugs:\n new_bugs[bug_id] = dict()\n\n new_bugs[bug_id]['comments'] = bug['comments']\n\n def attachmenthandler(bug, bug_id):\n bug_id = int(bug_id)\n\n if bug_id not in new_bugs:\n new_bugs[bug_id] = dict()\n\n new_bugs[bug_id]['attachments'] = bug\n\n def historyhandler(bug):\n bug_id = int(bug['id'])\n\n if bug_id not in new_bugs:\n new_bugs[bug_id] = dict()\n\n new_bugs[bug_id]['history'] = bug['history']\n\n bugzilla.Bugzilla(ids_or_query, bughandler=bughandler, commenthandler=commenthandler, comment_include_fields=COMMENT_INCLUDE_FIELDS, attachmenthandler=attachmenthandler, attachment_include_fields=ATTACHMENT_INCLUDE_FIELDS, historyhandler=historyhandler).get_data().wait()\n\n return new_bugs\n\n\ndef download_bugs_between(date_from, date_to, 
security=False):\n products = set([\n 'Add-on SDK',\n 'Android Background Services',\n 'Core',\n 'DevTools',\n 'External Software Affecting Firefox',\n 'Firefox',\n 'Firefox for Android',\n # 'Firefox for iOS',\n 'Firefox Graveyard',\n 'Firefox Health Report',\n # 'Focus',\n # 'Hello (Loop)',\n 'NSPR',\n 'NSS',\n 'Toolkit',\n 'WebExtensions',\n ])\n\n r = requests.get(f'https://bugzilla.mozilla.org/rest/bug?include_fields=id&f1=creation_ts&o1=greaterthan&v1={date_from.strftime(\"%Y-%m-%d\")}&limit=1&order=bug_id')\n r.raise_for_status()\n first_id = r.json()['bugs'][0]['id']\n\n r = requests.get(f'https://bugzilla.mozilla.org/rest/bug?include_fields=id&f1=creation_ts&o1=lessthan&v1={date_to.strftime(\"%Y-%m-%d\")}&limit=1&order=bug_id%20desc')\n r.raise_for_status()\n last_id = r.json()['bugs'][0]['id']\n\n assert first_id < last_id\n\n all_ids = range(first_id, last_id + 1)\n\n download_bugs(all_ids, security=security, products=products)\n\n return all_ids\n\n\ndef download_bugs(bug_ids, products=None, security=False):\n old_bug_count = 0\n old_bugs = []\n new_bug_ids = set(int(bug_id) for bug_id in bug_ids)\n for bug in get_bugs():\n old_bug_count += 1\n if int(bug['id']) in new_bug_ids:\n old_bugs.append(bug)\n new_bug_ids.remove(bug['id'])\n\n print(f'Loaded {old_bug_count} bugs.')\n\n new_bug_ids = sorted(list(new_bug_ids))\n\n total_downloaded = 0\n chunks = (new_bug_ids[i:(i + 500)] for i in range(0, len(new_bug_ids), 500))\n for chunk in chunks:\n new_bugs = _download(chunk)\n\n total_downloaded += len(new_bugs)\n\n print(f'Downloaded {total_downloaded} out of {len(new_bug_ids)} bugs')\n\n if not security:\n new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if len(bug['groups']) == 0}\n\n if products is not None:\n new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if bug['product'] in products}\n\n db.append(BUGS_DB, new_bugs.values())\n", "path": "bugbug/bugzilla.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport json\nimport os\n\nimport requests\nfrom libmozdata import bugzilla\nfrom tqdm import tqdm\n\nfrom bugbug import db\n\nBUGS_DB = 'data/bugs.json'\ndb.register(BUGS_DB, 'https://www.dropbox.com/s/xm6wzac9jl81irz/bugs.json.xz?dl=1')\n\nATTACHMENT_INCLUDE_FIELDS = [\n 'id', 'is_obsolete', 'flags', 'is_patch', 'creator', 'content_type', 'creation_time',\n]\n\nCOMMENT_INCLUDE_FIELDS = [\n 'id', 'text', 'author', 'creation_time',\n]\n\n\ndef get_bug_fields():\n os.makedirs('data', exist_ok=True)\n\n try:\n with open('data/bug_fields.json', 'r') as f:\n return json.load(f)\n except IOError:\n pass\n\n r = requests.get('https://bugzilla.mozilla.org/rest/field/bug')\n r.raise_for_status()\n return r.json()['fields']\n\n\ndef get_bugs():\n return db.read(BUGS_DB)\n\n\ndef set_token(token):\n bugzilla.Bugzilla.TOKEN = token\n\n\ndef _download(ids_or_query):\n new_bugs = {}\n\n def bughandler(bug):\n bug_id = int(bug['id'])\n\n if bug_id not in new_bugs:\n new_bugs[bug_id] = dict()\n\n new_bugs[bug_id].update(bug)\n\n def commenthandler(bug, bug_id):\n bug_id = int(bug_id)\n\n if bug_id not in new_bugs:\n new_bugs[bug_id] = dict()\n\n new_bugs[bug_id]['comments'] = bug['comments']\n\n def attachmenthandler(bug, bug_id):\n bug_id = int(bug_id)\n\n if bug_id not in new_bugs:\n new_bugs[bug_id] = dict()\n\n new_bugs[bug_id]['attachments'] = bug\n\n def historyhandler(bug):\n bug_id = int(bug['id'])\n\n if bug_id not in new_bugs:\n new_bugs[bug_id] = dict()\n\n new_bugs[bug_id]['history'] = bug['history']\n\n bugzilla.Bugzilla(ids_or_query, bughandler=bughandler, commenthandler=commenthandler, comment_include_fields=COMMENT_INCLUDE_FIELDS, attachmenthandler=attachmenthandler, attachment_include_fields=ATTACHMENT_INCLUDE_FIELDS, historyhandler=historyhandler).get_data().wait()\n\n return new_bugs\n\n\ndef download_bugs_between(date_from, date_to, security=False):\n products = set([\n 'Add-on SDK',\n 'Android Background Services',\n 'Core',\n 'DevTools',\n 'External Software Affecting Firefox',\n 'Firefox',\n 'Firefox for Android',\n # 'Firefox for iOS',\n 'Firefox Graveyard',\n 'Firefox Health Report',\n # 'Focus',\n # 'Hello (Loop)',\n 'NSPR',\n 'NSS',\n 'Toolkit',\n 'WebExtensions',\n ])\n\n r = requests.get(f'https://bugzilla.mozilla.org/rest/bug?include_fields=id&f1=creation_ts&o1=greaterthan&v1={date_from.strftime(\"%Y-%m-%d\")}&limit=1&order=bug_id')\n r.raise_for_status()\n first_id = r.json()['bugs'][0]['id']\n\n r = requests.get(f'https://bugzilla.mozilla.org/rest/bug?include_fields=id&f1=creation_ts&o1=lessthan&v1={date_to.strftime(\"%Y-%m-%d\")}&limit=1&order=bug_id%20desc')\n r.raise_for_status()\n last_id = r.json()['bugs'][0]['id']\n\n assert first_id < last_id\n\n all_ids = range(first_id, last_id + 1)\n\n download_bugs(all_ids, security=security, products=products)\n\n return all_ids\n\n\ndef download_bugs(bug_ids, products=None, security=False):\n old_bug_count = 0\n old_bugs = []\n new_bug_ids = set(int(bug_id) for bug_id in bug_ids)\n for bug in get_bugs():\n old_bug_count += 1\n if int(bug['id']) in new_bug_ids:\n old_bugs.append(bug)\n new_bug_ids.remove(bug['id'])\n\n print(f'Loaded {old_bug_count} bugs.')\n\n new_bug_ids = sorted(list(new_bug_ids))\n\n chunks = (new_bug_ids[i:(i + 500)] for i in range(0, len(new_bug_ids), 500))\n with tqdm(total=len(new_bug_ids)) as progress_bar:\n for chunk in chunks:\n new_bugs = _download(chunk)\n\n 
progress_bar.update(len(chunk))\n\n if not security:\n new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if len(bug['groups']) == 0}\n\n if products is not None:\n new_bugs = {bug_id: bug for bug_id, bug in new_bugs.items() if bug['product'] in products}\n\n db.append(BUGS_DB, new_bugs.values())\n", "path": "bugbug/bugzilla.py"}]}
| 1,897 | 421 |
gh_patches_debug_21571
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-1805
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove sass + ts compilation from ./manage.py run
These slow down starting the development server.
For developers actually changing TS or SASS files, they are not helpful enough, and those will likely start a `--watch` job anyway. We can simply provide a `./manage.py watch` script for this use case.
For anyone else, they simply slow down the development server startup, which is a bit annoying.
Opinions?
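A hypothetical sketch of what such a `./manage.py watch` command could look like, modelled on the existing `run` command; the `--watch` flags on the `scss` and `ts` subcommands are assumed here, not taken from the codebase:

```python
from subprocess import Popen  # nosec

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Recompile SCSS and TypeScript whenever their sources change"

    def handle(self, *args, **options):
        # Assumed flags: the real scss/ts commands may spell their watch option differently.
        with Popen(["./manage.py", "scss", "--watch"]), Popen(["./manage.py", "ts", "compile", "--watch"]):  # nosec
            self.stdout.write("Watching SCSS and TS sources, press Ctrl+C to stop.")
            # Leaving the with-block waits for both watcher processes to exit.
```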
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/development/management/commands/run.py`
Content:
```
1 import sys
2
3 from django.core.management import execute_from_command_line
4 from django.core.management.base import BaseCommand
5
6
7 class Command(BaseCommand):
8 args = ""
9 help = 'Execute "runserver 0.0.0.0:8000"'
10
11 def handle(self, *args, **options):
12 self.stdout.write('Executing "manage.py scss"')
13 execute_from_command_line(["manage.py", "scss"])
14 self.stdout.write('Executing "manage.py ts compile"')
15 execute_from_command_line(["manage.py", "ts", "compile"])
16 self.stdout.write('Executing "manage.py runserver 0.0.0.0:8000"')
17 sys.argv = ["manage.py", "runserver", "0.0.0.0:8000"]
18 execute_from_command_line(sys.argv)
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/development/management/commands/run.py b/evap/development/management/commands/run.py
--- a/evap/development/management/commands/run.py
+++ b/evap/development/management/commands/run.py
@@ -1,4 +1,5 @@
import sys
+from subprocess import Popen # nosec
from django.core.management import execute_from_command_line
from django.core.management.base import BaseCommand
@@ -9,10 +10,8 @@
help = 'Execute "runserver 0.0.0.0:8000"'
def handle(self, *args, **options):
- self.stdout.write('Executing "manage.py scss"')
- execute_from_command_line(["manage.py", "scss"])
- self.stdout.write('Executing "manage.py ts compile"')
- execute_from_command_line(["manage.py", "ts", "compile"])
- self.stdout.write('Executing "manage.py runserver 0.0.0.0:8000"')
- sys.argv = ["manage.py", "runserver", "0.0.0.0:8000"]
- execute_from_command_line(sys.argv)
+ self.stdout.write('Executing "manage.py scss" and "manage.py ts compile"')
+ with Popen(["./manage.py", "scss"]), Popen(["./manage.py", "ts", "compile"]): # nosec
+ self.stdout.write('Executing "manage.py runserver 0.0.0.0:8000"')
+ sys.argv = ["manage.py", "runserver", "0.0.0.0:8000"]
+ execute_from_command_line(sys.argv)
|
{"golden_diff": "diff --git a/evap/development/management/commands/run.py b/evap/development/management/commands/run.py\n--- a/evap/development/management/commands/run.py\n+++ b/evap/development/management/commands/run.py\n@@ -1,4 +1,5 @@\n import sys\n+from subprocess import Popen # nosec\n \n from django.core.management import execute_from_command_line\n from django.core.management.base import BaseCommand\n@@ -9,10 +10,8 @@\n help = 'Execute \"runserver 0.0.0.0:8000\"'\n \n def handle(self, *args, **options):\n- self.stdout.write('Executing \"manage.py scss\"')\n- execute_from_command_line([\"manage.py\", \"scss\"])\n- self.stdout.write('Executing \"manage.py ts compile\"')\n- execute_from_command_line([\"manage.py\", \"ts\", \"compile\"])\n- self.stdout.write('Executing \"manage.py runserver 0.0.0.0:8000\"')\n- sys.argv = [\"manage.py\", \"runserver\", \"0.0.0.0:8000\"]\n- execute_from_command_line(sys.argv)\n+ self.stdout.write('Executing \"manage.py scss\" and \"manage.py ts compile\"')\n+ with Popen([\"./manage.py\", \"scss\"]), Popen([\"./manage.py\", \"ts\", \"compile\"]): # nosec\n+ self.stdout.write('Executing \"manage.py runserver 0.0.0.0:8000\"')\n+ sys.argv = [\"manage.py\", \"runserver\", \"0.0.0.0:8000\"]\n+ execute_from_command_line(sys.argv)\n", "issue": "Remove sass + ts compilation from ./manage.py run\nThese slow down starting the development server.\r\n\r\nFor developers actually changing TS or SASS files, they are not helpful enough, and those will likely start a `--watch` job anyway. We can simply provide a \u00b4./manage.py watch` script for this use case.\r\n\r\nFor anyone else, they simply slow down the development server startup, which is a bit annoying.\r\n\r\nOpinions?\n", "before_files": [{"content": "import sys\n\nfrom django.core.management import execute_from_command_line\nfrom django.core.management.base import BaseCommand\n\n\nclass Command(BaseCommand):\n args = \"\"\n help = 'Execute \"runserver 0.0.0.0:8000\"'\n\n def handle(self, *args, **options):\n self.stdout.write('Executing \"manage.py scss\"')\n execute_from_command_line([\"manage.py\", \"scss\"])\n self.stdout.write('Executing \"manage.py ts compile\"')\n execute_from_command_line([\"manage.py\", \"ts\", \"compile\"])\n self.stdout.write('Executing \"manage.py runserver 0.0.0.0:8000\"')\n sys.argv = [\"manage.py\", \"runserver\", \"0.0.0.0:8000\"]\n execute_from_command_line(sys.argv)\n", "path": "evap/development/management/commands/run.py"}], "after_files": [{"content": "import sys\nfrom subprocess import Popen # nosec\n\nfrom django.core.management import execute_from_command_line\nfrom django.core.management.base import BaseCommand\n\n\nclass Command(BaseCommand):\n args = \"\"\n help = 'Execute \"runserver 0.0.0.0:8000\"'\n\n def handle(self, *args, **options):\n self.stdout.write('Executing \"manage.py scss\" and \"manage.py ts compile\"')\n with Popen([\"./manage.py\", \"scss\"]), Popen([\"./manage.py\", \"ts\", \"compile\"]): # nosec\n self.stdout.write('Executing \"manage.py runserver 0.0.0.0:8000\"')\n sys.argv = [\"manage.py\", \"runserver\", \"0.0.0.0:8000\"]\n execute_from_command_line(sys.argv)\n", "path": "evap/development/management/commands/run.py"}]}
| 562 | 381 |
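A note on the pattern used in the EvaP patch above: `subprocess.Popen` objects are context managers, so the two asset watchers stay alive in the background while the blocking `runserver` call runs and are only waited on once it exits. The sketch below restates the patched `handle` logic as a standalone script with explanatory comments; the management commands come from the patch, the rest is illustrative and assumes a configured Django project.

```python
import sys
from subprocess import Popen  # nosec - launching trusted local management commands

from django.core.management import execute_from_command_line


def run_dev_server_with_watchers():
    # Start the SCSS and TypeScript pipelines as background children.
    # Popen returns immediately, so neither call delays server startup.
    with Popen(["./manage.py", "scss"]), Popen(["./manage.py", "ts", "compile"]):
        # runserver blocks here until it is stopped; the watchers keep
        # running alongside it. Leaving the with-block waits on both children.
        sys.argv = ["manage.py", "runserver", "0.0.0.0:8000"]
        execute_from_command_line(sys.argv)


if __name__ == "__main__":
    run_dev_server_with_watchers()
```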
gh_patches_debug_562
|
rasdani/github-patches
|
git_diff
|
mabel-dev__opteryx-1641
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
🪲 Python 3.9 tests stalling
### Thank you for taking the time to report a problem with Opteryx.
_To help us to respond to your request we ask that you try to provide the below detail about the bug._
**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._
**Expected behaviour** _A clear and concise description of what you expected to happen._
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
~~~sql
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opteryx/__version__.py`
Content:
```
1 __build__ = 477
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 15
31 _revision = 0
32 _status = VersionStatus.BETA
33
34 __author__ = "@joocer"
35 __version__ = f"{_major}.{_minor}.{_revision}" + (
36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
37 )
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 477
+__build__ = 482
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
|
{"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 477\n+__build__ = 482\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n", "issue": "\ud83e\udeb2 Python 3.9 tests stalling\n### Thank you for taking the time to report a problem with Opteryx.\r\n_To help us to respond to your request we ask that you try to provide the below detail about the bug._\r\n\r\n**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._\r\n\r\n\r\n**Expected behaviour** _A clear and concise description of what you expected to happen._\r\n\r\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\n~~~sql\r\n\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "__build__ = 477\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 15\n_revision = 0\n_status = VersionStatus.BETA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}], "after_files": [{"content": "__build__ = 482\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 15\n_revision = 0\n_status = VersionStatus.BETA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]}
| 779 | 101 |
gh_patches_debug_36740
|
rasdani/github-patches
|
git_diff
|
plotly__dash-1970
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Dropdown: Selected options not showing when the `value` contains a comma
As [reported on the forum by @ marcus](https://community.plotly.com/t/dcc-dropdown-bug-suspected-please-confirm-or-correct-me/60585):
Problem is in this spot: 'value': "50 , 10"
Specifically, a comma in the `value` string means the selection is not displayed. When the comma is removed or replaced (a dot creates no issue) it works fine, but when a comma is within the string, the selections are not displayed in the input box and are not removed from the `options`.
This only occurs in Multi-Value dropdowns. This is new in Dash V2.1.0; it works as expected in V2.0.0.
```
from dash import Dash, dcc, html, Input, Output
app = Dash(__name__)
app.layout = html.Div(
[
dcc.Dropdown(
id="dropdown",
className="inputbox-long",
options=[
{"label": "New York City", "value": "50, 10"},
{"label": "Montreal", "value": "MTL" },
{"label": "San Francisco", "value": "SF"},
],
placeholder="Select one or more",
multi=True,
),
html.Div(id="output"),
]
)
@app.callback(
Output("output", "children"), Input("dropdown", "value"),
)
def update(value):
return value
if __name__ == "__main__":
app.run_server(debug=True)
```

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `components/dash-html-components/dash_html_components_base/__init__.py`
Content:
```
1 """Vanilla HTML components for Dash"""
2
3 from ._imports_ import * # noqa: E402, F401, F403
4 from ._imports_ import __all__ # noqa: E402
5
6 import json
7 import os as _os
8 import sys as _sys
9 import dash as _dash
10
11 _basepath = _os.path.dirname(__file__)
12 _filepath = _os.path.abspath(_os.path.join(_basepath, "package-info.json"))
13 with open(_filepath) as f:
14 package = json.load(f)
15
16 package_name = package["name"].replace(" ", "_").replace("-", "_")
17 __version__ = package["version"]
18
19
20 # Module imports trigger a dash.development import, need to check this first
21 if not hasattr(_dash, "__plotly_dash") and not hasattr(_dash, "development"):
22 print(
23 "Dash was not successfully imported. Make sure you don't have a file "
24 "named \n'dash.py' in your current directory.",
25 file=_sys.stderr,
26 )
27 _sys.exit(1)
28
29 _current_path = _os.path.dirname(_os.path.abspath(__file__))
30
31
32 _this_module = "dash_html_components"
33
34 _js_dist = [
35 {
36 "relative_package_path": 'html/{}.min.js'.format(_this_module),
37 "external_url": (
38 "https://unpkg.com/dash-html-components@{}"
39 "/dash_html_components/dash_html_components.min.js"
40 ).format(__version__),
41 "namespace": "dash"
42 },
43 {
44 'relative_package_path': 'html/{}.min.js.map'.format(_this_module),
45 'external_url': (
46 'https://unpkg.com/dash-html-components@{}'
47 '/dash_html_components/dash_html_components.min.js.map'
48 ).format(__version__),
49 'namespace': 'dash',
50 'dynamic': True
51 }
52 ]
53
54 _css_dist = []
55
56
57 for _component in __all__:
58 setattr(locals()[_component], '_js_dist', _js_dist)
59 setattr(locals()[_component], '_css_dist', _css_dist)
60
```
Path: `components/dash-html-components/setup.py`
Content:
```
1 import io
2 import json
3 from setuptools import setup
4
5 with open('package.json') as f:
6 package = json.load(f)
7
8 package_name = str(package["name"].replace(" ", "_").replace("-", "_"))
9
10 setup(
11 name='dash_html_components',
12 version=package["version"],
13 author=package['author'],
14 author_email='[email protected]',
15 packages=[package_name],
16 url='https://github.com/plotly/dash-html-components',
17 include_package_data=True,
18 license=package['license'],
19 description=package['description'] if 'description' in package else package_name,
20 long_description=io.open('README.md', encoding='utf-8').read(),
21 long_description_content_type='text/markdown',
22 install_requires=[]
23 )
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/components/dash-html-components/dash_html_components_base/__init__.py b/components/dash-html-components/dash_html_components_base/__init__.py
--- a/components/dash-html-components/dash_html_components_base/__init__.py
+++ b/components/dash-html-components/dash_html_components_base/__init__.py
@@ -33,27 +33,27 @@
_js_dist = [
{
- "relative_package_path": 'html/{}.min.js'.format(_this_module),
+ "relative_package_path": "html/{}.min.js".format(_this_module),
"external_url": (
"https://unpkg.com/dash-html-components@{}"
"/dash_html_components/dash_html_components.min.js"
).format(__version__),
- "namespace": "dash"
+ "namespace": "dash",
},
{
- 'relative_package_path': 'html/{}.min.js.map'.format(_this_module),
- 'external_url': (
- 'https://unpkg.com/dash-html-components@{}'
- '/dash_html_components/dash_html_components.min.js.map'
+ "relative_package_path": "html/{}.min.js.map".format(_this_module),
+ "external_url": (
+ "https://unpkg.com/dash-html-components@{}"
+ "/dash_html_components/dash_html_components.min.js.map"
).format(__version__),
- 'namespace': 'dash',
- 'dynamic': True
- }
+ "namespace": "dash",
+ "dynamic": True,
+ },
]
_css_dist = []
for _component in __all__:
- setattr(locals()[_component], '_js_dist', _js_dist)
- setattr(locals()[_component], '_css_dist', _css_dist)
+ setattr(locals()[_component], "_js_dist", _js_dist)
+ setattr(locals()[_component], "_css_dist", _css_dist)
diff --git a/components/dash-html-components/setup.py b/components/dash-html-components/setup.py
--- a/components/dash-html-components/setup.py
+++ b/components/dash-html-components/setup.py
@@ -2,22 +2,22 @@
import json
from setuptools import setup
-with open('package.json') as f:
+with open("package.json") as f:
package = json.load(f)
package_name = str(package["name"].replace(" ", "_").replace("-", "_"))
setup(
- name='dash_html_components',
+ name="dash_html_components",
version=package["version"],
- author=package['author'],
- author_email='[email protected]',
+ author=package["author"],
+ author_email="[email protected]",
packages=[package_name],
- url='https://github.com/plotly/dash-html-components',
+ url="https://github.com/plotly/dash-html-components",
include_package_data=True,
- license=package['license'],
- description=package['description'] if 'description' in package else package_name,
- long_description=io.open('README.md', encoding='utf-8').read(),
- long_description_content_type='text/markdown',
- install_requires=[]
+ license=package["license"],
+ description=package["description"] if "description" in package else package_name,
+ long_description=io.open("README.md", encoding="utf-8").read(),
+ long_description_content_type="text/markdown",
+ install_requires=[],
)
|
{"golden_diff": "diff --git a/components/dash-html-components/dash_html_components_base/__init__.py b/components/dash-html-components/dash_html_components_base/__init__.py\n--- a/components/dash-html-components/dash_html_components_base/__init__.py\n+++ b/components/dash-html-components/dash_html_components_base/__init__.py\n@@ -33,27 +33,27 @@\n \n _js_dist = [\n {\n- \"relative_package_path\": 'html/{}.min.js'.format(_this_module),\n+ \"relative_package_path\": \"html/{}.min.js\".format(_this_module),\n \"external_url\": (\n \"https://unpkg.com/dash-html-components@{}\"\n \"/dash_html_components/dash_html_components.min.js\"\n ).format(__version__),\n- \"namespace\": \"dash\"\n+ \"namespace\": \"dash\",\n },\n {\n- 'relative_package_path': 'html/{}.min.js.map'.format(_this_module),\n- 'external_url': (\n- 'https://unpkg.com/dash-html-components@{}'\n- '/dash_html_components/dash_html_components.min.js.map'\n+ \"relative_package_path\": \"html/{}.min.js.map\".format(_this_module),\n+ \"external_url\": (\n+ \"https://unpkg.com/dash-html-components@{}\"\n+ \"/dash_html_components/dash_html_components.min.js.map\"\n ).format(__version__),\n- 'namespace': 'dash',\n- 'dynamic': True\n- }\n+ \"namespace\": \"dash\",\n+ \"dynamic\": True,\n+ },\n ]\n \n _css_dist = []\n \n \n for _component in __all__:\n- setattr(locals()[_component], '_js_dist', _js_dist)\n- setattr(locals()[_component], '_css_dist', _css_dist)\n+ setattr(locals()[_component], \"_js_dist\", _js_dist)\n+ setattr(locals()[_component], \"_css_dist\", _css_dist)\ndiff --git a/components/dash-html-components/setup.py b/components/dash-html-components/setup.py\n--- a/components/dash-html-components/setup.py\n+++ b/components/dash-html-components/setup.py\n@@ -2,22 +2,22 @@\n import json\n from setuptools import setup\n \n-with open('package.json') as f:\n+with open(\"package.json\") as f:\n package = json.load(f)\n \n package_name = str(package[\"name\"].replace(\" \", \"_\").replace(\"-\", \"_\"))\n \n setup(\n- name='dash_html_components',\n+ name=\"dash_html_components\",\n version=package[\"version\"],\n- author=package['author'],\n- author_email='[email protected]',\n+ author=package[\"author\"],\n+ author_email=\"[email protected]\",\n packages=[package_name],\n- url='https://github.com/plotly/dash-html-components',\n+ url=\"https://github.com/plotly/dash-html-components\",\n include_package_data=True,\n- license=package['license'],\n- description=package['description'] if 'description' in package else package_name,\n- long_description=io.open('README.md', encoding='utf-8').read(),\n- long_description_content_type='text/markdown',\n- install_requires=[]\n+ license=package[\"license\"],\n+ description=package[\"description\"] if \"description\" in package else package_name,\n+ long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n+ long_description_content_type=\"text/markdown\",\n+ install_requires=[],\n )\n", "issue": "[BUG] Dropdown: Selected options not showing when the `value` contains a comma \nAs [reported on the forum by @ marcus](https://community.plotly.com/t/dcc-dropdown-bug-suspected-please-confirm-or-correct-me/60585):\r\n\r\nProblem is in this spot: 'value': \"50 , 10\"\r\nSpecifically comma in `value` string makes that selection is not being displayed. When coma is removed or replaced (dot creates no issue) it works fine, but when coma is within the string, the selections are not displayed in the input box and are not removed from the `options`\r\n\r\n This only occurs in Multi-Value dropdowns. 
This is new in Dash V2.1.0 It works as expected in V2.0.0\r\n\r\n\r\n\r\n\r\n```\r\nfrom dash import Dash, dcc, html, Input, Output\r\n\r\napp = Dash(__name__)\r\n\r\napp.layout = html.Div(\r\n [\r\n dcc.Dropdown(\r\n id=\"dropdown\",\r\n className=\"inputbox-long\",\r\n options=[\r\n {\"label\": \"New York City\", \"value\": \"50, 10\"},\r\n {\"label\": \"Montreal\", \"value\": \"MTL\" },\r\n {\"label\": \"San Francisco\", \"value\": \"SF\"},\r\n ],\r\n placeholder=\"Select one or more\",\r\n multi=True,\r\n ),\r\n html.Div(id=\"output\"),\r\n ]\r\n)\r\n\r\n\r\[email protected](\r\n Output(\"output\", \"children\"), Input(\"dropdown\", \"value\"),\r\n)\r\ndef update(value): \r\n return value\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app.run_server(debug=True)\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"Vanilla HTML components for Dash\"\"\"\n\nfrom ._imports_ import * # noqa: E402, F401, F403\nfrom ._imports_ import __all__ # noqa: E402\n\nimport json\nimport os as _os\nimport sys as _sys\nimport dash as _dash\n\n_basepath = _os.path.dirname(__file__)\n_filepath = _os.path.abspath(_os.path.join(_basepath, \"package-info.json\"))\nwith open(_filepath) as f:\n package = json.load(f)\n\npackage_name = package[\"name\"].replace(\" \", \"_\").replace(\"-\", \"_\")\n__version__ = package[\"version\"]\n\n\n# Module imports trigger a dash.development import, need to check this first\nif not hasattr(_dash, \"__plotly_dash\") and not hasattr(_dash, \"development\"):\n print(\n \"Dash was not successfully imported. Make sure you don't have a file \"\n \"named \\n'dash.py' in your current directory.\",\n file=_sys.stderr,\n )\n _sys.exit(1)\n\n_current_path = _os.path.dirname(_os.path.abspath(__file__))\n\n\n_this_module = \"dash_html_components\"\n\n_js_dist = [\n {\n \"relative_package_path\": 'html/{}.min.js'.format(_this_module),\n \"external_url\": (\n \"https://unpkg.com/dash-html-components@{}\"\n \"/dash_html_components/dash_html_components.min.js\"\n ).format(__version__),\n \"namespace\": \"dash\"\n },\n {\n 'relative_package_path': 'html/{}.min.js.map'.format(_this_module),\n 'external_url': (\n 'https://unpkg.com/dash-html-components@{}'\n '/dash_html_components/dash_html_components.min.js.map'\n ).format(__version__),\n 'namespace': 'dash',\n 'dynamic': True\n }\n]\n\n_css_dist = []\n\n\nfor _component in __all__:\n setattr(locals()[_component], '_js_dist', _js_dist)\n setattr(locals()[_component], '_css_dist', _css_dist)\n", "path": "components/dash-html-components/dash_html_components_base/__init__.py"}, {"content": "import io\nimport json\nfrom setuptools import setup\n\nwith open('package.json') as f:\n package = json.load(f)\n\npackage_name = str(package[\"name\"].replace(\" \", \"_\").replace(\"-\", \"_\"))\n\nsetup(\n name='dash_html_components',\n version=package[\"version\"],\n author=package['author'],\n author_email='[email protected]',\n packages=[package_name],\n url='https://github.com/plotly/dash-html-components',\n include_package_data=True,\n license=package['license'],\n description=package['description'] if 'description' in package else package_name,\n long_description=io.open('README.md', encoding='utf-8').read(),\n long_description_content_type='text/markdown',\n install_requires=[]\n)\n", "path": "components/dash-html-components/setup.py"}], "after_files": [{"content": "\"\"\"Vanilla HTML components for Dash\"\"\"\n\nfrom ._imports_ import * # noqa: E402, F401, F403\nfrom ._imports_ import __all__ # noqa: E402\n\nimport json\nimport os as _os\nimport 
sys as _sys\nimport dash as _dash\n\n_basepath = _os.path.dirname(__file__)\n_filepath = _os.path.abspath(_os.path.join(_basepath, \"package-info.json\"))\nwith open(_filepath) as f:\n package = json.load(f)\n\npackage_name = package[\"name\"].replace(\" \", \"_\").replace(\"-\", \"_\")\n__version__ = package[\"version\"]\n\n\n# Module imports trigger a dash.development import, need to check this first\nif not hasattr(_dash, \"__plotly_dash\") and not hasattr(_dash, \"development\"):\n print(\n \"Dash was not successfully imported. Make sure you don't have a file \"\n \"named \\n'dash.py' in your current directory.\",\n file=_sys.stderr,\n )\n _sys.exit(1)\n\n_current_path = _os.path.dirname(_os.path.abspath(__file__))\n\n\n_this_module = \"dash_html_components\"\n\n_js_dist = [\n {\n \"relative_package_path\": \"html/{}.min.js\".format(_this_module),\n \"external_url\": (\n \"https://unpkg.com/dash-html-components@{}\"\n \"/dash_html_components/dash_html_components.min.js\"\n ).format(__version__),\n \"namespace\": \"dash\",\n },\n {\n \"relative_package_path\": \"html/{}.min.js.map\".format(_this_module),\n \"external_url\": (\n \"https://unpkg.com/dash-html-components@{}\"\n \"/dash_html_components/dash_html_components.min.js.map\"\n ).format(__version__),\n \"namespace\": \"dash\",\n \"dynamic\": True,\n },\n]\n\n_css_dist = []\n\n\nfor _component in __all__:\n setattr(locals()[_component], \"_js_dist\", _js_dist)\n setattr(locals()[_component], \"_css_dist\", _css_dist)\n", "path": "components/dash-html-components/dash_html_components_base/__init__.py"}, {"content": "import io\nimport json\nfrom setuptools import setup\n\nwith open(\"package.json\") as f:\n package = json.load(f)\n\npackage_name = str(package[\"name\"].replace(\" \", \"_\").replace(\"-\", \"_\"))\n\nsetup(\n name=\"dash_html_components\",\n version=package[\"version\"],\n author=package[\"author\"],\n author_email=\"[email protected]\",\n packages=[package_name],\n url=\"https://github.com/plotly/dash-html-components\",\n include_package_data=True,\n license=package[\"license\"],\n description=package[\"description\"] if \"description\" in package else package_name,\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n install_requires=[],\n)\n", "path": "components/dash-html-components/setup.py"}]}
| 1,468 | 764 |
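The accepted patch for the record above only reformats packaging files, so the comma behaviour described in the issue is unchanged by it. As an illustration only, and not the upstream fix, the reporter's own observation that comma-free values display correctly suggests a workaround: encode structured values with a different delimiter and decode them in the callback. The component layout is taken from the issue; the `|` delimiter is an assumption.

```python
from dash import Dash, dcc, html, Input, Output

app = Dash(__name__)

app.layout = html.Div([
    dcc.Dropdown(
        id="dropdown",
        options=[
            # Encode the "50, 10" pair without a comma so the selected chip renders.
            {"label": "New York City", "value": "50|10"},
            {"label": "Montreal", "value": "MTL"},
            {"label": "San Francisco", "value": "SF"},
        ],
        placeholder="Select one or more",
        multi=True,
    ),
    html.Div(id="output"),
])


@app.callback(Output("output", "children"), Input("dropdown", "value"))
def update(values):
    # With multi=True the value prop is a list (or None before any selection).
    if not values:
        return ""
    # Decode the delimiter back to the original comma-separated form for display.
    return "; ".join(v.replace("|", ", ") for v in values)


if __name__ == "__main__":
    app.run_server(debug=True)
```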
gh_patches_debug_15670
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-1925
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Optimize Rainloop image
## Before you open your issue
- [X] Check if no issue or pull-request for this already exists.
- [X] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- [X] You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- [X] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.
## Environment & Versions
### Environment
- [X] docker-compose
- [X] kubernetes
- [X] docker swarm
### Versions
Master
```
$> docker images
mailu/rainloop master 2ad8d1c29ff3 45 hours ago 607MB
```
Optimized version using NGINX:
```
user/rainloop master 7de9dee9286d 2 seconds ago 116MB
```
## Description
In order to resolve issues #1830 and #1200, it is required to reduce the build time, complexity and size of the images before multi-architecture builds can be supported. The current Rainloop image size is 607MB and can be optimized by changing from Apache to NGINX with officially supported Docker images. This can also increase overall web performance.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `webmails/rainloop/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import shutil
5 import logging as log
6 import sys
7 from socrate import system, conf
8
9 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
10
11 # Actual startup script
12 os.environ["FRONT_ADDRESS"] = system.resolve_address(os.environ.get("HOST_FRONT", "front"))
13 os.environ["IMAP_ADDRESS"] = system.resolve_address(os.environ.get("HOST_IMAP", "imap"))
14
15 os.environ["MAX_FILESIZE"] = str(int(int(os.environ.get("MESSAGE_SIZE_LIMIT"))*0.66/1048576))
16
17 base = "/data/_data_/_default_/"
18 shutil.rmtree(base + "domains/", ignore_errors=True)
19 os.makedirs(base + "domains", exist_ok=True)
20 os.makedirs(base + "configs", exist_ok=True)
21
22 conf.jinja("/default.ini", os.environ, "/data/_data_/_default_/domains/default.ini")
23 conf.jinja("/application.ini", os.environ, "/data/_data_/_default_/configs/application.ini")
24 conf.jinja("/php.ini", os.environ, "/usr/local/etc/php/conf.d/rainloop.ini")
25
26 os.system("chown -R www-data:www-data /data")
27 os.system("chmod -R a+rX /var/www/html/")
28
29 os.execv("/usr/local/bin/apache2-foreground", ["apache2-foreground"])
30
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/webmails/rainloop/start.py b/webmails/rainloop/start.py
--- a/webmails/rainloop/start.py
+++ b/webmails/rainloop/start.py
@@ -19,12 +19,11 @@
os.makedirs(base + "domains", exist_ok=True)
os.makedirs(base + "configs", exist_ok=True)
-conf.jinja("/default.ini", os.environ, "/data/_data_/_default_/domains/default.ini")
-conf.jinja("/application.ini", os.environ, "/data/_data_/_default_/configs/application.ini")
-conf.jinja("/php.ini", os.environ, "/usr/local/etc/php/conf.d/rainloop.ini")
+conf.jinja("/defaults/default.ini", os.environ, "/data/_data_/_default_/domains/default.ini")
+conf.jinja("/defaults/application.ini", os.environ, "/data/_data_/_default_/configs/application.ini")
+conf.jinja("/defaults/php.ini", os.environ, "/etc/php7/php.ini")
-os.system("chown -R www-data:www-data /data")
-os.system("chmod -R a+rX /var/www/html/")
-
-os.execv("/usr/local/bin/apache2-foreground", ["apache2-foreground"])
+os.system("chown -R nginx:nginx /data")
+os.system("chmod -R a+rX /var/www/rainloop/")
+os.execv("/usr/sbin/nginx", ["nginx", "-g", "daemon off;"])
|
{"golden_diff": "diff --git a/webmails/rainloop/start.py b/webmails/rainloop/start.py\n--- a/webmails/rainloop/start.py\n+++ b/webmails/rainloop/start.py\n@@ -19,12 +19,11 @@\n os.makedirs(base + \"domains\", exist_ok=True)\n os.makedirs(base + \"configs\", exist_ok=True)\n \n-conf.jinja(\"/default.ini\", os.environ, \"/data/_data_/_default_/domains/default.ini\")\n-conf.jinja(\"/application.ini\", os.environ, \"/data/_data_/_default_/configs/application.ini\")\n-conf.jinja(\"/php.ini\", os.environ, \"/usr/local/etc/php/conf.d/rainloop.ini\")\n+conf.jinja(\"/defaults/default.ini\", os.environ, \"/data/_data_/_default_/domains/default.ini\")\n+conf.jinja(\"/defaults/application.ini\", os.environ, \"/data/_data_/_default_/configs/application.ini\")\n+conf.jinja(\"/defaults/php.ini\", os.environ, \"/etc/php7/php.ini\")\n \n-os.system(\"chown -R www-data:www-data /data\")\n-os.system(\"chmod -R a+rX /var/www/html/\")\n-\n-os.execv(\"/usr/local/bin/apache2-foreground\", [\"apache2-foreground\"])\n+os.system(\"chown -R nginx:nginx /data\")\n+os.system(\"chmod -R a+rX /var/www/rainloop/\")\n \n+os.execv(\"/usr/sbin/nginx\", [\"nginx\", \"-g\", \"daemon off;\"])\n", "issue": "Optimize Rainloop image\n## Before you open your issue\r\n- [X] Check if no issue or pull-request for this already exists.\r\n- [X] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- [X] You understand `Mailu` is made by volunteers in their **free time** \u2014 be conscise, civil and accept that delays can occur.\r\n- [X] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\n## Environment & Versions\r\n### Environment\r\n - [X] docker-compose\r\n - [X] kubernetes\r\n - [X] docker swarm\r\n\r\n### Versions\r\nMaster\r\n```\r\n$> docker images\r\nmailu/rainloop master 2ad8d1c29ff3 45 hours ago 607MB\r\n```\r\nOptimized version using NGINX:\r\n```\r\nuser/rainloop master 7de9dee9286d 2 seconds ago 116MB\r\n```\r\n\r\n## Description\r\nIn order to resolve issues #1830 and #1200, it is required to reduce build time, complexity and image size of images before multi-architecture builds can be supported. The current Rainloop image size is 607MB and can be optimized when changing from Apache to NGINX with officially supported Docker images. 
This can also increase overall web performance.\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport shutil\nimport logging as log\nimport sys\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\nos.environ[\"IMAP_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_IMAP\", \"imap\"))\n\nos.environ[\"MAX_FILESIZE\"] = str(int(int(os.environ.get(\"MESSAGE_SIZE_LIMIT\"))*0.66/1048576))\n\nbase = \"/data/_data_/_default_/\"\nshutil.rmtree(base + \"domains/\", ignore_errors=True)\nos.makedirs(base + \"domains\", exist_ok=True)\nos.makedirs(base + \"configs\", exist_ok=True)\n\nconf.jinja(\"/default.ini\", os.environ, \"/data/_data_/_default_/domains/default.ini\")\nconf.jinja(\"/application.ini\", os.environ, \"/data/_data_/_default_/configs/application.ini\")\nconf.jinja(\"/php.ini\", os.environ, \"/usr/local/etc/php/conf.d/rainloop.ini\")\n\nos.system(\"chown -R www-data:www-data /data\")\nos.system(\"chmod -R a+rX /var/www/html/\")\n\nos.execv(\"/usr/local/bin/apache2-foreground\", [\"apache2-foreground\"])\n\n", "path": "webmails/rainloop/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport shutil\nimport logging as log\nimport sys\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\nos.environ[\"IMAP_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_IMAP\", \"imap\"))\n\nos.environ[\"MAX_FILESIZE\"] = str(int(int(os.environ.get(\"MESSAGE_SIZE_LIMIT\"))*0.66/1048576))\n\nbase = \"/data/_data_/_default_/\"\nshutil.rmtree(base + \"domains/\", ignore_errors=True)\nos.makedirs(base + \"domains\", exist_ok=True)\nos.makedirs(base + \"configs\", exist_ok=True)\n\nconf.jinja(\"/defaults/default.ini\", os.environ, \"/data/_data_/_default_/domains/default.ini\")\nconf.jinja(\"/defaults/application.ini\", os.environ, \"/data/_data_/_default_/configs/application.ini\")\nconf.jinja(\"/defaults/php.ini\", os.environ, \"/etc/php7/php.ini\")\n\nos.system(\"chown -R nginx:nginx /data\")\nos.system(\"chmod -R a+rX /var/www/rainloop/\")\n\nos.execv(\"/usr/sbin/nginx\", [\"nginx\", \"-g\", \"daemon off;\"])\n", "path": "webmails/rainloop/start.py"}]}
| 942 | 315 |
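One detail of the Mailu patch above worth spelling out: the start script ends with `os.execv`, which replaces the Python interpreter with nginx instead of forking a child, so nginx typically keeps the container's PID 1 and receives stop signals directly. A minimal sketch of the distinction, using the binary path and arguments from the patch:

```python
import os

# os.system forks a shell, runs the command, and returns; the Python
# process stays alive as the parent (this is how the chown/chmod calls work).
os.system("chown -R nginx:nginx /data")

# os.execv never returns: the current process image is replaced by nginx,
# so signals sent to the container go straight to the nginx master process.
# "daemon off;" keeps nginx in the foreground as the container's main process.
os.execv("/usr/sbin/nginx", ["nginx", "-g", "daemon off;"])
```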
gh_patches_debug_56203
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-3130
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change "Edit" to "Manage" in "Your Projects"
Change the button/link text "Edit" to "Manage" in "Your Projects".
From IRC conversation with @alanbato, @ewdurbin and @nlhkabu .
~~~
<EWDurbin> I think perhaps “Manage” might be a better name for the button that currently says “Edit"
<EWDurbin> Just right off the bat. Since well you can’t really Edit anything, just delete files/releases/projects
<di_codes> ^ agreed
<alanbato> Makes sense to me, Edit misguides people into thinking they can change project attributes imho
<nlh> yep 100% agree
<sumanah> nlh: and I agree with them but I want to hear your thoughts -- you're the one who's done user testing, so do you think people would understand "manage"?
<nlh> i'll open a PR :)
<nlh> yes
<sumanah> thanks nlh!
<nlh> it's also more consistent with the URL structure
~~~
But I do not see a pull request from Nicole yet, so I declare this a:
**Good First Issue**: This issue is good for first time contributors. If there is not a corresponding pull request for this issue, it is up for grabs. For directions for getting set up, see our [Getting Started Guide](https://warehouse.pypa.io/development/getting-started/). If you are working on this issue and have questions, please feel free to ask them here, in [`#pypa-dev` on Freenode](https://webchat.freenode.net/?channels=%23pypa-dev), or on the [pypa-dev mailing list](https://groups.google.com/forum/#!forum/pypa-dev).
Fix for #3118
Fixes #3118.
Updated the projects.html and the related sass snippet to show **manage** instead of **edit** when in **Your projects**.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/packaging/views.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from first import first
14 from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
15 from pyramid.view import view_config
16 from sqlalchemy.orm.exc import NoResultFound
17
18 from warehouse.accounts.models import User
19 from warehouse.cache.origin import origin_cache
20 from warehouse.packaging.models import Release, Role
21
22
23 @view_config(
24 route_name="packaging.project",
25 renderer="packaging/detail.html",
26 decorator=[
27 origin_cache(
28 1 * 24 * 60 * 60, # 1 day
29 stale_while_revalidate=1 * 24 * 60 * 60, # 1 day
30 stale_if_error=5 * 24 * 60 * 60, # 5 days
31 ),
32 ],
33 )
34 def project_detail(project, request):
35 if project.name != request.matchdict.get("name", project.name):
36 return HTTPMovedPermanently(
37 request.current_route_path(name=project.name),
38 )
39
40 try:
41 release = (
42 request.db.query(Release)
43 .filter(Release.project == project)
44 .order_by(
45 Release.is_prerelease.nullslast(),
46 Release._pypi_ordering.desc())
47 .limit(1)
48 .one()
49 )
50 except NoResultFound:
51 return HTTPNotFound()
52
53 return release_detail(release, request)
54
55
56 @view_config(
57 route_name="packaging.release",
58 renderer="packaging/detail.html",
59 decorator=[
60 origin_cache(
61 1 * 24 * 60 * 60, # 1 day
62 stale_while_revalidate=1 * 24 * 60 * 60, # 1 day
63 stale_if_error=5 * 24 * 60 * 60, # 5 days
64 ),
65 ],
66 )
67 def release_detail(release, request):
68 project = release.project
69
70 if not {project.name, release.version} <= set(request.matchdict.values()):
71 return HTTPMovedPermanently(
72 request.current_route_path(
73 name=project.name, version=release.version,
74 ),
75 )
76
77 # Get all of the registered versions for this Project, in order of newest
78 # to oldest.
79 all_releases = (
80 request.db.query(Release)
81 .filter(Release.project == project)
82 .with_entities(
83 Release.version,
84 Release.is_prerelease,
85 Release.created)
86 .order_by(Release._pypi_ordering.desc())
87 .all()
88 )
89
90 # Get the latest non-prerelease of this Project, or the latest release if
91 # all releases are prereleases.
92 latest_release = first(
93 all_releases,
94 key=lambda r: not r.is_prerelease,
95 default=all_releases[0],
96 )
97
98 # Get all of the maintainers for this project.
99 maintainers = [
100 r.user
101 for r in (
102 request.db.query(Role)
103 .join(User)
104 .filter(Role.project == project)
105 .distinct(User.username)
106 .order_by(User.username)
107 .all()
108 )
109 ]
110
111 # Get the license from the classifiers or metadata, preferring classifiers.
112 license = None
113 if release.license:
114 # Make a best effort when the entire license text is given
115 # by using the first line only.
116 license = release.license.split('\n')[0]
117 license_classifiers = [c.split(" :: ")[-1] for c in release.classifiers
118 if c.startswith("License")]
119 if license_classifiers:
120 license = ', '.join(license_classifiers)
121
122 return {
123 "project": project,
124 "release": release,
125 "files": release.files.all(),
126 "latest_release": latest_release,
127 "all_releases": all_releases,
128 "maintainers": maintainers,
129 "license": license,
130 }
131
132
133 @view_config(
134 route_name="includes.edit-project-button",
135 renderer="includes/edit-project-button.html",
136 uses_session=True,
137 permission="manage",
138 )
139 def edit_project_button(project, request):
140 return {'project': project}
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py
--- a/warehouse/packaging/views.py
+++ b/warehouse/packaging/views.py
@@ -132,7 +132,7 @@
@view_config(
route_name="includes.edit-project-button",
- renderer="includes/edit-project-button.html",
+ renderer="includes/manage-project-button.html",
uses_session=True,
permission="manage",
)
|
{"golden_diff": "diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py\n--- a/warehouse/packaging/views.py\n+++ b/warehouse/packaging/views.py\n@@ -132,7 +132,7 @@\n \n @view_config(\n route_name=\"includes.edit-project-button\",\n- renderer=\"includes/edit-project-button.html\",\n+ renderer=\"includes/manage-project-button.html\",\n uses_session=True,\n permission=\"manage\",\n )\n", "issue": "Change \"Edit\" to \"Manage\" in \"Your Projects\"\nChange the button/link text \"Edit\" to \"Manage\" in \"Your Projects\".\r\n\r\nFrom IRC conversation with @alanbato, @ewdurbin and @nlhkabu .\r\n\r\n~~~\r\n<EWDurbin> I think perhaps \u201cManage\u201d might be a better name for the button that currently says \u201cEdit\"\r\n<EWDurbin> Just right off the bat. Since well you can\u2019t really Edit anything, just delete files/releases/projects\r\n<di_codes> ^ agreed\r\n<alanbato> Makes sense to me, Edit misguides people into thinking they can change project attributes imho\r\n<nlh> yep 100% agree\r\n<sumanah> nlh: and I agree with them but I want to hear your thoughts -- you're the one who's done user testing, so do you think people would understand \"manage\"?\r\n<nlh> i'll open a PR :)\r\n<nlh> yes\r\n<sumanah> thanks nlh!\r\n<nlh> it's also more consistent with the URL structure\r\n~~~\r\n\r\nBut I do not see a pull request from Nicole yet, so I declare this a:\r\n\r\n**Good First Issue**: This issue is good for first time contributors. If there is not a corresponding pull request for this issue, it is up for grabs. For directions for getting set up, see our [Getting Started Guide](https://warehouse.pypa.io/development/getting-started/). If you are working on this issue and have questions, please feel free to ask them here, in [`#pypa-dev` on Freenode](https://webchat.freenode.net/?channels=%23pypa-dev), or on the [pypa-dev mailing list](https://groups.google.com/forum/#!forum/pypa-dev).\nFix for #3118\nFixes #3118.\r\n\r\nUpdated the projects.html and the relating sass snippet to show **manage** in stead of **edit** when in **Your projects**.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom first import first\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import Release, Role\n\n\n@view_config(\n route_name=\"packaging.project\",\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=1 * 24 * 60 * 60, # 1 day\n stale_if_error=5 * 24 * 60 * 60, # 5 days\n ),\n ],\n)\ndef project_detail(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name),\n )\n\n try:\n release = (\n request.db.query(Release)\n .filter(Release.project == project)\n 
.order_by(\n Release.is_prerelease.nullslast(),\n Release._pypi_ordering.desc())\n .limit(1)\n .one()\n )\n except NoResultFound:\n return HTTPNotFound()\n\n return release_detail(release, request)\n\n\n@view_config(\n route_name=\"packaging.release\",\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=1 * 24 * 60 * 60, # 1 day\n stale_if_error=5 * 24 * 60 * 60, # 5 days\n ),\n ],\n)\ndef release_detail(release, request):\n project = release.project\n\n if not {project.name, release.version} <= set(request.matchdict.values()):\n return HTTPMovedPermanently(\n request.current_route_path(\n name=project.name, version=release.version,\n ),\n )\n\n # Get all of the registered versions for this Project, in order of newest\n # to oldest.\n all_releases = (\n request.db.query(Release)\n .filter(Release.project == project)\n .with_entities(\n Release.version,\n Release.is_prerelease,\n Release.created)\n .order_by(Release._pypi_ordering.desc())\n .all()\n )\n\n # Get the latest non-prerelease of this Project, or the latest release if\n # all releases are prereleases.\n latest_release = first(\n all_releases,\n key=lambda r: not r.is_prerelease,\n default=all_releases[0],\n )\n\n # Get all of the maintainers for this project.\n maintainers = [\n r.user\n for r in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .order_by(User.username)\n .all()\n )\n ]\n\n # Get the license from the classifiers or metadata, preferring classifiers.\n license = None\n if release.license:\n # Make a best effort when the entire license text is given\n # by using the first line only.\n license = release.license.split('\\n')[0]\n license_classifiers = [c.split(\" :: \")[-1] for c in release.classifiers\n if c.startswith(\"License\")]\n if license_classifiers:\n license = ', '.join(license_classifiers)\n\n return {\n \"project\": project,\n \"release\": release,\n \"files\": release.files.all(),\n \"latest_release\": latest_release,\n \"all_releases\": all_releases,\n \"maintainers\": maintainers,\n \"license\": license,\n }\n\n\n@view_config(\n route_name=\"includes.edit-project-button\",\n renderer=\"includes/edit-project-button.html\",\n uses_session=True,\n permission=\"manage\",\n)\ndef edit_project_button(project, request):\n return {'project': project}\n", "path": "warehouse/packaging/views.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom first import first\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import Release, Role\n\n\n@view_config(\n route_name=\"packaging.project\",\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=1 * 24 * 60 * 60, # 1 
day\n stale_if_error=5 * 24 * 60 * 60, # 5 days\n ),\n ],\n)\ndef project_detail(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name),\n )\n\n try:\n release = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(\n Release.is_prerelease.nullslast(),\n Release._pypi_ordering.desc())\n .limit(1)\n .one()\n )\n except NoResultFound:\n return HTTPNotFound()\n\n return release_detail(release, request)\n\n\n@view_config(\n route_name=\"packaging.release\",\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=1 * 24 * 60 * 60, # 1 day\n stale_if_error=5 * 24 * 60 * 60, # 5 days\n ),\n ],\n)\ndef release_detail(release, request):\n project = release.project\n\n if not {project.name, release.version} <= set(request.matchdict.values()):\n return HTTPMovedPermanently(\n request.current_route_path(\n name=project.name, version=release.version,\n ),\n )\n\n # Get all of the registered versions for this Project, in order of newest\n # to oldest.\n all_releases = (\n request.db.query(Release)\n .filter(Release.project == project)\n .with_entities(\n Release.version,\n Release.is_prerelease,\n Release.created)\n .order_by(Release._pypi_ordering.desc())\n .all()\n )\n\n # Get the latest non-prerelease of this Project, or the latest release if\n # all releases are prereleases.\n latest_release = first(\n all_releases,\n key=lambda r: not r.is_prerelease,\n default=all_releases[0],\n )\n\n # Get all of the maintainers for this project.\n maintainers = [\n r.user\n for r in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .order_by(User.username)\n .all()\n )\n ]\n\n # Get the license from the classifiers or metadata, preferring classifiers.\n license = None\n if release.license:\n # Make a best effort when the entire license text is given\n # by using the first line only.\n license = release.license.split('\\n')[0]\n license_classifiers = [c.split(\" :: \")[-1] for c in release.classifiers\n if c.startswith(\"License\")]\n if license_classifiers:\n license = ', '.join(license_classifiers)\n\n return {\n \"project\": project,\n \"release\": release,\n \"files\": release.files.all(),\n \"latest_release\": latest_release,\n \"all_releases\": all_releases,\n \"maintainers\": maintainers,\n \"license\": license,\n }\n\n\n@view_config(\n route_name=\"includes.edit-project-button\",\n renderer=\"includes/manage-project-button.html\",\n uses_session=True,\n permission=\"manage\",\n)\ndef edit_project_button(project, request):\n return {'project': project}\n", "path": "warehouse/packaging/views.py"}]}
| 2,012 | 99 |
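For the Warehouse record above, the Python change is only the `renderer` string passed to Pyramid's `view_config`; that string is resolved against the configured template search path, so the rename only takes effect once the HTML file itself is renamed to match (a change outside this module). A hedged sketch of the patched view for readers unfamiliar with Pyramid renderers:

```python
from pyramid.view import view_config


@view_config(
    route_name="includes.edit-project-button",
    # Pyramid resolves this asset spec against the configured template
    # directories, so a manage-project-button.html template must exist there.
    renderer="includes/manage-project-button.html",
    uses_session=True,
    permission="manage",
)
def edit_project_button(project, request):
    # The returned dict becomes the template's rendering context.
    return {"project": project}
```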
gh_patches_debug_31146
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-2594
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
install.sh fails with empty secret_key on Mac OS Ventura
## Description
Mathesar fails to start because `secret_key` is empty in the `.env` file after running `install.sh`. The script itself also fails because of this, so the steps after it do not run.
Note: This happens on Mac OS Ventura, but seems to work fine on Big Sur
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/engine.py`
Content:
```
1 import copy
2
3 from sqlalchemy import create_engine as sa_create_engine
4
5 from db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS
6
7
8 def get_connection_string(username, password, hostname, database, port='5432'):
9 return f"postgresql://{username}:{password}@{hostname}:{port}/{database}"
10
11
12 def create_future_engine_with_custom_types(
13 username, password, hostname, database, port, *args, **kwargs
14 ):
15 engine = create_future_engine(
16 username, password, hostname, database, port, *args, **kwargs
17 )
18 # We need to add our custom types to any engine created for SQLALchemy use
19 # so that they can be used for reflection
20 add_custom_types_to_ischema_names(engine)
21 return engine
22
23
24 # TODO would an engine without ischema names updated ever be used? make it private if not
25 def create_future_engine(
26 username, password, hostname, database, port, *args, **kwargs
27 ):
28 conn_str = get_connection_string(
29 username, password, hostname, database, port
30 )
31 kwargs.update(future=True)
32 return create_engine(conn_str, *args, **kwargs)
33
34
35 # NOTE: used in testing, hence public
36 def create_engine(conn_str, *args, **kwargs):
37 """
38 Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names
39 across all engines. This is important for testing: without this intervention, fixtures become
40 randomly corrupted.
41 """
42 engine = sa_create_engine(conn_str, *args, **kwargs)
43 _make_ischema_names_unique(engine)
44 return engine
45
46
47 # TODO should refactor for this to be private
48 def add_custom_types_to_ischema_names(engine):
49 """
50 Updating the ischema_names dict changes which Postgres types are reflected into which SA
51 classes.
52 """
53 for db_type, sa_class in CUSTOM_DB_TYPE_TO_SA_CLASS.items():
54 db_type_id = db_type.id
55 engine.dialect.ischema_names[db_type_id] = sa_class
56
57
58 def get_dummy_engine():
59 """
60 In some cases we only need an engine to access the Postgres dialect. E.g. when examining the
61 ischema_names dict. In those cases, following is enough:
62 """
63 engine = create_engine("postgresql://", future=True)
64 add_custom_types_to_ischema_names(engine)
65 return engine
66
67
68 def _make_ischema_names_unique(engine):
69 """
70 For some reason, engine.dialect.ischema_names reference the same dict across different engines.
71 This resets it to a referentially unique copy of itself.
72 """
73 ischema_names = engine.dialect.ischema_names
74 ischema_names_copy = copy.deepcopy(ischema_names)
75 setattr(engine.dialect, "ischema_names", ischema_names_copy)
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/db/engine.py b/db/engine.py
--- a/db/engine.py
+++ b/db/engine.py
@@ -1,14 +1,11 @@
import copy
from sqlalchemy import create_engine as sa_create_engine
+from sqlalchemy.engine import URL
from db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS
-def get_connection_string(username, password, hostname, database, port='5432'):
- return f"postgresql://{username}:{password}@{hostname}:{port}/{database}"
-
-
def create_future_engine_with_custom_types(
username, password, hostname, database, port, *args, **kwargs
):
@@ -25,21 +22,26 @@
def create_future_engine(
username, password, hostname, database, port, *args, **kwargs
):
- conn_str = get_connection_string(
- username, password, hostname, database, port
+ conn_url = URL.create(
+ "postgresql",
+ username=username,
+ password=password,
+ host=hostname,
+ database=database,
+ port=port,
)
kwargs.update(future=True)
- return create_engine(conn_str, *args, **kwargs)
+ return create_engine(conn_url, *args, **kwargs)
# NOTE: used in testing, hence public
-def create_engine(conn_str, *args, **kwargs):
+def create_engine(conn_url, *args, **kwargs):
"""
Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names
across all engines. This is important for testing: without this intervention, fixtures become
randomly corrupted.
"""
- engine = sa_create_engine(conn_str, *args, **kwargs)
+ engine = sa_create_engine(conn_url, *args, **kwargs)
_make_ischema_names_unique(engine)
return engine
|
{"golden_diff": "diff --git a/db/engine.py b/db/engine.py\n--- a/db/engine.py\n+++ b/db/engine.py\n@@ -1,14 +1,11 @@\n import copy\n \n from sqlalchemy import create_engine as sa_create_engine\n+from sqlalchemy.engine import URL\n \n from db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS\n \n \n-def get_connection_string(username, password, hostname, database, port='5432'):\n- return f\"postgresql://{username}:{password}@{hostname}:{port}/{database}\"\n-\n-\n def create_future_engine_with_custom_types(\n username, password, hostname, database, port, *args, **kwargs\n ):\n@@ -25,21 +22,26 @@\n def create_future_engine(\n username, password, hostname, database, port, *args, **kwargs\n ):\n- conn_str = get_connection_string(\n- username, password, hostname, database, port\n+ conn_url = URL.create(\n+ \"postgresql\",\n+ username=username,\n+ password=password,\n+ host=hostname,\n+ database=database,\n+ port=port,\n )\n kwargs.update(future=True)\n- return create_engine(conn_str, *args, **kwargs)\n+ return create_engine(conn_url, *args, **kwargs)\n \n \n # NOTE: used in testing, hence public\n-def create_engine(conn_str, *args, **kwargs):\n+def create_engine(conn_url, *args, **kwargs):\n \"\"\"\n Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names\n across all engines. This is important for testing: without this intervention, fixtures become\n randomly corrupted.\n \"\"\"\n- engine = sa_create_engine(conn_str, *args, **kwargs)\n+ engine = sa_create_engine(conn_url, *args, **kwargs)\n _make_ischema_names_unique(engine)\n return engine\n", "issue": "install.sh fails with empty secret_key on Mac OS Ventura\n## Description\r\n\r\nMathesar fails to start because secret_key is empty in .env file after running `install.sh`. The script also fails due to that and steps after that do not run.\r\n\r\nNote: This happens on Mac OS Ventura, but seems to work fine on Big Sur\n", "before_files": [{"content": "import copy\n\nfrom sqlalchemy import create_engine as sa_create_engine\n\nfrom db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS\n\n\ndef get_connection_string(username, password, hostname, database, port='5432'):\n return f\"postgresql://{username}:{password}@{hostname}:{port}/{database}\"\n\n\ndef create_future_engine_with_custom_types(\n username, password, hostname, database, port, *args, **kwargs\n):\n engine = create_future_engine(\n username, password, hostname, database, port, *args, **kwargs\n )\n # We need to add our custom types to any engine created for SQLALchemy use\n # so that they can be used for reflection\n add_custom_types_to_ischema_names(engine)\n return engine\n\n\n# TODO would an engine without ischema names updated ever be used? make it private if not\ndef create_future_engine(\n username, password, hostname, database, port, *args, **kwargs\n):\n conn_str = get_connection_string(\n username, password, hostname, database, port\n )\n kwargs.update(future=True)\n return create_engine(conn_str, *args, **kwargs)\n\n\n# NOTE: used in testing, hence public\ndef create_engine(conn_str, *args, **kwargs):\n \"\"\"\n Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names\n across all engines. 
This is important for testing: without this intervention, fixtures become\n randomly corrupted.\n \"\"\"\n engine = sa_create_engine(conn_str, *args, **kwargs)\n _make_ischema_names_unique(engine)\n return engine\n\n\n# TODO should refactor for this to be private\ndef add_custom_types_to_ischema_names(engine):\n \"\"\"\n Updating the ischema_names dict changes which Postgres types are reflected into which SA\n classes.\n \"\"\"\n for db_type, sa_class in CUSTOM_DB_TYPE_TO_SA_CLASS.items():\n db_type_id = db_type.id\n engine.dialect.ischema_names[db_type_id] = sa_class\n\n\ndef get_dummy_engine():\n \"\"\"\n In some cases we only need an engine to access the Postgres dialect. E.g. when examining the\n ischema_names dict. In those cases, following is enough:\n \"\"\"\n engine = create_engine(\"postgresql://\", future=True)\n add_custom_types_to_ischema_names(engine)\n return engine\n\n\ndef _make_ischema_names_unique(engine):\n \"\"\"\n For some reason, engine.dialect.ischema_names reference the same dict across different engines.\n This resets it to a referentially unique copy of itself.\n \"\"\"\n ischema_names = engine.dialect.ischema_names\n ischema_names_copy = copy.deepcopy(ischema_names)\n setattr(engine.dialect, \"ischema_names\", ischema_names_copy)\n", "path": "db/engine.py"}], "after_files": [{"content": "import copy\n\nfrom sqlalchemy import create_engine as sa_create_engine\nfrom sqlalchemy.engine import URL\n\nfrom db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS\n\n\ndef create_future_engine_with_custom_types(\n username, password, hostname, database, port, *args, **kwargs\n):\n engine = create_future_engine(\n username, password, hostname, database, port, *args, **kwargs\n )\n # We need to add our custom types to any engine created for SQLALchemy use\n # so that they can be used for reflection\n add_custom_types_to_ischema_names(engine)\n return engine\n\n\n# TODO would an engine without ischema names updated ever be used? make it private if not\ndef create_future_engine(\n username, password, hostname, database, port, *args, **kwargs\n):\n conn_url = URL.create(\n \"postgresql\",\n username=username,\n password=password,\n host=hostname,\n database=database,\n port=port,\n )\n kwargs.update(future=True)\n return create_engine(conn_url, *args, **kwargs)\n\n\n# NOTE: used in testing, hence public\ndef create_engine(conn_url, *args, **kwargs):\n \"\"\"\n Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names\n across all engines. This is important for testing: without this intervention, fixtures become\n randomly corrupted.\n \"\"\"\n engine = sa_create_engine(conn_url, *args, **kwargs)\n _make_ischema_names_unique(engine)\n return engine\n\n\n# TODO should refactor for this to be private\ndef add_custom_types_to_ischema_names(engine):\n \"\"\"\n Updating the ischema_names dict changes which Postgres types are reflected into which SA\n classes.\n \"\"\"\n for db_type, sa_class in CUSTOM_DB_TYPE_TO_SA_CLASS.items():\n db_type_id = db_type.id\n engine.dialect.ischema_names[db_type_id] = sa_class\n\n\ndef get_dummy_engine():\n \"\"\"\n In some cases we only need an engine to access the Postgres dialect. E.g. when examining the\n ischema_names dict. 
In those cases, following is enough:\n \"\"\"\n engine = create_engine(\"postgresql://\", future=True)\n add_custom_types_to_ischema_names(engine)\n return engine\n\n\ndef _make_ischema_names_unique(engine):\n \"\"\"\n For some reason, engine.dialect.ischema_names reference the same dict across different engines.\n This resets it to a referentially unique copy of itself.\n \"\"\"\n ischema_names = engine.dialect.ischema_names\n ischema_names_copy = copy.deepcopy(ischema_names)\n setattr(engine.dialect, \"ischema_names\", ischema_names_copy)\n", "path": "db/engine.py"}]}
| 1,063 | 396 |
gh_patches_debug_33551
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-5443
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Counting Comments on map popup and list items (2 issues - similar problem in a+)
**URL:** https://meinberlin-dev.liqd.net/mapideas/2023-01031/ ; https://meinberlin-dev.liqd.net/projekte/testprojekt-newsletter/
**user:** any
**expected behaviour:** the counting of comments should be consistent
**behaviour:**
1. The number of comments in the detail idea view is no longer the same as the number in the idea overview (list & map). This is because the detail idea view now also counts child comments while the idea overview doesn't. (see screenshot 1 vs. 2)
2. The count in the detail view stops at 100 separate comments. If there are child comments, they are added on top of the 100. The number is then also different from the idea overview. If I scroll down, new comments are loaded and the count at the top changes. This can be very confusing. (see screenshot 1, 2 & 3)
**important screensize:** any
**device & browser:** mac ff
**Comment/Question:**
Screenshot?
**1. screenshot of idea overview (map)**
<img width="821" alt="Bildschirmfoto 2023-08-01 um 15 36 52" src="https://github.com/liqd/a4-meinberlin/assets/113608720/ac6d7dd2-9785-49ad-85d4-f380cda6401d">
**2. screenshot of idea detail view with child comments**
<img width="847" alt="Bildschirmfoto 2023-08-01 um 15 37 17" src="https://github.com/liqd/a4-meinberlin/assets/113608720/45951686-f9d2-4acb-8615-8b75182ac943">
**3. screenshot of idea detail view with child comments and scrolled down**
<img width="972" alt="Bildschirmfoto 2023-08-01 um 15 37 40" src="https://github.com/liqd/a4-meinberlin/assets/113608720/3e2c3d16-0578-4a87-8f47-285d61e04be3">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/projects/templatetags/meinberlin_project_tags.py`
Content:
```
1 from django import template
2
3 from adhocracy4.comments.models import Comment
4 from adhocracy4.polls.models import Vote as Vote
5 from meinberlin.apps.budgeting.models import Proposal as budget_proposal
6 from meinberlin.apps.ideas.models import Idea
7 from meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal
8 from meinberlin.apps.likes.models import Like
9 from meinberlin.apps.livequestions.models import LiveQuestion
10 from meinberlin.apps.mapideas.models import MapIdea
11
12 register = template.Library()
13
14
15 @register.filter
16 def project_url(project):
17 if (
18 project.project_type == "meinberlin_bplan.Bplan"
19 or project.project_type == "meinberlin_extprojects.ExternalProject"
20 ):
21 return project.externalproject.url
22 return project.get_absolute_url()
23
24
25 @register.filter
26 def is_external(project):
27 return (
28 project.project_type == "meinberlin_bplan.Bplan"
29 or project.project_type == "meinberlin_extprojects.ExternalProject"
30 )
31
32
33 @register.simple_tag
34 def get_num_entries(module):
35 """Count all user-generated items."""
36 item_count = (
37 Idea.objects.filter(module=module).count()
38 + MapIdea.objects.filter(module=module).count()
39 + budget_proposal.objects.filter(module=module).count()
40 + kiezkasse_proposal.objects.filter(module=module).count()
41 + Comment.objects.filter(idea__module=module).count()
42 + Comment.objects.filter(mapidea__module=module).count()
43 + Comment.objects.filter(budget_proposal__module=module).count()
44 + Comment.objects.filter(kiezkasse_proposal__module=module).count()
45 + Comment.objects.filter(topic__module=module).count()
46 + Comment.objects.filter(maptopic__module=module).count()
47 + Comment.objects.filter(paragraph__chapter__module=module).count()
48 + Comment.objects.filter(chapter__module=module).count()
49 + Comment.objects.filter(poll__module=module).count()
50 + Vote.objects.filter(choice__question__poll__module=module).count()
51 + LiveQuestion.objects.filter(module=module).count()
52 + Like.objects.filter(question__module=module).count()
53 )
54 return item_count
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
@@ -1,4 +1,7 @@
from django import template
+from django.db.models import Count
+from django.db.models import Q
+from django.db.models import Sum
from adhocracy4.comments.models import Comment
from adhocracy4.polls.models import Vote as Vote
@@ -38,17 +41,28 @@
+ MapIdea.objects.filter(module=module).count()
+ budget_proposal.objects.filter(module=module).count()
+ kiezkasse_proposal.objects.filter(module=module).count()
- + Comment.objects.filter(idea__module=module).count()
- + Comment.objects.filter(mapidea__module=module).count()
- + Comment.objects.filter(budget_proposal__module=module).count()
- + Comment.objects.filter(kiezkasse_proposal__module=module).count()
- + Comment.objects.filter(topic__module=module).count()
- + Comment.objects.filter(maptopic__module=module).count()
- + Comment.objects.filter(paragraph__chapter__module=module).count()
- + Comment.objects.filter(chapter__module=module).count()
- + Comment.objects.filter(poll__module=module).count()
+ Vote.objects.filter(choice__question__poll__module=module).count()
+ LiveQuestion.objects.filter(module=module).count()
+ Like.objects.filter(question__module=module).count()
)
- return item_count
+ comment_filter = (
+ Q(idea__module=module)
+ | Q(mapidea__module=module)
+ | Q(budget_proposal__module=module)
+ | Q(kiezkasse_proposal__module=module)
+ | Q(topic__module=module)
+ | Q(maptopic__module=module)
+ | Q(paragraph__chapter__module=module)
+ | Q(chapter__module=module)
+ | Q(poll__module=module)
+ )
+ comment_count = (
+ Comment.objects.filter(comment_filter)
+ .annotate(child_comment_count=Count("child_comments__pk", distinct=True))
+ .aggregate(comment_count=Count("pk") + Sum("child_comment_count"))[
+ "comment_count"
+ ]
+ )
+ if comment_count is None:
+ comment_count = 0
+ return item_count + comment_count
|
{"golden_diff": "diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n@@ -1,4 +1,7 @@\n from django import template\n+from django.db.models import Count\n+from django.db.models import Q\n+from django.db.models import Sum\n \n from adhocracy4.comments.models import Comment\n from adhocracy4.polls.models import Vote as Vote\n@@ -38,17 +41,28 @@\n + MapIdea.objects.filter(module=module).count()\n + budget_proposal.objects.filter(module=module).count()\n + kiezkasse_proposal.objects.filter(module=module).count()\n- + Comment.objects.filter(idea__module=module).count()\n- + Comment.objects.filter(mapidea__module=module).count()\n- + Comment.objects.filter(budget_proposal__module=module).count()\n- + Comment.objects.filter(kiezkasse_proposal__module=module).count()\n- + Comment.objects.filter(topic__module=module).count()\n- + Comment.objects.filter(maptopic__module=module).count()\n- + Comment.objects.filter(paragraph__chapter__module=module).count()\n- + Comment.objects.filter(chapter__module=module).count()\n- + Comment.objects.filter(poll__module=module).count()\n + Vote.objects.filter(choice__question__poll__module=module).count()\n + LiveQuestion.objects.filter(module=module).count()\n + Like.objects.filter(question__module=module).count()\n )\n- return item_count\n+ comment_filter = (\n+ Q(idea__module=module)\n+ | Q(mapidea__module=module)\n+ | Q(budget_proposal__module=module)\n+ | Q(kiezkasse_proposal__module=module)\n+ | Q(topic__module=module)\n+ | Q(maptopic__module=module)\n+ | Q(paragraph__chapter__module=module)\n+ | Q(chapter__module=module)\n+ | Q(poll__module=module)\n+ )\n+ comment_count = (\n+ Comment.objects.filter(comment_filter)\n+ .annotate(child_comment_count=Count(\"child_comments__pk\", distinct=True))\n+ .aggregate(comment_count=Count(\"pk\") + Sum(\"child_comment_count\"))[\n+ \"comment_count\"\n+ ]\n+ )\n+ if comment_count is None:\n+ comment_count = 0\n+ return item_count + comment_count\n", "issue": "Counting Comments on map popup and list items (2 issues - similar problem in a+)\n**URL:** https://meinberlin-dev.liqd.net/mapideas/2023-01031/ ; https://meinberlin-dev.liqd.net/projekte/testprojekt-newsletter/\r\n**user:** any\r\n**expected behaviour:** the counting of comments should be consistent\r\n**behaviour:** \r\n\r\n1. The number of comments in the detail idea view is not the same anymore as the number in the idea overview (list & map). This is because the detail ide view now counts as well child comments while the idea overview doesn't. (see screenshot 1 vs. 2)\r\n\r\n2. The counting in the detail view stops at 100 seperate comments. If there are child comments, it adds to counting of 100. The number is then also different to the idea overview. If I scroll down, then new comments are loaded and the counting number on top changes. This can be very confusing. (see screenshot 1, 2 & 3)\r\n\r\n**important screensize:** any\r\n**device & browser:** mac ff\r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n**1. screenshot of idea overview (map)**\r\n<img width=\"821\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 36 52\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/ac6d7dd2-9785-49ad-85d4-f380cda6401d\">\r\n\r\n**2. 
screenshot of idea detail view with child comments**\r\n<img width=\"847\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 37 17\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/45951686-f9d2-4acb-8615-8b75182ac943\">\r\n\r\n**3. screenshot of idea detail view with child comments and scrolled down**\r\n<img width=\"972\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 37 40\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/3e2c3d16-0578-4a87-8f47-285d61e04be3\">\r\n\r\n\n", "before_files": [{"content": "from django import template\n\nfrom adhocracy4.comments.models import Comment\nfrom adhocracy4.polls.models import Vote as Vote\nfrom meinberlin.apps.budgeting.models import Proposal as budget_proposal\nfrom meinberlin.apps.ideas.models import Idea\nfrom meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\nfrom meinberlin.apps.likes.models import Like\nfrom meinberlin.apps.livequestions.models import LiveQuestion\nfrom meinberlin.apps.mapideas.models import MapIdea\n\nregister = template.Library()\n\n\[email protected]\ndef project_url(project):\n if (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n ):\n return project.externalproject.url\n return project.get_absolute_url()\n\n\[email protected]\ndef is_external(project):\n return (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n )\n\n\[email protected]_tag\ndef get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n item_count = (\n Idea.objects.filter(module=module).count()\n + MapIdea.objects.filter(module=module).count()\n + budget_proposal.objects.filter(module=module).count()\n + kiezkasse_proposal.objects.filter(module=module).count()\n + Comment.objects.filter(idea__module=module).count()\n + Comment.objects.filter(mapidea__module=module).count()\n + Comment.objects.filter(budget_proposal__module=module).count()\n + Comment.objects.filter(kiezkasse_proposal__module=module).count()\n + Comment.objects.filter(topic__module=module).count()\n + Comment.objects.filter(maptopic__module=module).count()\n + Comment.objects.filter(paragraph__chapter__module=module).count()\n + Comment.objects.filter(chapter__module=module).count()\n + Comment.objects.filter(poll__module=module).count()\n + Vote.objects.filter(choice__question__poll__module=module).count()\n + LiveQuestion.objects.filter(module=module).count()\n + Like.objects.filter(question__module=module).count()\n )\n return item_count\n", "path": "meinberlin/apps/projects/templatetags/meinberlin_project_tags.py"}], "after_files": [{"content": "from django import template\nfrom django.db.models import Count\nfrom django.db.models import Q\nfrom django.db.models import Sum\n\nfrom adhocracy4.comments.models import Comment\nfrom adhocracy4.polls.models import Vote as Vote\nfrom meinberlin.apps.budgeting.models import Proposal as budget_proposal\nfrom meinberlin.apps.ideas.models import Idea\nfrom meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\nfrom meinberlin.apps.likes.models import Like\nfrom meinberlin.apps.livequestions.models import LiveQuestion\nfrom meinberlin.apps.mapideas.models import MapIdea\n\nregister = template.Library()\n\n\[email protected]\ndef project_url(project):\n if (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n ):\n return project.externalproject.url\n 
return project.get_absolute_url()\n\n\[email protected]\ndef is_external(project):\n return (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n )\n\n\[email protected]_tag\ndef get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n item_count = (\n Idea.objects.filter(module=module).count()\n + MapIdea.objects.filter(module=module).count()\n + budget_proposal.objects.filter(module=module).count()\n + kiezkasse_proposal.objects.filter(module=module).count()\n + Vote.objects.filter(choice__question__poll__module=module).count()\n + LiveQuestion.objects.filter(module=module).count()\n + Like.objects.filter(question__module=module).count()\n )\n comment_filter = (\n Q(idea__module=module)\n | Q(mapidea__module=module)\n | Q(budget_proposal__module=module)\n | Q(kiezkasse_proposal__module=module)\n | Q(topic__module=module)\n | Q(maptopic__module=module)\n | Q(paragraph__chapter__module=module)\n | Q(chapter__module=module)\n | Q(poll__module=module)\n )\n comment_count = (\n Comment.objects.filter(comment_filter)\n .annotate(child_comment_count=Count(\"child_comments__pk\", distinct=True))\n .aggregate(comment_count=Count(\"pk\") + Sum(\"child_comment_count\"))[\n \"comment_count\"\n ]\n )\n if comment_count is None:\n comment_count = 0\n return item_count + comment_count\n", "path": "meinberlin/apps/projects/templatetags/meinberlin_project_tags.py"}]}
| 1,440 | 598 |
gh_patches_debug_30154
|
rasdani/github-patches
|
git_diff
|
fal-ai__dbt-fal-190
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python script should be able to handle relative imports
I was trying to execute a script using `fal`. It works fine when the full code is in a single script but breaks down when I split my script into different modules. Probably this is because fal internally uses Python's built-in `exec` function to execute the script after reading the file. Would appreciate it very much if you can add this feature to fal as soon as possible. It is a great tool to work with dbt! :D
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/fal/cli/fal_runner.py`
Content:
```
1 import argparse
2 from typing import List
3 import os
4
5 import dbt.exceptions
6 import dbt.ui
7 from dbt.config.profile import DEFAULT_PROFILES_DIR
8
9 from fal.run_scripts import run_global_scripts, run_scripts
10 from fal.fal_script import FalScript
11 from faldbt.project import FalDbt, FalGeneralException, FalProject
12
13
14 def create_fal_dbt(
15 args: argparse.Namespace,
16 ):
17 real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))
18 real_profiles_dir = None
19 if args.profiles_dir is not None:
20 real_profiles_dir = os.path.realpath(os.path.normpath(args.profiles_dir))
21 elif os.getenv("DBT_PROFILES_DIR"):
22 real_profiles_dir = os.path.realpath(
23 os.path.normpath(os.getenv("DBT_PROFILES_DIR"))
24 )
25 else:
26 real_profiles_dir = DEFAULT_PROFILES_DIR
27
28 return FalDbt(
29 real_project_dir,
30 real_profiles_dir,
31 args.select,
32 args.exclude,
33 args.selector,
34 args.keyword,
35 )
36
37
38 def fal_run(
39 args: argparse.Namespace,
40 selects_count=0, # TODO: remove `action="extend"` to match exactly what dbt does
41 exclude_count=0,
42 script_count=0,
43 ):
44 "Runs the fal run command in a subprocess"
45
46 args_dict = vars(args)
47 selector_flags = args.select or args.exclude or args.selector
48 if args_dict.get("all") and selector_flags:
49 raise FalGeneralException(
50 "Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)"
51 )
52
53 faldbt = create_fal_dbt(args)
54 project = FalProject(faldbt)
55 models = project.get_filtered_models(
56 args_dict.get("all"), selector_flags, args_dict.get("before")
57 )
58
59 _handle_selector_warnings(selects_count, exclude_count, script_count, args)
60
61 scripts = _select_scripts(args_dict, models, project, args)
62
63 # run model specific scripts first
64 run_scripts(scripts, project)
65
66 # then run global scripts
67 if _should_run_global_scripts(args_dict):
68 _run_global_scripts(
69 project, faldbt, "before" if args_dict.get("before") else "after"
70 )
71
72
73 def _handle_selector_warnings(selects_count, exclude_count, script_count, args):
74 # TODO: remove `action="extend"` to match exactly what dbt does
75 if selects_count > 1:
76 dbt.exceptions.warn_or_error(
77 "Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\n"
78 + f"Please use model selection like dbt. Use: --select {' '.join(args.select)}",
79 log_fmt=dbt.ui.warning_tag("{}"),
80 )
81 if exclude_count > 1:
82 dbt.exceptions.warn_or_error(
83 "Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\n"
84 + f"Please use model exclusion like dbt. Use: --exclude {' '.join(args.exclude)}",
85 log_fmt=dbt.ui.warning_tag("{}"),
86 )
87 if script_count > 1:
88 dbt.exceptions.warn_or_error(
89 "Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\n"
90 + f"Please use: --script {' '.join(args.scripts)}",
91 log_fmt=dbt.ui.warning_tag("{}"),
92 )
93
94
95 def _should_run_global_scripts(args_dict) -> bool:
96 return args_dict.get("scripts")
97
98
99 def _select_scripts(args_dict, models, project, args) -> List[FalScript]:
100 scripts = []
101 # if --script selector is there only run selected scripts
102 if args_dict.get("scripts"):
103 scripts = []
104 for model in models:
105 model_scripts = model.get_scripts(args.keyword, args_dict.get("before"))
106 for el in args.scripts:
107 if el in model_scripts:
108 scripts.append(FalScript(model, el))
109 else:
110 real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))
111 for model in models:
112 for path in model.get_script_paths(
113 args.keyword, real_project_dir, args_dict.get("before")
114 ):
115 scripts.append(FalScript(model, path))
116
117 return scripts
118
119
120 def _run_global_scripts(project: FalProject, faldbt: FalDbt, global_key: str):
121 global_scripts = list(
122 map(
123 lambda path: FalScript(None, path),
124 faldbt._global_script_paths[global_key],
125 )
126 )
127
128 run_global_scripts(global_scripts, project)
129
```
Path: `src/fal/fal_script.py`
Content:
```
1 from dataclasses import dataclass, field
2 from typing import List, TypeVar, Dict, Union
3 from faldbt.project import DbtModel, FalDbt
4 from pathlib import Path
5
6 T = TypeVar("T", bound="FalScript")
7
8
9 class FalDagCycle(Exception):
10 pass
11
12
13 @dataclass(frozen=True)
14 class FalScript:
15 model: Union[DbtModel, None]
16 path: Path
17
18 def exec(self, context, faldbt: FalDbt):
19 """
20 Executes the script
21 """
22 with open(self.path) as file:
23 a_script = file.read()
24 exec(
25 a_script,
26 {
27 "context": context,
28 "ref": faldbt.ref,
29 "source": faldbt.source,
30 "write_to_source": faldbt.write_to_source,
31 "write_to_firestore": faldbt.write_to_firestore,
32 "list_models": faldbt.list_models,
33 "list_models_ids": faldbt.list_models_ids,
34 "list_sources": faldbt.list_sources,
35 "list_features": faldbt.list_features,
36 },
37 )
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/fal/cli/fal_runner.py b/src/fal/cli/fal_runner.py
--- a/src/fal/cli/fal_runner.py
+++ b/src/fal/cli/fal_runner.py
@@ -1,5 +1,6 @@
import argparse
from typing import List
+from pathlib import Path
import os
import dbt.exceptions
@@ -105,7 +106,7 @@
model_scripts = model.get_scripts(args.keyword, args_dict.get("before"))
for el in args.scripts:
if el in model_scripts:
- scripts.append(FalScript(model, el))
+ scripts.append(FalScript(model, Path(el)))
else:
real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))
for model in models:
diff --git a/src/fal/fal_script.py b/src/fal/fal_script.py
--- a/src/fal/fal_script.py
+++ b/src/fal/fal_script.py
@@ -2,6 +2,7 @@
from typing import List, TypeVar, Dict, Union
from faldbt.project import DbtModel, FalDbt
from pathlib import Path
+import sys
T = TypeVar("T", bound="FalScript")
@@ -19,6 +20,11 @@
"""
Executes the script
"""
+
+ # Enable local imports
+ local_path = str(self.path.parent)
+ sys.path.append(local_path)
+
with open(self.path) as file:
a_script = file.read()
exec(
@@ -35,3 +41,4 @@
"list_features": faldbt.list_features,
},
)
+ sys.path.remove(local_path)
|
{"golden_diff": "diff --git a/src/fal/cli/fal_runner.py b/src/fal/cli/fal_runner.py\n--- a/src/fal/cli/fal_runner.py\n+++ b/src/fal/cli/fal_runner.py\n@@ -1,5 +1,6 @@\n import argparse\n from typing import List\n+from pathlib import Path\n import os\n \n import dbt.exceptions\n@@ -105,7 +106,7 @@\n model_scripts = model.get_scripts(args.keyword, args_dict.get(\"before\"))\n for el in args.scripts:\n if el in model_scripts:\n- scripts.append(FalScript(model, el))\n+ scripts.append(FalScript(model, Path(el)))\n else:\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n for model in models:\ndiff --git a/src/fal/fal_script.py b/src/fal/fal_script.py\n--- a/src/fal/fal_script.py\n+++ b/src/fal/fal_script.py\n@@ -2,6 +2,7 @@\n from typing import List, TypeVar, Dict, Union\n from faldbt.project import DbtModel, FalDbt\n from pathlib import Path\n+import sys\n \n T = TypeVar(\"T\", bound=\"FalScript\")\n \n@@ -19,6 +20,11 @@\n \"\"\"\n Executes the script\n \"\"\"\n+\n+ # Enable local imports\n+ local_path = str(self.path.parent)\n+ sys.path.append(local_path)\n+\n with open(self.path) as file:\n a_script = file.read()\n exec(\n@@ -35,3 +41,4 @@\n \"list_features\": faldbt.list_features,\n },\n )\n+ sys.path.remove(local_path)\n", "issue": "Python script should be able to handle relative imports\nI was trying execute a script using `fal`, it works fine when full code is in a single script but breaks down when I write down my script to different modules. Probably this is because fal is internally using python's `exec` builtins function to execute the script after reading the file. Would appreciate it very much if you guys can add this feature to fal as soon as possible. It is a great tool to work with dbt.! :D\n", "before_files": [{"content": "import argparse\nfrom typing import List\nimport os\n\nimport dbt.exceptions\nimport dbt.ui\nfrom dbt.config.profile import DEFAULT_PROFILES_DIR\n\nfrom fal.run_scripts import run_global_scripts, run_scripts\nfrom fal.fal_script import FalScript\nfrom faldbt.project import FalDbt, FalGeneralException, FalProject\n\n\ndef create_fal_dbt(\n args: argparse.Namespace,\n):\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n real_profiles_dir = None\n if args.profiles_dir is not None:\n real_profiles_dir = os.path.realpath(os.path.normpath(args.profiles_dir))\n elif os.getenv(\"DBT_PROFILES_DIR\"):\n real_profiles_dir = os.path.realpath(\n os.path.normpath(os.getenv(\"DBT_PROFILES_DIR\"))\n )\n else:\n real_profiles_dir = DEFAULT_PROFILES_DIR\n\n return FalDbt(\n real_project_dir,\n real_profiles_dir,\n args.select,\n args.exclude,\n args.selector,\n args.keyword,\n )\n\n\ndef fal_run(\n args: argparse.Namespace,\n selects_count=0, # TODO: remove `action=\"extend\"` to match exactly what dbt does\n exclude_count=0,\n script_count=0,\n):\n \"Runs the fal run command in a subprocess\"\n\n args_dict = vars(args)\n selector_flags = args.select or args.exclude or args.selector\n if args_dict.get(\"all\") and selector_flags:\n raise FalGeneralException(\n \"Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)\"\n )\n\n faldbt = create_fal_dbt(args)\n project = FalProject(faldbt)\n models = project.get_filtered_models(\n args_dict.get(\"all\"), selector_flags, args_dict.get(\"before\")\n )\n\n _handle_selector_warnings(selects_count, exclude_count, script_count, args)\n\n scripts = _select_scripts(args_dict, models, project, args)\n\n # run model specific scripts first\n 
run_scripts(scripts, project)\n\n # then run global scripts\n if _should_run_global_scripts(args_dict):\n _run_global_scripts(\n project, faldbt, \"before\" if args_dict.get(\"before\") else \"after\"\n )\n\n\ndef _handle_selector_warnings(selects_count, exclude_count, script_count, args):\n # TODO: remove `action=\"extend\"` to match exactly what dbt does\n if selects_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use model selection like dbt. Use: --select {' '.join(args.select)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n if exclude_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use model exclusion like dbt. Use: --exclude {' '.join(args.exclude)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n if script_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use: --script {' '.join(args.scripts)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n\n\ndef _should_run_global_scripts(args_dict) -> bool:\n return args_dict.get(\"scripts\")\n\n\ndef _select_scripts(args_dict, models, project, args) -> List[FalScript]:\n scripts = []\n # if --script selector is there only run selected scripts\n if args_dict.get(\"scripts\"):\n scripts = []\n for model in models:\n model_scripts = model.get_scripts(args.keyword, args_dict.get(\"before\"))\n for el in args.scripts:\n if el in model_scripts:\n scripts.append(FalScript(model, el))\n else:\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n for model in models:\n for path in model.get_script_paths(\n args.keyword, real_project_dir, args_dict.get(\"before\")\n ):\n scripts.append(FalScript(model, path))\n\n return scripts\n\n\ndef _run_global_scripts(project: FalProject, faldbt: FalDbt, global_key: str):\n global_scripts = list(\n map(\n lambda path: FalScript(None, path),\n faldbt._global_script_paths[global_key],\n )\n )\n\n run_global_scripts(global_scripts, project)\n", "path": "src/fal/cli/fal_runner.py"}, {"content": "from dataclasses import dataclass, field\nfrom typing import List, TypeVar, Dict, Union\nfrom faldbt.project import DbtModel, FalDbt\nfrom pathlib import Path\n\nT = TypeVar(\"T\", bound=\"FalScript\")\n\n\nclass FalDagCycle(Exception):\n pass\n\n\n@dataclass(frozen=True)\nclass FalScript:\n model: Union[DbtModel, None]\n path: Path\n\n def exec(self, context, faldbt: FalDbt):\n \"\"\"\n Executes the script\n \"\"\"\n with open(self.path) as file:\n a_script = file.read()\n exec(\n a_script,\n {\n \"context\": context,\n \"ref\": faldbt.ref,\n \"source\": faldbt.source,\n \"write_to_source\": faldbt.write_to_source,\n \"write_to_firestore\": faldbt.write_to_firestore,\n \"list_models\": faldbt.list_models,\n \"list_models_ids\": faldbt.list_models_ids,\n \"list_sources\": faldbt.list_sources,\n \"list_features\": faldbt.list_features,\n },\n )\n", "path": "src/fal/fal_script.py"}], "after_files": [{"content": "import argparse\nfrom typing import List\nfrom pathlib import Path\nimport os\n\nimport dbt.exceptions\nimport dbt.ui\nfrom dbt.config.profile import DEFAULT_PROFILES_DIR\n\nfrom fal.run_scripts import run_global_scripts, run_scripts\nfrom fal.fal_script import FalScript\nfrom faldbt.project import FalDbt, FalGeneralException, FalProject\n\n\ndef 
create_fal_dbt(\n args: argparse.Namespace,\n):\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n real_profiles_dir = None\n if args.profiles_dir is not None:\n real_profiles_dir = os.path.realpath(os.path.normpath(args.profiles_dir))\n elif os.getenv(\"DBT_PROFILES_DIR\"):\n real_profiles_dir = os.path.realpath(\n os.path.normpath(os.getenv(\"DBT_PROFILES_DIR\"))\n )\n else:\n real_profiles_dir = DEFAULT_PROFILES_DIR\n\n return FalDbt(\n real_project_dir,\n real_profiles_dir,\n args.select,\n args.exclude,\n args.selector,\n args.keyword,\n )\n\n\ndef fal_run(\n args: argparse.Namespace,\n selects_count=0, # TODO: remove `action=\"extend\"` to match exactly what dbt does\n exclude_count=0,\n script_count=0,\n):\n \"Runs the fal run command in a subprocess\"\n\n args_dict = vars(args)\n selector_flags = args.select or args.exclude or args.selector\n if args_dict.get(\"all\") and selector_flags:\n raise FalGeneralException(\n \"Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)\"\n )\n\n faldbt = create_fal_dbt(args)\n project = FalProject(faldbt)\n models = project.get_filtered_models(\n args_dict.get(\"all\"), selector_flags, args_dict.get(\"before\")\n )\n\n _handle_selector_warnings(selects_count, exclude_count, script_count, args)\n\n scripts = _select_scripts(args_dict, models, project, args)\n\n # run model specific scripts first\n run_scripts(scripts, project)\n\n # then run global scripts\n if _should_run_global_scripts(args_dict):\n _run_global_scripts(\n project, faldbt, \"before\" if args_dict.get(\"before\") else \"after\"\n )\n\n\ndef _handle_selector_warnings(selects_count, exclude_count, script_count, args):\n # TODO: remove `action=\"extend\"` to match exactly what dbt does\n if selects_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use model selection like dbt. Use: --select {' '.join(args.select)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n if exclude_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use model exclusion like dbt. 
Use: --exclude {' '.join(args.exclude)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n if script_count > 1:\n dbt.exceptions.warn_or_error(\n \"Passing multiple --select/--model flags to fal is deprecated and will be removed in fal version 0.4.\\n\"\n + f\"Please use: --script {' '.join(args.scripts)}\",\n log_fmt=dbt.ui.warning_tag(\"{}\"),\n )\n\n\ndef _should_run_global_scripts(args_dict) -> bool:\n return args_dict.get(\"scripts\")\n\n\ndef _select_scripts(args_dict, models, project, args) -> List[FalScript]:\n scripts = []\n # if --script selector is there only run selected scripts\n if args_dict.get(\"scripts\"):\n scripts = []\n for model in models:\n model_scripts = model.get_scripts(args.keyword, args_dict.get(\"before\"))\n for el in args.scripts:\n if el in model_scripts:\n scripts.append(FalScript(model, Path(el)))\n else:\n real_project_dir = os.path.realpath(os.path.normpath(args.project_dir))\n for model in models:\n for path in model.get_script_paths(\n args.keyword, real_project_dir, args_dict.get(\"before\")\n ):\n scripts.append(FalScript(model, path))\n\n return scripts\n\n\ndef _run_global_scripts(project: FalProject, faldbt: FalDbt, global_key: str):\n global_scripts = list(\n map(\n lambda path: FalScript(None, path),\n faldbt._global_script_paths[global_key],\n )\n )\n\n run_global_scripts(global_scripts, project)\n", "path": "src/fal/cli/fal_runner.py"}, {"content": "from dataclasses import dataclass, field\nfrom typing import List, TypeVar, Dict, Union\nfrom faldbt.project import DbtModel, FalDbt\nfrom pathlib import Path\nimport sys\n\nT = TypeVar(\"T\", bound=\"FalScript\")\n\n\nclass FalDagCycle(Exception):\n pass\n\n\n@dataclass(frozen=True)\nclass FalScript:\n model: Union[DbtModel, None]\n path: Path\n\n def exec(self, context, faldbt: FalDbt):\n \"\"\"\n Executes the script\n \"\"\"\n\n # Enable local imports\n local_path = str(self.path.parent)\n sys.path.append(local_path)\n\n with open(self.path) as file:\n a_script = file.read()\n exec(\n a_script,\n {\n \"context\": context,\n \"ref\": faldbt.ref,\n \"source\": faldbt.source,\n \"write_to_source\": faldbt.write_to_source,\n \"write_to_firestore\": faldbt.write_to_firestore,\n \"list_models\": faldbt.list_models,\n \"list_models_ids\": faldbt.list_models_ids,\n \"list_sources\": faldbt.list_sources,\n \"list_features\": faldbt.list_features,\n },\n )\n sys.path.remove(local_path)\n", "path": "src/fal/fal_script.py"}]}
| 1,985 | 370 |
gh_patches_debug_18860
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-2755
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
inconsistency between CU1 and CU3 gate definitions
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
### What is the expected enhancement?
This is not a bug or enhancement request as such, but seems like an internal inconsistency in Qiskit's gate definitions.
In [the gate definitions](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/terra/summary_of_quantum_operations.ipynb), U1 is defined as [1,0,0,e^(iλ)], while an Rz is a [e^(-iλ/2),0,0,e^(iλ/2)].
U3 is defined in the docs similarly to U1 - ie. a U3 is a U1*Ry*U1. Therefore, a U3(0,0,a) = U1(a). However, CU3 is defined in the docs in such a way that CU3(0,0,a) != CU1(a). CU3 is instead defined using the Rz definition, rather than the U1.
So:
U3(0,0,a) = U1(a)
CU3(0,0,a) != CU1(a)
This is a confusing set of definitions. I assume that these definitions were a conscious decision, and that you are aware of the inconsistency, but I don't understand why?
I hope this hasn't been asked already - I couldn't find a duplicate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/extensions/standard/cu3.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 controlled-u3 gate.
17 """
18 from qiskit.circuit import Gate
19 from qiskit.circuit import QuantumCircuit
20 from qiskit.circuit import QuantumRegister
21 from qiskit.extensions.standard.u1 import U1Gate
22 from qiskit.extensions.standard.u3 import U3Gate
23 from qiskit.extensions.standard.cx import CnotGate
24
25
26 class Cu3Gate(Gate):
27 """controlled-u3 gate."""
28
29 def __init__(self, theta, phi, lam):
30 """Create new cu3 gate."""
31 super().__init__("cu3", 2, [theta, phi, lam])
32
33 def _define(self):
34 """
35 gate cu3(theta,phi,lambda) c, t
36 { u1((lambda-phi)/2) t; cx c,t;
37 u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;
38 u3(theta/2,phi,0) t;
39 }
40 """
41 definition = []
42 q = QuantumRegister(2, "q")
43 rule = [
44 (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),
45 (CnotGate(), [q[0], q[1]], []),
46 (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),
47 (CnotGate(), [q[0], q[1]], []),
48 (U3Gate(self.params[0] / 2, self.params[1], 0), [q[1]], [])
49 ]
50 for inst in rule:
51 definition.append(inst)
52 self.definition = definition
53
54 def inverse(self):
55 """Invert this gate."""
56 return Cu3Gate(-self.params[0], -self.params[2], -self.params[1])
57
58
59 def cu3(self, theta, phi, lam, ctl, tgt):
60 """Apply cu3 from ctl to tgt with angle theta, phi, lam."""
61 return self.append(Cu3Gate(theta, phi, lam), [ctl, tgt], [])
62
63
64 QuantumCircuit.cu3 = cu3
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qiskit/extensions/standard/cu3.py b/qiskit/extensions/standard/cu3.py
--- a/qiskit/extensions/standard/cu3.py
+++ b/qiskit/extensions/standard/cu3.py
@@ -33,7 +33,7 @@
def _define(self):
"""
gate cu3(theta,phi,lambda) c, t
- { u1((lambda-phi)/2) t; cx c,t;
+ { u1((lambda+phi)/2) c; u1((lambda-phi)/2) t; cx c,t;
u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;
u3(theta/2,phi,0) t;
}
@@ -41,6 +41,7 @@
definition = []
q = QuantumRegister(2, "q")
rule = [
+ (U1Gate((self.params[2] + self.params[1]) / 2), [q[0]], []),
(U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),
(CnotGate(), [q[0], q[1]], []),
(U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),
|
{"golden_diff": "diff --git a/qiskit/extensions/standard/cu3.py b/qiskit/extensions/standard/cu3.py\n--- a/qiskit/extensions/standard/cu3.py\n+++ b/qiskit/extensions/standard/cu3.py\n@@ -33,7 +33,7 @@\n def _define(self):\n \"\"\"\n gate cu3(theta,phi,lambda) c, t\n- { u1((lambda-phi)/2) t; cx c,t;\n+ { u1((lambda+phi)/2) c; u1((lambda-phi)/2) t; cx c,t;\n u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;\n u3(theta/2,phi,0) t;\n }\n@@ -41,6 +41,7 @@\n definition = []\n q = QuantumRegister(2, \"q\")\n rule = [\n+ (U1Gate((self.params[2] + self.params[1]) / 2), [q[0]], []),\n (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),\n", "issue": "inconsistency between CU1 and CU3 gate definitions\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues to confirm this idea does not exist. -->\r\n\r\n### What is the expected enhancement?\r\n\r\nThis is not a bug or enhancement request as such, but seems like an internal inconsistency in Qiskit's gate definitions.\r\nIn [the gate definitions](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/terra/summary_of_quantum_operations.ipynb), U1 is defined as [1,0,0,e^(i\u03bb)], while an Rz is a [e^(-i\u03bb/2),0,0,e^(i\u03bb/2)].\r\n\r\nU3 is defined in the docs similarly to U1 - ie. a U3 is a U1*Ry*U1. Therefore, a U3(0,0,a) = U1(a). However, CU3 is defined in the docs in such a way that CU3(0,0,a) != CU1(a). CU3 is instead defined using the Rz definition, rather than the U1.\r\n\r\nSo: \r\nU3(0,0,a) = U1(a)\r\nCU3(0,0,a) != CU1(a)\r\n\r\nThis is a confusing set of definitions. I assume that these definitions were a conscious decision, and that you are aware of the inconsistency, but I don't understand why?\r\nI hope this hasn't been asked already - I couldn't find a duplicate.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\ncontrolled-u3 gate.\n\"\"\"\nfrom qiskit.circuit import Gate\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.extensions.standard.u1 import U1Gate\nfrom qiskit.extensions.standard.u3 import U3Gate\nfrom qiskit.extensions.standard.cx import CnotGate\n\n\nclass Cu3Gate(Gate):\n \"\"\"controlled-u3 gate.\"\"\"\n\n def __init__(self, theta, phi, lam):\n \"\"\"Create new cu3 gate.\"\"\"\n super().__init__(\"cu3\", 2, [theta, phi, lam])\n\n def _define(self):\n \"\"\"\n gate cu3(theta,phi,lambda) c, t\n { u1((lambda-phi)/2) t; cx c,t;\n u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;\n u3(theta/2,phi,0) t;\n }\n \"\"\"\n definition = []\n q = QuantumRegister(2, \"q\")\n rule = [\n (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(self.params[0] / 2, self.params[1], 0), [q[1]], [])\n ]\n for inst in rule:\n definition.append(inst)\n self.definition = definition\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n return Cu3Gate(-self.params[0], -self.params[2], -self.params[1])\n\n\ndef cu3(self, theta, phi, lam, ctl, tgt):\n \"\"\"Apply cu3 from ctl to tgt with angle theta, phi, lam.\"\"\"\n return self.append(Cu3Gate(theta, phi, lam), [ctl, tgt], [])\n\n\nQuantumCircuit.cu3 = cu3\n", "path": "qiskit/extensions/standard/cu3.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\ncontrolled-u3 gate.\n\"\"\"\nfrom qiskit.circuit import Gate\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.extensions.standard.u1 import U1Gate\nfrom qiskit.extensions.standard.u3 import U3Gate\nfrom qiskit.extensions.standard.cx import CnotGate\n\n\nclass Cu3Gate(Gate):\n \"\"\"controlled-u3 gate.\"\"\"\n\n def __init__(self, theta, phi, lam):\n \"\"\"Create new cu3 gate.\"\"\"\n super().__init__(\"cu3\", 2, [theta, phi, lam])\n\n def _define(self):\n \"\"\"\n gate cu3(theta,phi,lambda) c, t\n { u1((lambda+phi)/2) c; u1((lambda-phi)/2) t; cx c,t;\n u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;\n u3(theta/2,phi,0) t;\n }\n \"\"\"\n definition = []\n q = QuantumRegister(2, \"q\")\n rule = [\n (U1Gate((self.params[2] + self.params[1]) / 2), [q[0]], []),\n (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(self.params[0] / 2, self.params[1], 0), [q[1]], [])\n ]\n for inst in rule:\n definition.append(inst)\n self.definition = definition\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n return Cu3Gate(-self.params[0], -self.params[2], -self.params[1])\n\n\ndef cu3(self, theta, phi, lam, ctl, tgt):\n \"\"\"Apply cu3 from ctl to tgt with angle theta, phi, lam.\"\"\"\n return self.append(Cu3Gate(theta, phi, lam), [ctl, tgt], [])\n\n\nQuantumCircuit.cu3 = cu3\n", "path": "qiskit/extensions/standard/cu3.py"}]}
| 1,320 | 313 |
gh_patches_debug_251
|
rasdani/github-patches
|
git_diff
|
pyjanitor-devs__pyjanitor-497
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC] Clarify Python version requirements
# Brief Description of Fix
I was looking through documentation (for users and contributors), and it was unclear to me which python versions we actually support. It seems that we support python 3.6 + 3.7. This arose as I was updating the `pyproject.toml` file to avoid the warning:
```
--py36 is deprecated and will be removed in a future version. Use --target-version py36 instead.
```
Our current locations of explicit python versions are in:
- `pyproject.toml`
- `py36 = true`
- `environment-dev.yml`
- `- python >= 3.6`
- `.azure-pipelines/pipeline-master.yml`
- `python.version: "3.7"`
# Proposed Fix
If `pyjanitor` is in fact meant to function on 3.6+, we should
- Explicitly inform contributors that their code should be 3.6+ compatible
- Inform users which python versions the package requires, on the documentation site, PyPI etc
- Add `python_requires=">=3.6"` to `setup.py`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3
4 def requirements():
5 with open("requirements.txt", "r+") as f:
6 return f.read()
7
8
9 setup(
10 name="pyjanitor",
11 version="0.18.0",
12 description="Tools for cleaning pandas DataFrames",
13 author="Eric J. Ma",
14 author_email="[email protected]",
15 url="https://github.com/ericmjl/pyjanitor",
16 packages=["janitor"],
17 install_requires=requirements(),
18 )
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,4 +15,5 @@
url="https://github.com/ericmjl/pyjanitor",
packages=["janitor"],
install_requires=requirements(),
+ python_requires=">=3.6",
)
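
For illustration only, separate from the record above: a minimal, hypothetical `setup.py` showing what the added `python_requires` field does. The package name, version, and package list are placeholders rather than pyjanitor's real metadata; with this field present, recent pip versions (9.0+) refuse to install the distribution on interpreters older than 3.6.

```python
from setuptools import setup

setup(
    name="example-package",           # placeholder name, not pyjanitor's
    version="0.1.0",
    packages=["example_package"],     # placeholder package
    python_requires=">=3.6",          # pip compares this to the running interpreter
)
```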
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,4 +15,5 @@\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n+ python_requires=\">=3.6\",\n )\n", "issue": "[DOC] Clarify Python version requirements\n# Brief Description of Fix\r\n\r\nI was looking through documentation (for users and contributors), and it was unclear to me which python versions we actually support. It seems that we support python 3.6 + 3.7. This arose as I was updating the `pyproject.toml` file to avoid the warning:\r\n```\r\n--py36 is deprecated and will be removed in a future version. Use --target-version py36 instead.\r\n```\r\n\r\nOur current locations of explicit python versions are in:\r\n- `pyproject.toml`\r\n - `py36 = true`\r\n- `environment-dev.yml`\r\n - `- python >= 3.6`\r\n- `.azure-pipelines/pipeline-master.yml`\r\n - `python.version: \"3.7\"`\r\n\r\n# Proposed Fix\r\n\r\nIf `pyjanitor` is in fact meant to function on 3.6+, we should\r\n- Explicitly inform contributors that their code should be 3.6+ compatible\r\n- Inform users which python versions the package requires, on the documentation site, PyPI etc\r\n- Add `python_requires=\">=3.6\"` to `setup.py`\r\n\n", "before_files": [{"content": "from setuptools import setup\n\n\ndef requirements():\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.18.0\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\n\ndef requirements():\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.18.0\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n python_requires=\">=3.6\",\n)\n", "path": "setup.py"}]}
| 636 | 70 |
gh_patches_debug_16797
|
rasdani/github-patches
|
git_diff
|
semgrep__semgrep-rules-1457
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive for return-in-init when return in internal function
**Describe the bug**
[`return-in-init`](https://github.com/returntocorp/semgrep-rules/blob/master/python/lang/correctness/return-in-init.yaml) warns about a return statement in `__init__`. However, this may be valid if another function is defined within `__init__` and return is used there.
**To Reproduce**
```
class Odd:
def __init__(self, numbers):
def is_odd(n):
return n % 2 == 1
self.numbers = filter(is_odd, numbers)
```
```
$ semgrep --config=p/ci
test1.py
severity:error rule:python.lang.correctness.return-in-init.return-in-init: `return` should never appear inside a class __init__ function. This will cause a runtime error.
4: return n % 2 == 1
```
**Expected behavior**
I expect no error from `return-in-init` in this case.
**Priority**
How important is this to you?
- P2: annoying but not blocking me
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/lang/correctness/return-in-init.py`
Content:
```
1 class A:
2 def __init__(a, b, c):
3 # ruleid:return-in-init
4 return A(a, b, c)
5
6
7 class B:
8 def __init__(a, b, c):
9 # ok:return-in-init
10 return
11
12
13 class C:
14 def __init__(a, b, c):
15 # ruleid:yield-in-init
16 yield
17
18
19 class D:
20 def __init__():
21 # ruleid:yield-in-init
22 yield 5
23
24
25 def __init__(a, b, c):
26 # ok:yield-in-init
27 return A(a, b, c)
28
29
30 def __init__(a, b, c):
31 # ok:yield-in-init
32 yield
33
34
35 def __init__():
36 # ok:yield-in-init
37 yield 5
38
39
40 class E:
41 def func1():
42 if not hello:
43 # ok:yield-in-init
44 yield 5
45 # ok:yield-in-init
46 yield other
47
48
49 class F:
50 def __init__():
51 pass
52
53 def func1():
54 # ok:return-in-init
55 return 5
56
57 def func2():
58 # ok:return-in-init
59 return
60
61
62 class G:
63 def __init__():
64 pass
65
66 def func1():
67 # ok:yield-in-init
68 yield 5
69
70 def func2():
71 # ok:yield-in-init
72 yield
73
74 class H:
75 def __init__(self, x):
76 # ok:return-in-init
77 return None
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/lang/correctness/return-in-init.py b/python/lang/correctness/return-in-init.py
--- a/python/lang/correctness/return-in-init.py
+++ b/python/lang/correctness/return-in-init.py
@@ -75,3 +75,41 @@
def __init__(self, x):
# ok:return-in-init
return None
+
+class Odd:
+ def __init__(self, numbers):
+ def is_odd(n):
+ # ok:return-in-init
+ return n % 2 == 1
+ self.numbers = filter(is_odd, numbers)
+
+ # todoruleid:return-in-init
+ return self.numbers
+
+class Even:
+ def __init__(self):
+ class EvenNumber:
+ def __init__(self, n):
+ self.n = n
+ # todoruleid:return-in-init
+ return n
+
+ def is_even(self):
+ # ok:return-in-init
+ return self.n % 2 == 0
+
+ self.number = EvenNumber()
+
+ def not_init(self):
+ class EvenNumber:
+ def __init__(self, n):
+ self.n = n
+ # ruleid:return-in-init
+ return n
+
+ def is_even(self):
+ # ok:return-in-init
+ return self.n % 2 == 0
+
+ # ok:return-in-init
+ return EvenNumber()
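
Editorial sketch, not part of the original test file: a quick runtime demonstration of why the rule exists and why the nested-function case is legitimate. CPython only raises `TypeError` when `__init__` itself returns a non-None value; a `return` inside a function defined within `__init__` is harmless.

```python
class Bad:
    def __init__(self):
        return 42  # correctly flagged: __init__ must return None


class Fine:
    def __init__(self, numbers):
        def is_odd(n):
            return n % 2 == 1  # this return belongs to is_odd, not __init__
        self.numbers = list(filter(is_odd, numbers))


try:
    Bad()
except TypeError as exc:
    print(exc)                     # __init__() should return None, not 'int'

print(Fine(range(5)).numbers)      # [1, 3]
```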
|
{"golden_diff": "diff --git a/python/lang/correctness/return-in-init.py b/python/lang/correctness/return-in-init.py\n--- a/python/lang/correctness/return-in-init.py\n+++ b/python/lang/correctness/return-in-init.py\n@@ -75,3 +75,41 @@\n def __init__(self, x):\n # ok:return-in-init\n return None\n+\n+class Odd:\n+ def __init__(self, numbers):\n+ def is_odd(n):\n+ # ok:return-in-init\n+ return n % 2 == 1\n+ self.numbers = filter(is_odd, numbers)\n+\n+ # todoruleid:return-in-init\n+ return self.numbers\n+\n+class Even:\n+ def __init__(self):\n+ class EvenNumber:\n+ def __init__(self, n):\n+ self.n = n\n+ # todoruleid:return-in-init\n+ return n\n+\n+ def is_even(self):\n+ # ok:return-in-init\n+ return self.n % 2 == 0\n+\n+ self.number = EvenNumber()\n+\n+ def not_init(self):\n+ class EvenNumber:\n+ def __init__(self, n):\n+ self.n = n\n+ # ruleid:return-in-init\n+ return n\n+\n+ def is_even(self):\n+ # ok:return-in-init\n+ return self.n % 2 == 0\n+\n+ # ok:return-in-init\n+ return EvenNumber()\n", "issue": "False positive for return-in-init when return in internal function\n**Describe the bug**\r\n\r\n[`return-in-init`](https://github.com/returntocorp/semgrep-rules/blob/master/python/lang/correctness/return-in-init.yaml) warns about a return statement in `__init__`. However, this may be valid if another function is defined within `__init__` and return is used there.\r\n\r\n**To Reproduce**\r\n\r\n```\r\nclass Odd:\r\n def __init__(self, numbers):\r\n def is_odd(n):\r\n return n % 2 == 1\r\n self.numbers = filter(is_odd, numbers)\r\n```\r\n\r\n```\r\n$ semgrep --config=p/ci\r\ntest1.py\r\nseverity:error rule:python.lang.correctness.return-in-init.return-in-init: `return` should never appear inside a class __init__ function. This will cause a runtime error.\r\n4: return n % 2 == 1\r\n```\r\n\r\n**Expected behavior**\r\n\r\nI expect no error from `return-in-init` in this case.\r\n\r\n**Priority**\r\nHow important is this to you?\r\n- P2: annoying but not blocking me\r\n\n", "before_files": [{"content": "class A:\n def __init__(a, b, c):\n # ruleid:return-in-init\n return A(a, b, c)\n\n\nclass B:\n def __init__(a, b, c):\n # ok:return-in-init\n return\n\n\nclass C:\n def __init__(a, b, c):\n # ruleid:yield-in-init\n yield\n\n\nclass D:\n def __init__():\n # ruleid:yield-in-init\n yield 5\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n return A(a, b, c)\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n yield\n\n\ndef __init__():\n # ok:yield-in-init\n yield 5\n\n\nclass E:\n def func1():\n if not hello:\n # ok:yield-in-init\n yield 5\n # ok:yield-in-init\n yield other\n\n\nclass F:\n def __init__():\n pass\n\n def func1():\n # ok:return-in-init\n return 5\n\n def func2():\n # ok:return-in-init\n return\n\n\nclass G:\n def __init__():\n pass\n\n def func1():\n # ok:yield-in-init\n yield 5\n\n def func2():\n # ok:yield-in-init\n yield\n\nclass H:\n def __init__(self, x):\n # ok:return-in-init\n return None\n", "path": "python/lang/correctness/return-in-init.py"}], "after_files": [{"content": "class A:\n def __init__(a, b, c):\n # ruleid:return-in-init\n return A(a, b, c)\n\n\nclass B:\n def __init__(a, b, c):\n # ok:return-in-init\n return\n\n\nclass C:\n def __init__(a, b, c):\n # ruleid:yield-in-init\n yield\n\n\nclass D:\n def __init__():\n # ruleid:yield-in-init\n yield 5\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n return A(a, b, c)\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n yield\n\n\ndef __init__():\n # ok:yield-in-init\n yield 5\n\n\nclass E:\n def func1():\n if not hello:\n 
# ok:yield-in-init\n yield 5\n # ok:yield-in-init\n yield other\n\n\nclass F:\n def __init__():\n pass\n\n def func1():\n # ok:return-in-init\n return 5\n\n def func2():\n # ok:return-in-init\n return\n\n\nclass G:\n def __init__():\n pass\n\n def func1():\n # ok:yield-in-init\n yield 5\n\n def func2():\n # ok:yield-in-init\n yield\n\nclass H:\n def __init__(self, x):\n # ok:return-in-init\n return None\n\nclass Odd:\n def __init__(self, numbers):\n def is_odd(n):\n # ok:return-in-init\n return n % 2 == 1\n self.numbers = filter(is_odd, numbers)\n\n # todoruleid:return-in-init\n return self.numbers\n\nclass Even:\n def __init__(self):\n class EvenNumber:\n def __init__(self, n):\n self.n = n\n # todoruleid:return-in-init\n return n\n\n def is_even(self):\n # ok:return-in-init\n return self.n % 2 == 0\n\n self.number = EvenNumber()\n\n def not_init(self):\n class EvenNumber:\n def __init__(self, n):\n self.n = n\n # ruleid:return-in-init\n return n\n\n def is_even(self):\n # ok:return-in-init\n return self.n % 2 == 0\n\n # ok:return-in-init\n return EvenNumber()\n", "path": "python/lang/correctness/return-in-init.py"}]}
| 992 | 334 |
gh_patches_debug_40443
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-1280
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
We need a new API that displays concordances information for all chants in the database
In an email from Jan:
> The intensive process of getting all the data from CD via individual json-cid requests (59.000+) is running already on the 3rd day (and not finished yet) but this will not keep the Cantus data fresh in the Cantus Index API in the long term.
>
> The solution would be to regularly create a large JSON file export of all the CD chants (with the same fields as in json-cid exports) and make it available as a file to download. An example of such json export is here: https://austriamanus.org/files/concordances-export.json
> This kind of data transfer works also with the MMMO database which has approximately half the amount of data compared to a CD. I believe it would also be the best solution for CD.
This will not be difficult. We can use the code in our `json-con` API, but return all chants rather than filtering them by Cantus ID.
What's a good path for this API to live at? `/json-concordances-export`?
--- END ISSUE ---
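
Editorial aside, not taken from the issue or the repository: one plausible way to expose the proposed download endpoint is a small Django view that streams the cached file generated by the management command shown below. The URL name and cache path are assumptions for illustration, not the project's actual choices.

```python
# Hypothetical view; the cache path mirrors the default used by the
# update_cached_concordances command but may differ in a real deployment.
from django.http import FileResponse, Http404
from django.urls import path

CACHE_FILE = "/resources/api_cache/concordances.json"


def json_concordances_export(request):
    try:
        return FileResponse(open(CACHE_FILE, "rb"), content_type="application/json")
    except FileNotFoundError:
        raise Http404("Concordances cache has not been generated yet.")


urlpatterns = [
    path("json-concordances-export/", json_concordances_export),
]
```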
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/management/commands/update_cached_concordances.py`
Content:
```
1 import ujson
2 import os
3 from sys import stdout
4 from datetime import datetime
5 from collections import defaultdict
6 from django.db.models.query import QuerySet
7 from django.core.management.base import BaseCommand
8 from main_app.models import Chant
9
10
11 class Command(BaseCommand):
12 def handle(self, *args, **kwargs) -> None:
13 CACHE_DIR: str = "api_cache"
14 FILEPATH: str = f"{CACHE_DIR}/concordances.json"
15 start_time: str = datetime.now().isoformat()
16 stdout.write(f"Running update_cached_concordances at {start_time}.\n")
17 concordances: dict = get_concordances()
18 write_time: str = datetime.now().isoformat()
19 metadata: dict = {
20 "last_updated": write_time,
21 }
22 data_and_metadata: dict = {
23 "data": concordances,
24 "metadata": metadata,
25 }
26 stdout.write(f"Attempting to make directory at {CACHE_DIR} to hold cache: ")
27 try:
28 os.mkdir(CACHE_DIR)
29 stdout.write(f"successfully created directory at {CACHE_DIR}.\n")
30 except FileExistsError:
31 stdout.write(f"directory at {CACHE_DIR} already exists.\n")
32 stdout.write(f"Writing concordances to {FILEPATH} at {write_time}.\n")
33 with open(FILEPATH, "w") as json_file:
34 ujson.dump(data_and_metadata, json_file)
35 end_time = datetime.now().isoformat()
36 stdout.write(
37 f"Concordances successfully written to {FILEPATH} at {end_time}.\n\n"
38 )
39
40
41 def get_concordances() -> dict:
42 DOMAIN: str = "https://cantusdatabase.org"
43
44 stdout.write("Querying database for published chants\n")
45 published_chants: QuerySet[Chant] = Chant.objects.filter(source__published=True)
46 values: QuerySet[dict] = published_chants.select_related(
47 "source",
48 "feast",
49 "genre",
50 "office",
51 ).values(
52 "id",
53 "source_id",
54 "source__siglum",
55 "folio",
56 "c_sequence",
57 "incipit",
58 "feast__name",
59 "genre__name",
60 "office__name",
61 "position",
62 "cantus_id",
63 "image_link",
64 "mode",
65 "manuscript_full_text_std_spelling",
66 "volpiano",
67 )
68
69 stdout.write("Processing chants\n")
70 concordances: defaultdict = defaultdict(list)
71 for chant in values:
72 source_id: int = chant["source_id"]
73 source_absolute_url: str = f"{DOMAIN}/source/{source_id}/"
74 chant_id: int = chant["id"]
75 chant_absolute_url: str = f"{DOMAIN}/chant/{chant_id}/"
76
77 concordances[chant["cantus_id"]].append(
78 {
79 "siglum": chant["source__siglum"],
80 "srclink": source_absolute_url,
81 "chantlink": chant_absolute_url,
82 "folio": chant["folio"],
83 "sequence": chant["c_sequence"],
84 "incipit": chant["incipit"],
85 "feast": chant["feast__name"],
86 "genre": chant["genre__name"],
87 "office": chant["office__name"],
88 "position": chant["position"],
89 "cantus_id": chant["cantus_id"],
90 "image": chant["image_link"],
91 "mode": chant["mode"],
92 "full_text": chant["manuscript_full_text_std_spelling"],
93 "melody": chant["volpiano"],
94 "db": "CD",
95 }
96 )
97
98 stdout.write(f"All chants processed - found {len(concordances)} Cantus IDs\n")
99
100 return dict(concordances)
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
--- a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
+++ b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
@@ -1,6 +1,7 @@
import ujson
import os
from sys import stdout
+from typing import Optional
from datetime import datetime
from collections import defaultdict
from django.db.models.query import QuerySet
@@ -8,10 +9,27 @@
from main_app.models import Chant
+# Usage: `python manage.py update_cached_concordances`
+# or `python manage.py update_cached_concordances -d "/path/to/directory/in/which/to/save/concordances"`
+
+
class Command(BaseCommand):
+ def add_arguments(self, parser):
+ parser.add_argument(
+ "-d",
+ "--directory",
+ help="Optional filepath specifying a directory to output concordances",
+ type=str,
+ )
+
def handle(self, *args, **kwargs) -> None:
- CACHE_DIR: str = "api_cache"
- FILEPATH: str = f"{CACHE_DIR}/concordances.json"
+ cache_dir: Optional[str] = kwargs["directory"]
+ if not cache_dir:
+ # this default directory should match the value in docker-compose.yml,
+ # at services:django:volumes:api_cache_volume
+ cache_dir = "/resources/api_cache"
+
+ filepath: str = f"{cache_dir}/concordances.json"
start_time: str = datetime.now().isoformat()
stdout.write(f"Running update_cached_concordances at {start_time}.\n")
concordances: dict = get_concordances()
@@ -23,22 +41,29 @@
"data": concordances,
"metadata": metadata,
}
- stdout.write(f"Attempting to make directory at {CACHE_DIR} to hold cache: ")
+ stdout.write(f"Attempting to make directory at {cache_dir} to hold cache: ")
try:
- os.mkdir(CACHE_DIR)
- stdout.write(f"successfully created directory at {CACHE_DIR}.\n")
+ os.mkdir(cache_dir)
+ stdout.write(f"successfully created directory at {cache_dir}.\n")
except FileExistsError:
- stdout.write(f"directory at {CACHE_DIR} already exists.\n")
- stdout.write(f"Writing concordances to {FILEPATH} at {write_time}.\n")
- with open(FILEPATH, "w") as json_file:
+ stdout.write(f"directory at {cache_dir} already exists.\n")
+ stdout.write(f"Writing concordances to {filepath} at {write_time}.\n")
+ with open(filepath, "w") as json_file:
ujson.dump(data_and_metadata, json_file)
end_time = datetime.now().isoformat()
stdout.write(
- f"Concordances successfully written to {FILEPATH} at {end_time}.\n\n"
+ f"Concordances successfully written to {filepath} at {end_time}.\n\n"
)
def get_concordances() -> dict:
+ """Fetch all published chants in the database, group them by Cantus ID, and return
+ a dictionary containing information on each of these chants.
+
+ Returns:
+ dict: A dictionary where each key is a Cantus ID and each value is a list all
+ published chants in the database with that Cantus ID.
+ """
DOMAIN: str = "https://cantusdatabase.org"
stdout.write("Querying database for published chants\n")
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n--- a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n+++ b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n@@ -1,6 +1,7 @@\n import ujson\n import os\n from sys import stdout\n+from typing import Optional\n from datetime import datetime\n from collections import defaultdict\n from django.db.models.query import QuerySet\n@@ -8,10 +9,27 @@\n from main_app.models import Chant\n \n \n+# Usage: `python manage.py update_cached_concordances`\n+# or `python manage.py update_cached_concordances -d \"/path/to/directory/in/which/to/save/concordances\"`\n+\n+\n class Command(BaseCommand):\n+ def add_arguments(self, parser):\n+ parser.add_argument(\n+ \"-d\",\n+ \"--directory\",\n+ help=\"Optional filepath specifying a directory to output concordances\",\n+ type=str,\n+ )\n+\n def handle(self, *args, **kwargs) -> None:\n- CACHE_DIR: str = \"api_cache\"\n- FILEPATH: str = f\"{CACHE_DIR}/concordances.json\"\n+ cache_dir: Optional[str] = kwargs[\"directory\"]\n+ if not cache_dir:\n+ # this default directory should match the value in docker-compose.yml,\n+ # at services:django:volumes:api_cache_volume\n+ cache_dir = \"/resources/api_cache\"\n+\n+ filepath: str = f\"{cache_dir}/concordances.json\"\n start_time: str = datetime.now().isoformat()\n stdout.write(f\"Running update_cached_concordances at {start_time}.\\n\")\n concordances: dict = get_concordances()\n@@ -23,22 +41,29 @@\n \"data\": concordances,\n \"metadata\": metadata,\n }\n- stdout.write(f\"Attempting to make directory at {CACHE_DIR} to hold cache: \")\n+ stdout.write(f\"Attempting to make directory at {cache_dir} to hold cache: \")\n try:\n- os.mkdir(CACHE_DIR)\n- stdout.write(f\"successfully created directory at {CACHE_DIR}.\\n\")\n+ os.mkdir(cache_dir)\n+ stdout.write(f\"successfully created directory at {cache_dir}.\\n\")\n except FileExistsError:\n- stdout.write(f\"directory at {CACHE_DIR} already exists.\\n\")\n- stdout.write(f\"Writing concordances to {FILEPATH} at {write_time}.\\n\")\n- with open(FILEPATH, \"w\") as json_file:\n+ stdout.write(f\"directory at {cache_dir} already exists.\\n\")\n+ stdout.write(f\"Writing concordances to {filepath} at {write_time}.\\n\")\n+ with open(filepath, \"w\") as json_file:\n ujson.dump(data_and_metadata, json_file)\n end_time = datetime.now().isoformat()\n stdout.write(\n- f\"Concordances successfully written to {FILEPATH} at {end_time}.\\n\\n\"\n+ f\"Concordances successfully written to {filepath} at {end_time}.\\n\\n\"\n )\n \n \n def get_concordances() -> dict:\n+ \"\"\"Fetch all published chants in the database, group them by Cantus ID, and return\n+ a dictionary containing information on each of these chants.\n+\n+ Returns:\n+ dict: A dictionary where each key is a Cantus ID and each value is a list all\n+ published chants in the database with that Cantus ID.\n+ \"\"\"\n DOMAIN: str = \"https://cantusdatabase.org\"\n \n stdout.write(\"Querying database for published chants\\n\")\n", "issue": "We need a new API that displays concordances information for all chants in the database\nIn an email from Jan:\r\n\r\n> The intensive process of getting all the data from CD via individual json-cid requests (59.000+) is running already on the 3rd day (and not finished yet) but this will not keep the Cantus data fresh in the Cantus Index 
API in the long term.\r\n> \r\n> The solution would be to regularly create a large JSON file export of all the CD chants (with the same fields as in json-cid exports) and make it available as a file to download. An example of such json export is here: https://austriamanus.org/files/concordances-export.json\r\n> This kind of data transfer works also with the MMMO database which has approximately half the amount of data compared to a CD. I believe it would also be the best solution for CD.\r\n\r\nThis will not be difficult. We can use the code in our `json-con` API, but return all chants rather than filtering them by Cantus ID.\r\n\r\nWhat's a good path for this API to live at? `/json-concordances-export`?\n", "before_files": [{"content": "import ujson\nimport os\nfrom sys import stdout\nfrom datetime import datetime\nfrom collections import defaultdict\nfrom django.db.models.query import QuerySet\nfrom django.core.management.base import BaseCommand\nfrom main_app.models import Chant\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **kwargs) -> None:\n CACHE_DIR: str = \"api_cache\"\n FILEPATH: str = f\"{CACHE_DIR}/concordances.json\"\n start_time: str = datetime.now().isoformat()\n stdout.write(f\"Running update_cached_concordances at {start_time}.\\n\")\n concordances: dict = get_concordances()\n write_time: str = datetime.now().isoformat()\n metadata: dict = {\n \"last_updated\": write_time,\n }\n data_and_metadata: dict = {\n \"data\": concordances,\n \"metadata\": metadata,\n }\n stdout.write(f\"Attempting to make directory at {CACHE_DIR} to hold cache: \")\n try:\n os.mkdir(CACHE_DIR)\n stdout.write(f\"successfully created directory at {CACHE_DIR}.\\n\")\n except FileExistsError:\n stdout.write(f\"directory at {CACHE_DIR} already exists.\\n\")\n stdout.write(f\"Writing concordances to {FILEPATH} at {write_time}.\\n\")\n with open(FILEPATH, \"w\") as json_file:\n ujson.dump(data_and_metadata, json_file)\n end_time = datetime.now().isoformat()\n stdout.write(\n f\"Concordances successfully written to {FILEPATH} at {end_time}.\\n\\n\"\n )\n\n\ndef get_concordances() -> dict:\n DOMAIN: str = \"https://cantusdatabase.org\"\n\n stdout.write(\"Querying database for published chants\\n\")\n published_chants: QuerySet[Chant] = Chant.objects.filter(source__published=True)\n values: QuerySet[dict] = published_chants.select_related(\n \"source\",\n \"feast\",\n \"genre\",\n \"office\",\n ).values(\n \"id\",\n \"source_id\",\n \"source__siglum\",\n \"folio\",\n \"c_sequence\",\n \"incipit\",\n \"feast__name\",\n \"genre__name\",\n \"office__name\",\n \"position\",\n \"cantus_id\",\n \"image_link\",\n \"mode\",\n \"manuscript_full_text_std_spelling\",\n \"volpiano\",\n )\n\n stdout.write(\"Processing chants\\n\")\n concordances: defaultdict = defaultdict(list)\n for chant in values:\n source_id: int = chant[\"source_id\"]\n source_absolute_url: str = f\"{DOMAIN}/source/{source_id}/\"\n chant_id: int = chant[\"id\"]\n chant_absolute_url: str = f\"{DOMAIN}/chant/{chant_id}/\"\n\n concordances[chant[\"cantus_id\"]].append(\n {\n \"siglum\": chant[\"source__siglum\"],\n \"srclink\": source_absolute_url,\n \"chantlink\": chant_absolute_url,\n \"folio\": chant[\"folio\"],\n \"sequence\": chant[\"c_sequence\"],\n \"incipit\": chant[\"incipit\"],\n \"feast\": chant[\"feast__name\"],\n \"genre\": chant[\"genre__name\"],\n \"office\": chant[\"office__name\"],\n \"position\": chant[\"position\"],\n \"cantus_id\": chant[\"cantus_id\"],\n \"image\": chant[\"image_link\"],\n \"mode\": chant[\"mode\"],\n 
\"full_text\": chant[\"manuscript_full_text_std_spelling\"],\n \"melody\": chant[\"volpiano\"],\n \"db\": \"CD\",\n }\n )\n\n stdout.write(f\"All chants processed - found {len(concordances)} Cantus IDs\\n\")\n\n return dict(concordances)\n", "path": "django/cantusdb_project/main_app/management/commands/update_cached_concordances.py"}], "after_files": [{"content": "import ujson\nimport os\nfrom sys import stdout\nfrom typing import Optional\nfrom datetime import datetime\nfrom collections import defaultdict\nfrom django.db.models.query import QuerySet\nfrom django.core.management.base import BaseCommand\nfrom main_app.models import Chant\n\n\n# Usage: `python manage.py update_cached_concordances`\n# or `python manage.py update_cached_concordances -d \"/path/to/directory/in/which/to/save/concordances\"`\n\n\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"-d\",\n \"--directory\",\n help=\"Optional filepath specifying a directory to output concordances\",\n type=str,\n )\n\n def handle(self, *args, **kwargs) -> None:\n cache_dir: Optional[str] = kwargs[\"directory\"]\n if not cache_dir:\n # this default directory should match the value in docker-compose.yml,\n # at services:django:volumes:api_cache_volume\n cache_dir = \"/resources/api_cache\"\n\n filepath: str = f\"{cache_dir}/concordances.json\"\n start_time: str = datetime.now().isoformat()\n stdout.write(f\"Running update_cached_concordances at {start_time}.\\n\")\n concordances: dict = get_concordances()\n write_time: str = datetime.now().isoformat()\n metadata: dict = {\n \"last_updated\": write_time,\n }\n data_and_metadata: dict = {\n \"data\": concordances,\n \"metadata\": metadata,\n }\n stdout.write(f\"Attempting to make directory at {cache_dir} to hold cache: \")\n try:\n os.mkdir(cache_dir)\n stdout.write(f\"successfully created directory at {cache_dir}.\\n\")\n except FileExistsError:\n stdout.write(f\"directory at {cache_dir} already exists.\\n\")\n stdout.write(f\"Writing concordances to {filepath} at {write_time}.\\n\")\n with open(filepath, \"w\") as json_file:\n ujson.dump(data_and_metadata, json_file)\n end_time = datetime.now().isoformat()\n stdout.write(\n f\"Concordances successfully written to {filepath} at {end_time}.\\n\\n\"\n )\n\n\ndef get_concordances() -> dict:\n \"\"\"Fetch all published chants in the database, group them by Cantus ID, and return\n a dictionary containing information on each of these chants.\n\n Returns:\n dict: A dictionary where each key is a Cantus ID and each value is a list all\n published chants in the database with that Cantus ID.\n \"\"\"\n DOMAIN: str = \"https://cantusdatabase.org\"\n\n stdout.write(\"Querying database for published chants\\n\")\n published_chants: QuerySet[Chant] = Chant.objects.filter(source__published=True)\n values: QuerySet[dict] = published_chants.select_related(\n \"source\",\n \"feast\",\n \"genre\",\n \"office\",\n ).values(\n \"id\",\n \"source_id\",\n \"source__siglum\",\n \"folio\",\n \"c_sequence\",\n \"incipit\",\n \"feast__name\",\n \"genre__name\",\n \"office__name\",\n \"position\",\n \"cantus_id\",\n \"image_link\",\n \"mode\",\n \"manuscript_full_text_std_spelling\",\n \"volpiano\",\n )\n\n stdout.write(\"Processing chants\\n\")\n concordances: defaultdict = defaultdict(list)\n for chant in values:\n source_id: int = chant[\"source_id\"]\n source_absolute_url: str = f\"{DOMAIN}/source/{source_id}/\"\n chant_id: int = chant[\"id\"]\n chant_absolute_url: str = f\"{DOMAIN}/chant/{chant_id}/\"\n\n 
concordances[chant[\"cantus_id\"]].append(\n {\n \"siglum\": chant[\"source__siglum\"],\n \"srclink\": source_absolute_url,\n \"chantlink\": chant_absolute_url,\n \"folio\": chant[\"folio\"],\n \"sequence\": chant[\"c_sequence\"],\n \"incipit\": chant[\"incipit\"],\n \"feast\": chant[\"feast__name\"],\n \"genre\": chant[\"genre__name\"],\n \"office\": chant[\"office__name\"],\n \"position\": chant[\"position\"],\n \"cantus_id\": chant[\"cantus_id\"],\n \"image\": chant[\"image_link\"],\n \"mode\": chant[\"mode\"],\n \"full_text\": chant[\"manuscript_full_text_std_spelling\"],\n \"melody\": chant[\"volpiano\"],\n \"db\": \"CD\",\n }\n )\n\n stdout.write(f\"All chants processed - found {len(concordances)} Cantus IDs\\n\")\n\n return dict(concordances)\n", "path": "django/cantusdb_project/main_app/management/commands/update_cached_concordances.py"}]}
| 1,524 | 831 |
gh_patches_debug_1320
|
rasdani/github-patches
|
git_diff
|
conda__conda-5124
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
export toposort for conda-build
export toposort for conda-build
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda/exports.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from functools import partial
5 from logging import getLogger
6 from warnings import warn
7
8 log = getLogger(__name__)
9
10 from . import CondaError # NOQA
11 CondaError = CondaError
12
13 from . import compat, plan # NOQA
14 compat, plan = compat, plan
15
16 from .api import get_index # NOQA
17 get_index = get_index
18
19 from .cli.common import specs_from_args, spec_from_line, specs_from_url # NOQA
20 from .cli.conda_argparse import add_parser_prefix, add_parser_channels # NOQA
21 add_parser_channels, add_parser_prefix = add_parser_channels, add_parser_prefix
22 specs_from_args, spec_from_line = specs_from_args, spec_from_line
23 specs_from_url = specs_from_url
24
25 from .cli.conda_argparse import ArgumentParser # NOQA
26 ArgumentParser = ArgumentParser
27
28 from .common.compat import PY3, StringIO, input, iteritems, string_types, text_type # NOQA
29 PY3, StringIO, input, iteritems, string_types, text_type = PY3, StringIO, input, iteritems, string_types, text_type # NOQA
30 from .gateways.connection import CondaSession # NOQA
31 CondaSession = CondaSession
32
33 from .gateways.disk.link import lchmod # NOQA
34 lchmod = lchmod
35
36 from .fetch import TmpDownload # NOQA
37 TmpDownload = TmpDownload
38 handle_proxy_407 = lambda x, y: warn("handle_proxy_407 is deprecated. "
39 "Now handled by CondaSession.")
40 from .core.index import dist_str_in_index, fetch_index # NOQA
41 dist_str_in_index, fetch_index = dist_str_in_index, fetch_index
42 from .core.package_cache import download, rm_fetched # NOQA
43 download, rm_fetched = download, rm_fetched
44
45 from .install import package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA
46 package_cache, prefix_placeholder, rm_rf, symlink_conda = package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA
47
48 from .gateways.disk.delete import delete_trash, move_to_trash # NOQA
49 delete_trash, move_to_trash = delete_trash, move_to_trash
50
51 from .core.linked_data import is_linked, linked, linked_data # NOQA
52 is_linked, linked, linked_data = is_linked, linked, linked_data
53
54 from .misc import untracked, walk_prefix # NOQA
55 untracked, walk_prefix = untracked, walk_prefix
56
57 from .resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA
58 MatchSpec, NoPackagesFound, Resolve = MatchSpec, NoPackagesFound, Resolve
59 Unsatisfiable, normalized_version = Unsatisfiable, normalized_version
60
61 from .signature import KEYS, KEYS_DIR, hash_file, verify # NOQA
62 KEYS, KEYS_DIR = KEYS, KEYS_DIR
63 hash_file, verify = hash_file, verify
64
65 from .utils import hashsum_file, human_bytes, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA
66 hashsum_file, human_bytes = hashsum_file, human_bytes
67 memoized, unix_path_to_win = memoized, unix_path_to_win
68 win_path_to_unix, url_path = win_path_to_unix, url_path
69
70 from .gateways.disk.read import compute_md5sum # NOQA
71 md5_file = compute_md5sum
72
73 from .config import sys_rc_path # NOQA
74 sys_rc_path = sys_rc_path
75
76 from .models.version import VersionOrder # NOQA
77 VersionOrder = VersionOrder
78
79 import conda.base.context # NOQA
80 from .base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA
81 non_x86_linux_machines = non_x86_linux_machines
82
83 from ._vendor.auxlib.entity import EntityEncoder # NOQA
84 EntityEncoder = EntityEncoder
85 from .base.constants import DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA
86 DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX = DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA
87 get_prefix = partial(context_get_prefix, conda.base.context.context)
88 get_default_urls = lambda: DEFAULT_CHANNELS
89
90 arch_name = conda.base.context.context.arch_name
91 binstar_upload = conda.base.context.context.anaconda_upload
92 bits = conda.base.context.context.bits
93 default_prefix = conda.base.context.context.default_prefix
94 default_python = conda.base.context.context.default_python
95 envs_dirs = conda.base.context.context.envs_dirs
96 pkgs_dirs = conda.base.context.context.pkgs_dirs
97 platform = conda.base.context.context.platform
98 root_dir = conda.base.context.context.root_prefix
99 root_writable = conda.base.context.context.root_writable
100 subdir = conda.base.context.context.subdir
101 from .models.channel import get_conda_build_local_url # NOQA
102 get_rc_urls = lambda: list(conda.base.context.context.channels)
103 get_local_urls = lambda: list(get_conda_build_local_url()) or []
104 load_condarc = lambda fn: conda.base.context.reset_context([fn])
105 from .exceptions import PaddingError # NOQA
106 PaddingError = PaddingError
107 from .gateways.disk.link import CrossPlatformStLink # NOQA
108 CrossPlatformStLink = CrossPlatformStLink
109
110 from .models.enums import FileMode # NOQA
111 FileMode = FileMode
112 from .models.enums import PathType # NOQA
113 PathType = PathType
114
115
116 if PY3:
117 import configparser # NOQA # pragma: py2 no cover
118 else:
119 import ConfigParser as configparser # NOQA # pragma: py3 no cover
120 configparser = configparser
121
122
123 from .compat import TemporaryDirectory # NOQA
124 TemporaryDirectory = TemporaryDirectory
125
126 from .gateways.subprocess import ACTIVE_SUBPROCESSES, subprocess_call # NOQA
127 ACTIVE_SUBPROCESSES, subprocess_call = ACTIVE_SUBPROCESSES, subprocess_call
128
129 from .core.repodata import cache_fn_url # NOQA
130 cache_fn_url = cache_fn_url
131
132
133 class Completer(object):
134 def get_items(self):
135 return self._get_items()
136
137 def __contains__(self, item):
138 return True
139
140 def __iter__(self):
141 return iter(self.get_items())
142
143 class InstalledPackages(object): pass # NOQA
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conda/exports.py b/conda/exports.py
--- a/conda/exports.py
+++ b/conda/exports.py
@@ -30,6 +30,9 @@
from .gateways.connection import CondaSession # NOQA
CondaSession = CondaSession
+from .common.toposort import _toposort
+_toposort = _toposort
+
from .gateways.disk.link import lchmod # NOQA
lchmod = lchmod
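
Illustrative sketch only: once this re-export lands, a downstream tool such as conda-build can import the helper from the stable `conda.exports` module instead of a private path. The exact contract of `_toposort` is not shown in this record; the snippet assumes it takes a mapping of node to the set of nodes it depends on and yields nodes with dependencies first, so check the conda source before relying on it.

```python
from conda.exports import _toposort  # available once the patch above is applied

# Assumed input shape: {node: set of nodes it depends on}
graph = {"c": {"a", "b"}, "b": {"a"}, "a": set()}

# Under the assumed contract this prints ['a', 'b', 'c'] (dependencies first).
print(list(_toposort(graph)))
```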
|
{"golden_diff": "diff --git a/conda/exports.py b/conda/exports.py\n--- a/conda/exports.py\n+++ b/conda/exports.py\n@@ -30,6 +30,9 @@\n from .gateways.connection import CondaSession # NOQA\n CondaSession = CondaSession\n \n+from .common.toposort import _toposort\n+_toposort = _toposort\n+\n from .gateways.disk.link import lchmod # NOQA\n lchmod = lchmod\n", "issue": "export toposort for conda-build\n\nexport toposort for conda-build\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom functools import partial\nfrom logging import getLogger\nfrom warnings import warn\n\nlog = getLogger(__name__)\n\nfrom . import CondaError # NOQA\nCondaError = CondaError\n\nfrom . import compat, plan # NOQA\ncompat, plan = compat, plan\n\nfrom .api import get_index # NOQA\nget_index = get_index\n\nfrom .cli.common import specs_from_args, spec_from_line, specs_from_url # NOQA\nfrom .cli.conda_argparse import add_parser_prefix, add_parser_channels # NOQA\nadd_parser_channels, add_parser_prefix = add_parser_channels, add_parser_prefix\nspecs_from_args, spec_from_line = specs_from_args, spec_from_line\nspecs_from_url = specs_from_url\n\nfrom .cli.conda_argparse import ArgumentParser # NOQA\nArgumentParser = ArgumentParser\n\nfrom .common.compat import PY3, StringIO, input, iteritems, string_types, text_type # NOQA\nPY3, StringIO, input, iteritems, string_types, text_type = PY3, StringIO, input, iteritems, string_types, text_type # NOQA\nfrom .gateways.connection import CondaSession # NOQA\nCondaSession = CondaSession\n\nfrom .gateways.disk.link import lchmod # NOQA\nlchmod = lchmod\n\nfrom .fetch import TmpDownload # NOQA\nTmpDownload = TmpDownload\nhandle_proxy_407 = lambda x, y: warn(\"handle_proxy_407 is deprecated. 
\"\n \"Now handled by CondaSession.\")\nfrom .core.index import dist_str_in_index, fetch_index # NOQA\ndist_str_in_index, fetch_index = dist_str_in_index, fetch_index\nfrom .core.package_cache import download, rm_fetched # NOQA\ndownload, rm_fetched = download, rm_fetched\n\nfrom .install import package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA\npackage_cache, prefix_placeholder, rm_rf, symlink_conda = package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA\n\nfrom .gateways.disk.delete import delete_trash, move_to_trash # NOQA\ndelete_trash, move_to_trash = delete_trash, move_to_trash\n\nfrom .core.linked_data import is_linked, linked, linked_data # NOQA\nis_linked, linked, linked_data = is_linked, linked, linked_data\n\nfrom .misc import untracked, walk_prefix # NOQA\nuntracked, walk_prefix = untracked, walk_prefix\n\nfrom .resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA\nMatchSpec, NoPackagesFound, Resolve = MatchSpec, NoPackagesFound, Resolve\nUnsatisfiable, normalized_version = Unsatisfiable, normalized_version\n\nfrom .signature import KEYS, KEYS_DIR, hash_file, verify # NOQA\nKEYS, KEYS_DIR = KEYS, KEYS_DIR\nhash_file, verify = hash_file, verify\n\nfrom .utils import hashsum_file, human_bytes, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA\nhashsum_file, human_bytes = hashsum_file, human_bytes\nmemoized, unix_path_to_win = memoized, unix_path_to_win\nwin_path_to_unix, url_path = win_path_to_unix, url_path\n\nfrom .gateways.disk.read import compute_md5sum # NOQA\nmd5_file = compute_md5sum\n\nfrom .config import sys_rc_path # NOQA\nsys_rc_path = sys_rc_path\n\nfrom .models.version import VersionOrder # NOQA\nVersionOrder = VersionOrder\n\nimport conda.base.context # NOQA\nfrom .base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA\nnon_x86_linux_machines = non_x86_linux_machines\n\nfrom ._vendor.auxlib.entity import EntityEncoder # NOQA\nEntityEncoder = EntityEncoder\nfrom .base.constants import DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA\nDEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX = DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA\nget_prefix = partial(context_get_prefix, conda.base.context.context)\nget_default_urls = lambda: DEFAULT_CHANNELS\n\narch_name = conda.base.context.context.arch_name\nbinstar_upload = conda.base.context.context.anaconda_upload\nbits = conda.base.context.context.bits\ndefault_prefix = conda.base.context.context.default_prefix\ndefault_python = conda.base.context.context.default_python\nenvs_dirs = conda.base.context.context.envs_dirs\npkgs_dirs = conda.base.context.context.pkgs_dirs\nplatform = conda.base.context.context.platform\nroot_dir = conda.base.context.context.root_prefix\nroot_writable = conda.base.context.context.root_writable\nsubdir = conda.base.context.context.subdir\nfrom .models.channel import get_conda_build_local_url # NOQA\nget_rc_urls = lambda: list(conda.base.context.context.channels)\nget_local_urls = lambda: list(get_conda_build_local_url()) or []\nload_condarc = lambda fn: conda.base.context.reset_context([fn])\nfrom .exceptions import PaddingError # NOQA\nPaddingError = PaddingError\nfrom .gateways.disk.link import CrossPlatformStLink # NOQA\nCrossPlatformStLink = CrossPlatformStLink\n\nfrom .models.enums import FileMode # NOQA\nFileMode = FileMode\nfrom .models.enums import PathType # NOQA\nPathType = PathType\n\n\nif PY3:\n import configparser # NOQA # pragma: 
py2 no cover\nelse:\n import ConfigParser as configparser # NOQA # pragma: py3 no cover\nconfigparser = configparser\n\n\nfrom .compat import TemporaryDirectory # NOQA\nTemporaryDirectory = TemporaryDirectory\n\nfrom .gateways.subprocess import ACTIVE_SUBPROCESSES, subprocess_call # NOQA\nACTIVE_SUBPROCESSES, subprocess_call = ACTIVE_SUBPROCESSES, subprocess_call\n\nfrom .core.repodata import cache_fn_url # NOQA\ncache_fn_url = cache_fn_url\n\n\nclass Completer(object):\n def get_items(self):\n return self._get_items()\n\n def __contains__(self, item):\n return True\n\n def __iter__(self):\n return iter(self.get_items())\n\nclass InstalledPackages(object): pass # NOQA\n", "path": "conda/exports.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom functools import partial\nfrom logging import getLogger\nfrom warnings import warn\n\nlog = getLogger(__name__)\n\nfrom . import CondaError # NOQA\nCondaError = CondaError\n\nfrom . import compat, plan # NOQA\ncompat, plan = compat, plan\n\nfrom .api import get_index # NOQA\nget_index = get_index\n\nfrom .cli.common import specs_from_args, spec_from_line, specs_from_url # NOQA\nfrom .cli.conda_argparse import add_parser_prefix, add_parser_channels # NOQA\nadd_parser_channels, add_parser_prefix = add_parser_channels, add_parser_prefix\nspecs_from_args, spec_from_line = specs_from_args, spec_from_line\nspecs_from_url = specs_from_url\n\nfrom .cli.conda_argparse import ArgumentParser # NOQA\nArgumentParser = ArgumentParser\n\nfrom .common.compat import PY3, StringIO, input, iteritems, string_types, text_type # NOQA\nPY3, StringIO, input, iteritems, string_types, text_type = PY3, StringIO, input, iteritems, string_types, text_type # NOQA\nfrom .gateways.connection import CondaSession # NOQA\nCondaSession = CondaSession\n\nfrom .common.toposort import _toposort\n_toposort = _toposort\n\nfrom .gateways.disk.link import lchmod # NOQA\nlchmod = lchmod\n\nfrom .fetch import TmpDownload # NOQA\nTmpDownload = TmpDownload\nhandle_proxy_407 = lambda x, y: warn(\"handle_proxy_407 is deprecated. 
\"\n \"Now handled by CondaSession.\")\nfrom .core.index import dist_str_in_index, fetch_index # NOQA\ndist_str_in_index, fetch_index = dist_str_in_index, fetch_index\nfrom .core.package_cache import download, rm_fetched # NOQA\ndownload, rm_fetched = download, rm_fetched\n\nfrom .install import package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA\npackage_cache, prefix_placeholder, rm_rf, symlink_conda = package_cache, prefix_placeholder, rm_rf, symlink_conda # NOQA\n\nfrom .gateways.disk.delete import delete_trash, move_to_trash # NOQA\ndelete_trash, move_to_trash = delete_trash, move_to_trash\n\nfrom .core.linked_data import is_linked, linked, linked_data # NOQA\nis_linked, linked, linked_data = is_linked, linked, linked_data\n\nfrom .misc import untracked, walk_prefix # NOQA\nuntracked, walk_prefix = untracked, walk_prefix\n\nfrom .resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA\nMatchSpec, NoPackagesFound, Resolve = MatchSpec, NoPackagesFound, Resolve\nUnsatisfiable, normalized_version = Unsatisfiable, normalized_version\n\nfrom .signature import KEYS, KEYS_DIR, hash_file, verify # NOQA\nKEYS, KEYS_DIR = KEYS, KEYS_DIR\nhash_file, verify = hash_file, verify\n\nfrom .utils import hashsum_file, human_bytes, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA\nhashsum_file, human_bytes = hashsum_file, human_bytes\nmemoized, unix_path_to_win = memoized, unix_path_to_win\nwin_path_to_unix, url_path = win_path_to_unix, url_path\n\nfrom .gateways.disk.read import compute_md5sum # NOQA\nmd5_file = compute_md5sum\n\nfrom .config import sys_rc_path # NOQA\nsys_rc_path = sys_rc_path\n\nfrom .models.version import VersionOrder # NOQA\nVersionOrder = VersionOrder\n\nimport conda.base.context # NOQA\nfrom .base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA\nnon_x86_linux_machines = non_x86_linux_machines\n\nfrom ._vendor.auxlib.entity import EntityEncoder # NOQA\nEntityEncoder = EntityEncoder\nfrom .base.constants import DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA\nDEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX = DEFAULT_CHANNELS, DEFAULT_CHANNELS_WIN, DEFAULT_CHANNELS_UNIX # NOQA\nget_prefix = partial(context_get_prefix, conda.base.context.context)\nget_default_urls = lambda: DEFAULT_CHANNELS\n\narch_name = conda.base.context.context.arch_name\nbinstar_upload = conda.base.context.context.anaconda_upload\nbits = conda.base.context.context.bits\ndefault_prefix = conda.base.context.context.default_prefix\ndefault_python = conda.base.context.context.default_python\nenvs_dirs = conda.base.context.context.envs_dirs\npkgs_dirs = conda.base.context.context.pkgs_dirs\nplatform = conda.base.context.context.platform\nroot_dir = conda.base.context.context.root_prefix\nroot_writable = conda.base.context.context.root_writable\nsubdir = conda.base.context.context.subdir\nfrom .models.channel import get_conda_build_local_url # NOQA\nget_rc_urls = lambda: list(conda.base.context.context.channels)\nget_local_urls = lambda: list(get_conda_build_local_url()) or []\nload_condarc = lambda fn: conda.base.context.reset_context([fn])\nfrom .exceptions import PaddingError # NOQA\nPaddingError = PaddingError\nfrom .gateways.disk.link import CrossPlatformStLink # NOQA\nCrossPlatformStLink = CrossPlatformStLink\n\nfrom .models.enums import FileMode # NOQA\nFileMode = FileMode\nfrom .models.enums import PathType # NOQA\nPathType = PathType\n\n\nif PY3:\n import configparser # NOQA # pragma: 
py2 no cover\nelse:\n import ConfigParser as configparser # NOQA # pragma: py3 no cover\nconfigparser = configparser\n\n\nfrom .compat import TemporaryDirectory # NOQA\nTemporaryDirectory = TemporaryDirectory\n\nfrom .gateways.subprocess import ACTIVE_SUBPROCESSES, subprocess_call # NOQA\nACTIVE_SUBPROCESSES, subprocess_call = ACTIVE_SUBPROCESSES, subprocess_call\n\nfrom .core.repodata import cache_fn_url # NOQA\ncache_fn_url = cache_fn_url\n\n\nclass Completer(object):\n def get_items(self):\n return self._get_items()\n\n def __contains__(self, item):\n return True\n\n def __iter__(self):\n return iter(self.get_items())\n\nclass InstalledPackages(object): pass # NOQA\n", "path": "conda/exports.py"}]}
| 2,030 | 110 |
gh_patches_debug_1450
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-3731
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
release infrastructure doesn't handle "out of order" releases
Specifically if we issue an `0.X` release, then an `0.X+1` release, and then we go to do an `0.X.1` release, the wheel automation won't work, since it builds a wheel for the latest release.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `release.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import getpass
8 import io
9 import os
10 import subprocess
11 import time
12
13 import click
14
15 from clint.textui.progress import Bar as ProgressBar
16
17 import requests
18
19
20 JENKINS_URL = (
21 "https://ci.cryptography.io/job/cryptography-support-jobs/"
22 "job/wheel-builder"
23 )
24
25
26 def run(*args, **kwargs):
27 kwargs.setdefault("stderr", subprocess.STDOUT)
28 try:
29 subprocess.check_output(list(args), **kwargs)
30 except subprocess.CalledProcessError as e:
31 # Reraise this with a different type so that str(e) is something with
32 # stdout in it.
33 raise Exception(e.cmd, e.returncode, e.output)
34
35
36 def wait_for_build_completed(session):
37 # Wait 20 seconds before actually checking if the build is complete, to
38 # ensure that it had time to really start.
39 time.sleep(20)
40 while True:
41 response = session.get(
42 "{0}/lastBuild/api/json/".format(JENKINS_URL),
43 headers={
44 "Accept": "application/json",
45 }
46 )
47 response.raise_for_status()
48 if not response.json()["building"]:
49 assert response.json()["result"] == "SUCCESS"
50 break
51 time.sleep(0.1)
52
53
54 def download_artifacts(session):
55 response = session.get(
56 "{0}/lastBuild/api/json/".format(JENKINS_URL),
57 headers={
58 "Accept": "application/json"
59 }
60 )
61 response.raise_for_status()
62 json_response = response.json()
63 assert not json_response["building"]
64 assert json_response["result"] == "SUCCESS"
65
66 paths = []
67
68 for artifact in json_response["artifacts"]:
69 response = session.get(
70 "{0}artifact/{1}".format(
71 json_response["url"], artifact["relativePath"]
72 ), stream=True
73 )
74 assert response.headers["content-length"]
75 print("Downloading {0}".format(artifact["fileName"]))
76 bar = ProgressBar(
77 expected_size=int(response.headers["content-length"]),
78 filled_char="="
79 )
80 content = io.BytesIO()
81 for data in response.iter_content(chunk_size=8192):
82 content.write(data)
83 bar.show(content.tell())
84 assert bar.expected_size == content.tell()
85 bar.done()
86 out_path = os.path.join(
87 os.path.dirname(__file__),
88 "dist",
89 artifact["fileName"],
90 )
91 with open(out_path, "wb") as f:
92 f.write(content.getvalue())
93 paths.append(out_path)
94 return paths
95
96
97 @click.command()
98 @click.argument("version")
99 def release(version):
100 """
101 ``version`` should be a string like '0.4' or '1.0'.
102 """
103 run("git", "tag", "-s", version, "-m", "{0} release".format(version))
104 run("git", "push", "--tags")
105
106 run("python", "setup.py", "sdist")
107 run("python", "setup.py", "sdist", "bdist_wheel", cwd="vectors/")
108
109 run(
110 "twine", "upload", "-s", "dist/cryptography-{0}*".format(version),
111 "vectors/dist/cryptography_vectors-{0}*".format(version), shell=True
112 )
113
114 session = requests.Session()
115
116 # This tells the CDN to delete the cached response for the URL. We do this
117 # so that the Jenkins builders will see the new sdist immediately when they
118 # go to build the wheels.
119 response = session.request(
120 "PURGE", "https://pypi.python.org/simple/cryptography/"
121 )
122 response.raise_for_status()
123
124 token = getpass.getpass("Input the Jenkins token: ")
125 response = session.get(
126 "{0}/build".format(JENKINS_URL),
127 params={
128 "token": token,
129 "cause": "Building wheels for {0}".format(version)
130 }
131 )
132 response.raise_for_status()
133 wait_for_build_completed(session)
134 paths = download_artifacts(session)
135 run("twine", "upload", " ".join(paths))
136
137
138 if __name__ == "__main__":
139 release()
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/release.py b/release.py
--- a/release.py
+++ b/release.py
@@ -126,6 +126,7 @@
"{0}/build".format(JENKINS_URL),
params={
"token": token,
+ "BUILD_VERSION": version,
"cause": "Building wheels for {0}".format(version)
}
)
|
{"golden_diff": "diff --git a/release.py b/release.py\n--- a/release.py\n+++ b/release.py\n@@ -126,6 +126,7 @@\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n+ \"BUILD_VERSION\": version,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n", "issue": "release infrastrucutre doesn't handle \"out of order\" releases\nSpecifically if we issue an `0.X` release, then an `0.X+1` release, and then we go to do an `0.X.1` release, the wheel automation won't work, since it builds a wheel for the latest release.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport io\nimport os\nimport subprocess\nimport time\n\nimport click\n\nfrom clint.textui.progress import Bar as ProgressBar\n\nimport requests\n\n\nJENKINS_URL = (\n \"https://ci.cryptography.io/job/cryptography-support-jobs/\"\n \"job/wheel-builder\"\n)\n\n\ndef run(*args, **kwargs):\n kwargs.setdefault(\"stderr\", subprocess.STDOUT)\n try:\n subprocess.check_output(list(args), **kwargs)\n except subprocess.CalledProcessError as e:\n # Reraise this with a different type so that str(e) is something with\n # stdout in it.\n raise Exception(e.cmd, e.returncode, e.output)\n\n\ndef wait_for_build_completed(session):\n # Wait 20 seconds before actually checking if the build is complete, to\n # ensure that it had time to really start.\n time.sleep(20)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n json_response = response.json()\n assert not json_response[\"building\"]\n assert json_response[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n for artifact in json_response[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(\n json_response[\"url\"], artifact[\"relativePath\"]\n ), stream=True\n )\n assert response.headers[\"content-length\"]\n print(\"Downloading {0}\".format(artifact[\"fileName\"]))\n bar = ProgressBar(\n expected_size=int(response.headers[\"content-length\"]),\n filled_char=\"=\"\n )\n content = io.BytesIO()\n for data in response.iter_content(chunk_size=8192):\n content.write(data)\n bar.show(content.tell())\n assert bar.expected_size == content.tell()\n bar.done()\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(content.getvalue())\n paths.append(out_path)\n return paths\n\n\[email protected]()\[email protected](\"version\")\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n run(\"git\", \"tag\", \"-s\", version, \"-m\", \"{0} release\".format(version))\n run(\"git\", \"push\", \"--tags\")\n\n run(\"python\", \"setup.py\", \"sdist\")\n run(\"python\", \"setup.py\", \"sdist\", \"bdist_wheel\", cwd=\"vectors/\")\n\n run(\n \"twine\", \"upload\", \"-s\", \"dist/cryptography-{0}*\".format(version),\n 
\"vectors/dist/cryptography_vectors-{0}*\".format(version), shell=True\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.get(\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n run(\"twine\", \"upload\", \" \".join(paths))\n\n\nif __name__ == \"__main__\":\n release()\n", "path": "release.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport io\nimport os\nimport subprocess\nimport time\n\nimport click\n\nfrom clint.textui.progress import Bar as ProgressBar\n\nimport requests\n\n\nJENKINS_URL = (\n \"https://ci.cryptography.io/job/cryptography-support-jobs/\"\n \"job/wheel-builder\"\n)\n\n\ndef run(*args, **kwargs):\n kwargs.setdefault(\"stderr\", subprocess.STDOUT)\n try:\n subprocess.check_output(list(args), **kwargs)\n except subprocess.CalledProcessError as e:\n # Reraise this with a different type so that str(e) is something with\n # stdout in it.\n raise Exception(e.cmd, e.returncode, e.output)\n\n\ndef wait_for_build_completed(session):\n # Wait 20 seconds before actually checking if the build is complete, to\n # ensure that it had time to really start.\n time.sleep(20)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n json_response = response.json()\n assert not json_response[\"building\"]\n assert json_response[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n for artifact in json_response[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(\n json_response[\"url\"], artifact[\"relativePath\"]\n ), stream=True\n )\n assert response.headers[\"content-length\"]\n print(\"Downloading {0}\".format(artifact[\"fileName\"]))\n bar = ProgressBar(\n expected_size=int(response.headers[\"content-length\"]),\n filled_char=\"=\"\n )\n content = io.BytesIO()\n for data in response.iter_content(chunk_size=8192):\n content.write(data)\n bar.show(content.tell())\n assert bar.expected_size == content.tell()\n bar.done()\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(content.getvalue())\n paths.append(out_path)\n return paths\n\n\[email protected]()\[email protected](\"version\")\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n run(\"git\", \"tag\", \"-s\", version, \"-m\", \"{0} 
release\".format(version))\n run(\"git\", \"push\", \"--tags\")\n\n run(\"python\", \"setup.py\", \"sdist\")\n run(\"python\", \"setup.py\", \"sdist\", \"bdist_wheel\", cwd=\"vectors/\")\n\n run(\n \"twine\", \"upload\", \"-s\", \"dist/cryptography-{0}*\".format(version),\n \"vectors/dist/cryptography_vectors-{0}*\".format(version), shell=True\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.get(\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n \"BUILD_VERSION\": version,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n run(\"twine\", \"upload\", \" \".join(paths))\n\n\nif __name__ == \"__main__\":\n release()\n", "path": "release.py"}]}
| 1,590 | 82 |
gh_patches_debug_7406
|
rasdani/github-patches
|
git_diff
|
interlegis__sapl-1191
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Integration of SAPL 3.1 and Portal Modelo
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/base/templatetags/common_tags.py`
Content:
```
1 from compressor.utils import get_class
2 from django import template
3
4 from sapl.base.models import AppConfig
5 from sapl.materia.models import DocumentoAcessorio, MateriaLegislativa
6 from sapl.norma.models import NormaJuridica
7 from sapl.parlamentares.models import Filiacao
8
9 register = template.Library()
10
11
12 @register.simple_tag
13 def field_verbose_name(instance, field_name):
14 return instance._meta.get_field(field_name).verbose_name
15
16
17 @register.simple_tag
18 def fieldclass_verbose_name(class_name, field_name):
19 cls = get_class(class_name)
20 return cls._meta.get_field(field_name).verbose_name
21
22
23 @register.simple_tag
24 def model_verbose_name(class_name):
25 model = get_class(class_name)
26 return model._meta.verbose_name
27
28
29 @register.simple_tag
30 def model_verbose_name_plural(class_name):
31 model = get_class(class_name)
32 return model._meta.verbose_name_plural
33
34
35 @register.filter
36 def lookup(d, key):
37 return d[key] if key in d else []
38
39
40 @register.filter
41 def isinst(value, class_str):
42 classe = value.__class__.__name__
43 return classe == class_str
44
45
46 @register.filter
47 def get_add_perm(value, arg):
48 perm = value
49 view = arg
50
51 try:
52 nome_app = view.__class__.model._meta.app_label
53 except AttributeError:
54 return None
55 nome_model = view.__class__.model.__name__.lower()
56 can_add = '.add_' + nome_model
57
58 return perm.__contains__(nome_app + can_add)
59
60
61 @register.filter
62 def get_change_perm(value, arg):
63 perm = value
64 view = arg
65
66 try:
67 nome_app = view.__class__.model._meta.app_label
68 except AttributeError:
69 return None
70 nome_model = view.__class__.model.__name__.lower()
71 can_change = '.change_' + nome_model
72
73 return perm.__contains__(nome_app + can_change)
74
75
76 @register.filter
77 def get_delete_perm(value, arg):
78 perm = value
79 view = arg
80
81 try:
82 nome_app = view.__class__.model._meta.app_label
83 except AttributeError:
84 return None
85 nome_model = view.__class__.model.__name__.lower()
86 can_delete = '.delete_' + nome_model
87
88 return perm.__contains__(nome_app + can_delete)
89
90
91 @register.filter
92 def ultima_filiacao(value):
93 parlamentar = value
94
95 ultima_filiacao = Filiacao.objects.filter(
96 parlamentar=parlamentar).order_by('-data').first()
97
98 if ultima_filiacao:
99 return ultima_filiacao.partido
100 else:
101 return None
102
103
104 @register.filter
105 def get_config_attr(attribute):
106 return AppConfig.attr(attribute)
107
108
109 @register.filter
110 def str2intabs(value):
111 if not isinstance(value, str):
112 return ''
113 try:
114 v = int(value)
115 v = abs(v)
116 return v
117 except:
118 return ''
119
120
121 @register.filter
122 def url(value):
123 if value.startswith('http://') or value.startswith('https://'):
124 return True
125 return False
126
127
128 @register.filter
129 def cronometro_to_seconds(value):
130 if not AppConfig.attr('cronometro_' + value):
131 return 0
132
133 m, s, x = AppConfig.attr(
134 'cronometro_' + value).isoformat().split(':')
135
136 return 60 * int(m) + int(s)
137
138
139 @register.filter
140 def to_list_pk(object_list):
141 return [o.pk for o in object_list]
142
143
144 @register.filter
145 def search_get_model(object):
146 if type(object) == MateriaLegislativa:
147 return 'm'
148 elif type(object) == DocumentoAcessorio:
149 return 'd'
150 elif type(object) == NormaJuridica:
151 return 'n'
152
153 return None
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sapl/base/templatetags/common_tags.py b/sapl/base/templatetags/common_tags.py
--- a/sapl/base/templatetags/common_tags.py
+++ b/sapl/base/templatetags/common_tags.py
@@ -117,6 +117,23 @@
except:
return ''
[email protected]
+def has_iframe(request):
+
+ iframe = request.session.get('iframe', False)
+ if not iframe and 'iframe' in request.GET:
+ ival = request.GET['iframe']
+ if ival and int(ival) == 1:
+ request.session['iframe'] = True
+ return True
+ elif 'iframe' in request.GET:
+ ival = request.GET['iframe']
+ if ival and int(ival) == 0:
+ del request.session['iframe']
+ return False
+
+ return iframe
+
@register.filter
def url(value):
|
{"golden_diff": "diff --git a/sapl/base/templatetags/common_tags.py b/sapl/base/templatetags/common_tags.py\n--- a/sapl/base/templatetags/common_tags.py\n+++ b/sapl/base/templatetags/common_tags.py\n@@ -117,6 +117,23 @@\n except:\n return ''\n \[email protected]\n+def has_iframe(request):\n+\n+ iframe = request.session.get('iframe', False)\n+ if not iframe and 'iframe' in request.GET:\n+ ival = request.GET['iframe']\n+ if ival and int(ival) == 1:\n+ request.session['iframe'] = True\n+ return True\n+ elif 'iframe' in request.GET:\n+ ival = request.GET['iframe']\n+ if ival and int(ival) == 0:\n+ del request.session['iframe']\n+ return False\n+\n+ return iframe\n+\n \n @register.filter\n def url(value):\n", "issue": "Integra\u00e7\u00e3o do SAPL 3.1 e Portal Modelo\n\n", "before_files": [{"content": "from compressor.utils import get_class\nfrom django import template\n\nfrom sapl.base.models import AppConfig\nfrom sapl.materia.models import DocumentoAcessorio, MateriaLegislativa\nfrom sapl.norma.models import NormaJuridica\nfrom sapl.parlamentares.models import Filiacao\n\nregister = template.Library()\n\n\[email protected]_tag\ndef field_verbose_name(instance, field_name):\n return instance._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef fieldclass_verbose_name(class_name, field_name):\n cls = get_class(class_name)\n return cls._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef model_verbose_name(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name\n\n\[email protected]_tag\ndef model_verbose_name_plural(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name_plural\n\n\[email protected]\ndef lookup(d, key):\n return d[key] if key in d else []\n\n\[email protected]\ndef isinst(value, class_str):\n classe = value.__class__.__name__\n return classe == class_str\n\n\[email protected]\ndef get_add_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_add = '.add_' + nome_model\n\n return perm.__contains__(nome_app + can_add)\n\n\[email protected]\ndef get_change_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_change = '.change_' + nome_model\n\n return perm.__contains__(nome_app + can_change)\n\n\[email protected]\ndef get_delete_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_delete = '.delete_' + nome_model\n\n return perm.__contains__(nome_app + can_delete)\n\n\[email protected]\ndef ultima_filiacao(value):\n parlamentar = value\n\n ultima_filiacao = Filiacao.objects.filter(\n parlamentar=parlamentar).order_by('-data').first()\n\n if ultima_filiacao:\n return ultima_filiacao.partido\n else:\n return None\n\n\[email protected]\ndef get_config_attr(attribute):\n return AppConfig.attr(attribute)\n\n\[email protected]\ndef str2intabs(value):\n if not isinstance(value, str):\n return ''\n try:\n v = int(value)\n v = abs(v)\n return v\n except:\n return ''\n\n\[email protected]\ndef url(value):\n if value.startswith('http://') or value.startswith('https://'):\n return True\n return False\n\n\[email protected]\ndef cronometro_to_seconds(value):\n if not 
AppConfig.attr('cronometro_' + value):\n return 0\n\n m, s, x = AppConfig.attr(\n 'cronometro_' + value).isoformat().split(':')\n\n return 60 * int(m) + int(s)\n\n\[email protected]\ndef to_list_pk(object_list):\n return [o.pk for o in object_list]\n\n\[email protected]\ndef search_get_model(object):\n if type(object) == MateriaLegislativa:\n return 'm'\n elif type(object) == DocumentoAcessorio:\n return 'd'\n elif type(object) == NormaJuridica:\n return 'n'\n\n return None\n", "path": "sapl/base/templatetags/common_tags.py"}], "after_files": [{"content": "from compressor.utils import get_class\nfrom django import template\n\nfrom sapl.base.models import AppConfig\nfrom sapl.materia.models import DocumentoAcessorio, MateriaLegislativa\nfrom sapl.norma.models import NormaJuridica\nfrom sapl.parlamentares.models import Filiacao\n\nregister = template.Library()\n\n\[email protected]_tag\ndef field_verbose_name(instance, field_name):\n return instance._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef fieldclass_verbose_name(class_name, field_name):\n cls = get_class(class_name)\n return cls._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef model_verbose_name(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name\n\n\[email protected]_tag\ndef model_verbose_name_plural(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name_plural\n\n\[email protected]\ndef lookup(d, key):\n return d[key] if key in d else []\n\n\[email protected]\ndef isinst(value, class_str):\n classe = value.__class__.__name__\n return classe == class_str\n\n\[email protected]\ndef get_add_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_add = '.add_' + nome_model\n\n return perm.__contains__(nome_app + can_add)\n\n\[email protected]\ndef get_change_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_change = '.change_' + nome_model\n\n return perm.__contains__(nome_app + can_change)\n\n\[email protected]\ndef get_delete_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_delete = '.delete_' + nome_model\n\n return perm.__contains__(nome_app + can_delete)\n\n\[email protected]\ndef ultima_filiacao(value):\n parlamentar = value\n\n ultima_filiacao = Filiacao.objects.filter(\n parlamentar=parlamentar).order_by('-data').first()\n\n if ultima_filiacao:\n return ultima_filiacao.partido\n else:\n return None\n\n\[email protected]\ndef get_config_attr(attribute):\n return AppConfig.attr(attribute)\n\n\[email protected]\ndef str2intabs(value):\n if not isinstance(value, str):\n return ''\n try:\n v = int(value)\n v = abs(v)\n return v\n except:\n return ''\n\[email protected]\ndef has_iframe(request):\n\n iframe = request.session.get('iframe', False)\n if not iframe and 'iframe' in request.GET:\n ival = request.GET['iframe']\n if ival and int(ival) == 1:\n request.session['iframe'] = True\n return True\n elif 'iframe' in request.GET:\n ival = request.GET['iframe']\n if ival and int(ival) == 0:\n del request.session['iframe']\n return False\n\n return iframe\n\n\[email protected]\ndef url(value):\n 
if value.startswith('http://') or value.startswith('https://'):\n return True\n return False\n\n\[email protected]\ndef cronometro_to_seconds(value):\n if not AppConfig.attr('cronometro_' + value):\n return 0\n\n m, s, x = AppConfig.attr(\n 'cronometro_' + value).isoformat().split(':')\n\n return 60 * int(m) + int(s)\n\n\[email protected]\ndef to_list_pk(object_list):\n return [o.pk for o in object_list]\n\n\[email protected]\ndef search_get_model(object):\n if type(object) == MateriaLegislativa:\n return 'm'\n elif type(object) == DocumentoAcessorio:\n return 'd'\n elif type(object) == NormaJuridica:\n return 'n'\n\n return None\n", "path": "sapl/base/templatetags/common_tags.py"}]}
| 1,496 | 218 |
gh_patches_debug_29041
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1699
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unnecessary ping event
**Environment**:
- CTFd Version/Commit: 3.1.1, latest commit
- Operating System: any
- Web Browser and Version: any
in the comment you said "Immediately yield a ping event to force Response headers to be set", but this event seems to lie inside the while True loop, which results in an unnecessary ping event every 5 seconds.
I believe that's an unintended behavior, though it doesn't break anything.
https://github.com/CTFd/CTFd/blob/4c31dc23e8cfa0308367732d603b16e01871b00e/CTFd/utils/events/__init__.py#L57-L67
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/events/__init__.py`
Content:
```
1 import json
2 from collections import defaultdict
3 from queue import Queue
4
5 from gevent import Timeout, spawn
6 from tenacity import retry, wait_exponential
7
8 from CTFd.cache import cache
9 from CTFd.utils import string_types
10
11
12 class ServerSentEvent(object):
13 def __init__(self, data, type=None, id=None):
14 self.data = data
15 self.type = type
16 self.id = id
17
18 def __str__(self):
19 if isinstance(self.data, string_types):
20 data = self.data
21 else:
22 data = json.dumps(self.data)
23 lines = ["data:{value}".format(value=line) for line in data.splitlines()]
24 if self.type:
25 lines.insert(0, "event:{value}".format(value=self.type))
26 if self.id:
27 lines.append("id:{value}".format(value=self.id))
28 return "\n".join(lines) + "\n\n"
29
30 def to_dict(self):
31 d = {"data": self.data}
32 if self.type:
33 d["type"] = self.type
34 if self.id:
35 d["id"] = self.id
36 return d
37
38
39 class EventManager(object):
40 def __init__(self):
41 self.clients = {}
42
43 def publish(self, data, type=None, channel="ctf"):
44 event = ServerSentEvent(data, type=type)
45 message = event.to_dict()
46 for client in list(self.clients.values()):
47 client[channel].put(message)
48 return len(self.clients)
49
50 def listen(self):
51 pass
52
53 def subscribe(self, channel="ctf"):
54 q = defaultdict(Queue)
55 self.clients[id(q)] = q
56 try:
57 while True:
58 try:
59 # Immediately yield a ping event to force Response headers to be set
60 # or else some reverse proxies will incorrectly buffer SSE
61 yield ServerSentEvent(data="", type="ping")
62
63 with Timeout(5):
64 message = q[channel].get()
65 yield ServerSentEvent(**message)
66 except Timeout:
67 yield ServerSentEvent(data="", type="ping")
68 finally:
69 del self.clients[id(q)]
70 del q
71
72
73 class RedisEventManager(EventManager):
74 def __init__(self):
75 super(EventManager, self).__init__()
76 self.client = cache.cache._write_client
77 self.clients = {}
78
79 def publish(self, data, type=None, channel="ctf"):
80 event = ServerSentEvent(data, type=type)
81 message = json.dumps(event.to_dict())
82 return self.client.publish(message=message, channel=channel)
83
84 def listen(self, channel="ctf"):
85 @retry(wait=wait_exponential(min=1, max=30))
86 def _listen():
87 while True:
88 pubsub = self.client.pubsub()
89 pubsub.subscribe(channel)
90 try:
91 while True:
92 message = pubsub.get_message(
93 ignore_subscribe_messages=True, timeout=5
94 )
95 if message:
96 if message["type"] == "message":
97 event = json.loads(message["data"])
98 for client in list(self.clients.values()):
99 client[channel].put(event)
100 finally:
101 pubsub.close()
102
103 spawn(_listen)
104
105 def subscribe(self, channel="ctf"):
106 q = defaultdict(Queue)
107 self.clients[id(q)] = q
108 try:
109 while True:
110 try:
111 # Immediately yield a ping event to force Response headers to be set
112 # or else some reverse proxies will incorrectly buffer SSE
113 yield ServerSentEvent(data="", type="ping")
114
115 with Timeout(5):
116 message = q[channel].get()
117 yield ServerSentEvent(**message)
118 except Timeout:
119 yield ServerSentEvent(data="", type="ping")
120 finally:
121 del self.clients[id(q)]
122 del q
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/CTFd/utils/events/__init__.py b/CTFd/utils/events/__init__.py
--- a/CTFd/utils/events/__init__.py
+++ b/CTFd/utils/events/__init__.py
@@ -54,12 +54,11 @@
q = defaultdict(Queue)
self.clients[id(q)] = q
try:
+ # Immediately yield a ping event to force Response headers to be set
+ # or else some reverse proxies will incorrectly buffer SSE
+ yield ServerSentEvent(data="", type="ping")
while True:
try:
- # Immediately yield a ping event to force Response headers to be set
- # or else some reverse proxies will incorrectly buffer SSE
- yield ServerSentEvent(data="", type="ping")
-
with Timeout(5):
message = q[channel].get()
yield ServerSentEvent(**message)
@@ -106,12 +105,11 @@
q = defaultdict(Queue)
self.clients[id(q)] = q
try:
+ # Immediately yield a ping event to force Response headers to be set
+ # or else some reverse proxies will incorrectly buffer SSE
+ yield ServerSentEvent(data="", type="ping")
while True:
try:
- # Immediately yield a ping event to force Response headers to be set
- # or else some reverse proxies will incorrectly buffer SSE
- yield ServerSentEvent(data="", type="ping")
-
with Timeout(5):
message = q[channel].get()
yield ServerSentEvent(**message)
|
{"golden_diff": "diff --git a/CTFd/utils/events/__init__.py b/CTFd/utils/events/__init__.py\n--- a/CTFd/utils/events/__init__.py\n+++ b/CTFd/utils/events/__init__.py\n@@ -54,12 +54,11 @@\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n+ # Immediately yield a ping event to force Response headers to be set\n+ # or else some reverse proxies will incorrectly buffer SSE\n+ yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n- # Immediately yield a ping event to force Response headers to be set\n- # or else some reverse proxies will incorrectly buffer SSE\n- yield ServerSentEvent(data=\"\", type=\"ping\")\n-\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n@@ -106,12 +105,11 @@\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n+ # Immediately yield a ping event to force Response headers to be set\n+ # or else some reverse proxies will incorrectly buffer SSE\n+ yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n- # Immediately yield a ping event to force Response headers to be set\n- # or else some reverse proxies will incorrectly buffer SSE\n- yield ServerSentEvent(data=\"\", type=\"ping\")\n-\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n", "issue": "Unnecessary ping event\n**Environment**:\r\n\r\n- CTFd Version/Commit: 3.1.1, latest commit\r\n- Operating System: any\r\n- Web Browser and Version: any\r\n\r\nin the comment you said \"Immediately yield a ping event to force Response headers to be set\", but this event seems to lies inside the while True loop, which results to an unnecessary ping event every 5 seconds.\r\nI believe that's an unintended behavior, though it doesn't break anything.\r\n\r\nhttps://github.com/CTFd/CTFd/blob/4c31dc23e8cfa0308367732d603b16e01871b00e/CTFd/utils/events/__init__.py#L57-L67\n", "before_files": [{"content": "import json\nfrom collections import defaultdict\nfrom queue import Queue\n\nfrom gevent import Timeout, spawn\nfrom tenacity import retry, wait_exponential\n\nfrom CTFd.cache import cache\nfrom CTFd.utils import string_types\n\n\nclass ServerSentEvent(object):\n def __init__(self, data, type=None, id=None):\n self.data = data\n self.type = type\n self.id = id\n\n def __str__(self):\n if isinstance(self.data, string_types):\n data = self.data\n else:\n data = json.dumps(self.data)\n lines = [\"data:{value}\".format(value=line) for line in data.splitlines()]\n if self.type:\n lines.insert(0, \"event:{value}\".format(value=self.type))\n if self.id:\n lines.append(\"id:{value}\".format(value=self.id))\n return \"\\n\".join(lines) + \"\\n\\n\"\n\n def to_dict(self):\n d = {\"data\": self.data}\n if self.type:\n d[\"type\"] = self.type\n if self.id:\n d[\"id\"] = self.id\n return d\n\n\nclass EventManager(object):\n def __init__(self):\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = event.to_dict()\n for client in list(self.clients.values()):\n client[channel].put(message)\n return len(self.clients)\n\n def listen(self):\n pass\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n while True:\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield 
ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n\n\nclass RedisEventManager(EventManager):\n def __init__(self):\n super(EventManager, self).__init__()\n self.client = cache.cache._write_client\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = json.dumps(event.to_dict())\n return self.client.publish(message=message, channel=channel)\n\n def listen(self, channel=\"ctf\"):\n @retry(wait=wait_exponential(min=1, max=30))\n def _listen():\n while True:\n pubsub = self.client.pubsub()\n pubsub.subscribe(channel)\n try:\n while True:\n message = pubsub.get_message(\n ignore_subscribe_messages=True, timeout=5\n )\n if message:\n if message[\"type\"] == \"message\":\n event = json.loads(message[\"data\"])\n for client in list(self.clients.values()):\n client[channel].put(event)\n finally:\n pubsub.close()\n\n spawn(_listen)\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n while True:\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n", "path": "CTFd/utils/events/__init__.py"}], "after_files": [{"content": "import json\nfrom collections import defaultdict\nfrom queue import Queue\n\nfrom gevent import Timeout, spawn\nfrom tenacity import retry, wait_exponential\n\nfrom CTFd.cache import cache\nfrom CTFd.utils import string_types\n\n\nclass ServerSentEvent(object):\n def __init__(self, data, type=None, id=None):\n self.data = data\n self.type = type\n self.id = id\n\n def __str__(self):\n if isinstance(self.data, string_types):\n data = self.data\n else:\n data = json.dumps(self.data)\n lines = [\"data:{value}\".format(value=line) for line in data.splitlines()]\n if self.type:\n lines.insert(0, \"event:{value}\".format(value=self.type))\n if self.id:\n lines.append(\"id:{value}\".format(value=self.id))\n return \"\\n\".join(lines) + \"\\n\\n\"\n\n def to_dict(self):\n d = {\"data\": self.data}\n if self.type:\n d[\"type\"] = self.type\n if self.id:\n d[\"id\"] = self.id\n return d\n\n\nclass EventManager(object):\n def __init__(self):\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = event.to_dict()\n for client in list(self.clients.values()):\n client[channel].put(message)\n return len(self.clients)\n\n def listen(self):\n pass\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n\n\nclass RedisEventManager(EventManager):\n def __init__(self):\n super(EventManager, self).__init__()\n self.client = cache.cache._write_client\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = 
json.dumps(event.to_dict())\n return self.client.publish(message=message, channel=channel)\n\n def listen(self, channel=\"ctf\"):\n @retry(wait=wait_exponential(min=1, max=30))\n def _listen():\n while True:\n pubsub = self.client.pubsub()\n pubsub.subscribe(channel)\n try:\n while True:\n message = pubsub.get_message(\n ignore_subscribe_messages=True, timeout=5\n )\n if message:\n if message[\"type\"] == \"message\":\n event = json.loads(message[\"data\"])\n for client in list(self.clients.values()):\n client[channel].put(event)\n finally:\n pubsub.close()\n\n spawn(_listen)\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n", "path": "CTFd/utils/events/__init__.py"}]}
| 1,492 | 340 |
gh_patches_debug_971
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1204
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue with requests dependency
I found that commit 95d9306d2a1fd22dffb12a0548abf2d2f744ed9d excludes requests 2.11 because of a bug that is now fixed in requests 2.11.1, and that's giving me a version conflict with another of the modules in my project:
```
pkg_resources.ContextualVersionConflict: (requests 2.11.1 (..............), Requirement.parse('requests<2.11,>=2.5.2'), {'docker-py'})
```
Can we allow requests 2.11.1?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup
6
7
8 ROOT_DIR = os.path.dirname(__file__)
9 SOURCE_DIR = os.path.join(ROOT_DIR)
10
11 requirements = [
12 'requests >= 2.5.2, < 2.11',
13 'six >= 1.4.0',
14 'websocket-client >= 0.32.0',
15 'docker-pycreds >= 0.2.1'
16 ]
17
18 if sys.platform == 'win32':
19 requirements.append('pypiwin32 >= 219')
20
21 extras_require = {
22 ':python_version < "3.5"': 'backports.ssl_match_hostname >= 3.5',
23 ':python_version < "3.3"': 'ipaddress >= 1.0.16',
24 }
25
26 version = None
27 exec(open('docker/version.py').read())
28
29 with open('./test-requirements.txt') as test_reqs_txt:
30 test_requirements = [line for line in test_reqs_txt]
31
32
33 setup(
34 name="docker-py",
35 version=version,
36 description="Python client for Docker.",
37 url='https://github.com/docker/docker-py/',
38 packages=[
39 'docker', 'docker.api', 'docker.auth', 'docker.transport',
40 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',
41 'docker.types',
42 ],
43 install_requires=requirements,
44 tests_require=test_requirements,
45 extras_require=extras_require,
46 zip_safe=False,
47 test_suite='tests',
48 classifiers=[
49 'Development Status :: 4 - Beta',
50 'Environment :: Other Environment',
51 'Intended Audience :: Developers',
52 'Operating System :: OS Independent',
53 'Programming Language :: Python',
54 'Programming Language :: Python :: 2',
55 'Programming Language :: Python :: 2.6',
56 'Programming Language :: Python :: 2.7',
57 'Programming Language :: Python :: 3',
58 'Programming Language :: Python :: 3.3',
59 'Programming Language :: Python :: 3.4',
60 'Programming Language :: Python :: 3.5',
61 'Topic :: Utilities',
62 'License :: OSI Approved :: Apache Software License',
63 ],
64 )
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
SOURCE_DIR = os.path.join(ROOT_DIR)
requirements = [
- 'requests >= 2.5.2, < 2.11',
+ 'requests >= 2.5.2',
'six >= 1.4.0',
'websocket-client >= 0.32.0',
'docker-pycreds >= 0.2.1'
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n SOURCE_DIR = os.path.join(ROOT_DIR)\n \n requirements = [\n- 'requests >= 2.5.2, < 2.11',\n+ 'requests >= 2.5.2',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n", "issue": "Issue with requests dependency\nI found that commit 95d9306d2a1fd22dffb12a0548abf2d2f744ed9d excludes requests 2.11 for a bug that is fixed now on requests 2.11.1. And that's giving me a version conflict with another of the modules on my project:\n\n```\npkg_resources.ContextualVersionConflict: (requests 2.11.1 (..............), Requirement.parse('requests<2.11,>=2.5.2'), {'docker-py'})\n```\n\nCan we allow requests 2.11.1 ?\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2, < 2.11',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',\n 'docker.types',\n ],\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',\n 'docker.types',\n ],\n install_requires=requirements,\n 
tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}]}
| 993 | 112 |
gh_patches_debug_63551
|
rasdani/github-patches
|
git_diff
|
falconry__falcon-602
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hoist HTTPStatus into falcon top-level namespace
I.e., add an import line to `falcon/__init__.py`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `falcon/__init__.py`
Content:
```
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 HTTP_METHODS = (
16 'CONNECT',
17 'DELETE',
18 'GET',
19 'HEAD',
20 'OPTIONS',
21 'PATCH',
22 'POST',
23 'PUT',
24 'TRACE',
25 )
26
27 DEFAULT_MEDIA_TYPE = 'application/json; charset=utf-8'
28
29
30 # Hoist classes and functions into the falcon namespace
31 from falcon.version import __version__ # NOQA
32 from falcon.api import API, DEFAULT_MEDIA_TYPE # NOQA
33 from falcon.status_codes import * # NOQA
34 from falcon.errors import * # NOQA
35 from falcon.redirects import * # NOQA
36 from falcon.http_error import HTTPError # NOQA
37 from falcon.util import * # NOQA
38 from falcon.hooks import before, after # NOQA
39 from falcon.request import Request, RequestOptions # NOQA
40 from falcon.response import Response # NOQA
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/falcon/__init__.py b/falcon/__init__.py
--- a/falcon/__init__.py
+++ b/falcon/__init__.py
@@ -34,6 +34,7 @@
from falcon.errors import * # NOQA
from falcon.redirects import * # NOQA
from falcon.http_error import HTTPError # NOQA
+from falcon.http_status import HTTPStatus # NOQA
from falcon.util import * # NOQA
from falcon.hooks import before, after # NOQA
from falcon.request import Request, RequestOptions # NOQA
|
{"golden_diff": "diff --git a/falcon/__init__.py b/falcon/__init__.py\n--- a/falcon/__init__.py\n+++ b/falcon/__init__.py\n@@ -34,6 +34,7 @@\n from falcon.errors import * # NOQA\n from falcon.redirects import * # NOQA\n from falcon.http_error import HTTPError # NOQA\n+from falcon.http_status import HTTPStatus # NOQA\n from falcon.util import * # NOQA\n from falcon.hooks import before, after # NOQA\n from falcon.request import Request, RequestOptions # NOQA\n", "issue": "Hoist HTTPStatus into falcon top-level namespace\nI.e., add an import line to `falcon/__init__.py`\n\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nHTTP_METHODS = (\n 'CONNECT',\n 'DELETE',\n 'GET',\n 'HEAD',\n 'OPTIONS',\n 'PATCH',\n 'POST',\n 'PUT',\n 'TRACE',\n)\n\nDEFAULT_MEDIA_TYPE = 'application/json; charset=utf-8'\n\n\n# Hoist classes and functions into the falcon namespace\nfrom falcon.version import __version__ # NOQA\nfrom falcon.api import API, DEFAULT_MEDIA_TYPE # NOQA\nfrom falcon.status_codes import * # NOQA\nfrom falcon.errors import * # NOQA\nfrom falcon.redirects import * # NOQA\nfrom falcon.http_error import HTTPError # NOQA\nfrom falcon.util import * # NOQA\nfrom falcon.hooks import before, after # NOQA\nfrom falcon.request import Request, RequestOptions # NOQA\nfrom falcon.response import Response # NOQA\n", "path": "falcon/__init__.py"}], "after_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nHTTP_METHODS = (\n 'CONNECT',\n 'DELETE',\n 'GET',\n 'HEAD',\n 'OPTIONS',\n 'PATCH',\n 'POST',\n 'PUT',\n 'TRACE',\n)\n\nDEFAULT_MEDIA_TYPE = 'application/json; charset=utf-8'\n\n\n# Hoist classes and functions into the falcon namespace\nfrom falcon.version import __version__ # NOQA\nfrom falcon.api import API, DEFAULT_MEDIA_TYPE # NOQA\nfrom falcon.status_codes import * # NOQA\nfrom falcon.errors import * # NOQA\nfrom falcon.redirects import * # NOQA\nfrom falcon.http_error import HTTPError # NOQA\nfrom falcon.http_status import HTTPStatus # NOQA\nfrom falcon.util import * # NOQA\nfrom falcon.hooks import before, after # NOQA\nfrom falcon.request import Request, RequestOptions # NOQA\nfrom falcon.response import Response # NOQA\n", "path": "falcon/__init__.py"}]}
| 692 | 136 |
gh_patches_debug_18108
|
rasdani/github-patches
|
git_diff
|
projectmesa__mesa-1355
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
refactor: Remove dependency on jQuery
We should replace the `$(...)` with vanilla JS.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import re
3 import os
4 import urllib.request
5 import zipfile
6 import shutil
7
8 from setuptools import setup, find_packages
9 from codecs import open
10
11 requires = ["click", "cookiecutter", "networkx", "numpy", "pandas", "tornado", "tqdm"]
12
13 extras_require = {
14 "dev": ["black", "coverage", "flake8", "pytest >= 4.6", "pytest-cov", "sphinx"],
15 "docs": ["sphinx", "ipython"],
16 }
17
18 version = ""
19 with open("mesa/__init__.py") as fd:
20 version = re.search(
21 r'^__version__\s*=\s*[\'"]([^\'"]*)[\'"]', fd.read(), re.MULTILINE
22 ).group(1)
23
24 with open("README.rst", "rb", encoding="utf-8") as f:
25 readme = f.read()
26
27 # Ensure JS dependencies are downloaded
28 external_dir = "mesa/visualization/templates/external"
29 # We use a different path for single-file JS because some of them are loaded
30 # the same way as Mesa JS files
31 external_dir_single = "mesa/visualization/templates/js/external"
32 # First, ensure that the external directories exists
33 os.makedirs(external_dir, exist_ok=True)
34 os.makedirs(external_dir_single, exist_ok=True)
35
36
37 def ensure_JS_dep(dirname, url):
38 dst_path = os.path.join(external_dir, dirname)
39 if os.path.isdir(dst_path):
40 # Do nothing if already downloaded
41 return
42 print(f"Downloading the {dirname} dependency from the internet...")
43 zip_file = dirname + ".zip"
44 urllib.request.urlretrieve(url, zip_file)
45 with zipfile.ZipFile(zip_file, "r") as zip_ref:
46 zip_ref.extractall()
47 shutil.move(dirname, dst_path)
48 # Cleanup
49 os.remove(zip_file)
50 print("Done")
51
52
53 def ensure_JS_dep_single(url, out_name=None):
54 # Used for downloading e.g. jQuery single file
55 if out_name is None:
56 out_name = url.split("/")[-1]
57 dst_path = os.path.join(external_dir_single, out_name)
58 if os.path.isfile(dst_path):
59 return
60 print(f"Downloading the {out_name} dependency from the internet...")
61 urllib.request.urlretrieve(url, out_name)
62 shutil.move(out_name, dst_path)
63
64
65 # Important: when you update JS dependency version, make sure to also update the
66 # hardcoded included files and versions in: mesa/visualization/templates/modular_template.html
67
68 # Ensure Bootstrap
69 bootstrap_version = "5.1.3"
70 ensure_JS_dep(
71 f"bootstrap-{bootstrap_version}-dist",
72 f"https://github.com/twbs/bootstrap/releases/download/v{bootstrap_version}/bootstrap-{bootstrap_version}-dist.zip",
73 )
74
75 # Ensure Bootstrap Slider
76 bootstrap_slider_version = "11.0.2"
77 ensure_JS_dep(
78 f"bootstrap-slider-{bootstrap_slider_version}",
79 f"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip",
80 )
81
82 jquery_version = "2.2.4"
83 ensure_JS_dep_single(
84 f"https://code.jquery.com/jquery-{jquery_version}.min.js",
85 )
86 # Important: when updating the D3 version, make sure to update the constant
87 # D3_JS_FILE in mesa/visualization/ModularVisualization.py.
88 d3_version = "7.4.3"
89 ensure_JS_dep_single(
90 f"https://cdnjs.cloudflare.com/ajax/libs/d3/{d3_version}/d3.min.js",
91 out_name=f"d3-{d3_version}.min.js",
92 )
93 # Important: Make sure to update CHART_JS_FILE in
94 # mesa/visualization/ModularVisualization.py.
95 chartjs_version = "3.6.1"
96 ensure_JS_dep_single(
97 f"https://cdn.jsdelivr.net/npm/chart.js@{chartjs_version}/dist/chart.min.js",
98 out_name=f"chart-{chartjs_version}.min.js",
99 )
100
101
102 setup(
103 name="Mesa",
104 version=version,
105 description="Agent-based modeling (ABM) in Python 3+",
106 long_description=readme,
107 author="Project Mesa Team",
108 author_email="[email protected]",
109 url="https://github.com/projectmesa/mesa",
110 packages=find_packages(),
111 package_data={
112 "mesa": [
113 "visualization/templates/*.html",
114 "visualization/templates/css/*",
115 "visualization/templates/js/*",
116 "visualization/templates/external/**/*",
117 ],
118 "cookiecutter-mesa": ["cookiecutter-mesa/*"],
119 },
120 include_package_data=True,
121 install_requires=requires,
122 extras_require=extras_require,
123 keywords="agent based modeling model ABM simulation multi-agent",
124 license="Apache 2.0",
125 zip_safe=False,
126 classifiers=[
127 "Topic :: Scientific/Engineering",
128 "Topic :: Scientific/Engineering :: Artificial Life",
129 "Topic :: Scientific/Engineering :: Artificial Intelligence",
130 "Intended Audience :: Science/Research",
131 "Programming Language :: Python :: 3 :: Only",
132 "Programming Language :: Python :: 3.7",
133 "Programming Language :: Python :: 3.8",
134 "Programming Language :: Python :: 3.9",
135 "Programming Language :: Python :: 3.10",
136 "License :: OSI Approved :: Apache Software License",
137 "Operating System :: OS Independent",
138 "Development Status :: 3 - Alpha",
139 "Natural Language :: English",
140 ],
141 entry_points="""
142 [console_scripts]
143 mesa=mesa.main:cli
144 """,
145 python_requires=">=3.7",
146 )
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -51,7 +51,7 @@
def ensure_JS_dep_single(url, out_name=None):
- # Used for downloading e.g. jQuery single file
+ # Used for downloading e.g. D3.js single file
if out_name is None:
out_name = url.split("/")[-1]
dst_path = os.path.join(external_dir_single, out_name)
@@ -79,10 +79,6 @@
f"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip",
)
-jquery_version = "2.2.4"
-ensure_JS_dep_single(
- f"https://code.jquery.com/jquery-{jquery_version}.min.js",
-)
# Important: when updating the D3 version, make sure to update the constant
# D3_JS_FILE in mesa/visualization/ModularVisualization.py.
d3_version = "7.4.3"
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -51,7 +51,7 @@\n \n \n def ensure_JS_dep_single(url, out_name=None):\n- # Used for downloading e.g. jQuery single file\n+ # Used for downloading e.g. D3.js single file\n if out_name is None:\n out_name = url.split(\"/\")[-1]\n dst_path = os.path.join(external_dir_single, out_name)\n@@ -79,10 +79,6 @@\n f\"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip\",\n )\n \n-jquery_version = \"2.2.4\"\n-ensure_JS_dep_single(\n- f\"https://code.jquery.com/jquery-{jquery_version}.min.js\",\n-)\n # Important: when updating the D3 version, make sure to update the constant\n # D3_JS_FILE in mesa/visualization/ModularVisualization.py.\n d3_version = \"7.4.3\"\n", "issue": "refactor: Remove dependency on jQuery\nWe should replace the `$(...)` with vanilla JS.\n", "before_files": [{"content": "#!/usr/bin/env python\nimport re\nimport os\nimport urllib.request\nimport zipfile\nimport shutil\n\nfrom setuptools import setup, find_packages\nfrom codecs import open\n\nrequires = [\"click\", \"cookiecutter\", \"networkx\", \"numpy\", \"pandas\", \"tornado\", \"tqdm\"]\n\nextras_require = {\n \"dev\": [\"black\", \"coverage\", \"flake8\", \"pytest >= 4.6\", \"pytest-cov\", \"sphinx\"],\n \"docs\": [\"sphinx\", \"ipython\"],\n}\n\nversion = \"\"\nwith open(\"mesa/__init__.py\") as fd:\n version = re.search(\n r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]', fd.read(), re.MULTILINE\n ).group(1)\n\nwith open(\"README.rst\", \"rb\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n# Ensure JS dependencies are downloaded\nexternal_dir = \"mesa/visualization/templates/external\"\n# We use a different path for single-file JS because some of them are loaded\n# the same way as Mesa JS files\nexternal_dir_single = \"mesa/visualization/templates/js/external\"\n# First, ensure that the external directories exists\nos.makedirs(external_dir, exist_ok=True)\nos.makedirs(external_dir_single, exist_ok=True)\n\n\ndef ensure_JS_dep(dirname, url):\n dst_path = os.path.join(external_dir, dirname)\n if os.path.isdir(dst_path):\n # Do nothing if already downloaded\n return\n print(f\"Downloading the {dirname} dependency from the internet...\")\n zip_file = dirname + \".zip\"\n urllib.request.urlretrieve(url, zip_file)\n with zipfile.ZipFile(zip_file, \"r\") as zip_ref:\n zip_ref.extractall()\n shutil.move(dirname, dst_path)\n # Cleanup\n os.remove(zip_file)\n print(\"Done\")\n\n\ndef ensure_JS_dep_single(url, out_name=None):\n # Used for downloading e.g. 
jQuery single file\n if out_name is None:\n out_name = url.split(\"/\")[-1]\n dst_path = os.path.join(external_dir_single, out_name)\n if os.path.isfile(dst_path):\n return\n print(f\"Downloading the {out_name} dependency from the internet...\")\n urllib.request.urlretrieve(url, out_name)\n shutil.move(out_name, dst_path)\n\n\n# Important: when you update JS dependency version, make sure to also update the\n# hardcoded included files and versions in: mesa/visualization/templates/modular_template.html\n\n# Ensure Bootstrap\nbootstrap_version = \"5.1.3\"\nensure_JS_dep(\n f\"bootstrap-{bootstrap_version}-dist\",\n f\"https://github.com/twbs/bootstrap/releases/download/v{bootstrap_version}/bootstrap-{bootstrap_version}-dist.zip\",\n)\n\n# Ensure Bootstrap Slider\nbootstrap_slider_version = \"11.0.2\"\nensure_JS_dep(\n f\"bootstrap-slider-{bootstrap_slider_version}\",\n f\"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip\",\n)\n\njquery_version = \"2.2.4\"\nensure_JS_dep_single(\n f\"https://code.jquery.com/jquery-{jquery_version}.min.js\",\n)\n# Important: when updating the D3 version, make sure to update the constant\n# D3_JS_FILE in mesa/visualization/ModularVisualization.py.\nd3_version = \"7.4.3\"\nensure_JS_dep_single(\n f\"https://cdnjs.cloudflare.com/ajax/libs/d3/{d3_version}/d3.min.js\",\n out_name=f\"d3-{d3_version}.min.js\",\n)\n# Important: Make sure to update CHART_JS_FILE in\n# mesa/visualization/ModularVisualization.py.\nchartjs_version = \"3.6.1\"\nensure_JS_dep_single(\n f\"https://cdn.jsdelivr.net/npm/chart.js@{chartjs_version}/dist/chart.min.js\",\n out_name=f\"chart-{chartjs_version}.min.js\",\n)\n\n\nsetup(\n name=\"Mesa\",\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author=\"Project Mesa Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/projectmesa/mesa\",\n packages=find_packages(),\n package_data={\n \"mesa\": [\n \"visualization/templates/*.html\",\n \"visualization/templates/css/*\",\n \"visualization/templates/js/*\",\n \"visualization/templates/external/**/*\",\n ],\n \"cookiecutter-mesa\": [\"cookiecutter-mesa/*\"],\n },\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords=\"agent based modeling model ABM simulation multi-agent\",\n license=\"Apache 2.0\",\n zip_safe=False,\n classifiers=[\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Life\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Intended Audience :: Science/Research\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 3 - Alpha\",\n \"Natural Language :: English\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n mesa=mesa.main:cli\n \"\"\",\n python_requires=\">=3.7\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport re\nimport os\nimport urllib.request\nimport zipfile\nimport shutil\n\nfrom setuptools import setup, find_packages\nfrom codecs import open\n\nrequires = [\"click\", \"cookiecutter\", \"networkx\", \"numpy\", \"pandas\", \"tornado\", \"tqdm\"]\n\nextras_require = {\n \"dev\": [\"black\", \"coverage\", 
\"flake8\", \"pytest >= 4.6\", \"pytest-cov\", \"sphinx\"],\n \"docs\": [\"sphinx\", \"ipython\"],\n}\n\nversion = \"\"\nwith open(\"mesa/__init__.py\") as fd:\n version = re.search(\n r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]', fd.read(), re.MULTILINE\n ).group(1)\n\nwith open(\"README.rst\", \"rb\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n# Ensure JS dependencies are downloaded\nexternal_dir = \"mesa/visualization/templates/external\"\n# We use a different path for single-file JS because some of them are loaded\n# the same way as Mesa JS files\nexternal_dir_single = \"mesa/visualization/templates/js/external\"\n# First, ensure that the external directories exists\nos.makedirs(external_dir, exist_ok=True)\nos.makedirs(external_dir_single, exist_ok=True)\n\n\ndef ensure_JS_dep(dirname, url):\n dst_path = os.path.join(external_dir, dirname)\n if os.path.isdir(dst_path):\n # Do nothing if already downloaded\n return\n print(f\"Downloading the {dirname} dependency from the internet...\")\n zip_file = dirname + \".zip\"\n urllib.request.urlretrieve(url, zip_file)\n with zipfile.ZipFile(zip_file, \"r\") as zip_ref:\n zip_ref.extractall()\n shutil.move(dirname, dst_path)\n # Cleanup\n os.remove(zip_file)\n print(\"Done\")\n\n\ndef ensure_JS_dep_single(url, out_name=None):\n # Used for downloading e.g. D3.js single file\n if out_name is None:\n out_name = url.split(\"/\")[-1]\n dst_path = os.path.join(external_dir_single, out_name)\n if os.path.isfile(dst_path):\n return\n print(f\"Downloading the {out_name} dependency from the internet...\")\n urllib.request.urlretrieve(url, out_name)\n shutil.move(out_name, dst_path)\n\n\n# Important: when you update JS dependency version, make sure to also update the\n# hardcoded included files and versions in: mesa/visualization/templates/modular_template.html\n\n# Ensure Bootstrap\nbootstrap_version = \"5.1.3\"\nensure_JS_dep(\n f\"bootstrap-{bootstrap_version}-dist\",\n f\"https://github.com/twbs/bootstrap/releases/download/v{bootstrap_version}/bootstrap-{bootstrap_version}-dist.zip\",\n)\n\n# Ensure Bootstrap Slider\nbootstrap_slider_version = \"11.0.2\"\nensure_JS_dep(\n f\"bootstrap-slider-{bootstrap_slider_version}\",\n f\"https://github.com/seiyria/bootstrap-slider/archive/refs/tags/v{bootstrap_slider_version}.zip\",\n)\n\n# Important: when updating the D3 version, make sure to update the constant\n# D3_JS_FILE in mesa/visualization/ModularVisualization.py.\nd3_version = \"7.4.3\"\nensure_JS_dep_single(\n f\"https://cdnjs.cloudflare.com/ajax/libs/d3/{d3_version}/d3.min.js\",\n out_name=f\"d3-{d3_version}.min.js\",\n)\n# Important: Make sure to update CHART_JS_FILE in\n# mesa/visualization/ModularVisualization.py.\nchartjs_version = \"3.6.1\"\nensure_JS_dep_single(\n f\"https://cdn.jsdelivr.net/npm/chart.js@{chartjs_version}/dist/chart.min.js\",\n out_name=f\"chart-{chartjs_version}.min.js\",\n)\n\n\nsetup(\n name=\"Mesa\",\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author=\"Project Mesa Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/projectmesa/mesa\",\n packages=find_packages(),\n package_data={\n \"mesa\": [\n \"visualization/templates/*.html\",\n \"visualization/templates/css/*\",\n \"visualization/templates/js/*\",\n \"visualization/templates/external/**/*\",\n ],\n \"cookiecutter-mesa\": [\"cookiecutter-mesa/*\"],\n },\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords=\"agent based modeling 
model ABM simulation multi-agent\",\n license=\"Apache 2.0\",\n zip_safe=False,\n classifiers=[\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Life\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Intended Audience :: Science/Research\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 3 - Alpha\",\n \"Natural Language :: English\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n mesa=mesa.main:cli\n \"\"\",\n python_requires=\">=3.7\",\n)\n", "path": "setup.py"}]}
| 1,815 | 220 |
gh_patches_debug_22620
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-1582
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Build fails with IPython 3.0
Trying to use ipython notebooks with the current dev version of IPython (3.0.0) fails building with some warnings etc. because the `nbformat` interface has changed a little:
```
...WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:19: IPython.nbformat.current is deprecated.
- use IPython.nbformat for read/write/validate public API
- use IPython.nbformat.vX directly to composing notebooks of a particular version
...
... WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:75: reads_json is deprecated, use reads
...
AttributeError: cells
```
This is fairly easily fixed and I will send a PR shortly.
--- END ISSUE ---
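As a quick illustration of the API split the issue describes (this sketch is editorial and not part of the original report), a version-guarded notebook read along the lines of the patch shown further down this record might look roughly like this; `notebook.ipynb` is a placeholder path:

```python
# Rough sketch, assuming an IPython 2.x/3.x-era environment; mirrors the
# version guard used in the golden diff below.
import io

import IPython

if IPython.version_info[0] >= 3:  # nbformat moved in IPython 3.0
    from IPython import nbformat
    current_nbformat = nbformat.current_nbformat  # latest notebook format number
else:
    import IPython.nbformat.current as nbformat
    current_nbformat = "json"

with io.open("notebook.ipynb", "r", encoding="utf8") as in_file:
    nb_json = nbformat.read(in_file, current_nbformat)  # valid on both API versions
```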
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/compile/ipynb/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2013-2015 Damián Avila and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Implementation of compile_html based on nbconvert."""
28
29 from __future__ import unicode_literals, print_function
30 import io
31 import os
32
33 try:
34 from IPython.nbconvert.exporters import HTMLExporter
35 from IPython.nbformat import current as nbformat
36 from IPython.config import Config
37 flag = True
38 except ImportError:
39 flag = None
40
41 from nikola.plugin_categories import PageCompiler
42 from nikola.utils import makedirs, req_missing
43
44
45 class CompileIPynb(PageCompiler):
46 """Compile IPynb into HTML."""
47
48 name = "ipynb"
49 supports_onefile = False
50 demote_headers = True
51
52 def compile_html(self, source, dest, is_two_file=True):
53 if flag is None:
54 req_missing(['ipython>=1.1.0'], 'build this site (compile ipynb)')
55 makedirs(os.path.dirname(dest))
56 HTMLExporter.default_template = 'basic'
57 c = Config(self.site.config['IPYNB_CONFIG'])
58 exportHtml = HTMLExporter(config=c)
59 with io.open(dest, "w+", encoding="utf8") as out_file:
60 with io.open(source, "r", encoding="utf8") as in_file:
61 nb = in_file.read()
62 nb_json = nbformat.reads_json(nb)
63 (body, resources) = exportHtml.from_notebook_node(nb_json)
64 out_file.write(body)
65
66 def create_post(self, path, **kw):
67 content = kw.pop('content', None)
68 onefile = kw.pop('onefile', False)
69 # is_page is not needed to create the file
70 kw.pop('is_page', False)
71
72 makedirs(os.path.dirname(path))
73 if onefile:
74 raise Exception('The one-file format is not supported by this compiler.')
75 with io.open(path, "w+", encoding="utf8") as fd:
76 if not content.startswith("Write your"):
77 fd.write(content)
78 else:
79 fd.write("""{
80 "metadata": {
81 "name": ""
82 },
83 "nbformat": 3,
84 "nbformat_minor": 0,
85 "worksheets": [
86 {
87 "cells": [
88 {
89 "cell_type": "code",
90 "collapsed": false,
91 "input": [],
92 "language": "python",
93 "metadata": {},
94 "outputs": []
95 }
96 ],
97 "metadata": {}
98 }
99 ]
100 }""")
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nikola/plugins/compile/ipynb/__init__.py b/nikola/plugins/compile/ipynb/__init__.py
--- a/nikola/plugins/compile/ipynb/__init__.py
+++ b/nikola/plugins/compile/ipynb/__init__.py
@@ -31,8 +31,15 @@
import os
try:
+ import IPython
from IPython.nbconvert.exporters import HTMLExporter
- from IPython.nbformat import current as nbformat
+ if IPython.version_info[0] >= 3: # API changed with 3.0.0
+ from IPython import nbformat
+ current_nbformat = nbformat.current_nbformat
+ else:
+ import IPython.nbformat.current as nbformat
+ current_nbformat = 'json'
+
from IPython.config import Config
flag = True
except ImportError:
@@ -58,8 +65,7 @@
exportHtml = HTMLExporter(config=c)
with io.open(dest, "w+", encoding="utf8") as out_file:
with io.open(source, "r", encoding="utf8") as in_file:
- nb = in_file.read()
- nb_json = nbformat.reads_json(nb)
+ nb_json = nbformat.read(in_file, current_nbformat)
(body, resources) = exportHtml.from_notebook_node(nb_json)
out_file.write(body)
|
{"golden_diff": "diff --git a/nikola/plugins/compile/ipynb/__init__.py b/nikola/plugins/compile/ipynb/__init__.py\n--- a/nikola/plugins/compile/ipynb/__init__.py\n+++ b/nikola/plugins/compile/ipynb/__init__.py\n@@ -31,8 +31,15 @@\n import os\n \n try:\n+ import IPython\n from IPython.nbconvert.exporters import HTMLExporter\n- from IPython.nbformat import current as nbformat\n+ if IPython.version_info[0] >= 3: # API changed with 3.0.0\n+ from IPython import nbformat\n+ current_nbformat = nbformat.current_nbformat\n+ else:\n+ import IPython.nbformat.current as nbformat\n+ current_nbformat = 'json'\n+\n from IPython.config import Config\n flag = True\n except ImportError:\n@@ -58,8 +65,7 @@\n exportHtml = HTMLExporter(config=c)\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n- nb = in_file.read()\n- nb_json = nbformat.reads_json(nb)\n+ nb_json = nbformat.read(in_file, current_nbformat)\n (body, resources) = exportHtml.from_notebook_node(nb_json)\n out_file.write(body)\n", "issue": "Build fails with IPython 3.0\nTrying to use ipython notebooks with the current dev version of IPython (3.0.0) fails building with some warnings etc. because the `nbformat` interface has changed a little:\n\n```\n...WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:19: IPython.nbformat.current is deprecated.\n\n- use IPython.nbformat for read/write/validate public API\n- use IPython.nbformat.vX directly to composing notebooks of a particular version\n...\n... WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:75: reads_json is deprecated, use reads\n...\nAttributeError: cells\n```\n\nThis is fairly easily fixed and I will send a PR shortly.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2013-2015 Dami\u00e1n Avila and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Implementation of compile_html based on nbconvert.\"\"\"\n\nfrom __future__ import unicode_literals, print_function\nimport io\nimport os\n\ntry:\n from IPython.nbconvert.exporters import HTMLExporter\n from IPython.nbformat import current as nbformat\n from IPython.config import Config\n flag = True\nexcept ImportError:\n flag = None\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import makedirs, req_missing\n\n\nclass CompileIPynb(PageCompiler):\n \"\"\"Compile IPynb into HTML.\"\"\"\n\n name = \"ipynb\"\n supports_onefile = False\n demote_headers = True\n\n def compile_html(self, source, dest, is_two_file=True):\n if flag is None:\n req_missing(['ipython>=1.1.0'], 'build this site (compile ipynb)')\n makedirs(os.path.dirname(dest))\n HTMLExporter.default_template = 'basic'\n c = Config(self.site.config['IPYNB_CONFIG'])\n exportHtml = HTMLExporter(config=c)\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n nb = in_file.read()\n nb_json = nbformat.reads_json(nb)\n (body, resources) = exportHtml.from_notebook_node(nb_json)\n out_file.write(body)\n\n def create_post(self, path, **kw):\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not needed to create the file\n kw.pop('is_page', False)\n\n makedirs(os.path.dirname(path))\n if onefile:\n raise Exception('The one-file format is not supported by this compiler.')\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if not content.startswith(\"Write your\"):\n fd.write(content)\n else:\n fd.write(\"\"\"{\n \"metadata\": {\n \"name\": \"\"\n },\n \"nbformat\": 3,\n \"nbformat_minor\": 0,\n \"worksheets\": [\n {\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"collapsed\": false,\n \"input\": [],\n \"language\": \"python\",\n \"metadata\": {},\n \"outputs\": []\n }\n ],\n \"metadata\": {}\n }\n ]\n}\"\"\")\n", "path": "nikola/plugins/compile/ipynb/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2013-2015 Dami\u00e1n Avila and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Implementation of compile_html based on nbconvert.\"\"\"\n\nfrom __future__ import unicode_literals, print_function\nimport io\nimport os\n\ntry:\n import IPython\n from IPython.nbconvert.exporters import HTMLExporter\n if IPython.version_info[0] >= 3: # API changed with 3.0.0\n from IPython import nbformat\n current_nbformat = nbformat.current_nbformat\n else:\n import IPython.nbformat.current as nbformat\n current_nbformat = 'json'\n\n from IPython.config import Config\n flag = True\nexcept ImportError:\n flag = None\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import makedirs, req_missing\n\n\nclass CompileIPynb(PageCompiler):\n \"\"\"Compile IPynb into HTML.\"\"\"\n\n name = \"ipynb\"\n supports_onefile = False\n demote_headers = True\n\n def compile_html(self, source, dest, is_two_file=True):\n if flag is None:\n req_missing(['ipython>=1.1.0'], 'build this site (compile ipynb)')\n makedirs(os.path.dirname(dest))\n HTMLExporter.default_template = 'basic'\n c = Config(self.site.config['IPYNB_CONFIG'])\n exportHtml = HTMLExporter(config=c)\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n nb_json = nbformat.read(in_file, current_nbformat)\n (body, resources) = exportHtml.from_notebook_node(nb_json)\n out_file.write(body)\n\n def create_post(self, path, **kw):\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not needed to create the file\n kw.pop('is_page', False)\n\n makedirs(os.path.dirname(path))\n if onefile:\n raise Exception('The one-file format is not supported by this compiler.')\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if not content.startswith(\"Write your\"):\n fd.write(content)\n else:\n fd.write(\"\"\"{\n \"metadata\": {\n \"name\": \"\"\n },\n \"nbformat\": 3,\n \"nbformat_minor\": 0,\n \"worksheets\": [\n {\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"collapsed\": false,\n \"input\": [],\n \"language\": \"python\",\n \"metadata\": {},\n \"outputs\": []\n }\n ],\n \"metadata\": {}\n }\n ]\n}\"\"\")\n", "path": "nikola/plugins/compile/ipynb/__init__.py"}]}
| 1,405 | 316 |
gh_patches_debug_10289
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-5661
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Include crawl date in data
I'm looking at an old output directory, trying to work out which release it is.
I think we could add the crawl time and/or build id to the dataset attributes easily.
I think @rjw62 asked for this before. Which I promptly forgot. Sorry.
I'll look at this later or Monday.
--- END ISSUE ---
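A minimal sketch of what the request amounts to (editorial addition, not part of the issue; it mirrors the patch shown below): attach an ISO 8601 crawl timestamp to the exported dataset attributes.

```python
# Hypothetical helper; the key name follows the "spider:" convention used in
# the patch below.
import datetime


def with_collection_time(dataset_attributes: dict) -> dict:
    # ISO 8601 keeps the value unambiguous and easy to sort across old output dirs.
    dataset_attributes["spider:collection_time"] = datetime.datetime.now().isoformat()
    return dataset_attributes
```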
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/exporters/geojson.py`
Content:
```
1 import base64
2 import hashlib
3 import io
4 import json
5 import logging
6 import uuid
7
8 from scrapy.exporters import JsonItemExporter
9 from scrapy.utils.misc import walk_modules
10 from scrapy.utils.python import to_bytes
11 from scrapy.utils.spider import iter_spider_classes
12
13 from locations.settings import SPIDER_MODULES
14
15 mapping = (
16 ("addr_full", "addr:full"),
17 ("housenumber", "addr:housenumber"),
18 ("street", "addr:street"),
19 ("street_address", "addr:street_address"),
20 ("city", "addr:city"),
21 ("state", "addr:state"),
22 ("postcode", "addr:postcode"),
23 ("country", "addr:country"),
24 ("name", "name"),
25 ("phone", "phone"),
26 ("website", "website"),
27 ("twitter", "contact:twitter"),
28 ("facebook", "contact:facebook"),
29 ("email", "contact:email"),
30 ("opening_hours", "opening_hours"),
31 ("image", "image"),
32 ("brand", "brand"),
33 ("brand_wikidata", "brand:wikidata"),
34 ("located_in", "located_in"),
35 ("located_in_wikidata", "located_in:wikidata"),
36 ("nsi_id", "nsi_id"),
37 )
38
39
40 def item_to_properties(item):
41 props = {}
42
43 # Ref is required, unless `no_refs = True` is set in spider
44 if ref := item.get("ref"):
45 props["ref"] = str(ref)
46
47 # Add in the extra bits
48 if extras := item.get("extras"):
49 for key, value in extras.items():
50 if value:
51 # Only export populated values
52 props[key] = value
53
54 # Bring in the optional stuff
55 for map_from, map_to in mapping:
56 if item_value := item.get(map_from):
57 props[map_to] = item_value
58
59 return props
60
61
62 def compute_hash(item):
63 ref = str(item.get("ref") or uuid.uuid1()).encode("utf8")
64 sha1 = hashlib.sha1(ref)
65
66 if spider_name := item.get("extras", {}).get("@spider"):
67 sha1.update(spider_name.encode("utf8"))
68
69 return base64.urlsafe_b64encode(sha1.digest()).decode("utf8")
70
71
72 def find_spider_class(spider_name):
73 if not spider_name:
74 return None
75 for mod in SPIDER_MODULES:
76 for module in walk_modules(mod):
77 for spider_class in iter_spider_classes(module):
78 if spider_name == spider_class.name:
79 return spider_class
80 return None
81
82
83 def get_dataset_attributes(spider_name) -> {}:
84 spider_class = find_spider_class(spider_name)
85 dataset_attributes = getattr(spider_class, "dataset_attributes", {})
86 settings = getattr(spider_class, "custom_settings", {}) or {}
87 if not settings.get("ROBOTSTXT_OBEY", True):
88 # See https://github.com/alltheplaces/alltheplaces/issues/4537
89 dataset_attributes["spider:robots_txt"] = "ignored"
90 dataset_attributes["@spider"] = spider_name
91
92 return dataset_attributes
93
94
95 class GeoJsonExporter(JsonItemExporter):
96 def __init__(self, file, **kwargs):
97 super().__init__(file, **kwargs)
98 self.spider_name = None
99
100 def start_exporting(self):
101 pass
102
103 def export_item(self, item):
104 spider_name = item.get("extras", {}).get("@spider")
105 if self.first_item:
106 self.spider_name = spider_name
107 self.write_geojson_header()
108 if spider_name != self.spider_name:
109 # It really should not happen that a single exporter instance
110 # handles output from different spiders. If it does happen,
111 # we rather crash than emit GeoJSON with the wrong dataset
112 # properties, which may include legally relevant license tags.
113 raise ValueError(
114 f"harvest from multiple spiders ({spider_name, self.spider_name}) cannot be written to same GeoJSON file"
115 )
116
117 super().export_item(item)
118
119 def _get_serialized_fields(self, item, default_value=None, include_empty=None):
120 feature = []
121 feature.append(("type", "Feature"))
122 feature.append(("id", compute_hash(item)))
123 feature.append(("properties", item_to_properties(item)))
124
125 lat = item.get("lat")
126 lon = item.get("lon")
127 geometry = item.get("geometry")
128 if lat and lon and not geometry:
129 try:
130 geometry = {
131 "type": "Point",
132 "coordinates": [float(item["lon"]), float(item["lat"])],
133 }
134 except ValueError:
135 logging.warning("Couldn't convert lat (%s) and lon (%s) to float", lat, lon)
136 feature.append(("geometry", geometry))
137
138 return feature
139
140 def write_geojson_header(self):
141 header = io.StringIO()
142 header.write('{"type":"FeatureCollection","dataset_attributes":')
143 json.dump(
144 get_dataset_attributes(self.spider_name), header, ensure_ascii=False, separators=(",", ":"), sort_keys=True
145 )
146 header.write(',"features":[\n')
147 self.file.write(to_bytes(header.getvalue(), self.encoding))
148
149 def finish_exporting(self):
150 self.file.write(b"\n]}\n")
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/exporters/geojson.py b/locations/exporters/geojson.py
--- a/locations/exporters/geojson.py
+++ b/locations/exporters/geojson.py
@@ -1,4 +1,5 @@
import base64
+import datetime
import hashlib
import io
import json
@@ -88,6 +89,7 @@
# See https://github.com/alltheplaces/alltheplaces/issues/4537
dataset_attributes["spider:robots_txt"] = "ignored"
dataset_attributes["@spider"] = spider_name
+ dataset_attributes["spider:collection_time"] = datetime.datetime.now().isoformat()
return dataset_attributes
|
{"golden_diff": "diff --git a/locations/exporters/geojson.py b/locations/exporters/geojson.py\n--- a/locations/exporters/geojson.py\n+++ b/locations/exporters/geojson.py\n@@ -1,4 +1,5 @@\n import base64\n+import datetime\n import hashlib\n import io\n import json\n@@ -88,6 +89,7 @@\n # See https://github.com/alltheplaces/alltheplaces/issues/4537\n dataset_attributes[\"spider:robots_txt\"] = \"ignored\"\n dataset_attributes[\"@spider\"] = spider_name\n+ dataset_attributes[\"spider:collection_time\"] = datetime.datetime.now().isoformat()\n \n return dataset_attributes\n", "issue": "Include crawl date in data\nI'm looking at an old output directory, trying to workout which release it is.\r\n\r\nI think we could add the crawl time and/or build id to the dataset attributes easily.\r\n\r\nI think @rjw62 asked for this before. Which I promptly forgot. Sorry.\r\n\r\nI'll look at this later or Monday.\n", "before_files": [{"content": "import base64\nimport hashlib\nimport io\nimport json\nimport logging\nimport uuid\n\nfrom scrapy.exporters import JsonItemExporter\nfrom scrapy.utils.misc import walk_modules\nfrom scrapy.utils.python import to_bytes\nfrom scrapy.utils.spider import iter_spider_classes\n\nfrom locations.settings import SPIDER_MODULES\n\nmapping = (\n (\"addr_full\", \"addr:full\"),\n (\"housenumber\", \"addr:housenumber\"),\n (\"street\", \"addr:street\"),\n (\"street_address\", \"addr:street_address\"),\n (\"city\", \"addr:city\"),\n (\"state\", \"addr:state\"),\n (\"postcode\", \"addr:postcode\"),\n (\"country\", \"addr:country\"),\n (\"name\", \"name\"),\n (\"phone\", \"phone\"),\n (\"website\", \"website\"),\n (\"twitter\", \"contact:twitter\"),\n (\"facebook\", \"contact:facebook\"),\n (\"email\", \"contact:email\"),\n (\"opening_hours\", \"opening_hours\"),\n (\"image\", \"image\"),\n (\"brand\", \"brand\"),\n (\"brand_wikidata\", \"brand:wikidata\"),\n (\"located_in\", \"located_in\"),\n (\"located_in_wikidata\", \"located_in:wikidata\"),\n (\"nsi_id\", \"nsi_id\"),\n)\n\n\ndef item_to_properties(item):\n props = {}\n\n # Ref is required, unless `no_refs = True` is set in spider\n if ref := item.get(\"ref\"):\n props[\"ref\"] = str(ref)\n\n # Add in the extra bits\n if extras := item.get(\"extras\"):\n for key, value in extras.items():\n if value:\n # Only export populated values\n props[key] = value\n\n # Bring in the optional stuff\n for map_from, map_to in mapping:\n if item_value := item.get(map_from):\n props[map_to] = item_value\n\n return props\n\n\ndef compute_hash(item):\n ref = str(item.get(\"ref\") or uuid.uuid1()).encode(\"utf8\")\n sha1 = hashlib.sha1(ref)\n\n if spider_name := item.get(\"extras\", {}).get(\"@spider\"):\n sha1.update(spider_name.encode(\"utf8\"))\n\n return base64.urlsafe_b64encode(sha1.digest()).decode(\"utf8\")\n\n\ndef find_spider_class(spider_name):\n if not spider_name:\n return None\n for mod in SPIDER_MODULES:\n for module in walk_modules(mod):\n for spider_class in iter_spider_classes(module):\n if spider_name == spider_class.name:\n return spider_class\n return None\n\n\ndef get_dataset_attributes(spider_name) -> {}:\n spider_class = find_spider_class(spider_name)\n dataset_attributes = getattr(spider_class, \"dataset_attributes\", {})\n settings = getattr(spider_class, \"custom_settings\", {}) or {}\n if not settings.get(\"ROBOTSTXT_OBEY\", True):\n # See https://github.com/alltheplaces/alltheplaces/issues/4537\n dataset_attributes[\"spider:robots_txt\"] = \"ignored\"\n dataset_attributes[\"@spider\"] = spider_name\n\n return 
dataset_attributes\n\n\nclass GeoJsonExporter(JsonItemExporter):\n def __init__(self, file, **kwargs):\n super().__init__(file, **kwargs)\n self.spider_name = None\n\n def start_exporting(self):\n pass\n\n def export_item(self, item):\n spider_name = item.get(\"extras\", {}).get(\"@spider\")\n if self.first_item:\n self.spider_name = spider_name\n self.write_geojson_header()\n if spider_name != self.spider_name:\n # It really should not happen that a single exporter instance\n # handles output from different spiders. If it does happen,\n # we rather crash than emit GeoJSON with the wrong dataset\n # properties, which may include legally relevant license tags.\n raise ValueError(\n f\"harvest from multiple spiders ({spider_name, self.spider_name}) cannot be written to same GeoJSON file\"\n )\n\n super().export_item(item)\n\n def _get_serialized_fields(self, item, default_value=None, include_empty=None):\n feature = []\n feature.append((\"type\", \"Feature\"))\n feature.append((\"id\", compute_hash(item)))\n feature.append((\"properties\", item_to_properties(item)))\n\n lat = item.get(\"lat\")\n lon = item.get(\"lon\")\n geometry = item.get(\"geometry\")\n if lat and lon and not geometry:\n try:\n geometry = {\n \"type\": \"Point\",\n \"coordinates\": [float(item[\"lon\"]), float(item[\"lat\"])],\n }\n except ValueError:\n logging.warning(\"Couldn't convert lat (%s) and lon (%s) to float\", lat, lon)\n feature.append((\"geometry\", geometry))\n\n return feature\n\n def write_geojson_header(self):\n header = io.StringIO()\n header.write('{\"type\":\"FeatureCollection\",\"dataset_attributes\":')\n json.dump(\n get_dataset_attributes(self.spider_name), header, ensure_ascii=False, separators=(\",\", \":\"), sort_keys=True\n )\n header.write(',\"features\":[\\n')\n self.file.write(to_bytes(header.getvalue(), self.encoding))\n\n def finish_exporting(self):\n self.file.write(b\"\\n]}\\n\")\n", "path": "locations/exporters/geojson.py"}], "after_files": [{"content": "import base64\nimport datetime\nimport hashlib\nimport io\nimport json\nimport logging\nimport uuid\n\nfrom scrapy.exporters import JsonItemExporter\nfrom scrapy.utils.misc import walk_modules\nfrom scrapy.utils.python import to_bytes\nfrom scrapy.utils.spider import iter_spider_classes\n\nfrom locations.settings import SPIDER_MODULES\n\nmapping = (\n (\"addr_full\", \"addr:full\"),\n (\"housenumber\", \"addr:housenumber\"),\n (\"street\", \"addr:street\"),\n (\"street_address\", \"addr:street_address\"),\n (\"city\", \"addr:city\"),\n (\"state\", \"addr:state\"),\n (\"postcode\", \"addr:postcode\"),\n (\"country\", \"addr:country\"),\n (\"name\", \"name\"),\n (\"phone\", \"phone\"),\n (\"website\", \"website\"),\n (\"twitter\", \"contact:twitter\"),\n (\"facebook\", \"contact:facebook\"),\n (\"email\", \"contact:email\"),\n (\"opening_hours\", \"opening_hours\"),\n (\"image\", \"image\"),\n (\"brand\", \"brand\"),\n (\"brand_wikidata\", \"brand:wikidata\"),\n (\"located_in\", \"located_in\"),\n (\"located_in_wikidata\", \"located_in:wikidata\"),\n (\"nsi_id\", \"nsi_id\"),\n)\n\n\ndef item_to_properties(item):\n props = {}\n\n # Ref is required, unless `no_refs = True` is set in spider\n if ref := item.get(\"ref\"):\n props[\"ref\"] = str(ref)\n\n # Add in the extra bits\n if extras := item.get(\"extras\"):\n for key, value in extras.items():\n if value:\n # Only export populated values\n props[key] = value\n\n # Bring in the optional stuff\n for map_from, map_to in mapping:\n if item_value := item.get(map_from):\n props[map_to] = 
item_value\n\n return props\n\n\ndef compute_hash(item):\n ref = str(item.get(\"ref\") or uuid.uuid1()).encode(\"utf8\")\n sha1 = hashlib.sha1(ref)\n\n if spider_name := item.get(\"extras\", {}).get(\"@spider\"):\n sha1.update(spider_name.encode(\"utf8\"))\n\n return base64.urlsafe_b64encode(sha1.digest()).decode(\"utf8\")\n\n\ndef find_spider_class(spider_name):\n if not spider_name:\n return None\n for mod in SPIDER_MODULES:\n for module in walk_modules(mod):\n for spider_class in iter_spider_classes(module):\n if spider_name == spider_class.name:\n return spider_class\n return None\n\n\ndef get_dataset_attributes(spider_name) -> {}:\n spider_class = find_spider_class(spider_name)\n dataset_attributes = getattr(spider_class, \"dataset_attributes\", {})\n settings = getattr(spider_class, \"custom_settings\", {}) or {}\n if not settings.get(\"ROBOTSTXT_OBEY\", True):\n # See https://github.com/alltheplaces/alltheplaces/issues/4537\n dataset_attributes[\"spider:robots_txt\"] = \"ignored\"\n dataset_attributes[\"@spider\"] = spider_name\n dataset_attributes[\"spider:collection_time\"] = datetime.datetime.now().isoformat()\n\n return dataset_attributes\n\n\nclass GeoJsonExporter(JsonItemExporter):\n def __init__(self, file, **kwargs):\n super().__init__(file, **kwargs)\n self.spider_name = None\n\n def start_exporting(self):\n pass\n\n def export_item(self, item):\n spider_name = item.get(\"extras\", {}).get(\"@spider\")\n if self.first_item:\n self.spider_name = spider_name\n self.write_geojson_header()\n if spider_name != self.spider_name:\n # It really should not happen that a single exporter instance\n # handles output from different spiders. If it does happen,\n # we rather crash than emit GeoJSON with the wrong dataset\n # properties, which may include legally relevant license tags.\n raise ValueError(\n f\"harvest from multiple spiders ({spider_name, self.spider_name}) cannot be written to same GeoJSON file\"\n )\n\n super().export_item(item)\n\n def _get_serialized_fields(self, item, default_value=None, include_empty=None):\n feature = []\n feature.append((\"type\", \"Feature\"))\n feature.append((\"id\", compute_hash(item)))\n feature.append((\"properties\", item_to_properties(item)))\n\n lat = item.get(\"lat\")\n lon = item.get(\"lon\")\n geometry = item.get(\"geometry\")\n if lat and lon and not geometry:\n try:\n geometry = {\n \"type\": \"Point\",\n \"coordinates\": [float(item[\"lon\"]), float(item[\"lat\"])],\n }\n except ValueError:\n logging.warning(\"Couldn't convert lat (%s) and lon (%s) to float\", lat, lon)\n feature.append((\"geometry\", geometry))\n\n return feature\n\n def write_geojson_header(self):\n header = io.StringIO()\n header.write('{\"type\":\"FeatureCollection\",\"dataset_attributes\":')\n json.dump(\n get_dataset_attributes(self.spider_name), header, ensure_ascii=False, separators=(\",\", \":\"), sort_keys=True\n )\n header.write(',\"features\":[\\n')\n self.file.write(to_bytes(header.getvalue(), self.encoding))\n\n def finish_exporting(self):\n self.file.write(b\"\\n]}\\n\")\n", "path": "locations/exporters/geojson.py"}]}
| 1,818 | 154 |
gh_patches_debug_11428
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-11825
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Unable to update Warehouse address
### What are you trying to achieve?
I'm trying to update the warehouse update, with the country set to "UK", according to addressValidationRules query, the required fields are
```
"streetAddress1",
"city",
"postalCode"
```
### Steps to reproduce the problem
1. In shipping zone update/creating a new on select country UK
2. Fill all fields with the necessary information
3. Try to save changes
### What did you expect to happen?
Being able to update the warehouse address properly.
### Logs
Api responds with error -> Error code REQUIRED on field countryAreaAPI
### Environment
Saleor version: 3.10
--- END ISSUE ---
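A rough sketch of the failure mode suggested by the patch below (editorial note, not taken from the repository): when an existing address instance is passed, its stored country overrides the country submitted in the form data, so validation runs against the wrong country's required fields.

```python
# Hypothetical illustration only; names are placeholders.
def pick_form_country(submitted_country_code, instance=None):
    if instance is not None:
        return instance.country.code  # stale country wins over the submitted one
    return submitted_country_code
```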
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/account/forms.py`
Content:
```
1 from phonenumbers.phonenumberutil import country_code_for_region
2
3 from .i18n import AddressMetaForm, get_address_form_class
4
5
6 def get_address_form(
7 data, country_code, initial=None, instance=None, enable_normalization=True, **kwargs
8 ):
9 country_form = AddressMetaForm(data, initial=initial)
10 if country_form.is_valid():
11 country_code = country_form.cleaned_data["country"]
12
13 if initial is None and country_code:
14 initial = {}
15 if country_code:
16 initial["phone"] = "+{}".format(country_code_for_region(country_code))
17
18 address_form_class = get_address_form_class(country_code)
19
20 if instance is not None:
21 address_form_class = get_address_form_class(instance.country.code)
22 address_form = address_form_class(
23 data, instance=instance, enable_normalization=enable_normalization, **kwargs
24 )
25 else:
26 initial_address = initial
27 address_form = address_form_class(
28 data or None,
29 initial=initial_address,
30 enable_normalization=enable_normalization,
31 **kwargs,
32 )
33
34 if hasattr(address_form.fields["country_area"], "choices"):
35 choices = address_form.fields["country_area"].choices
36 choices = [(choice[1], choice[1]) for choice in choices]
37 address_form.fields["country_area"].choices = choices
38 return address_form
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/account/forms.py b/saleor/account/forms.py
--- a/saleor/account/forms.py
+++ b/saleor/account/forms.py
@@ -14,11 +14,9 @@
initial = {}
if country_code:
initial["phone"] = "+{}".format(country_code_for_region(country_code))
-
address_form_class = get_address_form_class(country_code)
if instance is not None:
- address_form_class = get_address_form_class(instance.country.code)
address_form = address_form_class(
data, instance=instance, enable_normalization=enable_normalization, **kwargs
)
|
{"golden_diff": "diff --git a/saleor/account/forms.py b/saleor/account/forms.py\n--- a/saleor/account/forms.py\n+++ b/saleor/account/forms.py\n@@ -14,11 +14,9 @@\n initial = {}\n if country_code:\n initial[\"phone\"] = \"+{}\".format(country_code_for_region(country_code))\n-\n address_form_class = get_address_form_class(country_code)\n \n if instance is not None:\n- address_form_class = get_address_form_class(instance.country.code)\n address_form = address_form_class(\n data, instance=instance, enable_normalization=enable_normalization, **kwargs\n )\n", "issue": "Bug: Unable to update Warehouse address\n### What are you trying to achieve?\n\nI'm trying to update the warehouse update, with the country set to \"UK\", according to addressValidationRules query, the required fields are \r\n```\r\nstreetAddress1\",\r\n\"city\",\r\n\"postalCode\"\r\n```\n\n### Steps to reproduce the problem\n\n1. In shipping zone update/creating a new on select country UK\r\n2. Fill all fields with the necessary information\r\n3. Try to save changes\n\n### What did you expect to happen?\n\nBeing able to update the warehouse address properly.\n\n### Logs\n\nApi responds with error -> Error code REQUIRED on field countryAreaAPI\n\n### Environment\n\nSaleor version: 3.10\r\n\n", "before_files": [{"content": "from phonenumbers.phonenumberutil import country_code_for_region\n\nfrom .i18n import AddressMetaForm, get_address_form_class\n\n\ndef get_address_form(\n data, country_code, initial=None, instance=None, enable_normalization=True, **kwargs\n):\n country_form = AddressMetaForm(data, initial=initial)\n if country_form.is_valid():\n country_code = country_form.cleaned_data[\"country\"]\n\n if initial is None and country_code:\n initial = {}\n if country_code:\n initial[\"phone\"] = \"+{}\".format(country_code_for_region(country_code))\n\n address_form_class = get_address_form_class(country_code)\n\n if instance is not None:\n address_form_class = get_address_form_class(instance.country.code)\n address_form = address_form_class(\n data, instance=instance, enable_normalization=enable_normalization, **kwargs\n )\n else:\n initial_address = initial\n address_form = address_form_class(\n data or None,\n initial=initial_address,\n enable_normalization=enable_normalization,\n **kwargs,\n )\n\n if hasattr(address_form.fields[\"country_area\"], \"choices\"):\n choices = address_form.fields[\"country_area\"].choices\n choices = [(choice[1], choice[1]) for choice in choices]\n address_form.fields[\"country_area\"].choices = choices\n return address_form\n", "path": "saleor/account/forms.py"}], "after_files": [{"content": "from phonenumbers.phonenumberutil import country_code_for_region\n\nfrom .i18n import AddressMetaForm, get_address_form_class\n\n\ndef get_address_form(\n data, country_code, initial=None, instance=None, enable_normalization=True, **kwargs\n):\n country_form = AddressMetaForm(data, initial=initial)\n if country_form.is_valid():\n country_code = country_form.cleaned_data[\"country\"]\n\n if initial is None and country_code:\n initial = {}\n if country_code:\n initial[\"phone\"] = \"+{}\".format(country_code_for_region(country_code))\n address_form_class = get_address_form_class(country_code)\n\n if instance is not None:\n address_form = address_form_class(\n data, instance=instance, enable_normalization=enable_normalization, **kwargs\n )\n else:\n initial_address = initial\n address_form = address_form_class(\n data or None,\n initial=initial_address,\n enable_normalization=enable_normalization,\n **kwargs,\n 
)\n\n if hasattr(address_form.fields[\"country_area\"], \"choices\"):\n choices = address_form.fields[\"country_area\"].choices\n choices = [(choice[1], choice[1]) for choice in choices]\n address_form.fields[\"country_area\"].choices = choices\n return address_form\n", "path": "saleor/account/forms.py"}]}
| 764 | 137 |
gh_patches_debug_19322
|
rasdani/github-patches
|
git_diff
|
psf__black-3282
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support formatting Jupyter Notebooks in GitHub Actions
**Is your feature request related to a problem? Please describe.**
I'm trying to set up a GitHub Action that runs Black on a project that includes *.py and *.ipynb files, but the default action does not include the Jupyter extra. I followed the integration described in [this piece of documentation](https://black.readthedocs.io/en/stable/integrations/github_actions.html) but the option to include the Jupyter extra (`black[jupyter]`) is not available.
**Describe the solution you'd like**
If the action included an argument to include the Jupyter extra, the GitHub Action would work as expected (when using `pip install black[jupyter]` locally).
**Describe alternatives you've considered**
I considered a custom GitHub Action and installing Black manually, but found out that modifying part of the action available in this repository is cleaner and would bring support to users with a similar need without affecting those that already use the GitHub Action.
**Additional context**
I was trying different things out and arrived at a solution that works as expected and can be included in this project without affecting users that already use the GitHub Action. **Add a new option to the GitHub Action to enable the Jupyter extra dependency**. I think that a boolean value might do the trick and using `false` as default maintains the current behavior.
``` diff
diff --git a/action.yml b/action.yml
index cfa6ef9..ed6c32e 100644
--- a/action.yml
+++ b/action.yml
@@ -8,6 +8,10 @@ inputs:
'--check --diff'"
required: false
default: "--check --diff"
+ jupyter:
+ description: "Include the required extra dependencies to format Jupyter Notebooks."
+ required: false
+ default: false
src:
description: "Source to run Black. Default: '.'"
required: false
@@ -38,6 +42,7 @@ runs:
# TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.
INPUT_OPTIONS: ${{ inputs.options }}
INPUT_SRC: ${{ inputs.src }}
+ INPUT_JUPYTER: ${{ inputs.jupyter }}
INPUT_BLACK_ARGS: ${{ inputs.black_args }}
INPUT_VERSION: ${{ inputs.version }}
pythonioencoding: utf-8
```
In this file, if the flag is enabled (if the `INPUT_JUPYTER` envar has a true value) then the `jupyter` extra is included in the installation step. Colorama is already included by default.
```diff
diff --git a/action/main.py b/action/main.py
index cd920f5..fbf6e73 100644
--- a/action/main.py
+++ b/action/main.py
@@ -10,11 +10,16 @@ ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
+JUPYTER = os.getenv("INPUT_JUPYTER")
VERSION = os.getenv("INPUT_VERSION", default="")
run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
-req = "black[colorama]"
+
+if JUPYTER:
+ req = "black[colorama,jupyter]"
+else:
+ req = "black[colorama]"
if VERSION:
req += f"=={VERSION}"
pip_proc = run(
```
The only difference would be visible in case I want to use the Jupyter extra, which can be enabled by passing the value explicitly:
```diff
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: psf/black@stable
+ jupyter: true
options: "--check --diff --verbose"
```
I forked this project to test the GitHub Action and it does work as expected (https://github.com/aaossa/black/commit/7af4287355003cd44e0febd8fe88e92f205db324). If you agree with this feature request, I can submit a PR with these changes and update the relevant documentation 👌
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `action/main.py`
Content:
```
1 import os
2 import shlex
3 import sys
4 from pathlib import Path
5 from subprocess import PIPE, STDOUT, run
6
7 ACTION_PATH = Path(os.environ["GITHUB_ACTION_PATH"])
8 ENV_PATH = ACTION_PATH / ".black-env"
9 ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
10 OPTIONS = os.getenv("INPUT_OPTIONS", default="")
11 SRC = os.getenv("INPUT_SRC", default="")
12 BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
13 VERSION = os.getenv("INPUT_VERSION", default="")
14
15 run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
16
17 version_specifier = VERSION
18 if VERSION and VERSION[0] in "0123456789":
19 version_specifier = f"=={VERSION}"
20 req = f"black[colorama]{version_specifier}"
21 pip_proc = run(
22 [str(ENV_BIN / "python"), "-m", "pip", "install", req],
23 stdout=PIPE,
24 stderr=STDOUT,
25 encoding="utf-8",
26 )
27 if pip_proc.returncode:
28 print(pip_proc.stdout)
29 print("::error::Failed to install Black.", flush=True)
30 sys.exit(pip_proc.returncode)
31
32
33 base_cmd = [str(ENV_BIN / "black")]
34 if BLACK_ARGS:
35 # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.
36 proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])
37 else:
38 proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])
39
40 sys.exit(proc.returncode)
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/action/main.py b/action/main.py
--- a/action/main.py
+++ b/action/main.py
@@ -9,6 +9,7 @@
ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
+JUPYTER = os.getenv("INPUT_JUPYTER") == "true"
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
VERSION = os.getenv("INPUT_VERSION", default="")
@@ -17,7 +18,11 @@
version_specifier = VERSION
if VERSION and VERSION[0] in "0123456789":
version_specifier = f"=={VERSION}"
-req = f"black[colorama]{version_specifier}"
+if JUPYTER:
+ extra_deps = "[colorama,jupyter]"
+else:
+ extra_deps = "[colorama]"
+req = f"black{extra_deps}{version_specifier}"
pip_proc = run(
[str(ENV_BIN / "python"), "-m", "pip", "install", req],
stdout=PIPE,
|
{"golden_diff": "diff --git a/action/main.py b/action/main.py\n--- a/action/main.py\n+++ b/action/main.py\n@@ -9,6 +9,7 @@\n ENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\n OPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\n SRC = os.getenv(\"INPUT_SRC\", default=\"\")\n+JUPYTER = os.getenv(\"INPUT_JUPYTER\") == \"true\"\n BLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\n VERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n \n@@ -17,7 +18,11 @@\n version_specifier = VERSION\n if VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\n-req = f\"black[colorama]{version_specifier}\"\n+if JUPYTER:\n+ extra_deps = \"[colorama,jupyter]\"\n+else:\n+ extra_deps = \"[colorama]\"\n+req = f\"black{extra_deps}{version_specifier}\"\n pip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n", "issue": "Support formatting Jupyter Notebooks in GitHub Actions\n**Is your feature request related to a problem? Please describe.**\r\n\r\nI'm trying to setup a GitHub Action that runs Black on a project that includes *.py and *.ipynb files, but the default action does not include the Jupyter extra. I followed the integration described in [this piece of documentation](https://black.readthedocs.io/en/stable/integrations/github_actions.html) but the option to include the Jupyter extra (`black[jupyter]`) is not available.\r\n\r\n**Describe the solution you'd like**\r\n\r\nIf the action included an argument to include the Jupyter extra, the GitHub Action would work in as expected (when using `pip install black[jupyter]` locally).\r\n\r\n**Describe alternatives you've considered**\r\n\r\nI considered a custom GitHub Action and installing Black manually, but found out that modifying part of the action available in this repository is cleaner and would bring support to users with a similar need without affecting those that already use the GitHub Action.\r\n\r\n**Additional context**\r\n\r\nI was trying different things out and arrived to a solution that works as expected and can be included in this project without affecting users that already use the GitHub Action. **Add a new option to the GitHub Action to enable the Jupyter extra dependency**. I think that a boolean value might do the trick and using `false` as default maintains the current behavior.\r\n\r\n``` diff\r\ndiff --git a/action.yml b/action.yml\r\nindex cfa6ef9..ed6c32e 100644\r\n--- a/action.yml\r\n+++ b/action.yml\r\n@@ -8,6 +8,10 @@ inputs:\r\n '--check --diff'\"\r\n required: false\r\n default: \"--check --diff\"\r\n+ jupyter:\r\n+ description: \"Include the required extra dependencies to format Jupyter Notebooks.\"\r\n+ required: false\r\n+ default: false\r\n src:\r\n description: \"Source to run Black. Default: '.'\"\r\n required: false\r\n@@ -38,6 +42,7 @@ runs:\r\n # TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.\r\n INPUT_OPTIONS: ${{ inputs.options }}\r\n INPUT_SRC: ${{ inputs.src }}\r\n+ INPUT_JUPYTER: ${{ inputs.jupyter }}\r\n INPUT_BLACK_ARGS: ${{ inputs.black_args }}\r\n INPUT_VERSION: ${{ inputs.version }}\r\n pythonioencoding: utf-8\r\n```\r\n\r\nIn this file, if the flag is enabled (if the `INPUT_JUPYTER` envar has a true value) then the `jupyter` extra is included in the installation step. Colorama is already included by default. 
\r\n\r\n```diff\r\ndiff --git a/action/main.py b/action/main.py\r\nindex cd920f5..fbf6e73 100644\r\n--- a/action/main.py\r\n+++ b/action/main.py\r\n@@ -10,11 +10,16 @@ ENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\r\n OPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\r\n SRC = os.getenv(\"INPUT_SRC\", default=\"\")\r\n BLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\r\n+JUPYTER = os.getenv(\"INPUT_JUPYTER\")\r\n VERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\r\n\r\n run([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\r\n\r\n-req = \"black[colorama]\"\r\n+\r\n+if JUPYTER:\r\n+ req = \"black[colorama,jupyter]\"\r\n+else:\r\n+ req = \"black[colorama]\"\r\n if VERSION:\r\n req += f\"=={VERSION}\"\r\n pip_proc = run(\r\n```\r\n\r\nThe only difference would be visible in case I want to use the Jupyter extra, which can be enabled by passing the value explicitly:\r\n\r\n```diff\r\njobs:\r\n lint:\r\n runs-on: ubuntu-latest\r\n steps:\r\n - uses: actions/checkout@v2\r\n - uses: psf/black@stable\r\n+ jupyter: true\r\n options: \"--check --diff --verbose\"\r\n\r\n```\r\n\r\nI forked this project to test the GitHub Action and it does work as expected (https://github.com/aaossa/black/commit/7af4287355003cd44e0febd8fe88e92f205db324). If you agree with this feature request, I can submit a PR with these changes and update the relevant documentation \ud83d\udc4c \r\n\r\n\n", "before_files": [{"content": "import os\nimport shlex\nimport sys\nfrom pathlib import Path\nfrom subprocess import PIPE, STDOUT, run\n\nACTION_PATH = Path(os.environ[\"GITHUB_ACTION_PATH\"])\nENV_PATH = ACTION_PATH / \".black-env\"\nENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\nOPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\nSRC = os.getenv(\"INPUT_SRC\", default=\"\")\nBLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\nVERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n\nrun([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\n\nversion_specifier = VERSION\nif VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\nreq = f\"black[colorama]{version_specifier}\"\npip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n)\nif pip_proc.returncode:\n print(pip_proc.stdout)\n print(\"::error::Failed to install Black.\", flush=True)\n sys.exit(pip_proc.returncode)\n\n\nbase_cmd = [str(ENV_BIN / \"black\")]\nif BLACK_ARGS:\n # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.\n proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])\nelse:\n proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])\n\nsys.exit(proc.returncode)\n", "path": "action/main.py"}], "after_files": [{"content": "import os\nimport shlex\nimport sys\nfrom pathlib import Path\nfrom subprocess import PIPE, STDOUT, run\n\nACTION_PATH = Path(os.environ[\"GITHUB_ACTION_PATH\"])\nENV_PATH = ACTION_PATH / \".black-env\"\nENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\nOPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\nSRC = os.getenv(\"INPUT_SRC\", default=\"\")\nJUPYTER = os.getenv(\"INPUT_JUPYTER\") == \"true\"\nBLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\nVERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n\nrun([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\n\nversion_specifier = VERSION\nif VERSION and VERSION[0] in \"0123456789\":\n 
version_specifier = f\"=={VERSION}\"\nif JUPYTER:\n extra_deps = \"[colorama,jupyter]\"\nelse:\n extra_deps = \"[colorama]\"\nreq = f\"black{extra_deps}{version_specifier}\"\npip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n)\nif pip_proc.returncode:\n print(pip_proc.stdout)\n print(\"::error::Failed to install Black.\", flush=True)\n sys.exit(pip_proc.returncode)\n\n\nbase_cmd = [str(ENV_BIN / \"black\")]\nif BLACK_ARGS:\n # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.\n proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])\nelse:\n proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])\n\nsys.exit(proc.returncode)\n", "path": "action/main.py"}]}
| 1,650 | 256 |
gh_patches_debug_57398
|
rasdani/github-patches
|
git_diff
|
translate__pootle-5797
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pootle_fs not expiring cache_keys
When a project uses pootle FS, stats are not updated. We have to manually call `pootle flush_cache --lru --django-cache` to update it manually.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_revision/receivers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.db.models.signals import post_save, pre_delete
10 from django.dispatch import receiver
11
12 from pootle.core.delegate import revision_updater
13 from pootle_app.models import Directory
14 from pootle_data.models import StoreData
15 from pootle_store.models import Store
16
17
18 @receiver(post_save, sender=StoreData)
19 def handle_storedata_save(**kwargs):
20 revision_updater.get(Store)(
21 context=kwargs["instance"].store).update(keys=["stats", "checks"])
22
23
24 @receiver(post_save, sender=Directory)
25 def handle_directory_save(**kwargs):
26 if kwargs.get("created"):
27 return
28 revision_updater.get(Directory)(
29 context=kwargs["instance"]).update(keys=["stats", "checks"])
30
31
32 @receiver(pre_delete, sender=Directory)
33 def handle_directory_delete(**kwargs):
34 revision_updater.get(Directory)(
35 context=kwargs["instance"].parent).update(keys=["stats", "checks"])
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py
--- a/pootle/apps/pootle_revision/receivers.py
+++ b/pootle/apps/pootle_revision/receivers.py
@@ -23,10 +23,12 @@
@receiver(post_save, sender=Directory)
def handle_directory_save(**kwargs):
- if kwargs.get("created"):
- return
+ context = (
+ kwargs["instance"].parent
+ if kwargs.get("created")
+ else kwargs["instance"])
revision_updater.get(Directory)(
- context=kwargs["instance"]).update(keys=["stats", "checks"])
+ context=context).update(keys=["stats", "checks"])
@receiver(pre_delete, sender=Directory)
|
{"golden_diff": "diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py\n--- a/pootle/apps/pootle_revision/receivers.py\n+++ b/pootle/apps/pootle_revision/receivers.py\n@@ -23,10 +23,12 @@\n \n @receiver(post_save, sender=Directory)\n def handle_directory_save(**kwargs):\n- if kwargs.get(\"created\"):\n- return\n+ context = (\n+ kwargs[\"instance\"].parent\n+ if kwargs.get(\"created\")\n+ else kwargs[\"instance\"])\n revision_updater.get(Directory)(\n- context=kwargs[\"instance\"]).update(keys=[\"stats\", \"checks\"])\n+ context=context).update(keys=[\"stats\", \"checks\"])\n \n \n @receiver(pre_delete, sender=Directory)\n", "issue": "pootle_fs not expiring cache_keys\nWhen a project uses pootle FS, stats are not updated. We have to manually call `pootle flush_cache --lru --django-cache` to update it manually.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save, pre_delete\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_app.models import Directory\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(post_save, sender=Directory)\ndef handle_directory_save(**kwargs):\n if kwargs.get(\"created\"):\n return\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"]).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=Directory)\ndef handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save, pre_delete\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_app.models import Directory\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(post_save, sender=Directory)\ndef handle_directory_save(**kwargs):\n context = (\n kwargs[\"instance\"].parent\n if kwargs.get(\"created\")\n else kwargs[\"instance\"])\n revision_updater.get(Directory)(\n context=context).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=Directory)\ndef handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}]}
| 643 | 178 |
gh_patches_debug_61634
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-484
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Metrics] add indexing synthetic sugar
Idea is to improve the current implementation of `Metric` and to be able to do the following:
```
# A custom class ConfusionMatrix
cm = ConfusionMatrix(num_classes=3, output_transform=output_gt_predicted_classes_bg)
# Instead of below lines
# from ignite.metrics import MetricsLambda
# IoU = MetricsLambda(lambda res: res[1:], (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag())))
# We could have:
IoU = (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag()))[1:]
mIoU = IoU.mean()
```
cc @zasdfgbnm
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/metric.py`
Content:
```
1 from abc import ABCMeta, abstractmethod
2 from ignite._six import with_metaclass
3 from ignite.engine import Events
4 import torch
5
6
7 class Metric(with_metaclass(ABCMeta, object)):
8 """
9 Base class for all Metrics.
10
11 Args:
12 output_transform (callable, optional): a callable that is used to transform the
13 :class:`~ignite.engine.Engine`'s `process_function`'s output into the
14 form expected by the metric. This can be useful if, for example, you have a multi-output model and
15 you want to compute the metric with respect to one of the outputs.
16
17 """
18
19 def __init__(self, output_transform=lambda x: x):
20 self._output_transform = output_transform
21 self.reset()
22
23 @abstractmethod
24 def reset(self):
25 """
26 Resets the metric to it's initial state.
27
28 This is called at the start of each epoch.
29 """
30 pass
31
32 @abstractmethod
33 def update(self, output):
34 """
35 Updates the metric's state using the passed batch output.
36
37 This is called once for each batch.
38
39 Args:
40 output: the is the output from the engine's process function.
41 """
42 pass
43
44 @abstractmethod
45 def compute(self):
46 """
47 Computes the metric based on it's accumulated state.
48
49 This is called at the end of each epoch.
50
51 Returns:
52 Any: the actual quantity of interest.
53
54 Raises:
55 NotComputableError: raised when the metric cannot be computed.
56 """
57 pass
58
59 def started(self, engine):
60 self.reset()
61
62 @torch.no_grad()
63 def iteration_completed(self, engine):
64 output = self._output_transform(engine.state.output)
65 self.update(output)
66
67 def completed(self, engine, name):
68 result = self.compute()
69 if torch.is_tensor(result) and len(result.shape) == 0:
70 result = result.item()
71 engine.state.metrics[name] = result
72
73 def attach(self, engine, name):
74 engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)
75 if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):
76 engine.add_event_handler(Events.EPOCH_STARTED, self.started)
77 if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
78 engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
79
80 def __add__(self, other):
81 from ignite.metrics import MetricsLambda
82 return MetricsLambda(lambda x, y: x + y, self, other)
83
84 def __radd__(self, other):
85 from ignite.metrics import MetricsLambda
86 return MetricsLambda(lambda x, y: x + y, other, self)
87
88 def __sub__(self, other):
89 from ignite.metrics import MetricsLambda
90 return MetricsLambda(lambda x, y: x - y, self, other)
91
92 def __rsub__(self, other):
93 from ignite.metrics import MetricsLambda
94 return MetricsLambda(lambda x, y: x - y, other, self)
95
96 def __mul__(self, other):
97 from ignite.metrics import MetricsLambda
98 return MetricsLambda(lambda x, y: x * y, self, other)
99
100 def __rmul__(self, other):
101 from ignite.metrics import MetricsLambda
102 return MetricsLambda(lambda x, y: x * y, other, self)
103
104 def __pow__(self, other):
105 from ignite.metrics import MetricsLambda
106 return MetricsLambda(lambda x, y: x ** y, self, other)
107
108 def __rpow__(self, other):
109 from ignite.metrics import MetricsLambda
110 return MetricsLambda(lambda x, y: x ** y, other, self)
111
112 def __mod__(self, other):
113 from ignite.metrics import MetricsLambda
114 return MetricsLambda(lambda x, y: x % y, self, other)
115
116 def __div__(self, other):
117 from ignite.metrics import MetricsLambda
118 return MetricsLambda(lambda x, y: x.__div__(y), self, other)
119
120 def __rdiv__(self, other):
121 from ignite.metrics import MetricsLambda
122 return MetricsLambda(lambda x, y: x.__div__(y), other, self)
123
124 def __truediv__(self, other):
125 from ignite.metrics import MetricsLambda
126 return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)
127
128 def __rtruediv__(self, other):
129 from ignite.metrics import MetricsLambda
130 return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)
131
132 def __floordiv__(self, other):
133 from ignite.metrics import MetricsLambda
134 return MetricsLambda(lambda x, y: x // y, self, other)
135
136 def __getattr__(self, attr):
137 from ignite.metrics import MetricsLambda
138
139 def fn(x, *args, **kwargs):
140 return getattr(x, attr)(*args, **kwargs)
141
142 def wrapper(*args, **kwargs):
143 return MetricsLambda(fn, self, *args, **kwargs)
144 return wrapper
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py
--- a/ignite/metrics/metric.py
+++ b/ignite/metrics/metric.py
@@ -142,3 +142,7 @@
def wrapper(*args, **kwargs):
return MetricsLambda(fn, self, *args, **kwargs)
return wrapper
+
+ def __getitem__(self, index):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x: x[index], self)
|
{"golden_diff": "diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py\n--- a/ignite/metrics/metric.py\n+++ b/ignite/metrics/metric.py\n@@ -142,3 +142,7 @@\n def wrapper(*args, **kwargs):\n return MetricsLambda(fn, self, *args, **kwargs)\n return wrapper\n+\n+ def __getitem__(self, index):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x: x[index], self)\n", "issue": "[Metrics] add indexing synthetic sugar\nIdea is to improve the current implementation of `Metric` and to be able to do the following:\r\n```\r\n# A custom class ConfusionMatrix\r\ncm = ConfusionMatrix(num_classes=3, output_transform=output_gt_predicted_classes_bg)\r\n\r\n# Instead of below lines\r\n# from ignite.metrics import MetricsLambda\r\n# IoU = MetricsLambda(lambda res: res[1:], (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag())))\r\n# We could have: \r\nIoU = (cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag()))[1:]\r\nmIoU = IoU.mean()\r\n```\r\n\r\ncc @zasdfgbnm \n", "before_files": [{"content": "from abc import ABCMeta, abstractmethod\nfrom ignite._six import with_metaclass\nfrom ignite.engine import Events\nimport torch\n\n\nclass Metric(with_metaclass(ABCMeta, object)):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x):\n self._output_transform = output_transform\n self.reset()\n\n @abstractmethod\n def reset(self):\n \"\"\"\n Resets the metric to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output):\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function.\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self):\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest.\n\n Raises:\n NotComputableError: raised when the metric cannot be computed.\n \"\"\"\n pass\n\n def started(self, engine):\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine):\n output = self._output_transform(engine.state.output)\n self.update(output)\n\n def completed(self, engine, name):\n result = self.compute()\n if torch.is_tensor(result) and len(result.shape) == 0:\n result = result.item()\n engine.state.metrics[name] = result\n\n def attach(self, engine, name):\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __radd__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, other, self)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda 
x, y: x - y, self, other)\n\n def __rsub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, other, self)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __rmul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, other, self)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __rpow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, other, self)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __rdiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __rtruediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x // y, self, other)\n\n def __getattr__(self, attr):\n from ignite.metrics import MetricsLambda\n\n def fn(x, *args, **kwargs):\n return getattr(x, attr)(*args, **kwargs)\n\n def wrapper(*args, **kwargs):\n return MetricsLambda(fn, self, *args, **kwargs)\n return wrapper\n", "path": "ignite/metrics/metric.py"}], "after_files": [{"content": "from abc import ABCMeta, abstractmethod\nfrom ignite._six import with_metaclass\nfrom ignite.engine import Events\nimport torch\n\n\nclass Metric(with_metaclass(ABCMeta, object)):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. 
This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x):\n self._output_transform = output_transform\n self.reset()\n\n @abstractmethod\n def reset(self):\n \"\"\"\n Resets the metric to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output):\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function.\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self):\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest.\n\n Raises:\n NotComputableError: raised when the metric cannot be computed.\n \"\"\"\n pass\n\n def started(self, engine):\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine):\n output = self._output_transform(engine.state.output)\n self.update(output)\n\n def completed(self, engine, name):\n result = self.compute()\n if torch.is_tensor(result) and len(result.shape) == 0:\n result = result.item()\n engine.state.metrics[name] = result\n\n def attach(self, engine, name):\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __radd__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, other, self)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, self, other)\n\n def __rsub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, other, self)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __rmul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, other, self)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __rpow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, other, self)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __rdiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __rtruediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n 
return MetricsLambda(lambda x, y: x // y, self, other)\n\n def __getattr__(self, attr):\n from ignite.metrics import MetricsLambda\n\n def fn(x, *args, **kwargs):\n return getattr(x, attr)(*args, **kwargs)\n\n def wrapper(*args, **kwargs):\n return MetricsLambda(fn, self, *args, **kwargs)\n return wrapper\n\n def __getitem__(self, index):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x: x[index], self)\n", "path": "ignite/metrics/metric.py"}]}
| 1,842 | 114 |
gh_patches_debug_27040
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-1005
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Imported ratings added as reviews
During a goodreads import, star ratings seem to be added as Reviews, rather than ReviewRatings
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/importers/importer.py`
Content:
```
1 """ handle reading a csv from an external service, defaults are from GoodReads """
2 import csv
3 import logging
4
5 from bookwyrm import models
6 from bookwyrm.models import ImportJob, ImportItem
7 from bookwyrm.tasks import app
8
9 logger = logging.getLogger(__name__)
10
11
12 class Importer:
13 """Generic class for csv data import from an outside service"""
14
15 service = "Unknown"
16 delimiter = ","
17 encoding = "UTF-8"
18 mandatory_fields = ["Title", "Author"]
19
20 def create_job(self, user, csv_file, include_reviews, privacy):
21 """check over a csv and creates a database entry for the job"""
22 job = ImportJob.objects.create(
23 user=user, include_reviews=include_reviews, privacy=privacy
24 )
25 for index, entry in enumerate(
26 list(csv.DictReader(csv_file, delimiter=self.delimiter))
27 ):
28 if not all(x in entry for x in self.mandatory_fields):
29 raise ValueError("Author and title must be in data.")
30 entry = self.parse_fields(entry)
31 self.save_item(job, index, entry)
32 return job
33
34 def save_item(self, job, index, data): # pylint: disable=no-self-use
35 """creates and saves an import item"""
36 ImportItem(job=job, index=index, data=data).save()
37
38 def parse_fields(self, entry):
39 """updates csv data with additional info"""
40 entry.update({"import_source": self.service})
41 return entry
42
43 def create_retry_job(self, user, original_job, items):
44 """retry items that didn't import"""
45 job = ImportJob.objects.create(
46 user=user,
47 include_reviews=original_job.include_reviews,
48 privacy=original_job.privacy,
49 retry=True,
50 )
51 for item in items:
52 self.save_item(job, item.index, item.data)
53 return job
54
55 def start_import(self, job):
56 """initalizes a csv import job"""
57 result = import_data.delay(self.service, job.id)
58 job.task_id = result.id
59 job.save()
60
61
62 @app.task
63 def import_data(source, job_id):
64 """does the actual lookup work in a celery task"""
65 job = ImportJob.objects.get(id=job_id)
66 try:
67 for item in job.items.all():
68 try:
69 item.resolve()
70 except Exception as e: # pylint: disable=broad-except
71 logger.exception(e)
72 item.fail_reason = "Error loading book"
73 item.save()
74 continue
75
76 if item.book:
77 item.save()
78
79 # shelves book and handles reviews
80 handle_imported_book(
81 source, job.user, item, job.include_reviews, job.privacy
82 )
83 else:
84 item.fail_reason = "Could not find a match for book"
85 item.save()
86 finally:
87 job.complete = True
88 job.save()
89
90
91 def handle_imported_book(source, user, item, include_reviews, privacy):
92 """process a csv and then post about it"""
93 if isinstance(item.book, models.Work):
94 item.book = item.book.default_edition
95 if not item.book:
96 return
97
98 existing_shelf = models.ShelfBook.objects.filter(book=item.book, user=user).exists()
99
100 # shelve the book if it hasn't been shelved already
101 if item.shelf and not existing_shelf:
102 desired_shelf = models.Shelf.objects.get(identifier=item.shelf, user=user)
103 models.ShelfBook.objects.create(book=item.book, shelf=desired_shelf, user=user)
104
105 for read in item.reads:
106 # check for an existing readthrough with the same dates
107 if models.ReadThrough.objects.filter(
108 user=user,
109 book=item.book,
110 start_date=read.start_date,
111 finish_date=read.finish_date,
112 ).exists():
113 continue
114 read.book = item.book
115 read.user = user
116 read.save()
117
118 if include_reviews and (item.rating or item.review):
119 review_title = (
120 "Review of {!r} on {!r}".format(
121 item.book.title,
122 source,
123 )
124 if item.review
125 else ""
126 )
127
128 # we don't know the publication date of the review,
129 # but "now" is a bad guess
130 published_date_guess = item.date_read or item.date_added
131 models.Review.objects.create(
132 user=user,
133 book=item.book,
134 name=review_title,
135 content=item.review,
136 rating=item.rating,
137 published_date=published_date_guess,
138 privacy=privacy,
139 )
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/importers/importer.py b/bookwyrm/importers/importer.py
--- a/bookwyrm/importers/importer.py
+++ b/bookwyrm/importers/importer.py
@@ -116,24 +116,33 @@
read.save()
if include_reviews and (item.rating or item.review):
- review_title = (
- "Review of {!r} on {!r}".format(
- item.book.title,
- source,
- )
- if item.review
- else ""
- )
-
# we don't know the publication date of the review,
# but "now" is a bad guess
published_date_guess = item.date_read or item.date_added
- models.Review.objects.create(
- user=user,
- book=item.book,
- name=review_title,
- content=item.review,
- rating=item.rating,
- published_date=published_date_guess,
- privacy=privacy,
- )
+ if item.review:
+ review_title = (
+ "Review of {!r} on {!r}".format(
+ item.book.title,
+ source,
+ )
+ if item.review
+ else ""
+ )
+ models.Review.objects.create(
+ user=user,
+ book=item.book,
+ name=review_title,
+ content=item.review,
+ rating=item.rating,
+ published_date=published_date_guess,
+ privacy=privacy,
+ )
+ else:
+ # just a rating
+ models.ReviewRating.objects.create(
+ user=user,
+ book=item.book,
+ rating=item.rating,
+ published_date=published_date_guess,
+ privacy=privacy,
+ )
|
{"golden_diff": "diff --git a/bookwyrm/importers/importer.py b/bookwyrm/importers/importer.py\n--- a/bookwyrm/importers/importer.py\n+++ b/bookwyrm/importers/importer.py\n@@ -116,24 +116,33 @@\n read.save()\n \n if include_reviews and (item.rating or item.review):\n- review_title = (\n- \"Review of {!r} on {!r}\".format(\n- item.book.title,\n- source,\n- )\n- if item.review\n- else \"\"\n- )\n-\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n- models.Review.objects.create(\n- user=user,\n- book=item.book,\n- name=review_title,\n- content=item.review,\n- rating=item.rating,\n- published_date=published_date_guess,\n- privacy=privacy,\n- )\n+ if item.review:\n+ review_title = (\n+ \"Review of {!r} on {!r}\".format(\n+ item.book.title,\n+ source,\n+ )\n+ if item.review\n+ else \"\"\n+ )\n+ models.Review.objects.create(\n+ user=user,\n+ book=item.book,\n+ name=review_title,\n+ content=item.review,\n+ rating=item.rating,\n+ published_date=published_date_guess,\n+ privacy=privacy,\n+ )\n+ else:\n+ # just a rating\n+ models.ReviewRating.objects.create(\n+ user=user,\n+ book=item.book,\n+ rating=item.rating,\n+ published_date=published_date_guess,\n+ privacy=privacy,\n+ )\n", "issue": "Imported ratings added as reviews\nDuring a goodreads import, star ratings seem to be added as Reviews, rather than ReviewRatings\n", "before_files": [{"content": "\"\"\" handle reading a csv from an external service, defaults are from GoodReads \"\"\"\nimport csv\nimport logging\n\nfrom bookwyrm import models\nfrom bookwyrm.models import ImportJob, ImportItem\nfrom bookwyrm.tasks import app\n\nlogger = logging.getLogger(__name__)\n\n\nclass Importer:\n \"\"\"Generic class for csv data import from an outside service\"\"\"\n\n service = \"Unknown\"\n delimiter = \",\"\n encoding = \"UTF-8\"\n mandatory_fields = [\"Title\", \"Author\"]\n\n def create_job(self, user, csv_file, include_reviews, privacy):\n \"\"\"check over a csv and creates a database entry for the job\"\"\"\n job = ImportJob.objects.create(\n user=user, include_reviews=include_reviews, privacy=privacy\n )\n for index, entry in enumerate(\n list(csv.DictReader(csv_file, delimiter=self.delimiter))\n ):\n if not all(x in entry for x in self.mandatory_fields):\n raise ValueError(\"Author and title must be in data.\")\n entry = self.parse_fields(entry)\n self.save_item(job, index, entry)\n return job\n\n def save_item(self, job, index, data): # pylint: disable=no-self-use\n \"\"\"creates and saves an import item\"\"\"\n ImportItem(job=job, index=index, data=data).save()\n\n def parse_fields(self, entry):\n \"\"\"updates csv data with additional info\"\"\"\n entry.update({\"import_source\": self.service})\n return entry\n\n def create_retry_job(self, user, original_job, items):\n \"\"\"retry items that didn't import\"\"\"\n job = ImportJob.objects.create(\n user=user,\n include_reviews=original_job.include_reviews,\n privacy=original_job.privacy,\n retry=True,\n )\n for item in items:\n self.save_item(job, item.index, item.data)\n return job\n\n def start_import(self, job):\n \"\"\"initalizes a csv import job\"\"\"\n result = import_data.delay(self.service, job.id)\n job.task_id = result.id\n job.save()\n\n\[email protected]\ndef import_data(source, job_id):\n \"\"\"does the actual lookup work in a celery task\"\"\"\n job = ImportJob.objects.get(id=job_id)\n try:\n for item in job.items.all():\n try:\n item.resolve()\n except Exception as e: # pylint: 
disable=broad-except\n logger.exception(e)\n item.fail_reason = \"Error loading book\"\n item.save()\n continue\n\n if item.book:\n item.save()\n\n # shelves book and handles reviews\n handle_imported_book(\n source, job.user, item, job.include_reviews, job.privacy\n )\n else:\n item.fail_reason = \"Could not find a match for book\"\n item.save()\n finally:\n job.complete = True\n job.save()\n\n\ndef handle_imported_book(source, user, item, include_reviews, privacy):\n \"\"\"process a csv and then post about it\"\"\"\n if isinstance(item.book, models.Work):\n item.book = item.book.default_edition\n if not item.book:\n return\n\n existing_shelf = models.ShelfBook.objects.filter(book=item.book, user=user).exists()\n\n # shelve the book if it hasn't been shelved already\n if item.shelf and not existing_shelf:\n desired_shelf = models.Shelf.objects.get(identifier=item.shelf, user=user)\n models.ShelfBook.objects.create(book=item.book, shelf=desired_shelf, user=user)\n\n for read in item.reads:\n # check for an existing readthrough with the same dates\n if models.ReadThrough.objects.filter(\n user=user,\n book=item.book,\n start_date=read.start_date,\n finish_date=read.finish_date,\n ).exists():\n continue\n read.book = item.book\n read.user = user\n read.save()\n\n if include_reviews and (item.rating or item.review):\n review_title = (\n \"Review of {!r} on {!r}\".format(\n item.book.title,\n source,\n )\n if item.review\n else \"\"\n )\n\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n models.Review.objects.create(\n user=user,\n book=item.book,\n name=review_title,\n content=item.review,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n", "path": "bookwyrm/importers/importer.py"}], "after_files": [{"content": "\"\"\" handle reading a csv from an external service, defaults are from GoodReads \"\"\"\nimport csv\nimport logging\n\nfrom bookwyrm import models\nfrom bookwyrm.models import ImportJob, ImportItem\nfrom bookwyrm.tasks import app\n\nlogger = logging.getLogger(__name__)\n\n\nclass Importer:\n \"\"\"Generic class for csv data import from an outside service\"\"\"\n\n service = \"Unknown\"\n delimiter = \",\"\n encoding = \"UTF-8\"\n mandatory_fields = [\"Title\", \"Author\"]\n\n def create_job(self, user, csv_file, include_reviews, privacy):\n \"\"\"check over a csv and creates a database entry for the job\"\"\"\n job = ImportJob.objects.create(\n user=user, include_reviews=include_reviews, privacy=privacy\n )\n for index, entry in enumerate(\n list(csv.DictReader(csv_file, delimiter=self.delimiter))\n ):\n if not all(x in entry for x in self.mandatory_fields):\n raise ValueError(\"Author and title must be in data.\")\n entry = self.parse_fields(entry)\n self.save_item(job, index, entry)\n return job\n\n def save_item(self, job, index, data): # pylint: disable=no-self-use\n \"\"\"creates and saves an import item\"\"\"\n ImportItem(job=job, index=index, data=data).save()\n\n def parse_fields(self, entry):\n \"\"\"updates csv data with additional info\"\"\"\n entry.update({\"import_source\": self.service})\n return entry\n\n def create_retry_job(self, user, original_job, items):\n \"\"\"retry items that didn't import\"\"\"\n job = ImportJob.objects.create(\n user=user,\n include_reviews=original_job.include_reviews,\n privacy=original_job.privacy,\n retry=True,\n )\n for item in items:\n self.save_item(job, item.index, item.data)\n return job\n\n def 
start_import(self, job):\n \"\"\"initalizes a csv import job\"\"\"\n result = import_data.delay(self.service, job.id)\n job.task_id = result.id\n job.save()\n\n\[email protected]\ndef import_data(source, job_id):\n \"\"\"does the actual lookup work in a celery task\"\"\"\n job = ImportJob.objects.get(id=job_id)\n try:\n for item in job.items.all():\n try:\n item.resolve()\n except Exception as e: # pylint: disable=broad-except\n logger.exception(e)\n item.fail_reason = \"Error loading book\"\n item.save()\n continue\n\n if item.book:\n item.save()\n\n # shelves book and handles reviews\n handle_imported_book(\n source, job.user, item, job.include_reviews, job.privacy\n )\n else:\n item.fail_reason = \"Could not find a match for book\"\n item.save()\n finally:\n job.complete = True\n job.save()\n\n\ndef handle_imported_book(source, user, item, include_reviews, privacy):\n \"\"\"process a csv and then post about it\"\"\"\n if isinstance(item.book, models.Work):\n item.book = item.book.default_edition\n if not item.book:\n return\n\n existing_shelf = models.ShelfBook.objects.filter(book=item.book, user=user).exists()\n\n # shelve the book if it hasn't been shelved already\n if item.shelf and not existing_shelf:\n desired_shelf = models.Shelf.objects.get(identifier=item.shelf, user=user)\n models.ShelfBook.objects.create(book=item.book, shelf=desired_shelf, user=user)\n\n for read in item.reads:\n # check for an existing readthrough with the same dates\n if models.ReadThrough.objects.filter(\n user=user,\n book=item.book,\n start_date=read.start_date,\n finish_date=read.finish_date,\n ).exists():\n continue\n read.book = item.book\n read.user = user\n read.save()\n\n if include_reviews and (item.rating or item.review):\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n if item.review:\n review_title = (\n \"Review of {!r} on {!r}\".format(\n item.book.title,\n source,\n )\n if item.review\n else \"\"\n )\n models.Review.objects.create(\n user=user,\n book=item.book,\n name=review_title,\n content=item.review,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n else:\n # just a rating\n models.ReviewRating.objects.create(\n user=user,\n book=item.book,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n", "path": "bookwyrm/importers/importer.py"}]}
| 1,566 | 380 |
gh_patches_debug_22233
|
rasdani/github-patches
|
git_diff
|
statsmodels__statsmodels-4999
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[MAINT/CLN] remove function explicitly marked as duplicate
In the function docstring:
`duplicate: Skipper added sm.tools.drop_missing`
<b>update</b> The relevant function is not used outside this module; nor is the other function in this module.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/tools/wrappers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Convenience Wrappers
3
4 Created on Sat Oct 30 14:56:35 2010
5
6 Author: josef-pktd
7 License: BSD
8 """
9
10 import numpy as np
11 import statsmodels.api as sm
12 from statsmodels import GLS, WLS, OLS
13
14 def remove_nanrows(y, x):
15 '''remove common rows in [y,x] that contain at least one nan
16
17 TODO: this should be made more flexible,
18 arbitrary number of arrays and 1d or 2d arrays
19
20 duplicate: Skipper added sm.tools.drop_missing
21
22 '''
23 mask = ~np.isnan(y)
24 mask *= ~(np.isnan(x).any(-1)) #* or &
25 y = y[mask]
26 x = x[mask]
27 return y, x
28
29
30 def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,
31 **kwds):
32 '''get linear model with extra options for entry
33
34 dispatches to regular model class and does not wrap the output
35
36 If several options are exclusive, for example sigma and weights, then the
37 chosen class depends on the implementation sequence.
38 '''
39
40 if filter_missing:
41 y, x = remove_nanrows(y, x)
42 #do the same for masked arrays
43
44 if add_const:
45 x = sm.add_constant(x, prepend=True)
46
47 if not sigma is None:
48 return GLS(y, x, sigma=sigma, **kwds)
49 elif not weights is None:
50 return WLS(y, x, weights=weights, **kwds)
51 else:
52 return OLS(y, x, **kwds)
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/statsmodels/tools/wrappers.py b/statsmodels/tools/wrappers.py
deleted file mode 100644
--- a/statsmodels/tools/wrappers.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Convenience Wrappers
-
-Created on Sat Oct 30 14:56:35 2010
-
-Author: josef-pktd
-License: BSD
-"""
-
-import numpy as np
-import statsmodels.api as sm
-from statsmodels import GLS, WLS, OLS
-
-def remove_nanrows(y, x):
- '''remove common rows in [y,x] that contain at least one nan
-
- TODO: this should be made more flexible,
- arbitrary number of arrays and 1d or 2d arrays
-
- duplicate: Skipper added sm.tools.drop_missing
-
- '''
- mask = ~np.isnan(y)
- mask *= ~(np.isnan(x).any(-1)) #* or &
- y = y[mask]
- x = x[mask]
- return y, x
-
-
-def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,
- **kwds):
- '''get linear model with extra options for entry
-
- dispatches to regular model class and does not wrap the output
-
- If several options are exclusive, for example sigma and weights, then the
- chosen class depends on the implementation sequence.
- '''
-
- if filter_missing:
- y, x = remove_nanrows(y, x)
- #do the same for masked arrays
-
- if add_const:
- x = sm.add_constant(x, prepend=True)
-
- if not sigma is None:
- return GLS(y, x, sigma=sigma, **kwds)
- elif not weights is None:
- return WLS(y, x, weights=weights, **kwds)
- else:
- return OLS(y, x, **kwds)
|
{"golden_diff": "diff --git a/statsmodels/tools/wrappers.py b/statsmodels/tools/wrappers.py\ndeleted file mode 100644\n--- a/statsmodels/tools/wrappers.py\n+++ /dev/null\n@@ -1,52 +0,0 @@\n-# -*- coding: utf-8 -*-\n-\"\"\"Convenience Wrappers\n-\n-Created on Sat Oct 30 14:56:35 2010\n-\n-Author: josef-pktd\n-License: BSD\n-\"\"\"\n-\n-import numpy as np\n-import statsmodels.api as sm\n-from statsmodels import GLS, WLS, OLS\n-\n-def remove_nanrows(y, x):\n- '''remove common rows in [y,x] that contain at least one nan\n-\n- TODO: this should be made more flexible,\n- arbitrary number of arrays and 1d or 2d arrays\n-\n- duplicate: Skipper added sm.tools.drop_missing\n-\n- '''\n- mask = ~np.isnan(y)\n- mask *= ~(np.isnan(x).any(-1)) #* or &\n- y = y[mask]\n- x = x[mask]\n- return y, x\n-\n-\n-def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,\n- **kwds):\n- '''get linear model with extra options for entry\n-\n- dispatches to regular model class and does not wrap the output\n-\n- If several options are exclusive, for example sigma and weights, then the\n- chosen class depends on the implementation sequence.\n- '''\n-\n- if filter_missing:\n- y, x = remove_nanrows(y, x)\n- #do the same for masked arrays\n-\n- if add_const:\n- x = sm.add_constant(x, prepend=True)\n-\n- if not sigma is None:\n- return GLS(y, x, sigma=sigma, **kwds)\n- elif not weights is None:\n- return WLS(y, x, weights=weights, **kwds)\n- else:\n- return OLS(y, x, **kwds)\n", "issue": "[MAINT/CLN] remove function explicitly marked as duplicate\nIn the function docstring:\r\n`duplicate: Skipper added sm.tools.drop_missing`\r\n\r\n<b>update</b> The relevant function is not used outside this module; nor is the other function in this module.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Convenience Wrappers\n\nCreated on Sat Oct 30 14:56:35 2010\n\nAuthor: josef-pktd\nLicense: BSD\n\"\"\"\n\nimport numpy as np\nimport statsmodels.api as sm\nfrom statsmodels import GLS, WLS, OLS\n\ndef remove_nanrows(y, x):\n '''remove common rows in [y,x] that contain at least one nan\n\n TODO: this should be made more flexible,\n arbitrary number of arrays and 1d or 2d arrays\n\n duplicate: Skipper added sm.tools.drop_missing\n\n '''\n mask = ~np.isnan(y)\n mask *= ~(np.isnan(x).any(-1)) #* or &\n y = y[mask]\n x = x[mask]\n return y, x\n\n\ndef linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,\n **kwds):\n '''get linear model with extra options for entry\n\n dispatches to regular model class and does not wrap the output\n\n If several options are exclusive, for example sigma and weights, then the\n chosen class depends on the implementation sequence.\n '''\n\n if filter_missing:\n y, x = remove_nanrows(y, x)\n #do the same for masked arrays\n\n if add_const:\n x = sm.add_constant(x, prepend=True)\n\n if not sigma is None:\n return GLS(y, x, sigma=sigma, **kwds)\n elif not weights is None:\n return WLS(y, x, weights=weights, **kwds)\n else:\n return OLS(y, x, **kwds)\n", "path": "statsmodels/tools/wrappers.py"}], "after_files": [{"content": null, "path": "statsmodels/tools/wrappers.py"}]}
| 790 | 464 |
gh_patches_debug_4669
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1441
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change in Python 3.9.5 (and 3.8.10) causes Faker's list_module() to fail
* Faker version: 8.1.2
* OS: macOS 11.3.1
A [regression in Python](https://bugs.python.org/issue44061) breaks Faker, specifically [this line of code in Faker](https://github.com/joke2k/faker/blob/master/faker/utils/loading.py#L35) that calls `pkgutil.iter_modules([path])`.
It's not clear to me from the discussion in that python bug report exactly how they intend to resolve the issue, but I thought I'd flag this here.
### Steps to reproduce
1. Install python 3.9.5 or 3.8.10
1. Install faker
1. `import faker`
### Expected behavior
`import faker` should succeed
### Actual behavior
`import faker` raises an exception
```shell
>>> import faker
>>> import faker
Traceback (most recent call last):
File "/python/3.9/lib/python3.9/pkgutil.py", line 416, in get_importer
importer = sys.path_importer_cache[path_item]
KeyError: PosixPath('/venv/lib/python3.9/site-packages/faker/providers')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/venv/lib/python3.9/site-packages/faker/__init__.py", line 1, in <module>
from faker.factory import Factory
File "/venv/lib/python3.9/site-packages/faker/factory.py", line 7, in <module>
from faker.config import AVAILABLE_LOCALES, DEFAULT_LOCALE, PROVIDERS
File "/venv/lib/python3.9/site-packages/faker/config.py", line 11, in <module>
PROVIDERS = find_available_providers(
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 57, in find_available_providers
for mod in list_module(providers_mod) if mod != '__pycache__'
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 35, in list_module
return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 35, in <listcomp>
return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
File "/python/3.9/lib/python3.9/pkgutil.py", line 130, in iter_modules
for i in importers:
File "/python/3.9/lib/python3.9/pkgutil.py", line 420, in get_importer
importer = path_hook(path_item)
File "<frozen importlib._bootstrap_external>", line 1601, in path_hook_for_FileFinder
File "<frozen importlib._bootstrap_external>", line 1476, in __init__
File "<frozen importlib._bootstrap_external>", line 177, in _path_isabs
AttributeError: 'PosixPath' object has no attribute 'startswith'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/utils/loading.py`
Content:
```
1 import pkgutil
2 import sys
3
4 from importlib import import_module
5 from pathlib import Path
6 from types import ModuleType
7 from typing import List, Set
8
9
10 def get_path(module: ModuleType) -> str:
11 if getattr(sys, 'frozen', False):
12 # frozen
13
14 if getattr(sys, '_MEIPASS', False):
15 # PyInstaller
16 lib_dir = Path(getattr(sys, '_MEIPASS'))
17 else:
18 # others
19 lib_dir = Path(sys.executable).parent / 'lib'
20
21 path = lib_dir.joinpath(*module.__package__.split("."))
22 else:
23 # unfrozen
24 path = Path(module.__file__).parent
25 return str(path)
26
27
28 def list_module(module: ModuleType) -> List[str]:
29 path = get_path(module)
30
31 if getattr(sys, '_MEIPASS', False):
32 # PyInstaller
33 return [file.parent.name for file in Path(path).glob('*/__init__.py')]
34 else:
35 return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
36
37
38 def find_available_locales(providers: List[str]) -> List[str]:
39 available_locales: Set[str] = set()
40
41 for provider_path in providers:
42
43 provider_module = import_module(provider_path)
44 if getattr(provider_module, 'localized', False):
45 langs = list_module(provider_module)
46 available_locales.update(langs)
47 available_locales: List[str] = sorted(available_locales)
48 return available_locales
49
50
51 def find_available_providers(modules: List[ModuleType]) -> List[str]:
52 available_providers = set()
53 for providers_mod in modules:
54 if providers_mod.__package__:
55 providers = [
56 '.'.join([providers_mod.__package__, mod])
57 for mod in list_module(providers_mod) if mod != '__pycache__'
58 ]
59 available_providers.update(providers)
60 return sorted(available_providers)
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/faker/utils/loading.py b/faker/utils/loading.py
--- a/faker/utils/loading.py
+++ b/faker/utils/loading.py
@@ -32,7 +32,7 @@
# PyInstaller
return [file.parent.name for file in Path(path).glob('*/__init__.py')]
else:
- return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
+ return [name for _, name, is_pkg in pkgutil.iter_modules([str(path)]) if is_pkg]
def find_available_locales(providers: List[str]) -> List[str]:
|
{"golden_diff": "diff --git a/faker/utils/loading.py b/faker/utils/loading.py\n--- a/faker/utils/loading.py\n+++ b/faker/utils/loading.py\n@@ -32,7 +32,7 @@\n # PyInstaller\n return [file.parent.name for file in Path(path).glob('*/__init__.py')]\n else:\n- return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n+ return [name for _, name, is_pkg in pkgutil.iter_modules([str(path)]) if is_pkg]\n \n \n def find_available_locales(providers: List[str]) -> List[str]:\n", "issue": "Change in Python 3.9.5 (and 3.8.10) causes Faker's list_module() to fail\n* Faker version: 8.1.2\r\n* OS: macOS 11.3.1\r\n\r\nA [regression in Python](https://bugs.python.org/issue44061) breaks Faker, specifically [this line of code in Faker](https://github.com/joke2k/faker/blob/master/faker/utils/loading.py#L35) that calls `pkgutil.iter_modules([path])`.\r\n\r\nIt's not clear to me from the discussion in that python bug report exactly how they intend to resolve the issue, but I thought I'd flag this here.\r\n\r\n### Steps to reproduce\r\n\r\n1. Install python 3.9.5 or 3.8.10\r\n1. Install faker\r\n1. `import faker`\r\n\r\n### Expected behavior\r\n\r\n`import faker` should succeed\r\n\r\n### Actual behavior\r\n\r\n`import faker` raises an exception\r\n\r\n```shell\r\n>>> import faker\r\n>>> import faker\r\nTraceback (most recent call last):\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 416, in get_importer\r\n importer = sys.path_importer_cache[path_item]\r\nKeyError: PosixPath('/venv/lib/python3.9/site-packages/faker/providers')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/venv/lib/python3.9/site-packages/faker/__init__.py\", line 1, in <module>\r\n from faker.factory import Factory\r\n File \"/venv/lib/python3.9/site-packages/faker/factory.py\", line 7, in <module>\r\n from faker.config import AVAILABLE_LOCALES, DEFAULT_LOCALE, PROVIDERS\r\n File \"/venv/lib/python3.9/site-packages/faker/config.py\", line 11, in <module>\r\n PROVIDERS = find_available_providers(\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 57, in find_available_providers\r\n for mod in list_module(providers_mod) if mod != '__pycache__'\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 35, in list_module\r\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 35, in <listcomp>\r\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 130, in iter_modules\r\n for i in importers:\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 420, in get_importer\r\n importer = path_hook(path_item)\r\n File \"<frozen importlib._bootstrap_external>\", line 1601, in path_hook_for_FileFinder\r\n File \"<frozen importlib._bootstrap_external>\", line 1476, in __init__\r\n File \"<frozen importlib._bootstrap_external>\", line 177, in _path_isabs\r\nAttributeError: 'PosixPath' object has no attribute 'startswith'\r\n```\n", "before_files": [{"content": "import pkgutil\nimport sys\n\nfrom importlib import import_module\nfrom pathlib import Path\nfrom types import ModuleType\nfrom typing import List, Set\n\n\ndef get_path(module: ModuleType) -> str:\n if getattr(sys, 'frozen', False):\n # frozen\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n lib_dir = Path(getattr(sys, 
'_MEIPASS'))\n else:\n # others\n lib_dir = Path(sys.executable).parent / 'lib'\n\n path = lib_dir.joinpath(*module.__package__.split(\".\"))\n else:\n # unfrozen\n path = Path(module.__file__).parent\n return str(path)\n\n\ndef list_module(module: ModuleType) -> List[str]:\n path = get_path(module)\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n return [file.parent.name for file in Path(path).glob('*/__init__.py')]\n else:\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n\n\ndef find_available_locales(providers: List[str]) -> List[str]:\n available_locales: Set[str] = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n available_locales: List[str] = sorted(available_locales)\n return available_locales\n\n\ndef find_available_providers(modules: List[ModuleType]) -> List[str]:\n available_providers = set()\n for providers_mod in modules:\n if providers_mod.__package__:\n providers = [\n '.'.join([providers_mod.__package__, mod])\n for mod in list_module(providers_mod) if mod != '__pycache__'\n ]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}], "after_files": [{"content": "import pkgutil\nimport sys\n\nfrom importlib import import_module\nfrom pathlib import Path\nfrom types import ModuleType\nfrom typing import List, Set\n\n\ndef get_path(module: ModuleType) -> str:\n if getattr(sys, 'frozen', False):\n # frozen\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n lib_dir = Path(getattr(sys, '_MEIPASS'))\n else:\n # others\n lib_dir = Path(sys.executable).parent / 'lib'\n\n path = lib_dir.joinpath(*module.__package__.split(\".\"))\n else:\n # unfrozen\n path = Path(module.__file__).parent\n return str(path)\n\n\ndef list_module(module: ModuleType) -> List[str]:\n path = get_path(module)\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n return [file.parent.name for file in Path(path).glob('*/__init__.py')]\n else:\n return [name for _, name, is_pkg in pkgutil.iter_modules([str(path)]) if is_pkg]\n\n\ndef find_available_locales(providers: List[str]) -> List[str]:\n available_locales: Set[str] = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n available_locales: List[str] = sorted(available_locales)\n return available_locales\n\n\ndef find_available_providers(modules: List[ModuleType]) -> List[str]:\n available_providers = set()\n for providers_mod in modules:\n if providers_mod.__package__:\n providers = [\n '.'.join([providers_mod.__package__, mod])\n for mod in list_module(providers_mod) if mod != '__pycache__'\n ]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}]}
| 1,528 | 135 |
gh_patches_debug_14375
|
rasdani/github-patches
|
git_diff
|
mabel-dev__opteryx-1467
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
🪲 Column Names not Aliased
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
Example from user
~~~sql
SELECT *
FROM $planets AS P
INNER JOIN $satellites AS S
ON P.id = S.id
~~~
Simplified example
~~~sql
SELECT *
FROM $planets
INNER JOIN $satellites
ON $planets.id = $satellites.id
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opteryx/operators/exit_node.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 """
14 Exit Node
15
16 This is a SQL Query Execution Plan Node.
17
18 This does the final preparation before returning results to users.
19
20 This does two things that the projection node doesn't do:
21 - renames columns from the internal names
22 - removes all columns not being returned to the user
23
24 This node doesn't do any calculations, it is a pure Projection.
25 """
26 import time
27 from typing import Generator
28
29 from opteryx.exceptions import AmbiguousIdentifierError
30 from opteryx.exceptions import InvalidInternalStateError
31 from opteryx.models import QueryProperties
32 from opteryx.operators import BasePlanNode
33
34
35 class ExitNode(BasePlanNode):
36 def __init__(self, properties: QueryProperties, **config):
37 super().__init__(properties=properties)
38 self.columns = config.get("columns", [])
39
40 @property
41 def config(self): # pragma: no cover
42 return None
43
44 @property
45 def name(self): # pragma: no cover
46 return "Exit"
47
48 def execute(self) -> Generator:
49 start = time.monotonic_ns()
50 morsels = self._producers[0] # type:ignore
51
52 final_columns = []
53 final_names = []
54 for column in self.columns:
55 final_columns.append(column.schema_column.identity)
56 final_names.append(column.current_name)
57
58 if len(final_columns) != len(set(final_columns)): # pragma: no cover
59 from collections import Counter
60
61 duplicates = [column for column, count in Counter(final_columns).items() if count > 1]
62 matches = {a for a, b in zip(final_names, final_columns) if b in duplicates}
63 raise AmbiguousIdentifierError(
64 message=f"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`"
65 )
66
67 self.statistics.time_exiting += time.monotonic_ns() - start
68 for morsel in morsels.execute():
69 start = time.monotonic_ns()
70 if not set(final_columns).issubset(morsel.column_names): # pragma: no cover
71 mapping = {name: int_name for name, int_name in zip(final_columns, final_names)}
72 missing_references = {
73 mapping.get(ref): ref for ref in final_columns if ref not in morsel.column_names
74 }
75
76 raise InvalidInternalStateError(
77 f"The following fields were not in the resultset - {', '.join(missing_references.keys())}"
78 )
79
80 morsel = morsel.select(final_columns)
81 morsel = morsel.rename_columns(final_names)
82
83 self.statistics.time_exiting += time.monotonic_ns() - start
84 yield morsel
85 start = time.monotonic_ns()
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opteryx/operators/exit_node.py b/opteryx/operators/exit_node.py
--- a/opteryx/operators/exit_node.py
+++ b/opteryx/operators/exit_node.py
@@ -64,6 +64,14 @@
message=f"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`"
)
+ if len(set(final_names)) != len(final_names): # we have duplicate names
+ final_names = []
+ for column in self.columns:
+ if column.schema_column.origin:
+ final_names.append(f"{column.schema_column.origin[0]}.{column.current_name}")
+ else:
+ final_names.append(column.qualified_name)
+
self.statistics.time_exiting += time.monotonic_ns() - start
for morsel in morsels.execute():
start = time.monotonic_ns()
|
{"golden_diff": "diff --git a/opteryx/operators/exit_node.py b/opteryx/operators/exit_node.py\n--- a/opteryx/operators/exit_node.py\n+++ b/opteryx/operators/exit_node.py\n@@ -64,6 +64,14 @@\n message=f\"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`\"\n )\n \n+ if len(set(final_names)) != len(final_names): # we have duplicate names\n+ final_names = []\n+ for column in self.columns:\n+ if column.schema_column.origin:\n+ final_names.append(f\"{column.schema_column.origin[0]}.{column.current_name}\")\n+ else:\n+ final_names.append(column.qualified_name)\n+\n self.statistics.time_exiting += time.monotonic_ns() - start\n for morsel in morsels.execute():\n start = time.monotonic_ns()\n", "issue": "\ud83e\udeb2 Column Names not Aliased\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\nExample from user\r\n~~~sql\r\nSELECT *\r\n FROM $planets AS P\r\n INNER JOIN $satellites AS S\r\n ON P.id = S.id\r\n~~~\r\n\r\nSimplified example\r\n~~~sql\r\nSELECT *\r\n FROM $planets\r\n INNER JOIN $satellites\r\n ON $planets.id = $satellites.id\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nExit Node\n\nThis is a SQL Query Execution Plan Node.\n\nThis does the final preparation before returning results to users.\n\nThis does two things that the projection node doesn't do:\n - renames columns from the internal names\n - removes all columns not being returned to the user\n\nThis node doesn't do any calculations, it is a pure Projection.\n\"\"\"\nimport time\nfrom typing import Generator\n\nfrom opteryx.exceptions import AmbiguousIdentifierError\nfrom opteryx.exceptions import InvalidInternalStateError\nfrom opteryx.models import QueryProperties\nfrom opteryx.operators import BasePlanNode\n\n\nclass ExitNode(BasePlanNode):\n def __init__(self, properties: QueryProperties, **config):\n super().__init__(properties=properties)\n self.columns = config.get(\"columns\", [])\n\n @property\n def config(self): # pragma: no cover\n return None\n\n @property\n def name(self): # pragma: no cover\n return \"Exit\"\n\n def execute(self) -> Generator:\n start = time.monotonic_ns()\n morsels = self._producers[0] # type:ignore\n\n final_columns = []\n final_names = []\n for column in self.columns:\n final_columns.append(column.schema_column.identity)\n final_names.append(column.current_name)\n\n if len(final_columns) != len(set(final_columns)): # pragma: no cover\n from collections import Counter\n\n duplicates = [column for column, count in Counter(final_columns).items() if count > 1]\n matches = {a for a, b in zip(final_names, final_columns) if b in duplicates}\n raise AmbiguousIdentifierError(\n message=f\"Query result contains multiple instances of the same column(s) - 
`{'`, `'.join(matches)}`\"\n )\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n for morsel in morsels.execute():\n start = time.monotonic_ns()\n if not set(final_columns).issubset(morsel.column_names): # pragma: no cover\n mapping = {name: int_name for name, int_name in zip(final_columns, final_names)}\n missing_references = {\n mapping.get(ref): ref for ref in final_columns if ref not in morsel.column_names\n }\n\n raise InvalidInternalStateError(\n f\"The following fields were not in the resultset - {', '.join(missing_references.keys())}\"\n )\n\n morsel = morsel.select(final_columns)\n morsel = morsel.rename_columns(final_names)\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n yield morsel\n start = time.monotonic_ns()\n", "path": "opteryx/operators/exit_node.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nExit Node\n\nThis is a SQL Query Execution Plan Node.\n\nThis does the final preparation before returning results to users.\n\nThis does two things that the projection node doesn't do:\n - renames columns from the internal names\n - removes all columns not being returned to the user\n\nThis node doesn't do any calculations, it is a pure Projection.\n\"\"\"\nimport time\nfrom typing import Generator\n\nfrom opteryx.exceptions import AmbiguousIdentifierError\nfrom opteryx.exceptions import InvalidInternalStateError\nfrom opteryx.models import QueryProperties\nfrom opteryx.operators import BasePlanNode\n\n\nclass ExitNode(BasePlanNode):\n def __init__(self, properties: QueryProperties, **config):\n super().__init__(properties=properties)\n self.columns = config.get(\"columns\", [])\n\n @property\n def config(self): # pragma: no cover\n return None\n\n @property\n def name(self): # pragma: no cover\n return \"Exit\"\n\n def execute(self) -> Generator:\n start = time.monotonic_ns()\n morsels = self._producers[0] # type:ignore\n\n final_columns = []\n final_names = []\n for column in self.columns:\n final_columns.append(column.schema_column.identity)\n final_names.append(column.current_name)\n\n if len(final_columns) != len(set(final_columns)): # pragma: no cover\n from collections import Counter\n\n duplicates = [column for column, count in Counter(final_columns).items() if count > 1]\n matches = {a for a, b in zip(final_names, final_columns) if b in duplicates}\n raise AmbiguousIdentifierError(\n message=f\"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`\"\n )\n\n if len(set(final_names)) != len(final_names): # we have duplicate names\n final_names = []\n for column in self.columns:\n if column.schema_column.origin:\n final_names.append(f\"{column.schema_column.origin[0]}.{column.current_name}\")\n else:\n final_names.append(column.qualified_name)\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n for morsel in morsels.execute():\n start = time.monotonic_ns()\n if not set(final_columns).issubset(morsel.column_names): # pragma: no cover\n mapping = {name: 
int_name for name, int_name in zip(final_columns, final_names)}\n missing_references = {\n mapping.get(ref): ref for ref in final_columns if ref not in morsel.column_names\n }\n\n raise InvalidInternalStateError(\n f\"The following fields were not in the resultset - {', '.join(missing_references.keys())}\"\n )\n\n morsel = morsel.select(final_columns)\n morsel = morsel.rename_columns(final_names)\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n yield morsel\n start = time.monotonic_ns()\n", "path": "opteryx/operators/exit_node.py"}]}
| 1,281 | 198 |
gh_patches_debug_26167
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-969
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Indent JSON data while exporting it as Python code
I was testing out a web API and used the "Export flow as Python code" feature for the first time as a user, and noticed a possible improvement.
Currently we just export the `flow.request.body` as is (independent of its content type), but mitmproxy's interface is smart and renders different bodies differently (e.g. it indents JSON).
I think we could add this indent behaviour while exporting things as code too.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/flow_export.py`
Content:
```
1 import urllib
2 import netlib.http
3 from textwrap import dedent
4
5
6 def curl_command(flow):
7 data = "curl "
8
9 for k, v in flow.request.headers.fields:
10 data += "-H '%s:%s' " % (k, v)
11
12 if flow.request.method != "GET":
13 data += "-X %s " % flow.request.method
14
15 full_url = flow.request.scheme + "://" + flow.request.host + flow.request.path
16 data += "'%s'" % full_url
17
18 if flow.request.content:
19 data += " --data-binary '%s'" % flow.request.content
20
21 return data
22
23
24 def python_code(flow):
25 code = dedent("""
26 import requests
27
28 url = '{url}'
29 {headers}{params}{data}
30 response = requests.request(
31 method='{method}',
32 url=url,{args}
33 )
34
35 print(response.text)
36 """).strip()
37
38 components = map(lambda x: urllib.quote(x, safe=""), flow.request.path_components)
39 url = flow.request.scheme + "://" + flow.request.host + "/" + "/".join(components)
40
41 args = ""
42 headers = ""
43 if flow.request.headers:
44 lines = [" '%s': '%s',\n" % (k, v) for k, v in flow.request.headers.fields]
45 headers += "\nheaders = {\n%s}\n" % "".join(lines)
46 args += "\n headers=headers,"
47
48 params = ""
49 if flow.request.query:
50 lines = [" '%s': '%s',\n" % (k, v) for k, v in flow.request.query]
51 params = "\nparams = {\n%s}\n" % "".join(lines)
52 args += "\n params=params,"
53
54 data = ""
55 if flow.request.body:
56 data = "\ndata = '''%s'''\n" % flow.request.body
57 args += "\n data=data,"
58
59 code = code.format(
60 url=url,
61 headers=headers,
62 params=params,
63 data=data,
64 method=flow.request.method,
65 args=args,
66 )
67
68 return code
69
70
71 def raw_request(flow):
72 data = netlib.http.http1.assemble_request(flow.request)
73 return data
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/flow_export.py b/mitmproxy/flow_export.py
--- a/mitmproxy/flow_export.py
+++ b/mitmproxy/flow_export.py
@@ -1,7 +1,10 @@
+import json
import urllib
-import netlib.http
from textwrap import dedent
+import netlib.http
+from netlib.utils import parse_content_type
+
def curl_command(flow):
data = "curl "
@@ -53,8 +56,16 @@
data = ""
if flow.request.body:
- data = "\ndata = '''%s'''\n" % flow.request.body
- args += "\n data=data,"
+ json_obj = is_json(flow.request.headers, flow.request.body)
+ if json_obj:
+ # Without the separators field json.dumps() produces
+ # trailing white spaces: https://bugs.python.org/issue16333
+ data = json.dumps(json_obj, indent=4, separators=(',', ': '))
+ data = "\njson = %s\n" % data
+ args += "\n json=json,"
+ else:
+ data = "\ndata = '''%s'''\n" % flow.request.body
+ args += "\n data=data,"
code = code.format(
url=url,
@@ -71,3 +82,14 @@
def raw_request(flow):
data = netlib.http.http1.assemble_request(flow.request)
return data
+
+
+def is_json(headers, content):
+ if headers:
+ ct = parse_content_type(headers.get("content-type", ""))
+ if ct and "%s/%s" % (ct[0], ct[1]) == "application/json":
+ try:
+ return json.loads(content)
+ except ValueError:
+ return False
+ return False
|
{"golden_diff": "diff --git a/mitmproxy/flow_export.py b/mitmproxy/flow_export.py\n--- a/mitmproxy/flow_export.py\n+++ b/mitmproxy/flow_export.py\n@@ -1,7 +1,10 @@\n+import json\n import urllib\n-import netlib.http\n from textwrap import dedent\n \n+import netlib.http\n+from netlib.utils import parse_content_type\n+\n \n def curl_command(flow):\n data = \"curl \"\n@@ -53,8 +56,16 @@\n \n data = \"\"\n if flow.request.body:\n- data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n- args += \"\\n data=data,\"\n+ json_obj = is_json(flow.request.headers, flow.request.body)\n+ if json_obj:\n+ # Without the separators field json.dumps() produces\n+ # trailing white spaces: https://bugs.python.org/issue16333\n+ data = json.dumps(json_obj, indent=4, separators=(',', ': '))\n+ data = \"\\njson = %s\\n\" % data\n+ args += \"\\n json=json,\"\n+ else:\n+ data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n+ args += \"\\n data=data,\"\n \n code = code.format(\n url=url,\n@@ -71,3 +82,14 @@\n def raw_request(flow):\n data = netlib.http.http1.assemble_request(flow.request)\n return data\n+\n+\n+def is_json(headers, content):\n+ if headers:\n+ ct = parse_content_type(headers.get(\"content-type\", \"\"))\n+ if ct and \"%s/%s\" % (ct[0], ct[1]) == \"application/json\":\n+ try:\n+ return json.loads(content)\n+ except ValueError:\n+ return False\n+ return False\n", "issue": "Indent JSON data while exporting it as Python code\nI was testing out a web API and used the \"Export flow as Python code\" feature for the first time as user, and noticed an improvement.\n\nCurrently we just export the `flow.request.body` as is (independent of it's content type) but mitmproxy's interface is smart and renders different bodies differently (for eg. it indents JSON)\n\nI think we could add this indent behaviour while exporting things as code too.\n\n", "before_files": [{"content": "import urllib\nimport netlib.http\nfrom textwrap import dedent\n\n\ndef curl_command(flow):\n data = \"curl \"\n\n for k, v in flow.request.headers.fields:\n data += \"-H '%s:%s' \" % (k, v)\n\n if flow.request.method != \"GET\":\n data += \"-X %s \" % flow.request.method\n\n full_url = flow.request.scheme + \"://\" + flow.request.host + flow.request.path\n data += \"'%s'\" % full_url\n\n if flow.request.content:\n data += \" --data-binary '%s'\" % flow.request.content\n\n return data\n\n\ndef python_code(flow):\n code = dedent(\"\"\"\n import requests\n\n url = '{url}'\n {headers}{params}{data}\n response = requests.request(\n method='{method}',\n url=url,{args}\n )\n\n print(response.text)\n \"\"\").strip()\n\n components = map(lambda x: urllib.quote(x, safe=\"\"), flow.request.path_components)\n url = flow.request.scheme + \"://\" + flow.request.host + \"/\" + \"/\".join(components)\n\n args = \"\"\n headers = \"\"\n if flow.request.headers:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.headers.fields]\n headers += \"\\nheaders = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n headers=headers,\"\n\n params = \"\"\n if flow.request.query:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.query]\n params = \"\\nparams = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n params=params,\"\n\n data = \"\"\n if flow.request.body:\n data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n args += \"\\n data=data,\"\n\n code = code.format(\n url=url,\n headers=headers,\n params=params,\n data=data,\n method=flow.request.method,\n args=args,\n )\n\n return code\n\n\ndef raw_request(flow):\n data = 
netlib.http.http1.assemble_request(flow.request)\n return data\n", "path": "mitmproxy/flow_export.py"}], "after_files": [{"content": "import json\nimport urllib\nfrom textwrap import dedent\n\nimport netlib.http\nfrom netlib.utils import parse_content_type\n\n\ndef curl_command(flow):\n data = \"curl \"\n\n for k, v in flow.request.headers.fields:\n data += \"-H '%s:%s' \" % (k, v)\n\n if flow.request.method != \"GET\":\n data += \"-X %s \" % flow.request.method\n\n full_url = flow.request.scheme + \"://\" + flow.request.host + flow.request.path\n data += \"'%s'\" % full_url\n\n if flow.request.content:\n data += \" --data-binary '%s'\" % flow.request.content\n\n return data\n\n\ndef python_code(flow):\n code = dedent(\"\"\"\n import requests\n\n url = '{url}'\n {headers}{params}{data}\n response = requests.request(\n method='{method}',\n url=url,{args}\n )\n\n print(response.text)\n \"\"\").strip()\n\n components = map(lambda x: urllib.quote(x, safe=\"\"), flow.request.path_components)\n url = flow.request.scheme + \"://\" + flow.request.host + \"/\" + \"/\".join(components)\n\n args = \"\"\n headers = \"\"\n if flow.request.headers:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.headers.fields]\n headers += \"\\nheaders = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n headers=headers,\"\n\n params = \"\"\n if flow.request.query:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.query]\n params = \"\\nparams = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n params=params,\"\n\n data = \"\"\n if flow.request.body:\n json_obj = is_json(flow.request.headers, flow.request.body)\n if json_obj:\n # Without the separators field json.dumps() produces\n # trailing white spaces: https://bugs.python.org/issue16333\n data = json.dumps(json_obj, indent=4, separators=(',', ': '))\n data = \"\\njson = %s\\n\" % data\n args += \"\\n json=json,\"\n else:\n data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n args += \"\\n data=data,\"\n\n code = code.format(\n url=url,\n headers=headers,\n params=params,\n data=data,\n method=flow.request.method,\n args=args,\n )\n\n return code\n\n\ndef raw_request(flow):\n data = netlib.http.http1.assemble_request(flow.request)\n return data\n\n\ndef is_json(headers, content):\n if headers:\n ct = parse_content_type(headers.get(\"content-type\", \"\"))\n if ct and \"%s/%s\" % (ct[0], ct[1]) == \"application/json\":\n try:\n return json.loads(content)\n except ValueError:\n return False\n return False\n", "path": "mitmproxy/flow_export.py"}]}
| 982 | 408 |
gh_patches_debug_13045
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-2891
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E2520 false positive for CloudWatch Alarm with expression
### CloudFormation Lint Version
0.80.3
### What operating system are you using?
MacOS
### Describe the bug
A valid CloudWatch alarm that uses a metrics expression is resulting in an E2520 false positive. The alarm was defined in the CloudWatch console and exported via the "View Source | CloudFormation YAML" capability, so it's definitionally a valid CloudWatch alarm. To confirm that the bug isn't in the console, I created a copy of the alarm using the generated definition, and neither CloudFormation nor CloudWatch has any complaints.
### Expected behavior
E2520 should not be raised when `Dimensions` is present under `MetricStat.Metric`.
### Reproduction template
```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: AXIS ALB alarms
Parameters:
pLoadBalancerId:
Type: String
Default: app/private-api-proxy/ced2a65499b104e7
pAlarmPrefix:
Type: String
Default: MySampleApp
Resources:
rAlb5xxPercentage:
Type: AWS::CloudWatch::Alarm
Properties:
AlarmName: !Sub "${pAlarmPrefix}-ALB-5XX-Percentage"
AlarmDescription: >-
This alarm fires when the ALB is returning HTTP 5XX errors. It is
usually due to a misconfiguration of the ALB or not having any
associated targets.
See [runbook](https://google.com) for more details.
ActionsEnabled: true
OKActions: []
AlarmActions: []
InsufficientDataActions: []
Dimensions: []
EvaluationPeriods: 15
DatapointsToAlarm: 3
Threshold: 5
ComparisonOperator: GreaterThanOrEqualToThreshold
TreatMissingData: notBreaching
Metrics:
- Id: e1
Label: ALB 5XX Percentage
ReturnData: true
Expression: (m2/(m1+m2+m3+0.001))*100
- Id: m1
ReturnData: false
MetricStat:
Metric:
Namespace: AWS/ApplicationELB
MetricName: RequestCount
Dimensions:
- Name: LoadBalancer
Value: !Ref pLoadBalancerId
Period: 60
Stat: Sum
- Id: m2
ReturnData: false
MetricStat:
Metric:
Namespace: AWS/ApplicationELB
MetricName: HTTPCode_ELB_5XX_Count
Dimensions:
- Name: LoadBalancer
Value: !Ref pLoadBalancerId
Period: 60
Stat: Sum
- Id: m3
ReturnData: false
MetricStat:
Metric:
Namespace: AWS/ApplicationELB
MetricName: HTTPCode_ELB_4XX_Count
Dimensions:
- Name: LoadBalancer
Value: !Ref pLoadBalancerId
Period: 60
Stat: Sum
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/properties/Exclusive.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import cfnlint.helpers
6 from cfnlint.data import AdditionalSpecs
7 from cfnlint.rules import CloudFormationLintRule, RuleMatch
8
9
10 class Exclusive(CloudFormationLintRule):
11 """Check Properties Resource Configuration"""
12
13 id = "E2520"
14 shortdesc = "Check Properties that are mutually exclusive"
15 description = (
16 "Making sure CloudFormation properties that are exclusive are not defined"
17 )
18 source_url = "https://github.com/aws-cloudformation/cfn-python-lint"
19 tags = ["resources"]
20
21 def __init__(self):
22 """Init"""
23 super().__init__()
24 exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, "Exclusive.json")
25 self.resource_types_specs = exclusivespec["ResourceTypes"]
26 self.property_types_specs = exclusivespec["PropertyTypes"]
27 for resource_type_spec in self.resource_types_specs:
28 self.resource_property_types.append(resource_type_spec)
29 for property_type_spec in self.property_types_specs:
30 self.resource_sub_property_types.append(property_type_spec)
31
32 def check(self, properties, exclusions, path, cfn):
33 """Check itself"""
34 matches = []
35 for p_value, p_path in properties.items_safe(path[:]):
36 for k, v in exclusions.items():
37 property_sets = cfn.get_object_without_conditions(p_value, [k] + v)
38 for property_set in property_sets:
39 obj = property_set["Object"].clean()
40 for prop in obj:
41 if prop == k:
42 for excl_property in exclusions[prop]:
43 if excl_property in obj:
44 if property_set["Scenario"] is None:
45 message = "Property {0} should NOT exist with {1} for {2}"
46 matches.append(
47 RuleMatch(
48 p_path + [prop],
49 message.format(
50 excl_property,
51 prop,
52 "/".join(map(str, p_path)),
53 ),
54 )
55 )
56 else:
57 scenario_text = " and ".join(
58 [
59 f'when condition "{k}" is {v}'
60 for (k, v) in property_set[
61 "Scenario"
62 ].items()
63 ]
64 )
65 message = "Property {0} should NOT exist with {1} {2} for {3}"
66 matches.append(
67 RuleMatch(
68 p_path + [prop],
69 message.format(
70 excl_property,
71 prop,
72 scenario_text,
73 "/".join(map(str, p_path)),
74 ),
75 )
76 )
77
78 return matches
79
80 def match_resource_sub_properties(self, properties, property_type, path, cfn):
81 """Match for sub properties"""
82 matches = []
83
84 exclusions = self.property_types_specs.get(property_type, {})
85 matches.extend(self.check(properties, exclusions, path, cfn))
86
87 return matches
88
89 def match_resource_properties(self, properties, resource_type, path, cfn):
90 """Check CloudFormation Properties"""
91 matches = []
92
93 exclusions = self.resource_types_specs.get(resource_type, {})
94 matches.extend(self.check(properties, exclusions, path, cfn))
95
96 return matches
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cfnlint/rules/resources/properties/Exclusive.py b/src/cfnlint/rules/resources/properties/Exclusive.py
--- a/src/cfnlint/rules/resources/properties/Exclusive.py
+++ b/src/cfnlint/rules/resources/properties/Exclusive.py
@@ -40,7 +40,7 @@
for prop in obj:
if prop == k:
for excl_property in exclusions[prop]:
- if excl_property in obj:
+ if obj.get(excl_property):
if property_set["Scenario"] is None:
message = "Property {0} should NOT exist with {1} for {2}"
matches.append(
|
{"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/Exclusive.py b/src/cfnlint/rules/resources/properties/Exclusive.py\n--- a/src/cfnlint/rules/resources/properties/Exclusive.py\n+++ b/src/cfnlint/rules/resources/properties/Exclusive.py\n@@ -40,7 +40,7 @@\n for prop in obj:\n if prop == k:\n for excl_property in exclusions[prop]:\n- if excl_property in obj:\n+ if obj.get(excl_property):\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n", "issue": "E2520 false positive for CloudWatch Alarm with expression\n### CloudFormation Lint Version\r\n\r\n0.80.3\r\n\r\n### What operating system are you using?\r\n\r\nMacOS\r\n\r\n### Describe the bug\r\n\r\nA valid CloudWatch alarm that uses a metrics expression is resulting in an E2520 false positive. The alarm was defined in the CloudWatch console and exported via the \"View Source | CloudFormation YAML\" capability, so it's definitionally a valid CloudWatch alarm. To confirm that the bug isn't in the console, created a copy of the alarm using the generated definition and neither CloudFormation nor CloudWatch have any complaints.\r\n\r\n### Expected behavior\r\n\r\nE2520 should not be raised when `Dimensions` is present under `MetricStat.Metric`.\r\n\r\n### Reproduction template\r\n\r\n```yaml\r\nAWSTemplateFormatVersion: \"2010-09-09\"\r\n\r\nDescription: AXIS ALB alarms\r\n\r\nParameters:\r\n pLoadBalancerId:\r\n Type: String\r\n Default: app/private-api-proxy/ced2a65499b104e7\r\n\r\n pAlarmPrefix:\r\n Type: String\r\n Default: MySampleApp\r\n\r\nResources:\r\n rAlb5xxPercentage:\r\n Type: AWS::CloudWatch::Alarm\r\n Properties:\r\n AlarmName: !Sub \"${pAlarmPrefix}-ALB-5XX-Percentage\"\r\n AlarmDescription: >-\r\n This alarm fires when the ALB is returning HTTP 5XX errors. It is\r\n usually due to a misconfiguration of the ALB or not having any\r\n associated targets.\r\n\r\n\r\n See [runbook](https://google.com) for more details.\r\n ActionsEnabled: true\r\n OKActions: []\r\n AlarmActions: []\r\n InsufficientDataActions: []\r\n Dimensions: []\r\n EvaluationPeriods: 15\r\n DatapointsToAlarm: 3\r\n Threshold: 5\r\n ComparisonOperator: GreaterThanOrEqualToThreshold\r\n TreatMissingData: notBreaching\r\n Metrics:\r\n - Id: e1\r\n Label: ALB 5XX Percentage\r\n ReturnData: true\r\n Expression: (m2/(m1+m2+m3+0.001))*100\r\n - Id: m1\r\n ReturnData: false\r\n MetricStat:\r\n Metric:\r\n Namespace: AWS/ApplicationELB\r\n MetricName: RequestCount\r\n Dimensions:\r\n - Name: LoadBalancer\r\n Value: !Ref pLoadBalancerId\r\n Period: 60\r\n Stat: Sum\r\n - Id: m2\r\n ReturnData: false\r\n MetricStat:\r\n Metric:\r\n Namespace: AWS/ApplicationELB\r\n MetricName: HTTPCode_ELB_5XX_Count\r\n Dimensions:\r\n - Name: LoadBalancer\r\n Value: !Ref pLoadBalancerId\r\n Period: 60\r\n Stat: Sum\r\n - Id: m3\r\n ReturnData: false\r\n MetricStat:\r\n Metric:\r\n Namespace: AWS/ApplicationELB\r\n MetricName: HTTPCode_ELB_4XX_Count\r\n Dimensions:\r\n - Name: LoadBalancer\r\n Value: !Ref pLoadBalancerId\r\n Period: 60\r\n Stat: Sum\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport cfnlint.helpers\nfrom cfnlint.data import AdditionalSpecs\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Exclusive(CloudFormationLintRule):\n \"\"\"Check Properties Resource Configuration\"\"\"\n\n id = \"E2520\"\n shortdesc = \"Check Properties that are mutually exclusive\"\n description = (\n \"Making sure CloudFormation properties that are exclusive are not defined\"\n )\n source_url = \"https://github.com/aws-cloudformation/cfn-python-lint\"\n tags = [\"resources\"]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super().__init__()\n exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, \"Exclusive.json\")\n self.resource_types_specs = exclusivespec[\"ResourceTypes\"]\n self.property_types_specs = exclusivespec[\"PropertyTypes\"]\n for resource_type_spec in self.resource_types_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in self.property_types_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n def check(self, properties, exclusions, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for k, v in exclusions.items():\n property_sets = cfn.get_object_without_conditions(p_value, [k] + v)\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n if prop == k:\n for excl_property in exclusions[prop]:\n if excl_property in obj:\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n else:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in property_set[\n \"Scenario\"\n ].items()\n ]\n )\n message = \"Property {0} should NOT exist with {1} {2} for {3}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n scenario_text,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n exclusions = self.property_types_specs.get(property_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n exclusions = self.resource_types_specs.get(resource_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Exclusive.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport cfnlint.helpers\nfrom cfnlint.data import AdditionalSpecs\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Exclusive(CloudFormationLintRule):\n \"\"\"Check Properties Resource Configuration\"\"\"\n\n id = \"E2520\"\n shortdesc = \"Check Properties that are mutually exclusive\"\n description = (\n \"Making sure CloudFormation properties that are exclusive are not defined\"\n )\n source_url = \"https://github.com/aws-cloudformation/cfn-python-lint\"\n tags = [\"resources\"]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super().__init__()\n exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, \"Exclusive.json\")\n self.resource_types_specs = exclusivespec[\"ResourceTypes\"]\n self.property_types_specs = exclusivespec[\"PropertyTypes\"]\n for resource_type_spec in self.resource_types_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in self.property_types_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n def check(self, properties, exclusions, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for k, v in exclusions.items():\n property_sets = cfn.get_object_without_conditions(p_value, [k] + v)\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n if prop == k:\n for excl_property in exclusions[prop]:\n if obj.get(excl_property):\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n else:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in property_set[\n \"Scenario\"\n ].items()\n ]\n )\n message = \"Property {0} should NOT exist with {1} {2} for {3}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n scenario_text,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n exclusions = self.property_types_specs.get(property_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n exclusions = self.resource_types_specs.get(resource_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Exclusive.py"}]}
| 1,814 | 139 |
gh_patches_debug_252
|
rasdani/github-patches
|
git_diff
|
google-deepmind__dm-haiku-48
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jax version upgrade (AttributeError: CallPrimitive)
Using the current version of master 66f9c69 of Haiku, I am getting the following error on Colab
```
AttributeError Traceback (most recent call last)
<ipython-input-3-3a9e6adbfff5> in <module>()
----> 1 import haiku as hk
/usr/local/lib/python3.6/dist-packages/haiku/__init__.py in <module>()
17
18 from haiku import data_structures
---> 19 from haiku import experimental
20 from haiku import initializers
21 from haiku import nets
/usr/local/lib/python3.6/dist-packages/haiku/experimental.py in <module>()
22 from haiku._src.base import custom_getter
23 from haiku._src.base import ParamContext
---> 24 from haiku._src.dot import to_dot
25 from haiku._src.lift import lift
26 from haiku._src.module import profiler_name_scopes
/usr/local/lib/python3.6/dist-packages/haiku/_src/dot.py in <module>()
23
24 from haiku._src import data_structures
---> 25 from haiku._src import module
26 from haiku._src import utils
27 import jax
/usr/local/lib/python3.6/dist-packages/haiku/_src/module.py in <module>()
26 from haiku._src import base
27 from haiku._src import data_structures
---> 28 from haiku._src import named_call
29 from haiku._src import utils
30 import jax.numpy as jnp
/usr/local/lib/python3.6/dist-packages/haiku/_src/named_call.py in <module>()
29
30 # Registering named call as a primitive
---> 31 named_call_p = core.CallPrimitive('named_call')
32 # named_call is implemented as a plain core.call and only diverges
33 # under compilation (see named_call_translation_rule)
AttributeError: module 'jax.core' has no attribute 'CallPrimitive'
```
I believe that's because Haiku now requires `jax>=0.1.71`, while the version by default on Colab is `jax==0.1.69`. `CallPrimitive` was introduced in jax 0.1.71.
https://github.com/google/jax/blob/1545a29e6d69a7b3c7fdf9a49b38004759a9fbfa/jax/core.py#L1106-L1115
To reproduce (inside a Colab):
```python
import jax
print(jax.__version__) # 0.1.69
!pip install -q git+https://github.com/deepmind/dm-haiku
import haiku as hk
```
Run `!pip install -q --upgrade jax jaxlib` first in your Colab to fix this issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Lint as: python3
2 # Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 # ==============================================================================
16 """Setup for pip package."""
17
18 from setuptools import find_namespace_packages
19 from setuptools import setup
20
21
22 def _get_version():
23 with open('haiku/__init__.py') as fp:
24 for line in fp:
25 if line.startswith('__version__'):
26 g = {}
27 exec(line, g) # pylint: disable=exec-used
28 return g['__version__']
29 raise ValueError('`__version__` not defined in `haiku/__init__.py`')
30
31
32 def _parse_requirements(requirements_txt_path):
33 with open(requirements_txt_path) as fp:
34 return fp.read().splitlines()
35
36
37 _VERSION = _get_version()
38
39 EXTRA_PACKAGES = {
40 'jax': ['jax>=0.1.55'],
41 'jaxlib': ['jaxlib>=0.1.37'],
42 }
43
44 setup(
45 name='dm-haiku',
46 version=_VERSION,
47 url='https://github.com/deepmind/dm-haiku',
48 license='Apache 2.0',
49 author='DeepMind',
50 description='Haiku is a library for building neural networks in JAX.',
51 long_description=open('README.md').read(),
52 long_description_content_type='text/markdown',
53 author_email='[email protected]',
54 # Contained modules and scripts.
55 packages=find_namespace_packages(exclude=['*_test.py']),
56 install_requires=_parse_requirements('requirements.txt'),
57 extras_require=EXTRA_PACKAGES,
58 tests_require=_parse_requirements('requirements-test.txt'),
59 requires_python='>=3.6',
60 include_package_data=True,
61 zip_safe=False,
62 # PyPI package information.
63 classifiers=[
64 'Development Status :: 4 - Beta',
65 'Intended Audience :: Developers',
66 'Intended Audience :: Education',
67 'Intended Audience :: Science/Research',
68 'License :: OSI Approved :: Apache Software License',
69 'Programming Language :: Python :: 3',
70 'Programming Language :: Python :: 3.6',
71 'Programming Language :: Python :: 3.7',
72 'Topic :: Scientific/Engineering :: Mathematics',
73 'Topic :: Software Development :: Libraries :: Python Modules',
74 'Topic :: Software Development :: Libraries',
75 ],
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -37,8 +37,8 @@
_VERSION = _get_version()
EXTRA_PACKAGES = {
- 'jax': ['jax>=0.1.55'],
- 'jaxlib': ['jaxlib>=0.1.37'],
+ 'jax': ['jax>=0.1.71'],
+ 'jaxlib': ['jaxlib>=0.1.49'],
}
setup(
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -37,8 +37,8 @@\n _VERSION = _get_version()\n \n EXTRA_PACKAGES = {\n- 'jax': ['jax>=0.1.55'],\n- 'jaxlib': ['jaxlib>=0.1.37'],\n+ 'jax': ['jax>=0.1.71'],\n+ 'jaxlib': ['jaxlib>=0.1.49'],\n }\n \n setup(\n", "issue": "Jax version upgrade (AttributeError: CallPrimitive)\nUsing the current version of master 66f9c69 of Haiku, I am getting the following error on Colab\r\n```\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-3-3a9e6adbfff5> in <module>()\r\n----> 1 import haiku as hk\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/__init__.py in <module>()\r\n 17 \r\n 18 from haiku import data_structures\r\n---> 19 from haiku import experimental\r\n 20 from haiku import initializers\r\n 21 from haiku import nets\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/experimental.py in <module>()\r\n 22 from haiku._src.base import custom_getter\r\n 23 from haiku._src.base import ParamContext\r\n---> 24 from haiku._src.dot import to_dot\r\n 25 from haiku._src.lift import lift\r\n 26 from haiku._src.module import profiler_name_scopes\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/dot.py in <module>()\r\n 23 \r\n 24 from haiku._src import data_structures\r\n---> 25 from haiku._src import module\r\n 26 from haiku._src import utils\r\n 27 import jax\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/module.py in <module>()\r\n 26 from haiku._src import base\r\n 27 from haiku._src import data_structures\r\n---> 28 from haiku._src import named_call\r\n 29 from haiku._src import utils\r\n 30 import jax.numpy as jnp\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/named_call.py in <module>()\r\n 29 \r\n 30 # Registering named call as a primitive\r\n---> 31 named_call_p = core.CallPrimitive('named_call')\r\n 32 # named_call is implemented as a plain core.call and only diverges\r\n 33 # under compilation (see named_call_translation_rule)\r\n\r\nAttributeError: module 'jax.core' has no attribute 'CallPrimitive'\r\n```\r\n\r\nI believe that's because Haiku now requires `jax>=0.1.71`, while the version by default on Colab is `jax==0.1.69`. `CallPrimitive` was introduced in jax 0.1.71.\r\nhttps://github.com/google/jax/blob/1545a29e6d69a7b3c7fdf9a49b38004759a9fbfa/jax/core.py#L1106-L1115\r\n\r\nTo reproduce (inside a Colab):\r\n```python\r\nimport jax\r\nprint(jax.__version__) # 0.1.69\r\n\r\n!pip install -q git+https://github.com/deepmind/dm-haiku\r\nimport haiku as hk\r\n```\r\n\r\nRun `!pip install -q --upgrade jax jaxlib` first in your Colab to fix this issue.\n", "before_files": [{"content": "# Lint as: python3\n# Copyright 2019 DeepMind Technologies Limited. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Setup for pip package.\"\"\"\n\nfrom setuptools import find_namespace_packages\nfrom setuptools import setup\n\n\ndef _get_version():\n with open('haiku/__init__.py') as fp:\n for line in fp:\n if line.startswith('__version__'):\n g = {}\n exec(line, g) # pylint: disable=exec-used\n return g['__version__']\n raise ValueError('`__version__` not defined in `haiku/__init__.py`')\n\n\ndef _parse_requirements(requirements_txt_path):\n with open(requirements_txt_path) as fp:\n return fp.read().splitlines()\n\n\n_VERSION = _get_version()\n\nEXTRA_PACKAGES = {\n 'jax': ['jax>=0.1.55'],\n 'jaxlib': ['jaxlib>=0.1.37'],\n}\n\nsetup(\n name='dm-haiku',\n version=_VERSION,\n url='https://github.com/deepmind/dm-haiku',\n license='Apache 2.0',\n author='DeepMind',\n description='Haiku is a library for building neural networks in JAX.',\n long_description=open('README.md').read(),\n long_description_content_type='text/markdown',\n author_email='[email protected]',\n # Contained modules and scripts.\n packages=find_namespace_packages(exclude=['*_test.py']),\n install_requires=_parse_requirements('requirements.txt'),\n extras_require=EXTRA_PACKAGES,\n tests_require=_parse_requirements('requirements-test.txt'),\n requires_python='>=3.6',\n include_package_data=True,\n zip_safe=False,\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Lint as: python3\n# Copyright 2019 DeepMind Technologies Limited. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Setup for pip package.\"\"\"\n\nfrom setuptools import find_namespace_packages\nfrom setuptools import setup\n\n\ndef _get_version():\n with open('haiku/__init__.py') as fp:\n for line in fp:\n if line.startswith('__version__'):\n g = {}\n exec(line, g) # pylint: disable=exec-used\n return g['__version__']\n raise ValueError('`__version__` not defined in `haiku/__init__.py`')\n\n\ndef _parse_requirements(requirements_txt_path):\n with open(requirements_txt_path) as fp:\n return fp.read().splitlines()\n\n\n_VERSION = _get_version()\n\nEXTRA_PACKAGES = {\n 'jax': ['jax>=0.1.71'],\n 'jaxlib': ['jaxlib>=0.1.49'],\n}\n\nsetup(\n name='dm-haiku',\n version=_VERSION,\n url='https://github.com/deepmind/dm-haiku',\n license='Apache 2.0',\n author='DeepMind',\n description='Haiku is a library for building neural networks in JAX.',\n long_description=open('README.md').read(),\n long_description_content_type='text/markdown',\n author_email='[email protected]',\n # Contained modules and scripts.\n packages=find_namespace_packages(exclude=['*_test.py']),\n install_requires=_parse_requirements('requirements.txt'),\n extras_require=EXTRA_PACKAGES,\n tests_require=_parse_requirements('requirements-test.txt'),\n requires_python='>=3.6',\n include_package_data=True,\n zip_safe=False,\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}]}
| 1,727 | 113 |
gh_patches_debug_3065
|
rasdani/github-patches
|
git_diff
|
coala__coala-3348
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong doc string syntax in coalib.bearlib.aspects.Root
The doc string of the `Root` aspectclass has a formatting issue at https://github.com/coala/coala/blob/master/coalib/bearlib/aspects/__init__.py#L61
You can see the wrongly rendered result at https://api.coala.io/en/latest/coalib.bearlib.aspects.html#module-coalib.bearlib.aspects
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/bearlib/aspects/__init__.py`
Content:
```
1 from .base import aspectbase
2 from .meta import aspectclass
3 from .taste import Taste, TasteError
4
5 __all__ = ['Root', 'Taste', 'TasteError', 'aspectclass']
6
7
8 class Root(aspectbase, metaclass=aspectclass):
9 """
10 The root aspectclass.
11
12 Define sub-aspectclasses with class-bound ``.subaspect`` decorator.
13 Definition string is taken from doc-string of decorated class.
14 Remaining docs are taken from a nested ``docs`` class.
15 Tastes are defined as class attributes that are instances of
16 :class:`coalib.bearlib.aspectclasses.Taste`.
17
18 >>> @Root.subaspect
19 ... class Formatting:
20 ... \"""
21 ... A parent aspect for code formatting aspects...
22 ... \"""
23
24 We can now create subaspects like this:
25
26 >>> @Formatting.subaspect
27 ... class LineLength:
28 ... \"""
29 ... This aspect controls the length of a line...
30 ... \"""
31 ... class docs:
32 ... example = "..."
33 ... example_language = "..."
34 ... importance_reason = "..."
35 ... fix_suggestions = "..."
36 ...
37 ... max_line_length = Taste[int](
38 ... "Maximum length allowed for a line.",
39 ... (80, 90, 120), default=80)
40
41 The representation will show the full "path" to the leaf of the tree:
42
43 >>> Root.Formatting.LineLength
44 <aspectclass 'Root.Formatting.LineLength'>
45
46 We can see, which settings are availables:
47
48 >>> Formatting.tastes
49 {}
50 >>> LineLength.tastes
51 {'max_line_length': <....Taste[int] object at ...>}
52
53 And instantiate the aspect with the values, they will be automatically
54 converted:
55
56 >>> Formatting('Python')
57 <coalib.bearlib.aspects.Root.Formatting object at 0x...>
58 >>> LineLength('Python', max_line_length="100").tastes
59 {'max_line_length': 100}
60
61 If no settings are given, the defaults will be taken>
62 >>> LineLength('Python').tastes
63 {'max_line_length': 80}
64
65 Tastes can also be made available for only specific languages:
66
67 >>> from coalib.bearlib.languages import Language
68 >>> @Language
69 ... class GreaterTrumpScript:
70 ... pass
71
72 >>> @Formatting.subaspect
73 ... class Greatness:
74 ... \"""
75 ... This aspect controls the greatness of a file...
76 ... \"""
77 ...
78 ... min_greatness = Taste[int](
79 ... "Minimum greatness factor needed for a TrumpScript file. "
80 ... "This is fact.",
81 ... (1000000, 1000000000, 1000000000000), default=1000000,
82 ... languages=('GreaterTrumpScript' ,))
83
84 >>> Greatness.tastes
85 {'min_greatness': <....Taste[int] object at ...>}
86 >>> Greatness('GreaterTrumpScript').tastes
87 {'min_greatness': 1000000}
88 >>> Greatness('GreaterTrumpScript', min_greatness=1000000000000).tastes
89 {'min_greatness': 1000000000000}
90
91 >>> Greatness('Python').tastes
92 {}
93
94 >>> Greatness('Python', min_greatness=1000000000)
95 ... # doctest: +NORMALIZE_WHITESPACE
96 Traceback (most recent call last):
97 ...
98 coalib.bearlib.aspects.taste.TasteError:
99 Root.Formatting.Greatness.min_greatness is not available ...
100
101 >>> Greatness('Python').min_greatness
102 ... # doctest: +NORMALIZE_WHITESPACE
103 Traceback (most recent call last):
104 ...
105 coalib.bearlib.aspects.taste.TasteError:
106 Root.Formatting.Greatness.min_greatness is not available ...
107 """
108 parent = None
109
110 _tastes = {}
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/coalib/bearlib/aspects/__init__.py b/coalib/bearlib/aspects/__init__.py
--- a/coalib/bearlib/aspects/__init__.py
+++ b/coalib/bearlib/aspects/__init__.py
@@ -58,7 +58,8 @@
>>> LineLength('Python', max_line_length="100").tastes
{'max_line_length': 100}
- If no settings are given, the defaults will be taken>
+ If no settings are given, the defaults will be taken:
+
>>> LineLength('Python').tastes
{'max_line_length': 80}
|
{"golden_diff": "diff --git a/coalib/bearlib/aspects/__init__.py b/coalib/bearlib/aspects/__init__.py\n--- a/coalib/bearlib/aspects/__init__.py\n+++ b/coalib/bearlib/aspects/__init__.py\n@@ -58,7 +58,8 @@\n >>> LineLength('Python', max_line_length=\"100\").tastes\n {'max_line_length': 100}\n \n- If no settings are given, the defaults will be taken>\n+ If no settings are given, the defaults will be taken:\n+\n >>> LineLength('Python').tastes\n {'max_line_length': 80}\n", "issue": "Wrong doc string syntax in coalib.bearlib.aspects.Root\nThe doc string of the `Root` aspectclass has a formatting issue at https://github.com/coala/coala/blob/master/coalib/bearlib/aspects/__init__.py#L61\r\n\r\nYou can see the wrongly rendered result at https://api.coala.io/en/latest/coalib.bearlib.aspects.html#module-coalib.bearlib.aspects\n", "before_files": [{"content": "from .base import aspectbase\nfrom .meta import aspectclass\nfrom .taste import Taste, TasteError\n\n__all__ = ['Root', 'Taste', 'TasteError', 'aspectclass']\n\n\nclass Root(aspectbase, metaclass=aspectclass):\n \"\"\"\n The root aspectclass.\n\n Define sub-aspectclasses with class-bound ``.subaspect`` decorator.\n Definition string is taken from doc-string of decorated class.\n Remaining docs are taken from a nested ``docs`` class.\n Tastes are defined as class attributes that are instances of\n :class:`coalib.bearlib.aspectclasses.Taste`.\n\n >>> @Root.subaspect\n ... class Formatting:\n ... \\\"\"\"\n ... A parent aspect for code formatting aspects...\n ... \\\"\"\"\n\n We can now create subaspects like this:\n\n >>> @Formatting.subaspect\n ... class LineLength:\n ... \\\"\"\"\n ... This aspect controls the length of a line...\n ... \\\"\"\"\n ... class docs:\n ... example = \"...\"\n ... example_language = \"...\"\n ... importance_reason = \"...\"\n ... fix_suggestions = \"...\"\n ...\n ... max_line_length = Taste[int](\n ... \"Maximum length allowed for a line.\",\n ... (80, 90, 120), default=80)\n\n The representation will show the full \"path\" to the leaf of the tree:\n\n >>> Root.Formatting.LineLength\n <aspectclass 'Root.Formatting.LineLength'>\n\n We can see, which settings are availables:\n\n >>> Formatting.tastes\n {}\n >>> LineLength.tastes\n {'max_line_length': <....Taste[int] object at ...>}\n\n And instantiate the aspect with the values, they will be automatically\n converted:\n\n >>> Formatting('Python')\n <coalib.bearlib.aspects.Root.Formatting object at 0x...>\n >>> LineLength('Python', max_line_length=\"100\").tastes\n {'max_line_length': 100}\n\n If no settings are given, the defaults will be taken>\n >>> LineLength('Python').tastes\n {'max_line_length': 80}\n\n Tastes can also be made available for only specific languages:\n\n >>> from coalib.bearlib.languages import Language\n >>> @Language\n ... class GreaterTrumpScript:\n ... pass\n\n >>> @Formatting.subaspect\n ... class Greatness:\n ... \\\"\"\"\n ... This aspect controls the greatness of a file...\n ... \\\"\"\"\n ...\n ... min_greatness = Taste[int](\n ... \"Minimum greatness factor needed for a TrumpScript file. \"\n ... \"This is fact.\",\n ... (1000000, 1000000000, 1000000000000), default=1000000,\n ... 
languages=('GreaterTrumpScript' ,))\n\n >>> Greatness.tastes\n {'min_greatness': <....Taste[int] object at ...>}\n >>> Greatness('GreaterTrumpScript').tastes\n {'min_greatness': 1000000}\n >>> Greatness('GreaterTrumpScript', min_greatness=1000000000000).tastes\n {'min_greatness': 1000000000000}\n\n >>> Greatness('Python').tastes\n {}\n\n >>> Greatness('Python', min_greatness=1000000000)\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n\n >>> Greatness('Python').min_greatness\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n \"\"\"\n parent = None\n\n _tastes = {}\n", "path": "coalib/bearlib/aspects/__init__.py"}], "after_files": [{"content": "from .base import aspectbase\nfrom .meta import aspectclass\nfrom .taste import Taste, TasteError\n\n__all__ = ['Root', 'Taste', 'TasteError', 'aspectclass']\n\n\nclass Root(aspectbase, metaclass=aspectclass):\n \"\"\"\n The root aspectclass.\n\n Define sub-aspectclasses with class-bound ``.subaspect`` decorator.\n Definition string is taken from doc-string of decorated class.\n Remaining docs are taken from a nested ``docs`` class.\n Tastes are defined as class attributes that are instances of\n :class:`coalib.bearlib.aspectclasses.Taste`.\n\n >>> @Root.subaspect\n ... class Formatting:\n ... \\\"\"\"\n ... A parent aspect for code formatting aspects...\n ... \\\"\"\"\n\n We can now create subaspects like this:\n\n >>> @Formatting.subaspect\n ... class LineLength:\n ... \\\"\"\"\n ... This aspect controls the length of a line...\n ... \\\"\"\"\n ... class docs:\n ... example = \"...\"\n ... example_language = \"...\"\n ... importance_reason = \"...\"\n ... fix_suggestions = \"...\"\n ...\n ... max_line_length = Taste[int](\n ... \"Maximum length allowed for a line.\",\n ... (80, 90, 120), default=80)\n\n The representation will show the full \"path\" to the leaf of the tree:\n\n >>> Root.Formatting.LineLength\n <aspectclass 'Root.Formatting.LineLength'>\n\n We can see, which settings are availables:\n\n >>> Formatting.tastes\n {}\n >>> LineLength.tastes\n {'max_line_length': <....Taste[int] object at ...>}\n\n And instantiate the aspect with the values, they will be automatically\n converted:\n\n >>> Formatting('Python')\n <coalib.bearlib.aspects.Root.Formatting object at 0x...>\n >>> LineLength('Python', max_line_length=\"100\").tastes\n {'max_line_length': 100}\n\n If no settings are given, the defaults will be taken:\n\n >>> LineLength('Python').tastes\n {'max_line_length': 80}\n\n Tastes can also be made available for only specific languages:\n\n >>> from coalib.bearlib.languages import Language\n >>> @Language\n ... class GreaterTrumpScript:\n ... pass\n\n >>> @Formatting.subaspect\n ... class Greatness:\n ... \\\"\"\"\n ... This aspect controls the greatness of a file...\n ... \\\"\"\"\n ...\n ... min_greatness = Taste[int](\n ... \"Minimum greatness factor needed for a TrumpScript file. \"\n ... \"This is fact.\",\n ... (1000000, 1000000000, 1000000000000), default=1000000,\n ... 
languages=('GreaterTrumpScript' ,))\n\n >>> Greatness.tastes\n {'min_greatness': <....Taste[int] object at ...>}\n >>> Greatness('GreaterTrumpScript').tastes\n {'min_greatness': 1000000}\n >>> Greatness('GreaterTrumpScript', min_greatness=1000000000000).tastes\n {'min_greatness': 1000000000000}\n\n >>> Greatness('Python').tastes\n {}\n\n >>> Greatness('Python', min_greatness=1000000000)\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n\n >>> Greatness('Python').min_greatness\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n \"\"\"\n parent = None\n\n _tastes = {}\n", "path": "coalib/bearlib/aspects/__init__.py"}]}
| 1,518 | 151 |
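The change in the row above is documentation-only: the sentence ended in a stray `>` instead of a colon, and in reStructuredText a `>>>` example needs a blank line before it to render as its own doctest block, which is why the rendered API docs looked wrong. A minimal sketch of how the corrected passage reads (fragment reproduced from the patch above, stored in a plain string only so it can stand alone):

```python
# The colon ends the sentence and the blank line lets Sphinx/docutils treat the
# example as a separate doctest block instead of folding it into the prose.
doc_fragment = """
If no settings are given, the defaults will be taken:

>>> LineLength('Python').tastes
{'max_line_length': 80}
"""
```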
gh_patches_debug_17047
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-2079
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Skipped Baggage entries in propagation still count against max entries
The decrement operation should be moved after the last continue block if the over-long entry is truly skipped, otherwise this behavior should probably be documented/tested for.
https://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L57-L65
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import typing
16 from urllib.parse import quote_plus, unquote_plus
17
18 from opentelemetry.baggage import get_all, set_baggage
19 from opentelemetry.context import get_current
20 from opentelemetry.context.context import Context
21 from opentelemetry.propagators import textmap
22
23
24 class W3CBaggagePropagator(textmap.TextMapPropagator):
25 """Extracts and injects Baggage which is used to annotate telemetry."""
26
27 _MAX_HEADER_LENGTH = 8192
28 _MAX_PAIR_LENGTH = 4096
29 _MAX_PAIRS = 180
30 _BAGGAGE_HEADER_NAME = "baggage"
31
32 def extract(
33 self,
34 carrier: textmap.CarrierT,
35 context: typing.Optional[Context] = None,
36 getter: textmap.Getter = textmap.default_getter,
37 ) -> Context:
38 """Extract Baggage from the carrier.
39
40 See
41 `opentelemetry.propagators.textmap.TextMapPropagator.extract`
42 """
43
44 if context is None:
45 context = get_current()
46
47 header = _extract_first_element(
48 getter.get(carrier, self._BAGGAGE_HEADER_NAME)
49 )
50
51 if not header or len(header) > self._MAX_HEADER_LENGTH:
52 return context
53
54 baggage_entries = header.split(",")
55 total_baggage_entries = self._MAX_PAIRS
56 for entry in baggage_entries:
57 if total_baggage_entries <= 0:
58 return context
59 total_baggage_entries -= 1
60 if len(entry) > self._MAX_PAIR_LENGTH:
61 continue
62 try:
63 name, value = entry.split("=", 1)
64 except Exception: # pylint: disable=broad-except
65 continue
66 context = set_baggage(
67 unquote_plus(name).strip(),
68 unquote_plus(value).strip(),
69 context=context,
70 )
71
72 return context
73
74 def inject(
75 self,
76 carrier: textmap.CarrierT,
77 context: typing.Optional[Context] = None,
78 setter: textmap.Setter = textmap.default_setter,
79 ) -> None:
80 """Injects Baggage into the carrier.
81
82 See
83 `opentelemetry.propagators.textmap.TextMapPropagator.inject`
84 """
85 baggage_entries = get_all(context=context)
86 if not baggage_entries:
87 return
88
89 baggage_string = _format_baggage(baggage_entries)
90 setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)
91
92 @property
93 def fields(self) -> typing.Set[str]:
94 """Returns a set with the fields set in `inject`."""
95 return {self._BAGGAGE_HEADER_NAME}
96
97
98 def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
99 return ",".join(
100 quote_plus(str(key)) + "=" + quote_plus(str(value))
101 for key, value in baggage_entries.items()
102 )
103
104
105 def _extract_first_element(
106 items: typing.Optional[typing.Iterable[textmap.CarrierT]],
107 ) -> typing.Optional[textmap.CarrierT]:
108 if items is None:
109 return None
110 return next(iter(items), None)
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
@@ -54,9 +54,6 @@
baggage_entries = header.split(",")
total_baggage_entries = self._MAX_PAIRS
for entry in baggage_entries:
- if total_baggage_entries <= 0:
- return context
- total_baggage_entries -= 1
if len(entry) > self._MAX_PAIR_LENGTH:
continue
try:
@@ -68,6 +65,9 @@
unquote_plus(value).strip(),
context=context,
)
+ total_baggage_entries -= 1
+ if total_baggage_entries == 0:
+ break
return context
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n@@ -54,9 +54,6 @@\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n- if total_baggage_entries <= 0:\n- return context\n- total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n@@ -68,6 +65,9 @@\n unquote_plus(value).strip(),\n context=context,\n )\n+ total_baggage_entries -= 1\n+ if total_baggage_entries == 0:\n+ break\n \n return context\n", "issue": "Skipped Baggage entries in propagation still count against max entries\nThe decrement operation should be moved after the last continue block if the over-long entry is truly skipped, otherwise this behavior should probably be documented/tested for.\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L57-L65\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nfrom urllib.parse import quote_plus, unquote_plus\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import textmap\n\n\nclass W3CBaggagePropagator(textmap.TextMapPropagator):\n \"\"\"Extracts and injects Baggage which is used to annotate telemetry.\"\"\"\n\n _MAX_HEADER_LENGTH = 8192\n _MAX_PAIR_LENGTH = 4096\n _MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"baggage\"\n\n def extract(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n getter: textmap.Getter = textmap.default_getter,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n getter.get(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self._MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n if total_baggage_entries <= 0:\n return context\n total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = set_baggage(\n unquote_plus(name).strip(),\n unquote_plus(value).strip(),\n context=context,\n )\n\n return context\n\n def inject(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n setter: textmap.Setter = textmap.default_setter,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n 
See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = get_all(context=context)\n if not baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n\n @property\n def fields(self) -> typing.Set[str]:\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n return {self._BAGGAGE_HEADER_NAME}\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n quote_plus(str(key)) + \"=\" + quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Optional[typing.Iterable[textmap.CarrierT]],\n) -> typing.Optional[textmap.CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nfrom urllib.parse import quote_plus, unquote_plus\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import textmap\n\n\nclass W3CBaggagePropagator(textmap.TextMapPropagator):\n \"\"\"Extracts and injects Baggage which is used to annotate telemetry.\"\"\"\n\n _MAX_HEADER_LENGTH = 8192\n _MAX_PAIR_LENGTH = 4096\n _MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"baggage\"\n\n def extract(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n getter: textmap.Getter = textmap.default_getter,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n getter.get(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self._MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = set_baggage(\n unquote_plus(name).strip(),\n unquote_plus(value).strip(),\n context=context,\n )\n total_baggage_entries -= 1\n if total_baggage_entries == 0:\n break\n\n return context\n\n def inject(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n setter: textmap.Setter = textmap.default_setter,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = get_all(context=context)\n if not baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n\n @property\n def 
fields(self) -> typing.Set[str]:\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n return {self._BAGGAGE_HEADER_NAME}\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n quote_plus(str(key)) + \"=\" + quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Optional[typing.Iterable[textmap.CarrierT]],\n) -> typing.Optional[textmap.CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}]}
| 1,429 | 221 |
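The patch in the row above fixes the counting by charging the `_MAX_PAIRS` budget only for entries that are actually written into the context; over-long or unparsable pairs are skipped without consuming it. A condensed, standalone paraphrase of the corrected loop (it returns a plain dict instead of going through the OpenTelemetry `Context` API, and the two limits are copied from the propagator above):

```python
from urllib.parse import unquote_plus

_MAX_PAIRS = 180
_MAX_PAIR_LENGTH = 4096


def parse_baggage(header):
    """Mirror of the fixed extract() loop: skipped entries no longer eat the budget."""
    baggage = {}
    remaining = _MAX_PAIRS
    for entry in header.split(","):
        if len(entry) > _MAX_PAIR_LENGTH:
            continue                      # skipped, budget untouched
        try:
            name, value = entry.split("=", 1)
        except ValueError:
            continue                      # malformed, budget untouched
        baggage[unquote_plus(name).strip()] = unquote_plus(value).strip()
        remaining -= 1                    # count only successful inserts
        if remaining == 0:
            break
    return baggage
```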
gh_patches_debug_30793
|
rasdani/github-patches
|
git_diff
|
ytdl-org__youtube-dl-28849
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Tver] Can`t download Fuji TV video
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2021.04.07**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-f', 'best', 'https://tver.jp/corner/f0072083', '-o', 'D:\\video\\download\\a.mp4', '-v']
[debug] Encodings: locale cp932, fs mbcs, out cp932, pref cp932
[debug] youtube-dl version 2021.04.07
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.19041
[debug] exe versions: ffmpeg 4.2, ffprobe 4.2
[debug] Proxy map: {}
[TVer] Downloading JSON metadata
[TVer] f0072083: Downloading JSON metadata
[FujiTVFODPlus7] 6191645753001: Downloading m3u8 information
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
## Description
[TVer](tver.jp) is Japanese video site. Some TV stations are on this site posting a video.
I can no longer download videos from a TV station called Fuji TV. I think the cause is a specification change. it become the same as any other TV station. (https://tver.jp/info/notice/3137.html)
Can you please support a new specification.
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/tver.py`
Content:
```
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import re
5
6 from .common import InfoExtractor
7 from ..compat import compat_str
8 from ..utils import (
9 int_or_none,
10 remove_start,
11 smuggle_url,
12 strip_or_none,
13 try_get,
14 )
15
16
17 class TVerIE(InfoExtractor):
18 _VALID_URL = r'https?://(?:www\.)?tver\.jp/(?P<path>(?:corner|episode|feature)/(?P<id>f?\d+))'
19 # videos are only available for 7 days
20 _TESTS = [{
21 'url': 'https://tver.jp/corner/f0062178',
22 'only_matching': True,
23 }, {
24 'url': 'https://tver.jp/feature/f0062413',
25 'only_matching': True,
26 }, {
27 'url': 'https://tver.jp/episode/79622438',
28 'only_matching': True,
29 }, {
30 # subtitle = ' '
31 'url': 'https://tver.jp/corner/f0068870',
32 'only_matching': True,
33 }]
34 _TOKEN = None
35 BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'
36
37 def _real_initialize(self):
38 self._TOKEN = self._download_json(
39 'https://tver.jp/api/access_token.php', None)['token']
40
41 def _real_extract(self, url):
42 path, video_id = re.match(self._VALID_URL, url).groups()
43 main = self._download_json(
44 'https://api.tver.jp/v4/' + path, video_id,
45 query={'token': self._TOKEN})['main']
46 p_id = main['publisher_id']
47 service = remove_start(main['service'], 'ts_')
48 info = {
49 '_type': 'url_transparent',
50 'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),
51 'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),
52 }
53
54 if service == 'cx':
55 title = main['title']
56 subtitle = strip_or_none(main.get('subtitle'))
57 if subtitle:
58 title += ' - ' + subtitle
59 info.update({
60 'title': title,
61 'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),
62 'ie_key': 'FujiTVFODPlus7',
63 })
64 else:
65 r_id = main['reference_id']
66 if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):
67 r_id = 'ref:' + r_id
68 bc_url = smuggle_url(
69 self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),
70 {'geo_countries': ['JP']})
71 info.update({
72 'url': bc_url,
73 'ie_key': 'BrightcoveNew',
74 })
75
76 return info
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/youtube_dl/extractor/tver.py b/youtube_dl/extractor/tver.py
--- a/youtube_dl/extractor/tver.py
+++ b/youtube_dl/extractor/tver.py
@@ -9,7 +9,6 @@
int_or_none,
remove_start,
smuggle_url,
- strip_or_none,
try_get,
)
@@ -45,32 +44,18 @@
query={'token': self._TOKEN})['main']
p_id = main['publisher_id']
service = remove_start(main['service'], 'ts_')
- info = {
+
+ r_id = main['reference_id']
+ if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):
+ r_id = 'ref:' + r_id
+ bc_url = smuggle_url(
+ self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),
+ {'geo_countries': ['JP']})
+
+ return {
'_type': 'url_transparent',
'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),
'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),
+ 'url': bc_url,
+ 'ie_key': 'BrightcoveNew',
}
-
- if service == 'cx':
- title = main['title']
- subtitle = strip_or_none(main.get('subtitle'))
- if subtitle:
- title += ' - ' + subtitle
- info.update({
- 'title': title,
- 'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),
- 'ie_key': 'FujiTVFODPlus7',
- })
- else:
- r_id = main['reference_id']
- if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):
- r_id = 'ref:' + r_id
- bc_url = smuggle_url(
- self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),
- {'geo_countries': ['JP']})
- info.update({
- 'url': bc_url,
- 'ie_key': 'BrightcoveNew',
- })
-
- return info
|
{"golden_diff": "diff --git a/youtube_dl/extractor/tver.py b/youtube_dl/extractor/tver.py\n--- a/youtube_dl/extractor/tver.py\n+++ b/youtube_dl/extractor/tver.py\n@@ -9,7 +9,6 @@\n int_or_none,\n remove_start,\n smuggle_url,\n- strip_or_none,\n try_get,\n )\n \n@@ -45,32 +44,18 @@\n query={'token': self._TOKEN})['main']\n p_id = main['publisher_id']\n service = remove_start(main['service'], 'ts_')\n- info = {\n+\n+ r_id = main['reference_id']\n+ if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):\n+ r_id = 'ref:' + r_id\n+ bc_url = smuggle_url(\n+ self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),\n+ {'geo_countries': ['JP']})\n+\n+ return {\n '_type': 'url_transparent',\n 'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),\n 'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),\n+ 'url': bc_url,\n+ 'ie_key': 'BrightcoveNew',\n }\n-\n- if service == 'cx':\n- title = main['title']\n- subtitle = strip_or_none(main.get('subtitle'))\n- if subtitle:\n- title += ' - ' + subtitle\n- info.update({\n- 'title': title,\n- 'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),\n- 'ie_key': 'FujiTVFODPlus7',\n- })\n- else:\n- r_id = main['reference_id']\n- if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):\n- r_id = 'ref:' + r_id\n- bc_url = smuggle_url(\n- self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),\n- {'geo_countries': ['JP']})\n- info.update({\n- 'url': bc_url,\n- 'ie_key': 'BrightcoveNew',\n- })\n-\n- return info\n", "issue": "[Tver] Can`t download Fuji TV video \n<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.\r\n- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.\r\n- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.\r\n- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. 
DO NOT post duplicates.\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [x] I'm reporting a broken site support\r\n- [x] I've verified that I'm running youtube-dl version **2021.04.07**\r\n- [x] I've checked that all provided URLs are alive and playable in a browser\r\n- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['-f', 'best', 'https://tver.jp/corner/f0072083', '-o', 'D:\\\\video\\\\download\\\\a.mp4', '-v']\r\n[debug] Encodings: locale cp932, fs mbcs, out cp932, pref cp932\r\n[debug] youtube-dl version 2021.04.07\r\n[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.19041\r\n[debug] exe versions: ffmpeg 4.2, ffprobe 4.2\r\n[debug] Proxy map: {}\r\n[TVer] Downloading JSON metadata\r\n[TVer] f0072083: Downloading JSON metadata\r\n[FujiTVFODPlus7] 6191645753001: Downloading m3u8 information\r\nERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\n```\r\n\r\n## Description\r\n\r\n[TVer](tver.jp) is Japanese video site. Some TV stations are on this site posting a video.\r\n\r\nI can no longer download videos from a TV station called Fuji TV. I think the cause is a specification change. it become the same as any other TV station. (https://tver.jp/info/notice/3137.html) \r\nCan you please support a new specification.\r\nThanks. \n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import compat_str\nfrom ..utils import (\n int_or_none,\n remove_start,\n smuggle_url,\n strip_or_none,\n try_get,\n)\n\n\nclass TVerIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?tver\\.jp/(?P<path>(?:corner|episode|feature)/(?P<id>f?\\d+))'\n # videos are only available for 7 days\n _TESTS = [{\n 'url': 'https://tver.jp/corner/f0062178',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/feature/f0062413',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/episode/79622438',\n 'only_matching': True,\n }, {\n # subtitle = ' '\n 'url': 'https://tver.jp/corner/f0068870',\n 'only_matching': True,\n }]\n _TOKEN = None\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'\n\n def _real_initialize(self):\n self._TOKEN = self._download_json(\n 'https://tver.jp/api/access_token.php', None)['token']\n\n def _real_extract(self, url):\n path, video_id = re.match(self._VALID_URL, url).groups()\n main = self._download_json(\n 'https://api.tver.jp/v4/' + path, video_id,\n query={'token': self._TOKEN})['main']\n p_id = main['publisher_id']\n service = remove_start(main['service'], 'ts_')\n info = {\n '_type': 'url_transparent',\n 'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),\n 'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),\n }\n\n if service == 'cx':\n title = main['title']\n subtitle = strip_or_none(main.get('subtitle'))\n if subtitle:\n title += ' - ' + subtitle\n info.update({\n 'title': title,\n 'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),\n 'ie_key': 
'FujiTVFODPlus7',\n })\n else:\n r_id = main['reference_id']\n if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):\n r_id = 'ref:' + r_id\n bc_url = smuggle_url(\n self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),\n {'geo_countries': ['JP']})\n info.update({\n 'url': bc_url,\n 'ie_key': 'BrightcoveNew',\n })\n\n return info\n", "path": "youtube_dl/extractor/tver.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import compat_str\nfrom ..utils import (\n int_or_none,\n remove_start,\n smuggle_url,\n try_get,\n)\n\n\nclass TVerIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?tver\\.jp/(?P<path>(?:corner|episode|feature)/(?P<id>f?\\d+))'\n # videos are only available for 7 days\n _TESTS = [{\n 'url': 'https://tver.jp/corner/f0062178',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/feature/f0062413',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/episode/79622438',\n 'only_matching': True,\n }, {\n # subtitle = ' '\n 'url': 'https://tver.jp/corner/f0068870',\n 'only_matching': True,\n }]\n _TOKEN = None\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'\n\n def _real_initialize(self):\n self._TOKEN = self._download_json(\n 'https://tver.jp/api/access_token.php', None)['token']\n\n def _real_extract(self, url):\n path, video_id = re.match(self._VALID_URL, url).groups()\n main = self._download_json(\n 'https://api.tver.jp/v4/' + path, video_id,\n query={'token': self._TOKEN})['main']\n p_id = main['publisher_id']\n service = remove_start(main['service'], 'ts_')\n\n r_id = main['reference_id']\n if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):\n r_id = 'ref:' + r_id\n bc_url = smuggle_url(\n self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),\n {'geo_countries': ['JP']})\n\n return {\n '_type': 'url_transparent',\n 'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),\n 'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),\n 'url': bc_url,\n 'ie_key': 'BrightcoveNew',\n }\n", "path": "youtube_dl/extractor/tver.py"}]}
| 1,811 | 545 |
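The fix in the row above simply drops the Fuji TV (`service == 'cx'`) special case: since TVer moved Fuji TV onto the same delivery path as the other broadcasters, every entry is now resolved through the Brightcove player URL. A small standalone sketch of the URL construction the patched extractor performs (the template and service list are taken from the extractor above; the publisher and reference ids in the usage line are placeholders, not real values):

```python
BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'

# Services whose reference_id is already a plain video id; everything else
# needs the 'ref:' prefix (list copied from the extractor above).
_PLAIN_REF_SERVICES = ('tx', 'russia2018', 'sebare2018live', 'gorin')


def brightcove_url(service, publisher_id, reference_id):
    """Build the player URL the patched extractor hands to BrightcoveNew."""
    if service not in _PLAIN_REF_SERVICES:
        reference_id = 'ref:' + reference_id
    return BRIGHTCOVE_URL_TEMPLATE % (publisher_id, reference_id)


# A Fuji TV ('cx') entry now takes the same path as every other broadcaster:
print(brightcove_url('cx', '1234567890001', 'abcd1234'))  # placeholder ids
```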
gh_patches_debug_3979
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-1246
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need binding to void GENERAL_NAMES_free(GENERAL_NAMES *)
the function call to d2i methods on the altSubjectName extension returned a dynamicly allocated memory object that must be garbage collected so binding for GENERAL_NAMES_free should be exposed from hazmat so that higher level code can avoid memory leaks. Not sure which module should expose the binding but I used x509v3.py module in the Proposed solution https://github.com/crc32a/cryptography/commit/24df02646de1e5c1773c9048076b5d67d4c5c0fa
this effects issue https://github.com/pyca/pyopenssl/issues/139 of pyopenssl and an example of its usage to avoid memory leaks is
https://github.com/rackerlabs/pyopenssl/commit/a479a74820619da13dfab8925cf49c4f766b6536
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/hazmat/bindings/openssl/x509v3.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 INCLUDES = """
17 #include <openssl/x509v3.h>
18 """
19
20 TYPES = """
21 typedef struct {
22 X509 *issuer_cert;
23 X509 *subject_cert;
24 ...;
25 } X509V3_CTX;
26
27 typedef void * (*X509V3_EXT_D2I)(void *, const unsigned char **, long);
28
29 typedef struct {
30 ASN1_ITEM_EXP *it;
31 X509V3_EXT_D2I d2i;
32 ...;
33 } X509V3_EXT_METHOD;
34
35 static const int GEN_OTHERNAME;
36 static const int GEN_EMAIL;
37 static const int GEN_X400;
38 static const int GEN_DNS;
39 static const int GEN_URI;
40 static const int GEN_DIRNAME;
41 static const int GEN_EDIPARTY;
42 static const int GEN_IPADD;
43 static const int GEN_RID;
44
45 typedef struct {
46 ...;
47 } OTHERNAME;
48
49 typedef struct {
50 ...;
51 } EDIPARTYNAME;
52
53 typedef struct {
54 int type;
55 union {
56 char *ptr;
57 OTHERNAME *otherName; /* otherName */
58 ASN1_IA5STRING *rfc822Name;
59 ASN1_IA5STRING *dNSName;
60 ASN1_TYPE *x400Address;
61 X509_NAME *directoryName;
62 EDIPARTYNAME *ediPartyName;
63 ASN1_IA5STRING *uniformResourceIdentifier;
64 ASN1_OCTET_STRING *iPAddress;
65 ASN1_OBJECT *registeredID;
66
67 /* Old names */
68 ASN1_OCTET_STRING *ip; /* iPAddress */
69 X509_NAME *dirn; /* dirn */
70 ASN1_IA5STRING *ia5; /* rfc822Name, dNSName, */
71 /* uniformResourceIdentifier */
72 ASN1_OBJECT *rid; /* registeredID */
73 ASN1_TYPE *other; /* x400Address */
74 } d;
75 ...;
76 } GENERAL_NAME;
77
78 typedef struct stack_st_GENERAL_NAME GENERAL_NAMES;
79 """
80
81 FUNCTIONS = """
82 void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);
83 X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);
84 int GENERAL_NAME_print(BIO *, GENERAL_NAME *);
85 """
86
87 MACROS = """
88 void *X509V3_set_ctx_nodb(X509V3_CTX *);
89 int sk_GENERAL_NAME_num(struct stack_st_GENERAL_NAME *);
90 int sk_GENERAL_NAME_push(struct stack_st_GENERAL_NAME *, GENERAL_NAME *);
91 GENERAL_NAME *sk_GENERAL_NAME_value(struct stack_st_GENERAL_NAME *, int);
92
93 /* These aren't macros these functions are all const X on openssl > 1.0.x */
94 const X509V3_EXT_METHOD *X509V3_EXT_get(X509_EXTENSION *);
95 const X509V3_EXT_METHOD *X509V3_EXT_get_nid(int);
96 """
97
98 CUSTOMIZATIONS = """
99 """
100
101 CONDITIONAL_NAMES = {}
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cryptography/hazmat/bindings/openssl/x509v3.py b/cryptography/hazmat/bindings/openssl/x509v3.py
--- a/cryptography/hazmat/bindings/openssl/x509v3.py
+++ b/cryptography/hazmat/bindings/openssl/x509v3.py
@@ -82,6 +82,7 @@
void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);
X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);
int GENERAL_NAME_print(BIO *, GENERAL_NAME *);
+void GENERAL_NAMES_free(GENERAL_NAMES *);
"""
MACROS = """
|
{"golden_diff": "diff --git a/cryptography/hazmat/bindings/openssl/x509v3.py b/cryptography/hazmat/bindings/openssl/x509v3.py\n--- a/cryptography/hazmat/bindings/openssl/x509v3.py\n+++ b/cryptography/hazmat/bindings/openssl/x509v3.py\n@@ -82,6 +82,7 @@\n void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);\n X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);\n int GENERAL_NAME_print(BIO *, GENERAL_NAME *);\n+void GENERAL_NAMES_free(GENERAL_NAMES *);\n \"\"\"\n \n MACROS = \"\"\"\n", "issue": "Need binding to void GENERAL_NAMES_free(GENERAL_NAMES *)\nthe function call to d2i methods on the altSubjectName extension returned a dynamicly allocated memory object that must be garbage collected so binding for GENERAL_NAMES_free should be exposed from hazmat so that higher level code can avoid memory leaks. Not sure which module should expose the binding but I used x509v3.py module in the Proposed solution https://github.com/crc32a/cryptography/commit/24df02646de1e5c1773c9048076b5d67d4c5c0fa\n\nthis effects issue https://github.com/pyca/pyopenssl/issues/139 of pyopenssl and an example of its usage to avoid memory leaks is\nhttps://github.com/rackerlabs/pyopenssl/commit/a479a74820619da13dfab8925cf49c4f766b6536\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <openssl/x509v3.h>\n\"\"\"\n\nTYPES = \"\"\"\ntypedef struct {\n X509 *issuer_cert;\n X509 *subject_cert;\n ...;\n} X509V3_CTX;\n\ntypedef void * (*X509V3_EXT_D2I)(void *, const unsigned char **, long);\n\ntypedef struct {\n ASN1_ITEM_EXP *it;\n X509V3_EXT_D2I d2i;\n ...;\n} X509V3_EXT_METHOD;\n\nstatic const int GEN_OTHERNAME;\nstatic const int GEN_EMAIL;\nstatic const int GEN_X400;\nstatic const int GEN_DNS;\nstatic const int GEN_URI;\nstatic const int GEN_DIRNAME;\nstatic const int GEN_EDIPARTY;\nstatic const int GEN_IPADD;\nstatic const int GEN_RID;\n\ntypedef struct {\n ...;\n} OTHERNAME;\n\ntypedef struct {\n ...;\n} EDIPARTYNAME;\n\ntypedef struct {\n int type;\n union {\n char *ptr;\n OTHERNAME *otherName; /* otherName */\n ASN1_IA5STRING *rfc822Name;\n ASN1_IA5STRING *dNSName;\n ASN1_TYPE *x400Address;\n X509_NAME *directoryName;\n EDIPARTYNAME *ediPartyName;\n ASN1_IA5STRING *uniformResourceIdentifier;\n ASN1_OCTET_STRING *iPAddress;\n ASN1_OBJECT *registeredID;\n\n /* Old names */\n ASN1_OCTET_STRING *ip; /* iPAddress */\n X509_NAME *dirn; /* dirn */\n ASN1_IA5STRING *ia5; /* rfc822Name, dNSName, */\n /* uniformResourceIdentifier */\n ASN1_OBJECT *rid; /* registeredID */\n ASN1_TYPE *other; /* x400Address */\n } d;\n ...;\n} GENERAL_NAME;\n\ntypedef struct stack_st_GENERAL_NAME GENERAL_NAMES;\n\"\"\"\n\nFUNCTIONS = \"\"\"\nvoid X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);\nX509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);\nint GENERAL_NAME_print(BIO *, GENERAL_NAME *);\n\"\"\"\n\nMACROS = \"\"\"\nvoid 
*X509V3_set_ctx_nodb(X509V3_CTX *);\nint sk_GENERAL_NAME_num(struct stack_st_GENERAL_NAME *);\nint sk_GENERAL_NAME_push(struct stack_st_GENERAL_NAME *, GENERAL_NAME *);\nGENERAL_NAME *sk_GENERAL_NAME_value(struct stack_st_GENERAL_NAME *, int);\n\n/* These aren't macros these functions are all const X on openssl > 1.0.x */\nconst X509V3_EXT_METHOD *X509V3_EXT_get(X509_EXTENSION *);\nconst X509V3_EXT_METHOD *X509V3_EXT_get_nid(int);\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n\"\"\"\n\nCONDITIONAL_NAMES = {}\n", "path": "cryptography/hazmat/bindings/openssl/x509v3.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <openssl/x509v3.h>\n\"\"\"\n\nTYPES = \"\"\"\ntypedef struct {\n X509 *issuer_cert;\n X509 *subject_cert;\n ...;\n} X509V3_CTX;\n\ntypedef void * (*X509V3_EXT_D2I)(void *, const unsigned char **, long);\n\ntypedef struct {\n ASN1_ITEM_EXP *it;\n X509V3_EXT_D2I d2i;\n ...;\n} X509V3_EXT_METHOD;\n\nstatic const int GEN_OTHERNAME;\nstatic const int GEN_EMAIL;\nstatic const int GEN_X400;\nstatic const int GEN_DNS;\nstatic const int GEN_URI;\nstatic const int GEN_DIRNAME;\nstatic const int GEN_EDIPARTY;\nstatic const int GEN_IPADD;\nstatic const int GEN_RID;\n\ntypedef struct {\n ...;\n} OTHERNAME;\n\ntypedef struct {\n ...;\n} EDIPARTYNAME;\n\ntypedef struct {\n int type;\n union {\n char *ptr;\n OTHERNAME *otherName; /* otherName */\n ASN1_IA5STRING *rfc822Name;\n ASN1_IA5STRING *dNSName;\n ASN1_TYPE *x400Address;\n X509_NAME *directoryName;\n EDIPARTYNAME *ediPartyName;\n ASN1_IA5STRING *uniformResourceIdentifier;\n ASN1_OCTET_STRING *iPAddress;\n ASN1_OBJECT *registeredID;\n\n /* Old names */\n ASN1_OCTET_STRING *ip; /* iPAddress */\n X509_NAME *dirn; /* dirn */\n ASN1_IA5STRING *ia5; /* rfc822Name, dNSName, */\n /* uniformResourceIdentifier */\n ASN1_OBJECT *rid; /* registeredID */\n ASN1_TYPE *other; /* x400Address */\n } d;\n ...;\n} GENERAL_NAME;\n\ntypedef struct stack_st_GENERAL_NAME GENERAL_NAMES;\n\"\"\"\n\nFUNCTIONS = \"\"\"\nvoid X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);\nX509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);\nint GENERAL_NAME_print(BIO *, GENERAL_NAME *);\nvoid GENERAL_NAMES_free(GENERAL_NAMES *);\n\"\"\"\n\nMACROS = \"\"\"\nvoid *X509V3_set_ctx_nodb(X509V3_CTX *);\nint sk_GENERAL_NAME_num(struct stack_st_GENERAL_NAME *);\nint sk_GENERAL_NAME_push(struct stack_st_GENERAL_NAME *, GENERAL_NAME *);\nGENERAL_NAME *sk_GENERAL_NAME_value(struct stack_st_GENERAL_NAME *, int);\n\n/* These aren't macros these functions are all const X on openssl > 1.0.x */\nconst X509V3_EXT_METHOD *X509V3_EXT_get(X509_EXTENSION *);\nconst X509V3_EXT_METHOD *X509V3_EXT_get_nid(int);\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n\"\"\"\n\nCONDITIONAL_NAMES = {}\n", "path": "cryptography/hazmat/bindings/openssl/x509v3.py"}]}
| 1,493 | 186 |
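The one-line binding added above only exposes the prototype; the point of the issue is that whoever calls the extension's `d2i` method owns the returned `GENERAL_NAMES*` and must release it. A rough usage sketch with the cffi bindings follows: the `names` argument is assumed to be a pointer obtained from such a `d2i` call (for example while decoding a subjectAltName extension), and the helper name is hypothetical, not part of the library.

```python
from cryptography.hazmat.bindings.openssl.binding import Binding

_lib = Binding().lib


def general_name_types(names):
    """Collect the `type` tag of each entry in a GENERAL_NAMES* stack, then free it.

    `names` is assumed to come from an OpenSSL d2i call; without the trailing
    GENERAL_NAMES_free the d2i-allocated stack leaks, which is exactly what the
    new binding lets callers avoid.
    """
    try:
        return [
            _lib.sk_GENERAL_NAME_value(names, i).type
            for i in range(_lib.sk_GENERAL_NAME_num(names))
        ]
    finally:
        _lib.GENERAL_NAMES_free(names)
```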
gh_patches_debug_5324
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-968
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEAT][CV] Add conditions to checks missing conditions
Some checks are missing conditions:
- [x] Heatmap
- [x] Image Drift
- [x] Train Test Drift
- [x] Robustness
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/vision/suites/default_suites.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """Functions for loading the default (built-in) vision suites for various validation stages.
12
13 Each function returns a new suite that is initialized with a list of checks and default conditions.
14 It is possible to customize these suites by editing the checks and conditions inside it after the suites' creation.
15 """
16 from deepchecks.vision.checks import ClassPerformance, TrainTestLabelDrift, MeanAveragePrecisionReport, \
17 MeanAverageRecallReport, ImagePropertyDrift, ImageDatasetDrift, SimpleModelComparison, ConfusionMatrixReport, \
18 RobustnessReport, TrainTestPredictionDrift
19 from deepchecks.vision import Suite
20
21
22 __all__ = ['train_test_validation', 'model_evaluation', 'full_suite']
23
24 from deepchecks.vision.checks.distribution import HeatmapComparison
25
26
27 def train_test_validation() -> Suite:
28 """Create a suite that is meant to validate correctness of train-test split, including integrity, \
29 distribution and leakage checks."""
30 return Suite(
31 'Train Test Validation Suite',
32 HeatmapComparison(),
33 TrainTestLabelDrift(),
34 TrainTestPredictionDrift(),
35 ImagePropertyDrift().add_condition_drift_score_not_greater_than(),
36 ImageDatasetDrift()
37 )
38
39
40 def model_evaluation() -> Suite:
41 """Create a suite that is meant to test model performance and overfit."""
42 return Suite(
43 'Model Evaluation Suite',
44 ClassPerformance(),
45 MeanAveragePrecisionReport(),
46 MeanAverageRecallReport(),
47 SimpleModelComparison(),
48 ConfusionMatrixReport(),
49 RobustnessReport().add_condition_degradation_not_greater_than()
50 )
51
52
53 def full_suite() -> Suite:
54 """Create a suite that includes many of the implemented checks, for a quick overview of your model and data."""
55 return Suite(
56 'Full Suite',
57 model_evaluation(),
58 train_test_validation(),
59 )
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepchecks/vision/suites/default_suites.py b/deepchecks/vision/suites/default_suites.py
--- a/deepchecks/vision/suites/default_suites.py
+++ b/deepchecks/vision/suites/default_suites.py
@@ -31,7 +31,7 @@
'Train Test Validation Suite',
HeatmapComparison(),
TrainTestLabelDrift(),
- TrainTestPredictionDrift(),
+ TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),
ImagePropertyDrift().add_condition_drift_score_not_greater_than(),
ImageDatasetDrift()
)
|
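For orientation, the golden diff above only chains the built-in default condition onto the prediction-drift check, so drift is reported as a pass/fail condition rather than a bare result. A minimal sketch of the patched behaviour, using only names that appear in the file and diff above (the suite name is a made-up placeholder, and `suite.run(...)` is left out because its exact signature depends on the deepchecks version):
```python
# Illustrative sketch only -- mirrors the change in the golden diff; not part of the dataset row.
from deepchecks.vision import Suite
from deepchecks.vision.checks import TrainTestLabelDrift, TrainTestPredictionDrift

# After the patch, TrainTestPredictionDrift carries the same default threshold
# condition that ImagePropertyDrift already had in the original suite.
suite = Suite(
    'Drift Checks Only',  # hypothetical suite name
    TrainTestLabelDrift(),
    TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),
)
# Running the suite on a train/test pair would now flag excessive prediction drift
# as a failed condition instead of leaving the check unconditioned.
```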
{"golden_diff": "diff --git a/deepchecks/vision/suites/default_suites.py b/deepchecks/vision/suites/default_suites.py\n--- a/deepchecks/vision/suites/default_suites.py\n+++ b/deepchecks/vision/suites/default_suites.py\n@@ -31,7 +31,7 @@\n 'Train Test Validation Suite',\n HeatmapComparison(),\n TrainTestLabelDrift(),\n- TrainTestPredictionDrift(),\n+ TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),\n ImagePropertyDrift().add_condition_drift_score_not_greater_than(),\n ImageDatasetDrift()\n )\n", "issue": "[FEAT][CV] Add conditions to checks missing conditions\nSome checks are missing conditions:\r\n\r\n- [x] Heatmap\r\n- [x] Image Drift\r\n- [x] Train Test Drift\r\n- [x] Robustness \n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Functions for loading the default (built-in) vision suites for various validation stages.\n\nEach function returns a new suite that is initialized with a list of checks and default conditions.\nIt is possible to customize these suites by editing the checks and conditions inside it after the suites' creation.\n\"\"\"\nfrom deepchecks.vision.checks import ClassPerformance, TrainTestLabelDrift, MeanAveragePrecisionReport, \\\n MeanAverageRecallReport, ImagePropertyDrift, ImageDatasetDrift, SimpleModelComparison, ConfusionMatrixReport, \\\n RobustnessReport, TrainTestPredictionDrift\nfrom deepchecks.vision import Suite\n\n\n__all__ = ['train_test_validation', 'model_evaluation', 'full_suite']\n\nfrom deepchecks.vision.checks.distribution import HeatmapComparison\n\n\ndef train_test_validation() -> Suite:\n \"\"\"Create a suite that is meant to validate correctness of train-test split, including integrity, \\\n distribution and leakage checks.\"\"\"\n return Suite(\n 'Train Test Validation Suite',\n HeatmapComparison(),\n TrainTestLabelDrift(),\n TrainTestPredictionDrift(),\n ImagePropertyDrift().add_condition_drift_score_not_greater_than(),\n ImageDatasetDrift()\n )\n\n\ndef model_evaluation() -> Suite:\n \"\"\"Create a suite that is meant to test model performance and overfit.\"\"\"\n return Suite(\n 'Model Evaluation Suite',\n ClassPerformance(),\n MeanAveragePrecisionReport(),\n MeanAverageRecallReport(),\n SimpleModelComparison(),\n ConfusionMatrixReport(),\n RobustnessReport().add_condition_degradation_not_greater_than()\n )\n\n\ndef full_suite() -> Suite:\n \"\"\"Create a suite that includes many of the implemented checks, for a quick overview of your model and data.\"\"\"\n return Suite(\n 'Full Suite',\n model_evaluation(),\n train_test_validation(),\n )\n", "path": "deepchecks/vision/suites/default_suites.py"}], "after_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with 
Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Functions for loading the default (built-in) vision suites for various validation stages.\n\nEach function returns a new suite that is initialized with a list of checks and default conditions.\nIt is possible to customize these suites by editing the checks and conditions inside it after the suites' creation.\n\"\"\"\nfrom deepchecks.vision.checks import ClassPerformance, TrainTestLabelDrift, MeanAveragePrecisionReport, \\\n MeanAverageRecallReport, ImagePropertyDrift, ImageDatasetDrift, SimpleModelComparison, ConfusionMatrixReport, \\\n RobustnessReport, TrainTestPredictionDrift\nfrom deepchecks.vision import Suite\n\n\n__all__ = ['train_test_validation', 'model_evaluation', 'full_suite']\n\nfrom deepchecks.vision.checks.distribution import HeatmapComparison\n\n\ndef train_test_validation() -> Suite:\n \"\"\"Create a suite that is meant to validate correctness of train-test split, including integrity, \\\n distribution and leakage checks.\"\"\"\n return Suite(\n 'Train Test Validation Suite',\n HeatmapComparison(),\n TrainTestLabelDrift(),\n TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),\n ImagePropertyDrift().add_condition_drift_score_not_greater_than(),\n ImageDatasetDrift()\n )\n\n\ndef model_evaluation() -> Suite:\n \"\"\"Create a suite that is meant to test model performance and overfit.\"\"\"\n return Suite(\n 'Model Evaluation Suite',\n ClassPerformance(),\n MeanAveragePrecisionReport(),\n MeanAverageRecallReport(),\n SimpleModelComparison(),\n ConfusionMatrixReport(),\n RobustnessReport().add_condition_degradation_not_greater_than()\n )\n\n\ndef full_suite() -> Suite:\n \"\"\"Create a suite that includes many of the implemented checks, for a quick overview of your model and data.\"\"\"\n return Suite(\n 'Full Suite',\n model_evaluation(),\n train_test_validation(),\n )\n", "path": "deepchecks/vision/suites/default_suites.py"}]}
| 908 | 143 |
gh_patches_debug_35089
|
rasdani/github-patches
|
git_diff
|
aio-libs__aiohttp-2237
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'NoneType' object has no attribute 'errno'
## Long story short
Trying to resolve a domain which is an alias for another one, which does not have an A or CNAME record, raises AttributeError: 'NoneType' object has no attribute 'errno'
## Expected behaviour
Raise an error correctly, socket.gaierror probably.
## Actual behaviour
```
Traceback (most recent call last):
File "xtest.py", line 16, in <module>
process()
File "/usr/lib/python3.6/asyncio/base_events.py", line 449, in run_until_complete
return future.result()
File "/usr/lib/python3.6/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/myenv/lib/python3.6/site-packages/aiohttp/helpers.py", line 72, in send
return self._coro.send(arg)
File "/myenv/lib/python3.6/site-packages/aiohttp/client.py", line 233, in _request
conn = yield from self._connector.connect(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 378, in connect
proto = yield from self._create_connection(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 687, in _create_connection
_, proto = yield from self._create_direct_connection(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 735, in _create_direct_connection
exc.errno,
AttributeError: 'NoneType' object has no attribute 'errno'
```
## Steps to reproduce
This script will reproduce the error.
```
import asyncio
import aiohttp
from aiohttp.resolver import AsyncResolver
def process():
url = 'http://esly.win/'
resolver = AsyncResolver()
conn = aiohttp.TCPConnector(resolver=resolver, verify_ssl=False)
session = aiohttp.ClientSession(connector=conn)
return session.get(url)
loop = asyncio.get_event_loop()
loop.run_until_complete(
process()
)
```
If I use the session without setting the connector it first raises a socket.gaierror but then
> During handling of the above exception, another exception occurred...
And the same traceback appears.
## Your environment
Python 3.6.0b2
Ubuntu 10.10
aiohttp==2.2.5
Also happens with aiohttp==2.3.0a0 (installed from git on 29/Aug/2017)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/resolver.py`
Content:
```
1 import asyncio
2 import socket
3
4 from .abc import AbstractResolver
5
6
7 __all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')
8
9 try:
10 import aiodns
11 # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
12 except ImportError: # pragma: no cover
13 aiodns = None
14
15 aiodns_default = False
16
17
18 class ThreadedResolver(AbstractResolver):
19 """Use Executor for synchronous getaddrinfo() calls, which defaults to
20 concurrent.futures.ThreadPoolExecutor.
21 """
22
23 def __init__(self, loop=None):
24 if loop is None:
25 loop = asyncio.get_event_loop()
26 self._loop = loop
27
28 @asyncio.coroutine
29 def resolve(self, host, port=0, family=socket.AF_INET):
30 infos = yield from self._loop.getaddrinfo(
31 host, port, type=socket.SOCK_STREAM, family=family)
32
33 hosts = []
34 for family, _, proto, _, address in infos:
35 hosts.append(
36 {'hostname': host,
37 'host': address[0], 'port': address[1],
38 'family': family, 'proto': proto,
39 'flags': socket.AI_NUMERICHOST})
40
41 return hosts
42
43 @asyncio.coroutine
44 def close(self):
45 pass
46
47
48 class AsyncResolver(AbstractResolver):
49 """Use the `aiodns` package to make asynchronous DNS lookups"""
50
51 def __init__(self, loop=None, *args, **kwargs):
52 if loop is None:
53 loop = asyncio.get_event_loop()
54
55 if aiodns is None:
56 raise RuntimeError("Resolver requires aiodns library")
57
58 self._loop = loop
59 self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)
60
61 if not hasattr(self._resolver, 'gethostbyname'):
62 # aiodns 1.1 is not available, fallback to DNSResolver.query
63 self.resolve = self.resolve_with_query
64
65 @asyncio.coroutine
66 def resolve(self, host, port=0, family=socket.AF_INET):
67 hosts = []
68 resp = yield from self._resolver.gethostbyname(host, family)
69
70 for address in resp.addresses:
71 hosts.append(
72 {'hostname': host,
73 'host': address, 'port': port,
74 'family': family, 'proto': 0,
75 'flags': socket.AI_NUMERICHOST})
76 return hosts
77
78 @asyncio.coroutine
79 def resolve_with_query(self, host, port=0, family=socket.AF_INET):
80 if family == socket.AF_INET6:
81 qtype = 'AAAA'
82 else:
83 qtype = 'A'
84
85 hosts = []
86 resp = yield from self._resolver.query(host, qtype)
87
88 for rr in resp:
89 hosts.append(
90 {'hostname': host,
91 'host': rr.host, 'port': port,
92 'family': family, 'proto': 0,
93 'flags': socket.AI_NUMERICHOST})
94
95 return hosts
96
97 @asyncio.coroutine
98 def close(self):
99 return self._resolver.cancel()
100
101
102 DefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py
--- a/aiohttp/resolver.py
+++ b/aiohttp/resolver.py
@@ -60,31 +60,42 @@
if not hasattr(self._resolver, 'gethostbyname'):
# aiodns 1.1 is not available, fallback to DNSResolver.query
- self.resolve = self.resolve_with_query
+ self.resolve = self._resolve_with_query
@asyncio.coroutine
def resolve(self, host, port=0, family=socket.AF_INET):
+ try:
+ resp = yield from self._resolver.gethostbyname(host, family)
+ except aiodns.error.DNSError as exc:
+ msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
+ raise OSError(msg) from exc
hosts = []
- resp = yield from self._resolver.gethostbyname(host, family)
-
for address in resp.addresses:
hosts.append(
{'hostname': host,
'host': address, 'port': port,
'family': family, 'proto': 0,
'flags': socket.AI_NUMERICHOST})
+
+ if not hosts:
+ raise OSError("DNS lookup failed")
+
return hosts
@asyncio.coroutine
- def resolve_with_query(self, host, port=0, family=socket.AF_INET):
+ def _resolve_with_query(self, host, port=0, family=socket.AF_INET):
if family == socket.AF_INET6:
qtype = 'AAAA'
else:
qtype = 'A'
- hosts = []
- resp = yield from self._resolver.query(host, qtype)
+ try:
+ resp = yield from self._resolver.query(host, qtype)
+ except aiodns.error.DNSError as exc:
+ msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
+ raise OSError(msg) from exc
+ hosts = []
for rr in resp:
hosts.append(
{'hostname': host,
@@ -92,6 +103,9 @@
'family': family, 'proto': 0,
'flags': socket.AI_NUMERICHOST})
+ if not hosts:
+ raise OSError("DNS lookup failed")
+
return hosts
@asyncio.coroutine
|
{"golden_diff": "diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py\n--- a/aiohttp/resolver.py\n+++ b/aiohttp/resolver.py\n@@ -60,31 +60,42 @@\n \n if not hasattr(self._resolver, 'gethostbyname'):\n # aiodns 1.1 is not available, fallback to DNSResolver.query\n- self.resolve = self.resolve_with_query\n+ self.resolve = self._resolve_with_query\n \n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n+ try:\n+ resp = yield from self._resolver.gethostbyname(host, family)\n+ except aiodns.error.DNSError as exc:\n+ msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n+ raise OSError(msg) from exc\n hosts = []\n- resp = yield from self._resolver.gethostbyname(host, family)\n-\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n+\n+ if not hosts:\n+ raise OSError(\"DNS lookup failed\")\n+\n return hosts\n \n @asyncio.coroutine\n- def resolve_with_query(self, host, port=0, family=socket.AF_INET):\n+ def _resolve_with_query(self, host, port=0, family=socket.AF_INET):\n if family == socket.AF_INET6:\n qtype = 'AAAA'\n else:\n qtype = 'A'\n \n- hosts = []\n- resp = yield from self._resolver.query(host, qtype)\n+ try:\n+ resp = yield from self._resolver.query(host, qtype)\n+ except aiodns.error.DNSError as exc:\n+ msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n+ raise OSError(msg) from exc\n \n+ hosts = []\n for rr in resp:\n hosts.append(\n {'hostname': host,\n@@ -92,6 +103,9 @@\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n \n+ if not hosts:\n+ raise OSError(\"DNS lookup failed\")\n+\n return hosts\n \n @asyncio.coroutine\n", "issue": "AttributeError: 'NoneType' object has no attribute 'errno'\n## Long story short\r\n\r\nTrying to resolve a domain which is an alias for another one, which does not have an A or CNAME record, raises AttributeError: 'NoneType' object has no attribute 'errno'\r\n\r\n## Expected behaviour\r\n\r\nRaise an error correctly, socket.gaierror probably.\r\n\r\n## Actual behaviour\r\n\r\n```Traceback (most recent call last):\r\n File \"xtest.py\", line 16, in <module>\r\n process()\r\n File \"/usr/lib/python3.6/asyncio/base_events.py\", line 449, in run_until_complete\r\n return future.result()\r\n File \"/usr/lib/python3.6/asyncio/tasks.py\", line 239, in _step\r\n result = coro.send(None)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/helpers.py\", line 72, in send\r\n return self._coro.send(arg)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/client.py\", line 233, in _request\r\n conn = yield from self._connector.connect(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 378, in connect\r\n proto = yield from self._create_connection(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 687, in _create_connection\r\n _, proto = yield from self._create_direct_connection(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 735, in _create_direct_connection\r\n exc.errno,\r\nAttributeError: 'NoneType' object has no attribute 'errno'\r\n```\r\n\r\n## Steps to reproduce\r\n\r\nThis script will reproduce the error.\r\n\r\n```\r\nimport asyncio\r\nimport aiohttp\r\nfrom aiohttp.resolver import AsyncResolver\r\n\r\ndef process():\r\n url = 'http://esly.win/'\r\n resolver = AsyncResolver()\r\n conn = aiohttp.TCPConnector(resolver=resolver, verify_ssl=False)\r\n session = 
aiohttp.ClientSession(connector=conn)\r\n return session.get(url)\r\n\r\nloop = asyncio.get_event_loop()\r\nloop.run_until_complete(\r\n process()\r\n)\r\n```\r\n\r\nIf I use the session without setting the connector it first raises a socket.gaierror but then \r\n> During handling of the above exception, another exception occurred...\r\n\r\nAnd the same traceback appears.\r\n\r\n## Your environment\r\nPython 3.6.0b2\r\nUbuntu 10.10\r\naiohttp==2.2,5 \r\nAlso happens with aiohttp==2.3.0a0 (installed from git on 29/Aug/2017)\n", "before_files": [{"content": "import asyncio\nimport socket\n\nfrom .abc import AbstractResolver\n\n\n__all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')\n\ntry:\n import aiodns\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self, loop=None):\n if loop is None:\n loop = asyncio.get_event_loop()\n self._loop = loop\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n infos = yield from self._loop.getaddrinfo(\n host, port, type=socket.SOCK_STREAM, family=family)\n\n hosts = []\n for family, _, proto, _, address in infos:\n hosts.append(\n {'hostname': host,\n 'host': address[0], 'port': address[1],\n 'family': family, 'proto': proto,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, loop=None, *args, **kwargs):\n if loop is None:\n loop = asyncio.get_event_loop()\n\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = loop\n self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)\n\n if not hasattr(self._resolver, 'gethostbyname'):\n # aiodns 1.1 is not available, fallback to DNSResolver.query\n self.resolve = self.resolve_with_query\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n hosts = []\n resp = yield from self._resolver.gethostbyname(host, family)\n\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n return hosts\n\n @asyncio.coroutine\n def resolve_with_query(self, host, port=0, family=socket.AF_INET):\n if family == socket.AF_INET6:\n qtype = 'AAAA'\n else:\n qtype = 'A'\n\n hosts = []\n resp = yield from self._resolver.query(host, qtype)\n\n for rr in resp:\n hosts.append(\n {'hostname': host,\n 'host': rr.host, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}], "after_files": [{"content": "import asyncio\nimport socket\n\nfrom .abc import AbstractResolver\n\n\n__all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')\n\ntry:\n import aiodns\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which 
defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self, loop=None):\n if loop is None:\n loop = asyncio.get_event_loop()\n self._loop = loop\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n infos = yield from self._loop.getaddrinfo(\n host, port, type=socket.SOCK_STREAM, family=family)\n\n hosts = []\n for family, _, proto, _, address in infos:\n hosts.append(\n {'hostname': host,\n 'host': address[0], 'port': address[1],\n 'family': family, 'proto': proto,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, loop=None, *args, **kwargs):\n if loop is None:\n loop = asyncio.get_event_loop()\n\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = loop\n self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)\n\n if not hasattr(self._resolver, 'gethostbyname'):\n # aiodns 1.1 is not available, fallback to DNSResolver.query\n self.resolve = self._resolve_with_query\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n try:\n resp = yield from self._resolver.gethostbyname(host, family)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n @asyncio.coroutine\n def _resolve_with_query(self, host, port=0, family=socket.AF_INET):\n if family == socket.AF_INET6:\n qtype = 'AAAA'\n else:\n qtype = 'A'\n\n try:\n resp = yield from self._resolver.query(host, qtype)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n\n hosts = []\n for rr in resp:\n hosts.append(\n {'hostname': host,\n 'host': rr.host, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}]}
| 1,740 | 525 |