problem_id (stringlengths, 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths, 13-58) | prompt (stringlengths, 1.71k-18.9k) | golden_diff (stringlengths, 145-5.13k) | verification_info (stringlengths, 465-23.6k) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_15363 | rasdani/github-patches | git_diff | ansible__awx-8496 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
awx.awx.tower not working with null groups
##### ISSUE TYPE
- Bug Report
##### ENVIRONMENT
* AWX version: 15.0.0
* AWX install method: docker on linux
* Ansible version: 2.9.13
* Operating System: macOS
##### STEPS TO REPRODUCE
Try to parse the inventory from the CLI when some groups do not contain any hosts
```
ansible-inventory -i inventory_tower/test_awx_tower.yml --graph
[WARNING]: You are running collection version 15.0.1 but connecting to tower version 15.0.0
@all:
|--@test_group1:
| |--@test_group2:
| | |--test_host_1
| |--test_host_1
|--@ungrouped:
```
If test_host_1 is removed from test_group2:
```
ansible-inventory -i inventory_tower/test_awx_tower.yml --graph
[WARNING]: You are running collection version 15.0.1 but connecting to tower version 15.0.0
[WARNING]: * Failed to parse /Users/meonacist/playbook/inventory_tower/test_awx_tower.yml with auto plugin: test_group2 is not a known host nor group
[WARNING]: * Failed to parse /Users/meonacist/playbook/inventory_tower/test_awx_tower.yml with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse /Users/meonacist/playbook/inventory_tower/test_awx_tower.yml with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed, this
character is reserved to provide a port.
[WARNING]: Unable to parse /Users/meonacist/playbook/inventory_tower/test_awx_tower.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
@all:
|--@ungrouped:
```
cat inventory_tower/test_awx_tower.yml
```
plugin: awx.awx.tower
host: awx.local.lan
inventory_id: 58
validate_certs: no
include_metadata: yes
```
<!-- Please describe exactly how to reproduce the problem. -->
##### EXPECTED RESULTS
```
ansible-inventory -i inventory_tower/test_awx_tower.yml --graph
[WARNING]: You are running collection version 15.0.1 but connecting to tower version 15.0.0
@all:
|--@test_group1:
| |--@test_group2:
| |--test_host_1
|--@ungrouped:
```
</issue>
<code>
[start of awx_collection/plugins/inventory/tower.py]
1 # Copyright (c) 2018 Ansible Project
2 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
3
4 from __future__ import (absolute_import, division, print_function)
5
6 __metaclass__ = type
7
8 DOCUMENTATION = '''
9 name: tower
10 plugin_type: inventory
11 author:
12 - Matthew Jones (@matburt)
13 - Yunfan Zhang (@YunfanZhang42)
14 short_description: Ansible dynamic inventory plugin for Ansible Tower.
15 description:
16 - Reads inventories from Ansible Tower.
17 - Supports reading configuration from both YAML config file and environment variables.
18 - If reading from the YAML file, the file name must end with tower.(yml|yaml) or tower_inventory.(yml|yaml),
19 the path in the command would be /path/to/tower_inventory.(yml|yaml). If some arguments in the config file
20 are missing, this plugin will try to fill in missing arguments by reading from environment variables.
21 - If reading configurations from environment variables, the path in the command must be @tower_inventory.
22 extends_documentation_fragment: awx.awx.auth_plugin
23 options:
24 inventory_id:
25 description:
26 - The ID of the Ansible Tower inventory that you wish to import.
27 - This is allowed to be either the inventory primary key or its named URL slug.
28 - Primary key values will be accepted as strings or integers, and URL slugs must be strings.
29 - Named URL slugs follow the syntax of "inventory_name++organization_name".
30 type: raw
31 env:
32 - name: TOWER_INVENTORY
33 required: True
34 include_metadata:
35 description: Make extra requests to provide all group vars with metadata about the source Ansible Tower host.
36 type: bool
37 default: False
38 '''
39
40 EXAMPLES = '''
41 # Before you execute the following commands, you should make sure this file is in your plugin path,
42 # and you enabled this plugin.
43
44 # Example for using tower_inventory.yml file
45
46 plugin: awx.awx.tower
47 host: your_ansible_tower_server_network_address
48 username: your_ansible_tower_username
49 password: your_ansible_tower_password
50 inventory_id: the_ID_of_targeted_ansible_tower_inventory
51 # Then you can run the following command.
52 # If some of the arguments are missing, Ansible will attempt to read them from environment variables.
53 # ansible-inventory -i /path/to/tower_inventory.yml --list
54
55 # Example for reading from environment variables:
56
57 # Set environment variables:
58 # export TOWER_HOST=YOUR_TOWER_HOST_ADDRESS
59 # export TOWER_USERNAME=YOUR_TOWER_USERNAME
60 # export TOWER_PASSWORD=YOUR_TOWER_PASSWORD
61 # export TOWER_INVENTORY=THE_ID_OF_TARGETED_INVENTORY
62 # Read the inventory specified in TOWER_INVENTORY from Ansible Tower, and list them.
63 # The inventory path must always be @tower_inventory if you are reading all settings from environment variables.
64 # ansible-inventory -i @tower_inventory --list
65 '''
66
67 import os
68
69 from ansible.module_utils import six
70 from ansible.module_utils._text import to_text, to_native
71 from ansible.errors import AnsibleParserError, AnsibleOptionsError
72 from ansible.plugins.inventory import BaseInventoryPlugin
73 from ansible.config.manager import ensure_type
74
75 from ..module_utils.tower_api import TowerAPIModule
76
77
78 def handle_error(**kwargs):
79 raise AnsibleParserError(to_native(kwargs.get('msg')))
80
81
82 class InventoryModule(BaseInventoryPlugin):
83 NAME = 'awx.awx.tower' # REPLACE
84 # Stays backward compatible with tower inventory script.
85 # If the user supplies '@tower_inventory' as path, the plugin will read from environment variables.
86 no_config_file_supplied = False
87
88 def verify_file(self, path):
89 if path.endswith('@tower_inventory'):
90 self.no_config_file_supplied = True
91 return True
92 elif super(InventoryModule, self).verify_file(path):
93 return path.endswith(('tower_inventory.yml', 'tower_inventory.yaml', 'tower.yml', 'tower.yaml'))
94 else:
95 return False
96
97 def warn_callback(self, warning):
98 self.display.warning(warning)
99
100 def parse(self, inventory, loader, path, cache=True):
101 super(InventoryModule, self).parse(inventory, loader, path)
102 if not self.no_config_file_supplied and os.path.isfile(path):
103 self._read_config_data(path)
104
105 # Defer processing of params to logic shared with the modules
106 module_params = {}
107 for plugin_param, module_param in TowerAPIModule.short_params.items():
108 opt_val = self.get_option(plugin_param)
109 if opt_val is not None:
110 module_params[module_param] = opt_val
111
112 module = TowerAPIModule(
113 argument_spec={}, direct_params=module_params,
114 error_callback=handle_error, warn_callback=self.warn_callback
115 )
116
117 # validate type of inventory_id because we allow two types as special case
118 inventory_id = self.get_option('inventory_id')
119 if isinstance(inventory_id, int):
120 inventory_id = to_text(inventory_id, nonstring='simplerepr')
121 else:
122 try:
123 inventory_id = ensure_type(inventory_id, 'str')
124 except ValueError as e:
125 raise AnsibleOptionsError(
126 'Invalid type for configuration option inventory_id, '
127 'not integer, and cannot convert to string: {err}'.format(err=to_native(e))
128 )
129 inventory_id = inventory_id.replace('/', '')
130 inventory_url = '/api/v2/inventories/{inv_id}/script/'.format(inv_id=inventory_id)
131
132 inventory = module.get_endpoint(
133 inventory_url, data={'hostvars': '1', 'towervars': '1', 'all': '1'}
134 )['json']
135
136 # To start with, create all the groups.
137 for group_name in inventory:
138 if group_name != '_meta':
139 self.inventory.add_group(group_name)
140
141 # Then, create all hosts and add the host vars.
142 all_hosts = inventory['_meta']['hostvars']
143 for host_name, host_vars in six.iteritems(all_hosts):
144 self.inventory.add_host(host_name)
145 for var_name, var_value in six.iteritems(host_vars):
146 self.inventory.set_variable(host_name, var_name, var_value)
147
148 # Lastly, create to group-host and group-group relationships, and set group vars.
149 for group_name, group_content in six.iteritems(inventory):
150 if group_name != 'all' and group_name != '_meta':
151 # First add hosts to groups
152 for host_name in group_content.get('hosts', []):
153 self.inventory.add_host(host_name, group_name)
154 # Then add the parent-children group relationships.
155 for child_group_name in group_content.get('children', []):
156 self.inventory.add_child(group_name, child_group_name)
157 # Set the group vars. Note we should set group var for 'all', but not '_meta'.
158 if group_name != '_meta':
159 for var_name, var_value in six.iteritems(group_content.get('vars', {})):
160 self.inventory.set_variable(group_name, var_name, var_value)
161
162 # Fetch extra variables if told to do so
163 if self.get_option('include_metadata'):
164
165 config_data = module.get_endpoint('/api/v2/config/')['json']
166
167 server_data = {}
168 server_data['license_type'] = config_data.get('license_info', {}).get('license_type', 'unknown')
169 for key in ('version', 'ansible_version'):
170 server_data[key] = config_data.get(key, 'unknown')
171 self.inventory.set_variable('all', 'tower_metadata', server_data)
172
173 # Clean up the inventory.
174 self.inventory.reconcile_inventory()
175
[end of awx_collection/plugins/inventory/tower.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/awx_collection/plugins/inventory/tower.py b/awx_collection/plugins/inventory/tower.py
--- a/awx_collection/plugins/inventory/tower.py
+++ b/awx_collection/plugins/inventory/tower.py
@@ -153,6 +153,8 @@
self.inventory.add_host(host_name, group_name)
# Then add the parent-children group relationships.
for child_group_name in group_content.get('children', []):
+ # add the child group to groups, if its already there it will just throw a warning
+ self.inventory.add_group(child_group_name)
self.inventory.add_child(group_name, child_group_name)
# Set the group vars. Note we should set group var for 'all', but not '_meta'.
if group_name != '_meta':
| {"golden_diff": "diff --git a/awx_collection/plugins/inventory/tower.py b/awx_collection/plugins/inventory/tower.py\n--- a/awx_collection/plugins/inventory/tower.py\n+++ b/awx_collection/plugins/inventory/tower.py\n@@ -153,6 +153,8 @@\n self.inventory.add_host(host_name, group_name)\n # Then add the parent-children group relationships.\n for child_group_name in group_content.get('children', []):\n+ # add the child group to groups, if its already there it will just throw a warning\n+ self.inventory.add_group(child_group_name)\n self.inventory.add_child(group_name, child_group_name)\n # Set the group vars. Note we should set group var for 'all', but not '_meta'.\n if group_name != '_meta':\n", "issue": "awx.awx.tower not working with null groups\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### ENVIRONMENT\r\n* AWX version: 15.0.0\r\n* AWX install method: docker on linux\r\n* Ansible version: 2.9.13\r\n* Operating System: macOS\r\n\r\n##### STEPS TO REPRODUCE\r\nTry parse inventory from cli where some groups has no any hosts\r\n ```\r\nansible-inventory -i inventory_tower/test_awx_tower.yml --graph\r\n[WARNING]: You are running collection version 15.0.1 but connecting to tower version 15.0.0\r\n@all:\r\n |--@test_group1:\r\n | |--@test_group2:\r\n | | |--test_host_1\r\n | |--test_host_1\r\n |--@ungrouped:\r\n```\r\nif remove test_host1 from test_group2\r\n```\r\nansible-inventory -i inventory_tower/test_awx_tower.yml --graph\r\n[WARNING]: You are running collection version 15.0.1 but connecting to tower version 15.0.0\r\n[WARNING]: * Failed to parse /Users/meonacist/playbook/inventory_tower/test_awx_tower.yml with auto plugin: test_group2 is not a known host nor group\r\n[WARNING]: * Failed to parse /Users/meonacist/playbook/inventory_tower/test_awx_tower.yml with yaml plugin: Plugin configuration YAML file, not YAML inventory\r\n[WARNING]: * Failed to parse /Users/meonacist/playbook/inventory_tower/test_awx_tower.yml with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed, this\r\ncharacter is reserved to provide a port.\r\n[WARNING]: Unable to parse /Users/meonacist/playbook/inventory_tower/test_awx_tower.yml as an inventory source\r\n[WARNING]: No inventory was parsed, only implicit localhost is available\r\n@all:\r\n |--@ungrouped:\r\n```\r\ncat inventory_tower/test_awx_tower.yml\r\n```\r\nplugin: awx.awx.tower\r\nhost: awx.local.lan\r\ninventory_id: 58\r\nvalidate_certs: no\r\ninclude_metadata: yes\r\n```\r\n\r\n<!-- Please describe exactly how to reproduce the problem. 
-->\r\n\r\n##### EXPECTED RESULTS\r\n ```\r\nansible-inventory -i inventory_tower/test_awx_tower.yml --graph\r\n[WARNING]: You are running collection version 15.0.1 but connecting to tower version 15.0.0\r\n@all:\r\n |--@test_group1:\r\n | |--@test_group2:\r\n | |--test_host_1\r\n |--@ungrouped:\r\n```\r\n\n", "before_files": [{"content": "# Copyright (c) 2018 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import (absolute_import, division, print_function)\n\n__metaclass__ = type\n\nDOCUMENTATION = '''\nname: tower\nplugin_type: inventory\nauthor:\n - Matthew Jones (@matburt)\n - Yunfan Zhang (@YunfanZhang42)\nshort_description: Ansible dynamic inventory plugin for Ansible Tower.\ndescription:\n - Reads inventories from Ansible Tower.\n - Supports reading configuration from both YAML config file and environment variables.\n - If reading from the YAML file, the file name must end with tower.(yml|yaml) or tower_inventory.(yml|yaml),\n the path in the command would be /path/to/tower_inventory.(yml|yaml). If some arguments in the config file\n are missing, this plugin will try to fill in missing arguments by reading from environment variables.\n - If reading configurations from environment variables, the path in the command must be @tower_inventory.\nextends_documentation_fragment: awx.awx.auth_plugin\noptions:\n inventory_id:\n description:\n - The ID of the Ansible Tower inventory that you wish to import.\n - This is allowed to be either the inventory primary key or its named URL slug.\n - Primary key values will be accepted as strings or integers, and URL slugs must be strings.\n - Named URL slugs follow the syntax of \"inventory_name++organization_name\".\n type: raw\n env:\n - name: TOWER_INVENTORY\n required: True\n include_metadata:\n description: Make extra requests to provide all group vars with metadata about the source Ansible Tower host.\n type: bool\n default: False\n'''\n\nEXAMPLES = '''\n# Before you execute the following commands, you should make sure this file is in your plugin path,\n# and you enabled this plugin.\n\n# Example for using tower_inventory.yml file\n\nplugin: awx.awx.tower\nhost: your_ansible_tower_server_network_address\nusername: your_ansible_tower_username\npassword: your_ansible_tower_password\ninventory_id: the_ID_of_targeted_ansible_tower_inventory\n# Then you can run the following command.\n# If some of the arguments are missing, Ansible will attempt to read them from environment variables.\n# ansible-inventory -i /path/to/tower_inventory.yml --list\n\n# Example for reading from environment variables:\n\n# Set environment variables:\n# export TOWER_HOST=YOUR_TOWER_HOST_ADDRESS\n# export TOWER_USERNAME=YOUR_TOWER_USERNAME\n# export TOWER_PASSWORD=YOUR_TOWER_PASSWORD\n# export TOWER_INVENTORY=THE_ID_OF_TARGETED_INVENTORY\n# Read the inventory specified in TOWER_INVENTORY from Ansible Tower, and list them.\n# The inventory path must always be @tower_inventory if you are reading all settings from environment variables.\n# ansible-inventory -i @tower_inventory --list\n'''\n\nimport os\n\nfrom ansible.module_utils import six\nfrom ansible.module_utils._text import to_text, to_native\nfrom ansible.errors import AnsibleParserError, AnsibleOptionsError\nfrom ansible.plugins.inventory import BaseInventoryPlugin\nfrom ansible.config.manager import ensure_type\n\nfrom ..module_utils.tower_api import TowerAPIModule\n\n\ndef handle_error(**kwargs):\n raise 
AnsibleParserError(to_native(kwargs.get('msg')))\n\n\nclass InventoryModule(BaseInventoryPlugin):\n NAME = 'awx.awx.tower' # REPLACE\n # Stays backward compatible with tower inventory script.\n # If the user supplies '@tower_inventory' as path, the plugin will read from environment variables.\n no_config_file_supplied = False\n\n def verify_file(self, path):\n if path.endswith('@tower_inventory'):\n self.no_config_file_supplied = True\n return True\n elif super(InventoryModule, self).verify_file(path):\n return path.endswith(('tower_inventory.yml', 'tower_inventory.yaml', 'tower.yml', 'tower.yaml'))\n else:\n return False\n\n def warn_callback(self, warning):\n self.display.warning(warning)\n\n def parse(self, inventory, loader, path, cache=True):\n super(InventoryModule, self).parse(inventory, loader, path)\n if not self.no_config_file_supplied and os.path.isfile(path):\n self._read_config_data(path)\n\n # Defer processing of params to logic shared with the modules\n module_params = {}\n for plugin_param, module_param in TowerAPIModule.short_params.items():\n opt_val = self.get_option(plugin_param)\n if opt_val is not None:\n module_params[module_param] = opt_val\n\n module = TowerAPIModule(\n argument_spec={}, direct_params=module_params,\n error_callback=handle_error, warn_callback=self.warn_callback\n )\n\n # validate type of inventory_id because we allow two types as special case\n inventory_id = self.get_option('inventory_id')\n if isinstance(inventory_id, int):\n inventory_id = to_text(inventory_id, nonstring='simplerepr')\n else:\n try:\n inventory_id = ensure_type(inventory_id, 'str')\n except ValueError as e:\n raise AnsibleOptionsError(\n 'Invalid type for configuration option inventory_id, '\n 'not integer, and cannot convert to string: {err}'.format(err=to_native(e))\n )\n inventory_id = inventory_id.replace('/', '')\n inventory_url = '/api/v2/inventories/{inv_id}/script/'.format(inv_id=inventory_id)\n\n inventory = module.get_endpoint(\n inventory_url, data={'hostvars': '1', 'towervars': '1', 'all': '1'}\n )['json']\n\n # To start with, create all the groups.\n for group_name in inventory:\n if group_name != '_meta':\n self.inventory.add_group(group_name)\n\n # Then, create all hosts and add the host vars.\n all_hosts = inventory['_meta']['hostvars']\n for host_name, host_vars in six.iteritems(all_hosts):\n self.inventory.add_host(host_name)\n for var_name, var_value in six.iteritems(host_vars):\n self.inventory.set_variable(host_name, var_name, var_value)\n\n # Lastly, create to group-host and group-group relationships, and set group vars.\n for group_name, group_content in six.iteritems(inventory):\n if group_name != 'all' and group_name != '_meta':\n # First add hosts to groups\n for host_name in group_content.get('hosts', []):\n self.inventory.add_host(host_name, group_name)\n # Then add the parent-children group relationships.\n for child_group_name in group_content.get('children', []):\n self.inventory.add_child(group_name, child_group_name)\n # Set the group vars. 
Note we should set group var for 'all', but not '_meta'.\n if group_name != '_meta':\n for var_name, var_value in six.iteritems(group_content.get('vars', {})):\n self.inventory.set_variable(group_name, var_name, var_value)\n\n # Fetch extra variables if told to do so\n if self.get_option('include_metadata'):\n\n config_data = module.get_endpoint('/api/v2/config/')['json']\n\n server_data = {}\n server_data['license_type'] = config_data.get('license_info', {}).get('license_type', 'unknown')\n for key in ('version', 'ansible_version'):\n server_data[key] = config_data.get(key, 'unknown')\n self.inventory.set_variable('all', 'tower_metadata', server_data)\n\n # Clean up the inventory.\n self.inventory.reconcile_inventory()\n", "path": "awx_collection/plugins/inventory/tower.py"}]} | 3,187 | 175 |
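The golden diff in this row amounts to registering each child group before linking it to its parent, so that groups without hosts are still known to the inventory. A minimal sketch of that pattern (hypothetical helper; `inventory` stands in for the inventory object Ansible passes to inventory plugins, which provides `add_group`/`add_child`):

```
def link_child_groups(inventory, group_name, group_content):
    for child_group_name in group_content.get('children', []):
        # Register the child group first so that empty groups do not trigger
        # "X is not a known host nor group" during parsing.
        inventory.add_group(child_group_name)
        inventory.add_child(group_name, child_group_name)
```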
gh_patches_debug_53783 | rasdani/github-patches | git_diff | pypa__pipenv-5778 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Requirements output different since 2023.7.1 causing pip install issues
### Issue description
The output of `pipenv requirements --hash` has changed slightly in `2023.7.1` (#5757) and `pip` appears to be sensitive to it in some scenarios, causing `pip` to be unable to install the package(s) from the generated requirements.txt.
Snippet of requirements.txt generated with `2023.6.26`
```
pyzip==0.2.0 ; python_version >= '3.1' --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298
```
Snippet of requirements.txt generated with `2023.7.1` - The hash is now before the marker
```
pyzip==0.2.0 --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298; python_version >= '3.1'
```
### Expected result
- `2023.7.1` generates a requirements.txt as per `2023.6.26`
### Actual result
- `2023.7.1` generates a slightly different requirements.txt
### Steps to replicate
Pip successfully installs the package with the `2023.6.26` requirements.txt:
```
$ pipenv run pip --version
pip 23.1.2
$ cat requirements_2023.6.26.txt
pyzip==0.2.0 ; python_version >= '3.1' --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298
$ pipenv run pip install -r requirements_2023.6.26.txt -t test_dir
Collecting pyzip==0.2.0 (from -r requirements_2023.6.26.txt (line 1))
Using cached pyzip-0.2.0-py3-none-any.whl
Installing collected packages: pyzip
Successfully installed pyzip-0.2.0
```
Pip fails to install the package with the `2023.7.1` requirements.txt, thinking there is a hash mismatch even though it displays two identical SHAs:
```
$ pipenv run pip --version
pip 23.1.2
$ cat requirements_2023.7.1.txt
pyzip==0.2.0 --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298; python_version >= '3.1'
$ pipenv run pip install -r requirements_2023.7.1.txt -t test_dir
Collecting pyzip==0.2.0 (from -r requirements_2023.7.1.txt (line 1))
Using cached pyzip-0.2.0-py3-none-any.whl
WARNING: The hashes of the source archive found in cache entry don't match, ignoring cached built wheel and re-downloading source.
Using cached pyzip-0.2.0.tar.gz (6.3 kB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
pyzip==0.2.0 from https://files.pythonhosted.org/packages/40/72/e29470ecfb5f2bc8cdd2a1b8a6aa14af8d44aa08fe5efa407cd991ce2c64/pyzip-0.2.0.tar.gz (from -r requirements_2023.7.1.txt (line 1)):
Expected sha256 c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298;
Got c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298
```
I will raise a PR with a fix for consideration.
</issue>
<code>
[start of pipenv/routines/requirements.py]
1 import re
2 import sys
3
4 from pipenv.utils.dependencies import get_lockfile_section_using_pipfile_category
5 from pipenv.vendor import click
6
7
8 def requirements_from_deps(deps, include_hashes=True, include_markers=True):
9 pip_packages = []
10
11 for package_name, package_info in deps.items():
12 # Handling git repositories
13 if "git" in package_info:
14 git = package_info["git"]
15 ref = package_info.get("ref", "")
16 extras = (
17 "[{}]".format(",".join(package_info.get("extras", [])))
18 if "extras" in package_info
19 else ""
20 )
21 pip_package = f"{package_name}{extras} @ git+{git}@{ref}"
22 else:
23 # Handling packages with hashes and markers
24 version = package_info.get("version", "").replace("==", "")
25 hashes = (
26 " --hash={}".format(" --hash=".join(package_info["hashes"]))
27 if include_hashes and "hashes" in package_info
28 else ""
29 )
30 markers = (
31 "; {}".format(package_info["markers"])
32 if include_markers and "markers" in package_info
33 else ""
34 )
35 pip_package = f"{package_name}=={version}{hashes}{markers}"
36
37 # Append to the list
38 pip_packages.append(pip_package)
39
40 # pip_packages contains the pip-installable lines
41 return pip_packages
42
43
44 def generate_requirements(
45 project,
46 dev=False,
47 dev_only=False,
48 include_hashes=False,
49 include_markers=True,
50 categories="",
51 ):
52 lockfile = project.load_lockfile(expand_env_vars=False)
53
54 for i, package_index in enumerate(lockfile["_meta"]["sources"]):
55 prefix = "-i" if i == 0 else "--extra-index-url"
56 click.echo(" ".join([prefix, package_index["url"]]))
57
58 deps = {}
59 categories_list = re.split(r", *| ", categories) if categories else []
60
61 if categories_list:
62 for category in categories_list:
63 category = get_lockfile_section_using_pipfile_category(category.strip())
64 deps.update(lockfile.get(category, {}))
65 else:
66 if dev or dev_only:
67 deps.update(lockfile["develop"])
68 if not dev_only:
69 deps.update(lockfile["default"])
70
71 pip_installable_lines = requirements_from_deps(
72 deps, include_hashes=include_hashes, include_markers=include_markers
73 )
74
75 for line in pip_installable_lines:
76 click.echo(line)
77
78 sys.exit(0)
79
[end of pipenv/routines/requirements.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pipenv/routines/requirements.py b/pipenv/routines/requirements.py
--- a/pipenv/routines/requirements.py
+++ b/pipenv/routines/requirements.py
@@ -32,7 +32,7 @@
if include_markers and "markers" in package_info
else ""
)
- pip_package = f"{package_name}=={version}{hashes}{markers}"
+ pip_package = f"{package_name}=={version}{markers}{hashes}"
# Append to the list
pip_packages.append(pip_package)
| {"golden_diff": "diff --git a/pipenv/routines/requirements.py b/pipenv/routines/requirements.py\n--- a/pipenv/routines/requirements.py\n+++ b/pipenv/routines/requirements.py\n@@ -32,7 +32,7 @@\n if include_markers and \"markers\" in package_info\n else \"\"\n )\n- pip_package = f\"{package_name}=={version}{hashes}{markers}\"\n+ pip_package = f\"{package_name}=={version}{markers}{hashes}\"\n \n # Append to the list\n pip_packages.append(pip_package)\n", "issue": "Requirements output different since 2023.7.1 causing pip install issues\n### Issue description\r\n\r\nThe output of `pipenv requirements --hash` has changed slightly in `2023.7.1` (#5757) and `pip` appears to be sensitive to it in some scenarios, causing `pip` to be unable to install the package(s) from the generated requirements.txt.\r\n\r\nSnippet of requirements.txt generated with `2023.6.26`\r\n\r\n```\r\npyzip==0.2.0 ; python_version >= '3.1' --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298\r\n```\r\n\r\nSnippet of requirements.txt generated with `2023.7.1` - The hash is now before the marker\r\n\r\n```\r\npyzip==0.2.0 --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298; python_version >= '3.1'\r\n```\r\n\r\n### Expected result\r\n\r\n- `2023.7.1` generates a requirements.txt as per `2023.6.26`\r\n\r\n### Actual result\r\n\r\n- `2023.7.1` generates a slightly different requirements.txt\r\n\r\n### Steps to replicate\r\nPip successfully installs the package with the `2023.6.26` requirements.txt:\r\n\r\n```\r\n$ pipenv run pip --version\r\npip 23.1.2\r\n\r\n$ cat requirements_2023.6.26.txt\r\npyzip==0.2.0 ; python_version >= '3.1' --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298\r\n\r\n$ pipenv run pip install -r requirements_2023.6.26.txt -t test_dir\r\nCollecting pyzip==0.2.0 (from -r requirements_2023.6.26.txt (line 1))\r\n Using cached pyzip-0.2.0-py3-none-any.whl\r\nInstalling collected packages: pyzip\r\nSuccessfully installed pyzip-0.2.0\r\n```\r\n\r\nPip fails to install the package with the `2023.7.3` requirements.txt, thinking there is a hash mismatch even though it displays two identical shas:\r\n\r\n```\r\n$ pipenv run pip --version\r\npip 23.1.2\r\n\r\n$ cat requirements_2023.7.1.txt\r\npyzip==0.2.0 --hash=sha256:c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298; python_version >= '3.1'\r\n\r\n$ pipenv run pip install -r requirements_2023.7.1.txt -t test_dir\r\nCollecting pyzip==0.2.0 (from -r requirements_2023.7.1.txt (line 1))\r\n Using cached pyzip-0.2.0-py3-none-any.whl\r\n WARNING: The hashes of the source archive found in cache entry don't match, ignoring cached built wheel and re-downloading source.\r\n Using cached pyzip-0.2.0.tar.gz (6.3 kB)\r\nERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. 
Otherwise, examine the package contents carefully; someone may have tampered with them.\r\n pyzip==0.2.0 from https://files.pythonhosted.org/packages/40/72/e29470ecfb5f2bc8cdd2a1b8a6aa14af8d44aa08fe5efa407cd991ce2c64/pyzip-0.2.0.tar.gz (from -r requirements_2023.7.1.txt (line 1)):\r\n Expected sha256 c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298;\r\n Got c0b10776d798e4be9d5fe6eec541dd0a9740b6550b12fd4cfa238a085686a298\r\n```\r\n\r\nI will raise a PR with a fix for consideration.\n", "before_files": [{"content": "import re\nimport sys\n\nfrom pipenv.utils.dependencies import get_lockfile_section_using_pipfile_category\nfrom pipenv.vendor import click\n\n\ndef requirements_from_deps(deps, include_hashes=True, include_markers=True):\n pip_packages = []\n\n for package_name, package_info in deps.items():\n # Handling git repositories\n if \"git\" in package_info:\n git = package_info[\"git\"]\n ref = package_info.get(\"ref\", \"\")\n extras = (\n \"[{}]\".format(\",\".join(package_info.get(\"extras\", [])))\n if \"extras\" in package_info\n else \"\"\n )\n pip_package = f\"{package_name}{extras} @ git+{git}@{ref}\"\n else:\n # Handling packages with hashes and markers\n version = package_info.get(\"version\", \"\").replace(\"==\", \"\")\n hashes = (\n \" --hash={}\".format(\" --hash=\".join(package_info[\"hashes\"]))\n if include_hashes and \"hashes\" in package_info\n else \"\"\n )\n markers = (\n \"; {}\".format(package_info[\"markers\"])\n if include_markers and \"markers\" in package_info\n else \"\"\n )\n pip_package = f\"{package_name}=={version}{hashes}{markers}\"\n\n # Append to the list\n pip_packages.append(pip_package)\n\n # pip_packages contains the pip-installable lines\n return pip_packages\n\n\ndef generate_requirements(\n project,\n dev=False,\n dev_only=False,\n include_hashes=False,\n include_markers=True,\n categories=\"\",\n):\n lockfile = project.load_lockfile(expand_env_vars=False)\n\n for i, package_index in enumerate(lockfile[\"_meta\"][\"sources\"]):\n prefix = \"-i\" if i == 0 else \"--extra-index-url\"\n click.echo(\" \".join([prefix, package_index[\"url\"]]))\n\n deps = {}\n categories_list = re.split(r\", *| \", categories) if categories else []\n\n if categories_list:\n for category in categories_list:\n category = get_lockfile_section_using_pipfile_category(category.strip())\n deps.update(lockfile.get(category, {}))\n else:\n if dev or dev_only:\n deps.update(lockfile[\"develop\"])\n if not dev_only:\n deps.update(lockfile[\"default\"])\n\n pip_installable_lines = requirements_from_deps(\n deps, include_hashes=include_hashes, include_markers=include_markers\n )\n\n for line in pip_installable_lines:\n click.echo(line)\n\n sys.exit(0)\n", "path": "pipenv/routines/requirements.py"}]} | 2,373 | 127 |
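The fix in this row is purely an ordering change: the environment marker must precede the `--hash` options, otherwise pip treats the marker as part of the hash value. A small illustrative helper showing the intended line format (a sketch, not pipenv's actual code; the hash value is elided):

```
def format_requirement(name, version, markers=None, hashes=None):
    # Marker first, hashes last, matching the pre-2023.7.1 output that pip accepts.
    marker_part = "; {}".format(markers) if markers else ""
    hash_part = "".join(" --hash={}".format(h) for h in (hashes or []))
    return "{}=={}{}{}".format(name, version, marker_part, hash_part)

# format_requirement("pyzip", "0.2.0", "python_version >= '3.1'", ["sha256:c0b1..."])
# -> "pyzip==0.2.0; python_version >= '3.1' --hash=sha256:c0b1..."
```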
gh_patches_debug_29715 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-4090 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Detecting and alerting of duplication keys/components/entries in YAML file
### Is your feature request related to a problem? Please describe
During the 1.3.11 release it was found that a PR updating the [manifest](https://github.com/opensearch-project/opensearch-build/blob/main/manifests/1.3.11/opensearch-1.3.11.yml) contained duplicated component names.
This wastes CI resources by rebuilding the duplicated components.
### Describe the solution you'd like
We want a check that detects duplicate entries based on keys/components/names and, ideally, fails the GitHub check.
### Describe alternatives you've considered
Manually check for duplicate values
### Acceptance Criteria
* The manifest check should fail at the CI level for components with duplicate components.name values in opensearch and opensearch-dashboards manifests as well as test manifests. See [what manifests are](https://github.com/opensearch-project/opensearch-build/wiki/Building-an-OpenSearch-and-OpenSearch-Dashboards-Distribution#what-are-manifests).
</issue>
<code>
[start of src/ci_workflow/ci_manifests.py]
1 # Copyright OpenSearch Contributors
2 # SPDX-License-Identifier: Apache-2.0
3 #
4 # The OpenSearch Contributors require contributions made to
5 # this file be licensed under the Apache-2.0 license or a
6 # compatible open source license.
7
8
9 import re
10 from io import TextIOWrapper
11 from typing import Type, Union
12
13 from ci_workflow.ci_args import CiArgs
14 from ci_workflow.ci_input_manifest import CiInputManifest
15 from ci_workflow.ci_test_manifest import CiTestManifest
16
17
18 class CiManifests:
19 @staticmethod
20 def __klass(filename: str) -> Union[Type[CiTestManifest], Type[CiInputManifest]]:
21 if re.search("-test.yml$", filename):
22 return CiTestManifest
23 else:
24 return CiInputManifest
25
26 @classmethod
27 def from_file(cls, file: TextIOWrapper, args: CiArgs) -> Union[CiTestManifest, CiInputManifest]:
28 return cls.__klass(file.name)(file, args)
29
[end of src/ci_workflow/ci_manifests.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/ci_workflow/ci_manifests.py b/src/ci_workflow/ci_manifests.py
--- a/src/ci_workflow/ci_manifests.py
+++ b/src/ci_workflow/ci_manifests.py
@@ -7,9 +7,12 @@
import re
+from collections import Counter
from io import TextIOWrapper
from typing import Type, Union
+import yaml
+
from ci_workflow.ci_args import CiArgs
from ci_workflow.ci_input_manifest import CiInputManifest
from ci_workflow.ci_test_manifest import CiTestManifest
@@ -23,6 +26,29 @@
else:
return CiInputManifest
+ @staticmethod
+ def __get_duplicate_component_names(count_component_names: Counter) -> list:
+ duplicate_component_names = []
+ for component_name, count in count_component_names.items():
+ if count > 1:
+ duplicate_component_names.append(component_name)
+ return duplicate_component_names
+
+ @staticmethod
+ def __check_duplicate_component_names(file: TextIOWrapper) -> None:
+ yaml_dict = yaml.safe_load(file)
+ component_names = []
+ for component in yaml_dict['components']:
+ component_names.append(component['name'])
+ count_component_names = Counter(component_names)
+
+ if set(count_component_names.values()) != set([1]):
+ duplicate_component_names = CiManifests.__get_duplicate_component_names(count_component_names)
+ duplicate_component_names_string = ', '.join(duplicate_component_names)
+ raise ValueError(f"Found {duplicate_component_names_string} as a duplicate component(s) in manifest {file.name}. ")
+ file.seek(0)
+
@classmethod
def from_file(cls, file: TextIOWrapper, args: CiArgs) -> Union[CiTestManifest, CiInputManifest]:
+ cls.__check_duplicate_component_names(file)
return cls.__klass(file.name)(file, args)
| {"golden_diff": "diff --git a/src/ci_workflow/ci_manifests.py b/src/ci_workflow/ci_manifests.py\n--- a/src/ci_workflow/ci_manifests.py\n+++ b/src/ci_workflow/ci_manifests.py\n@@ -7,9 +7,12 @@\n \n \n import re\n+from collections import Counter\n from io import TextIOWrapper\n from typing import Type, Union\n \n+import yaml\n+\n from ci_workflow.ci_args import CiArgs\n from ci_workflow.ci_input_manifest import CiInputManifest\n from ci_workflow.ci_test_manifest import CiTestManifest\n@@ -23,6 +26,29 @@\n else:\n return CiInputManifest\n \n+ @staticmethod\n+ def __get_duplicate_component_names(count_component_names: Counter) -> list:\n+ duplicate_component_names = []\n+ for component_name, count in count_component_names.items():\n+ if count > 1:\n+ duplicate_component_names.append(component_name)\n+ return duplicate_component_names\n+\n+ @staticmethod\n+ def __check_duplicate_component_names(file: TextIOWrapper) -> None:\n+ yaml_dict = yaml.safe_load(file)\n+ component_names = []\n+ for component in yaml_dict['components']:\n+ component_names.append(component['name'])\n+ count_component_names = Counter(component_names)\n+\n+ if set(count_component_names.values()) != set([1]):\n+ duplicate_component_names = CiManifests.__get_duplicate_component_names(count_component_names)\n+ duplicate_component_names_string = ', '.join(duplicate_component_names)\n+ raise ValueError(f\"Found {duplicate_component_names_string} as a duplicate component(s) in manifest {file.name}. \")\n+ file.seek(0)\n+\n @classmethod\n def from_file(cls, file: TextIOWrapper, args: CiArgs) -> Union[CiTestManifest, CiInputManifest]:\n+ cls.__check_duplicate_component_names(file)\n return cls.__klass(file.name)(file, args)\n", "issue": "Detecting and alerting of duplication keys/components/entries in YAML file\n### Is your feature request related to a problem? Please describe\r\n\r\nit was found in release 1.3.11 , a PR to update [manifest](https://github.com/opensearch-project/opensearch-build/blob/main/manifests/1.3.11/opensearch-1.3.11.yml) has duplicated components name.\r\nIt would cause the resource wasted on CI to rebuild the duplicated components \r\n\r\n### Describe the solution you'd like\r\n\r\nWe want to have a check to detect if there is any duplication entries based on keys/components/names and probably fail the GitHub check\r\n\r\n### Describe alternatives you've considered\r\n\r\nManually check for duplicate values\r\n\r\n### Acceptance Criteria\r\n* The manifest check should fail at CI level for components with duplicate components.name values in opensearch and opensearch-dashboard as well as test manifests. 
See what are [manifests](https://github.com/opensearch-project/opensearch-build/wiki/Building-an-OpenSearch-and-OpenSearch-Dashboards-Distribution#what-are-manifests)\n", "before_files": [{"content": "# Copyright OpenSearch Contributors\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\n\nimport re\nfrom io import TextIOWrapper\nfrom typing import Type, Union\n\nfrom ci_workflow.ci_args import CiArgs\nfrom ci_workflow.ci_input_manifest import CiInputManifest\nfrom ci_workflow.ci_test_manifest import CiTestManifest\n\n\nclass CiManifests:\n @staticmethod\n def __klass(filename: str) -> Union[Type[CiTestManifest], Type[CiInputManifest]]:\n if re.search(\"-test.yml$\", filename):\n return CiTestManifest\n else:\n return CiInputManifest\n\n @classmethod\n def from_file(cls, file: TextIOWrapper, args: CiArgs) -> Union[CiTestManifest, CiInputManifest]:\n return cls.__klass(file.name)(file, args)\n", "path": "src/ci_workflow/ci_manifests.py"}]} | 1,030 | 414 |
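The golden diff adds a pre-parse duplicate check over the manifest's component names. A compact standalone sketch of the same idea (assumes PyYAML is available; not the exact CI workflow code):

```
from collections import Counter

import yaml


def check_duplicate_component_names(manifest_text):
    # Count every components[].name and fail if any name appears more than once.
    manifest = yaml.safe_load(manifest_text)
    counts = Counter(component["name"] for component in manifest["components"])
    duplicates = [name for name, count in counts.items() if count > 1]
    if duplicates:
        raise ValueError("Found duplicate component(s): {}".format(", ".join(duplicates)))
```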
gh_patches_debug_21251 | rasdani/github-patches | git_diff | dask__dask-1576 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nrows= in dd.read_csv should fail loudly
We don't support this keyword argument. We should fail loudly if someone tries to use it.
</issue>
<code>
[start of dask/dataframe/csv.py]
1 from __future__ import print_function, division, absolute_import
2
3 from io import BytesIO
4 from warnings import warn
5
6 try:
7 import psutil
8 except ImportError:
9 psutil = None
10
11 import numpy as np
12 import pandas as pd
13
14 from ..delayed import delayed
15 from .io import from_delayed
16
17 from ..bytes import read_bytes
18 from ..bytes.compression import seekable_files, files as cfiles
19
20
21 delayed = delayed(pure=True)
22
23
24 def bytes_read_csv(b, header, kwargs, dtypes=None, columns=None,
25 write_header=True, enforce=False):
26 """ Convert a block of bytes to a Pandas DataFrame
27
28 Parameters
29 ----------
30 b: bytestring
31 The content to be parsed with pandas.read_csv
32 header: bytestring
33 An optional header to prepend to b
34 kwargs: dict
35 A dictionary of keyword arguments to be passed to pandas.read_csv
36 dtypes: dict
37 DTypes to assign to columns
38
39 See Also:
40 dask.dataframe.csv.read_csv_from_bytes
41 """
42 bio = BytesIO()
43 if write_header and not b.startswith(header.rstrip()):
44 bio.write(header)
45 bio.write(b)
46 bio.seek(0)
47 df = pd.read_csv(bio, **kwargs)
48 if dtypes:
49 coerce_dtypes(df, dtypes)
50
51 if enforce and columns and (list(df.columns) != list(columns)):
52 raise ValueError("Columns do not match", df.columns, columns)
53 return df
54
55
56 def coerce_dtypes(df, dtypes):
57 """ Coerce dataframe to dtypes safely
58
59 Operates in place
60
61 Parameters
62 ----------
63 df: Pandas DataFrame
64 dtypes: dict like {'x': float}
65 """
66 for c in df.columns:
67 if c in dtypes and df.dtypes[c] != dtypes[c]:
68 if (np.issubdtype(df.dtypes[c], np.floating) and
69 np.issubdtype(dtypes[c], np.integer)):
70 if (df[c] % 1).any():
71 raise TypeError("Runtime type mismatch. "
72 "Add {'%s': float} to dtype= keyword in read_csv" % c)
73 df[c] = df[c].astype(dtypes[c])
74
75
76 def read_csv_from_bytes(block_lists, header, head, kwargs, collection=True,
77 enforce=False):
78 """ Convert blocks of bytes to a dask.dataframe or other high-level object
79
80 This accepts a list of lists of values of bytes where each list corresponds
81 to one file, and the value of bytes concatenate to comprise the entire
82 file, in order.
83
84 Parameters
85 ----------
86 block_lists: list of lists of delayed values of bytes
87 The lists of bytestrings where each list corresponds to one logical file
88 header: bytestring
89 The header, found at the front of the first file, to be prepended to
90 all blocks
91 head: pd.DataFrame
92 An example Pandas DataFrame to be used for metadata.
93 Can be ``None`` if ``collection==False``
94 kwargs: dict
95 Keyword arguments to pass down to ``pd.read_csv``
96 collection: boolean, optional (defaults to True)
97
98 Returns
99 -------
100 A dask.dataframe or list of delayed values
101 """
102 dtypes = head.dtypes.to_dict()
103 columns = list(head.columns)
104 delayed_bytes_read_csv = delayed(bytes_read_csv)
105 dfs = []
106 for blocks in block_lists:
107 if not blocks:
108 continue
109 df = delayed_bytes_read_csv(blocks[0], header, kwargs, dtypes,
110 columns, write_header=False,
111 enforce=enforce)
112 dfs.append(df)
113 for b in blocks[1:]:
114 dfs.append(delayed_bytes_read_csv(b, header, kwargs, dtypes,
115 columns, enforce=enforce))
116
117 if collection:
118 return from_delayed(dfs, head)
119 else:
120 return dfs
121
122
123 def auto_blocksize(total_memory, cpu_count):
124 memory_factor = 10
125 blocksize = int(total_memory // cpu_count / memory_factor)
126 return min(blocksize, int(64e6))
127
128
129 # guess blocksize if psutil is installed or use acceptable default one if not
130 if psutil is not None:
131 TOTAL_MEM = psutil.virtual_memory().total
132 CPU_COUNT = psutil.cpu_count()
133 AUTO_BLOCKSIZE = auto_blocksize(TOTAL_MEM, CPU_COUNT)
134 else:
135 AUTO_BLOCKSIZE = 2**25
136
137
138 def read_csv(urlpath, blocksize=AUTO_BLOCKSIZE, chunkbytes=None,
139 collection=True, lineterminator=None, compression=None,
140 sample=256000, enforce=False, storage_options=None, **kwargs):
141 """ Read CSV files into a Dask.DataFrame
142
143 This parallelizes the ``pandas.read_csv`` file in the following ways:
144
145 1. It supports loading many files at once using globstrings as follows:
146
147 >>> df = dd.read_csv('myfiles.*.csv') # doctest: +SKIP
148
149 2. In some cases it can break up large files as follows:
150
151 >>> df = dd.read_csv('largefile.csv', blocksize=25e6) # 25MB chunks # doctest: +SKIP
152
153 3. You can read CSV files from external resources (e.g. S3, HDFS)
154 providing a URL:
155
156 >>> df = dd.read_csv('s3://bucket/myfiles.*.csv') # doctest: +SKIP
157 >>> df = dd.read_csv('hdfs:///myfiles.*.csv') # doctest: +SKIP
158 >>> df = dd.read_csv('hdfs://namenode.example.com/myfiles.*.csv') # doctest: +SKIP
159
160 Internally dd.read_csv uses pandas.read_csv and so supports many of the
161 same keyword arguments with the same performance guarantees.
162
163 See the docstring for ``pandas.read_csv`` for more information on available
164 keyword arguments.
165
166 Note that this function may fail if a CSV file includes quoted strings that
167 contain the line terminator.
168
169 Parameters
170 ----------
171
172 urlpath: string
173 Absolute or relative filepath, URL (may include protocols like
174 ``s3://``), or globstring for CSV files.
175 blocksize: int or None
176 Number of bytes by which to cut up larger files. Default value is
177 computed based on available physical memory and the number of cores.
178 If ``None``, use a single block for each file.
179 collection: boolean
180 Return a dask.dataframe if True or list of dask.delayed objects if False
181 sample: int
182 Number of bytes to use when determining dtypes
183 storage_options: dict
184 Extra options that make sense to a particular storage connection, e.g.
185 host, port, username, password, etc.
186 **kwargs: dict
187 Options to pass down to ``pandas.read_csv``
188 """
189 if lineterminator is not None and len(lineterminator) == 1:
190 kwargs['lineterminator'] = lineterminator
191 else:
192 lineterminator = '\n'
193 if chunkbytes is not None:
194 warn("Deprecation warning: chunksize csv keyword renamed to blocksize")
195 blocksize = chunkbytes
196 if 'index' in kwargs or 'index_col' in kwargs:
197 raise ValueError("Keyword 'index' not supported "
198 "dd.read_csv(...).set_index('my-index') instead")
199 for kw in ['iterator', 'chunksize']:
200 if kw in kwargs:
201 raise ValueError("%s not supported for dd.read_csv" % kw)
202 if isinstance(kwargs.get('skiprows'), list):
203 raise TypeError("List of skiprows not supported for dd.read_csv")
204 if isinstance(kwargs.get('header'), list):
205 raise TypeError("List of header rows not supported for dd.read_csv")
206
207 if blocksize and compression not in seekable_files:
208 warn("Warning %s compression does not support breaking apart files\n"
209 "Please ensure that each individual file can fit in memory and\n"
210 "use the keyword ``blocksize=None to remove this message``\n"
211 "Setting ``blocksize=None``" % compression)
212 blocksize = None
213 if compression not in seekable_files and compression not in cfiles:
214 raise NotImplementedError("Compression format %s not installed" %
215 compression)
216
217 b_lineterminator = lineterminator.encode()
218 sample, values = read_bytes(urlpath, delimiter=b_lineterminator,
219 blocksize=blocksize,
220 sample=sample,
221 compression=compression,
222 **(storage_options or {}))
223
224 if not isinstance(values[0], (tuple, list)):
225 values = [values]
226
227 if 'nrows' in kwargs:
228 values = [[values[0][0]]]
229
230 if kwargs.get('header', 'infer') is None:
231 header = b''
232 else:
233 header = sample.split(b_lineterminator)[0] + b_lineterminator
234
235 head = pd.read_csv(BytesIO(sample), **kwargs)
236
237 df = read_csv_from_bytes(values, header, head, kwargs,
238 collection=collection, enforce=enforce)
239
240 return df
241
[end of dask/dataframe/csv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dask/dataframe/csv.py b/dask/dataframe/csv.py
--- a/dask/dataframe/csv.py
+++ b/dask/dataframe/csv.py
@@ -199,6 +199,10 @@
for kw in ['iterator', 'chunksize']:
if kw in kwargs:
raise ValueError("%s not supported for dd.read_csv" % kw)
+ if kwargs.get('nrows', None):
+ raise ValueError("The 'nrows' keyword is not supported by "
+ "`dd.read_csv`. To achieve the same behavior, it's "
+ "recommended to use `dd.read_csv(...).head(n=nrows)`")
if isinstance(kwargs.get('skiprows'), list):
raise TypeError("List of skiprows not supported for dd.read_csv")
if isinstance(kwargs.get('header'), list):
@@ -224,9 +228,6 @@
if not isinstance(values[0], (tuple, list)):
values = [values]
- if 'nrows' in kwargs:
- values = [[values[0][0]]]
-
if kwargs.get('header', 'infer') is None:
header = b''
else:
| {"golden_diff": "diff --git a/dask/dataframe/csv.py b/dask/dataframe/csv.py\n--- a/dask/dataframe/csv.py\n+++ b/dask/dataframe/csv.py\n@@ -199,6 +199,10 @@\n for kw in ['iterator', 'chunksize']:\n if kw in kwargs:\n raise ValueError(\"%s not supported for dd.read_csv\" % kw)\n+ if kwargs.get('nrows', None):\n+ raise ValueError(\"The 'nrows' keyword is not supported by \"\n+ \"`dd.read_csv`. To achieve the same behavior, it's \"\n+ \"recommended to use `dd.read_csv(...).head(n=nrows)`\")\n if isinstance(kwargs.get('skiprows'), list):\n raise TypeError(\"List of skiprows not supported for dd.read_csv\")\n if isinstance(kwargs.get('header'), list):\n@@ -224,9 +228,6 @@\n if not isinstance(values[0], (tuple, list)):\n values = [values]\n \n- if 'nrows' in kwargs:\n- values = [[values[0][0]]]\n-\n if kwargs.get('header', 'infer') is None:\n header = b''\n else:\n", "issue": "nrows= in dd.read_csv should fail loudly\nWe don't support this keyword argument. We should fail loudly if someone tries to use it.\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nfrom io import BytesIO\nfrom warnings import warn\n\ntry:\n import psutil\nexcept ImportError:\n psutil = None\n\nimport numpy as np\nimport pandas as pd\n\nfrom ..delayed import delayed\nfrom .io import from_delayed\n\nfrom ..bytes import read_bytes\nfrom ..bytes.compression import seekable_files, files as cfiles\n\n\ndelayed = delayed(pure=True)\n\n\ndef bytes_read_csv(b, header, kwargs, dtypes=None, columns=None,\n write_header=True, enforce=False):\n \"\"\" Convert a block of bytes to a Pandas DataFrame\n\n Parameters\n ----------\n b: bytestring\n The content to be parsed with pandas.read_csv\n header: bytestring\n An optional header to prepend to b\n kwargs: dict\n A dictionary of keyword arguments to be passed to pandas.read_csv\n dtypes: dict\n DTypes to assign to columns\n\n See Also:\n dask.dataframe.csv.read_csv_from_bytes\n \"\"\"\n bio = BytesIO()\n if write_header and not b.startswith(header.rstrip()):\n bio.write(header)\n bio.write(b)\n bio.seek(0)\n df = pd.read_csv(bio, **kwargs)\n if dtypes:\n coerce_dtypes(df, dtypes)\n\n if enforce and columns and (list(df.columns) != list(columns)):\n raise ValueError(\"Columns do not match\", df.columns, columns)\n return df\n\n\ndef coerce_dtypes(df, dtypes):\n \"\"\" Coerce dataframe to dtypes safely\n\n Operates in place\n\n Parameters\n ----------\n df: Pandas DataFrame\n dtypes: dict like {'x': float}\n \"\"\"\n for c in df.columns:\n if c in dtypes and df.dtypes[c] != dtypes[c]:\n if (np.issubdtype(df.dtypes[c], np.floating) and\n np.issubdtype(dtypes[c], np.integer)):\n if (df[c] % 1).any():\n raise TypeError(\"Runtime type mismatch. 
\"\n \"Add {'%s': float} to dtype= keyword in read_csv\" % c)\n df[c] = df[c].astype(dtypes[c])\n\n\ndef read_csv_from_bytes(block_lists, header, head, kwargs, collection=True,\n enforce=False):\n \"\"\" Convert blocks of bytes to a dask.dataframe or other high-level object\n\n This accepts a list of lists of values of bytes where each list corresponds\n to one file, and the value of bytes concatenate to comprise the entire\n file, in order.\n\n Parameters\n ----------\n block_lists: list of lists of delayed values of bytes\n The lists of bytestrings where each list corresponds to one logical file\n header: bytestring\n The header, found at the front of the first file, to be prepended to\n all blocks\n head: pd.DataFrame\n An example Pandas DataFrame to be used for metadata.\n Can be ``None`` if ``collection==False``\n kwargs: dict\n Keyword arguments to pass down to ``pd.read_csv``\n collection: boolean, optional (defaults to True)\n\n Returns\n -------\n A dask.dataframe or list of delayed values\n \"\"\"\n dtypes = head.dtypes.to_dict()\n columns = list(head.columns)\n delayed_bytes_read_csv = delayed(bytes_read_csv)\n dfs = []\n for blocks in block_lists:\n if not blocks:\n continue\n df = delayed_bytes_read_csv(blocks[0], header, kwargs, dtypes,\n columns, write_header=False,\n enforce=enforce)\n dfs.append(df)\n for b in blocks[1:]:\n dfs.append(delayed_bytes_read_csv(b, header, kwargs, dtypes,\n columns, enforce=enforce))\n\n if collection:\n return from_delayed(dfs, head)\n else:\n return dfs\n\n\ndef auto_blocksize(total_memory, cpu_count):\n memory_factor = 10\n blocksize = int(total_memory // cpu_count / memory_factor)\n return min(blocksize, int(64e6))\n\n\n# guess blocksize if psutil is installed or use acceptable default one if not\nif psutil is not None:\n TOTAL_MEM = psutil.virtual_memory().total\n CPU_COUNT = psutil.cpu_count()\n AUTO_BLOCKSIZE = auto_blocksize(TOTAL_MEM, CPU_COUNT)\nelse:\n AUTO_BLOCKSIZE = 2**25\n\n\ndef read_csv(urlpath, blocksize=AUTO_BLOCKSIZE, chunkbytes=None,\n collection=True, lineterminator=None, compression=None,\n sample=256000, enforce=False, storage_options=None, **kwargs):\n \"\"\" Read CSV files into a Dask.DataFrame\n\n This parallelizes the ``pandas.read_csv`` file in the following ways:\n\n 1. It supports loading many files at once using globstrings as follows:\n\n >>> df = dd.read_csv('myfiles.*.csv') # doctest: +SKIP\n\n 2. In some cases it can break up large files as follows:\n\n >>> df = dd.read_csv('largefile.csv', blocksize=25e6) # 25MB chunks # doctest: +SKIP\n\n 3. You can read CSV files from external resources (e.g. S3, HDFS)\n providing a URL:\n\n >>> df = dd.read_csv('s3://bucket/myfiles.*.csv') # doctest: +SKIP\n >>> df = dd.read_csv('hdfs:///myfiles.*.csv') # doctest: +SKIP\n >>> df = dd.read_csv('hdfs://namenode.example.com/myfiles.*.csv') # doctest: +SKIP\n\n Internally dd.read_csv uses pandas.read_csv and so supports many of the\n same keyword arguments with the same performance guarantees.\n\n See the docstring for ``pandas.read_csv`` for more information on available\n keyword arguments.\n\n Note that this function may fail if a CSV file includes quoted strings that\n contain the line terminator.\n\n Parameters\n ----------\n\n urlpath: string\n Absolute or relative filepath, URL (may include protocols like\n ``s3://``), or globstring for CSV files.\n blocksize: int or None\n Number of bytes by which to cut up larger files. 
Default value is\n computed based on available physical memory and the number of cores.\n If ``None``, use a single block for each file.\n collection: boolean\n Return a dask.dataframe if True or list of dask.delayed objects if False\n sample: int\n Number of bytes to use when determining dtypes\n storage_options: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n **kwargs: dict\n Options to pass down to ``pandas.read_csv``\n \"\"\"\n if lineterminator is not None and len(lineterminator) == 1:\n kwargs['lineterminator'] = lineterminator\n else:\n lineterminator = '\\n'\n if chunkbytes is not None:\n warn(\"Deprecation warning: chunksize csv keyword renamed to blocksize\")\n blocksize = chunkbytes\n if 'index' in kwargs or 'index_col' in kwargs:\n raise ValueError(\"Keyword 'index' not supported \"\n \"dd.read_csv(...).set_index('my-index') instead\")\n for kw in ['iterator', 'chunksize']:\n if kw in kwargs:\n raise ValueError(\"%s not supported for dd.read_csv\" % kw)\n if isinstance(kwargs.get('skiprows'), list):\n raise TypeError(\"List of skiprows not supported for dd.read_csv\")\n if isinstance(kwargs.get('header'), list):\n raise TypeError(\"List of header rows not supported for dd.read_csv\")\n\n if blocksize and compression not in seekable_files:\n warn(\"Warning %s compression does not support breaking apart files\\n\"\n \"Please ensure that each individual file can fit in memory and\\n\"\n \"use the keyword ``blocksize=None to remove this message``\\n\"\n \"Setting ``blocksize=None``\" % compression)\n blocksize = None\n if compression not in seekable_files and compression not in cfiles:\n raise NotImplementedError(\"Compression format %s not installed\" %\n compression)\n\n b_lineterminator = lineterminator.encode()\n sample, values = read_bytes(urlpath, delimiter=b_lineterminator,\n blocksize=blocksize,\n sample=sample,\n compression=compression,\n **(storage_options or {}))\n\n if not isinstance(values[0], (tuple, list)):\n values = [values]\n\n if 'nrows' in kwargs:\n values = [[values[0][0]]]\n\n if kwargs.get('header', 'infer') is None:\n header = b''\n else:\n header = sample.split(b_lineterminator)[0] + b_lineterminator\n\n head = pd.read_csv(BytesIO(sample), **kwargs)\n\n df = read_csv_from_bytes(values, header, head, kwargs,\n collection=collection, enforce=enforce)\n\n return df\n", "path": "dask/dataframe/csv.py"}]} | 3,181 | 258 |
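The dask record above boils down to rejecting an unsupported keyword up front rather than silently mishandling it. A minimal standalone sketch of that guard pattern, with an invented function name rather than dask's actual reader:

```
def read_table(path, **kwargs):
    # Reject keywords we cannot honor up front, pointing at the
    # supported alternative, instead of silently ignoring them.
    if kwargs.get("nrows") is not None:
        raise ValueError(
            "The 'nrows' keyword is not supported here; "
            "use read_table(...).head(n=nrows) instead."
        )
    ...
```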
gh_patches_debug_25642 | rasdani/github-patches | git_diff | pypa__setuptools-1890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
package metadata is no longer picklable
Since switching to OrderedDict, package metadata can no longer be pickled.
This can cause problems if you try to use setuptools with multiprocessing.
```
Traceback (most recent call last):
File "/Users/dan/Documents/colcon_ws/src/colcon-python-setup-py/colcon_python_setup_py/package_identification/python_setup_py.py", line 257, in get_setup_information
'stop_after': 'config'
File "/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 261, in apply
return self.apply_async(func, args, kwds).get()
File "/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 657, in get
raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '<distutils.dist.DistributionMetadata object at 0x10c1fcd50>'. Reason: 'PicklingError("Can't pickle <class 'setuptools._vendor.ordered_set.OrderedSet'>: it's not the same object as setuptools._vendor.ordered_set.OrderedSet")'
```
_Originally posted by @rotu in https://github.com/pypa/setuptools/pull/1690#issuecomment-545992670_
</issue>
<code>
[start of setuptools/extern/__init__.py]
1 import sys
2
3
4 class VendorImporter:
5 """
6 A PEP 302 meta path importer for finding optionally-vendored
7 or otherwise naturally-installed packages from root_name.
8 """
9
10 def __init__(self, root_name, vendored_names=(), vendor_pkg=None):
11 self.root_name = root_name
12 self.vendored_names = set(vendored_names)
13 self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')
14
15 @property
16 def search_path(self):
17 """
18 Search first the vendor package then as a natural package.
19 """
20 yield self.vendor_pkg + '.'
21 yield ''
22
23 def find_module(self, fullname, path=None):
24 """
25 Return self when fullname starts with root_name and the
26 target module is one vendored through this importer.
27 """
28 root, base, target = fullname.partition(self.root_name + '.')
29 if root:
30 return
31 if not any(map(target.startswith, self.vendored_names)):
32 return
33 return self
34
35 def load_module(self, fullname):
36 """
37 Iterate over the search path to locate and load fullname.
38 """
39 root, base, target = fullname.partition(self.root_name + '.')
40 for prefix in self.search_path:
41 try:
42 extant = prefix + target
43 __import__(extant)
44 mod = sys.modules[extant]
45 sys.modules[fullname] = mod
46 # mysterious hack:
47 # Remove the reference to the extant package/module
48 # on later Python versions to cause relative imports
49 # in the vendor package to resolve the same modules
50 # as those going through this importer.
51 if sys.version_info >= (3, ):
52 del sys.modules[extant]
53 return mod
54 except ImportError:
55 pass
56 else:
57 raise ImportError(
58 "The '{target}' package is required; "
59 "normally this is bundled with this package so if you get "
60 "this warning, consult the packager of your "
61 "distribution.".format(**locals())
62 )
63
64 def install(self):
65 """
66 Install this importer into sys.meta_path if not already present.
67 """
68 if self not in sys.meta_path:
69 sys.meta_path.append(self)
70
71
72 names = 'six', 'packaging', 'pyparsing', 'ordered_set',
73 VendorImporter(__name__, names, 'setuptools._vendor').install()
74
[end of setuptools/extern/__init__.py]
[start of pkg_resources/extern/__init__.py]
1 import sys
2
3
4 class VendorImporter:
5 """
6 A PEP 302 meta path importer for finding optionally-vendored
7 or otherwise naturally-installed packages from root_name.
8 """
9
10 def __init__(self, root_name, vendored_names=(), vendor_pkg=None):
11 self.root_name = root_name
12 self.vendored_names = set(vendored_names)
13 self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')
14
15 @property
16 def search_path(self):
17 """
18 Search first the vendor package then as a natural package.
19 """
20 yield self.vendor_pkg + '.'
21 yield ''
22
23 def find_module(self, fullname, path=None):
24 """
25 Return self when fullname starts with root_name and the
26 target module is one vendored through this importer.
27 """
28 root, base, target = fullname.partition(self.root_name + '.')
29 if root:
30 return
31 if not any(map(target.startswith, self.vendored_names)):
32 return
33 return self
34
35 def load_module(self, fullname):
36 """
37 Iterate over the search path to locate and load fullname.
38 """
39 root, base, target = fullname.partition(self.root_name + '.')
40 for prefix in self.search_path:
41 try:
42 extant = prefix + target
43 __import__(extant)
44 mod = sys.modules[extant]
45 sys.modules[fullname] = mod
46 # mysterious hack:
47 # Remove the reference to the extant package/module
48 # on later Python versions to cause relative imports
49 # in the vendor package to resolve the same modules
50 # as those going through this importer.
51 if prefix and sys.version_info > (3, 3):
52 del sys.modules[extant]
53 return mod
54 except ImportError:
55 pass
56 else:
57 raise ImportError(
58 "The '{target}' package is required; "
59 "normally this is bundled with this package so if you get "
60 "this warning, consult the packager of your "
61 "distribution.".format(**locals())
62 )
63
64 def install(self):
65 """
66 Install this importer into sys.meta_path if not already present.
67 """
68 if self not in sys.meta_path:
69 sys.meta_path.append(self)
70
71
72 names = 'packaging', 'pyparsing', 'six', 'appdirs'
73 VendorImporter(__name__, names).install()
74
[end of pkg_resources/extern/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pkg_resources/extern/__init__.py b/pkg_resources/extern/__init__.py
--- a/pkg_resources/extern/__init__.py
+++ b/pkg_resources/extern/__init__.py
@@ -43,13 +43,6 @@
__import__(extant)
mod = sys.modules[extant]
sys.modules[fullname] = mod
- # mysterious hack:
- # Remove the reference to the extant package/module
- # on later Python versions to cause relative imports
- # in the vendor package to resolve the same modules
- # as those going through this importer.
- if prefix and sys.version_info > (3, 3):
- del sys.modules[extant]
return mod
except ImportError:
pass
diff --git a/setuptools/extern/__init__.py b/setuptools/extern/__init__.py
--- a/setuptools/extern/__init__.py
+++ b/setuptools/extern/__init__.py
@@ -43,13 +43,6 @@
__import__(extant)
mod = sys.modules[extant]
sys.modules[fullname] = mod
- # mysterious hack:
- # Remove the reference to the extant package/module
- # on later Python versions to cause relative imports
- # in the vendor package to resolve the same modules
- # as those going through this importer.
- if sys.version_info >= (3, ):
- del sys.modules[extant]
return mod
except ImportError:
pass
| {"golden_diff": "diff --git a/pkg_resources/extern/__init__.py b/pkg_resources/extern/__init__.py\n--- a/pkg_resources/extern/__init__.py\n+++ b/pkg_resources/extern/__init__.py\n@@ -43,13 +43,6 @@\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n- # mysterious hack:\n- # Remove the reference to the extant package/module\n- # on later Python versions to cause relative imports\n- # in the vendor package to resolve the same modules\n- # as those going through this importer.\n- if prefix and sys.version_info > (3, 3):\n- del sys.modules[extant]\n return mod\n except ImportError:\n pass\ndiff --git a/setuptools/extern/__init__.py b/setuptools/extern/__init__.py\n--- a/setuptools/extern/__init__.py\n+++ b/setuptools/extern/__init__.py\n@@ -43,13 +43,6 @@\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n- # mysterious hack:\n- # Remove the reference to the extant package/module\n- # on later Python versions to cause relative imports\n- # in the vendor package to resolve the same modules\n- # as those going through this importer.\n- if sys.version_info >= (3, ):\n- del sys.modules[extant]\n return mod\n except ImportError:\n pass\n", "issue": "package metadata is no longer picklable\nSince switching to OrderedDict, package metadata can no longer be pickled.\r\n\r\nThis can cause problems if you try to use setuptools with multiprocessing.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/dan/Documents/colcon_ws/src/colcon-python-setup-py/colcon_python_setup_py/package_identification/python_setup_py.py\", line 257, in get_setup_information\r\n 'stop_after': 'config'\r\n File \"/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py\", line 261, in apply\r\n return self.apply_async(func, args, kwds).get()\r\n File \"/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py\", line 657, in get\r\n raise self._value\r\nmultiprocessing.pool.MaybeEncodingError: Error sending result: '<distutils.dist.DistributionMetadata object at 0x10c1fcd50>'. Reason: 'PicklingError(\"Can't pickle <class 'setuptools._vendor.ordered_set.OrderedSet'>: it's not the same object as setuptools._vendor.ordered_set.OrderedSet\")'\r\n```\r\n\r\n_Originally posted by @rotu in https://github.com/pypa/setuptools/pull/1690#issuecomment-545992670_\npackage metadata is no longer picklable\nSince switching to OrderedDict, package metadata can no longer be pickled.\r\n\r\nThis can cause problems if you try to use setuptools with multiprocessing.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/dan/Documents/colcon_ws/src/colcon-python-setup-py/colcon_python_setup_py/package_identification/python_setup_py.py\", line 257, in get_setup_information\r\n 'stop_after': 'config'\r\n File \"/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py\", line 261, in apply\r\n return self.apply_async(func, args, kwds).get()\r\n File \"/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py\", line 657, in get\r\n raise self._value\r\nmultiprocessing.pool.MaybeEncodingError: Error sending result: '<distutils.dist.DistributionMetadata object at 0x10c1fcd50>'. 
Reason: 'PicklingError(\"Can't pickle <class 'setuptools._vendor.ordered_set.OrderedSet'>: it's not the same object as setuptools._vendor.ordered_set.OrderedSet\")'\r\n```\r\n\r\n_Originally posted by @rotu in https://github.com/pypa/setuptools/pull/1690#issuecomment-545992670_\n", "before_files": [{"content": "import sys\n\n\nclass VendorImporter:\n \"\"\"\n A PEP 302 meta path importer for finding optionally-vendored\n or otherwise naturally-installed packages from root_name.\n \"\"\"\n\n def __init__(self, root_name, vendored_names=(), vendor_pkg=None):\n self.root_name = root_name\n self.vendored_names = set(vendored_names)\n self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')\n\n @property\n def search_path(self):\n \"\"\"\n Search first the vendor package then as a natural package.\n \"\"\"\n yield self.vendor_pkg + '.'\n yield ''\n\n def find_module(self, fullname, path=None):\n \"\"\"\n Return self when fullname starts with root_name and the\n target module is one vendored through this importer.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n if root:\n return\n if not any(map(target.startswith, self.vendored_names)):\n return\n return self\n\n def load_module(self, fullname):\n \"\"\"\n Iterate over the search path to locate and load fullname.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n for prefix in self.search_path:\n try:\n extant = prefix + target\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n # mysterious hack:\n # Remove the reference to the extant package/module\n # on later Python versions to cause relative imports\n # in the vendor package to resolve the same modules\n # as those going through this importer.\n if sys.version_info >= (3, ):\n del sys.modules[extant]\n return mod\n except ImportError:\n pass\n else:\n raise ImportError(\n \"The '{target}' package is required; \"\n \"normally this is bundled with this package so if you get \"\n \"this warning, consult the packager of your \"\n \"distribution.\".format(**locals())\n )\n\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n \"\"\"\n if self not in sys.meta_path:\n sys.meta_path.append(self)\n\n\nnames = 'six', 'packaging', 'pyparsing', 'ordered_set',\nVendorImporter(__name__, names, 'setuptools._vendor').install()\n", "path": "setuptools/extern/__init__.py"}, {"content": "import sys\n\n\nclass VendorImporter:\n \"\"\"\n A PEP 302 meta path importer for finding optionally-vendored\n or otherwise naturally-installed packages from root_name.\n \"\"\"\n\n def __init__(self, root_name, vendored_names=(), vendor_pkg=None):\n self.root_name = root_name\n self.vendored_names = set(vendored_names)\n self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')\n\n @property\n def search_path(self):\n \"\"\"\n Search first the vendor package then as a natural package.\n \"\"\"\n yield self.vendor_pkg + '.'\n yield ''\n\n def find_module(self, fullname, path=None):\n \"\"\"\n Return self when fullname starts with root_name and the\n target module is one vendored through this importer.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n if root:\n return\n if not any(map(target.startswith, self.vendored_names)):\n return\n return self\n\n def load_module(self, fullname):\n \"\"\"\n Iterate over the search path to locate and load fullname.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n for prefix in 
self.search_path:\n try:\n extant = prefix + target\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n # mysterious hack:\n # Remove the reference to the extant package/module\n # on later Python versions to cause relative imports\n # in the vendor package to resolve the same modules\n # as those going through this importer.\n if prefix and sys.version_info > (3, 3):\n del sys.modules[extant]\n return mod\n except ImportError:\n pass\n else:\n raise ImportError(\n \"The '{target}' package is required; \"\n \"normally this is bundled with this package so if you get \"\n \"this warning, consult the packager of your \"\n \"distribution.\".format(**locals())\n )\n\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n \"\"\"\n if self not in sys.meta_path:\n sys.meta_path.append(self)\n\n\nnames = 'packaging', 'pyparsing', 'six', 'appdirs'\nVendorImporter(__name__, names).install()\n", "path": "pkg_resources/extern/__init__.py"}]} | 2,520 | 338 |
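The setuptools failure above comes from pickle resolving a class by its `__module__` and `__qualname__` and finding a different object there. A small self-contained reproduction of that mechanism, using an invented module name in place of the real vendored package:

```
import pickle
import sys
import types


class OrderedSetDemo:
    pass


# Re-home the class under a vendored-style module name, the way the
# importer hack effectively did for setuptools' OrderedSet.
OrderedSetDemo.__module__ = "vendored_home"
fake = types.ModuleType("vendored_home")
fake.OrderedSetDemo = type("OrderedSetDemo", (), {})  # a *different* class object
sys.modules["vendored_home"] = fake

try:
    pickle.dumps(OrderedSetDemo())
except pickle.PicklingError as exc:
    print(exc)  # "... it's not the same object as vendored_home.OrderedSetDemo"
```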
gh_patches_debug_4601 | rasdani/github-patches | git_diff | python__typeshed-9779 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change `ignore_missing_stub` default to `false`?
I've been wondering about this.
Possible reasons:
1. More and more stubs are "stubtest" complete (not counting pyright and mypy)
	1.1. It should be relatively easy to "minimally complete" missing ones using stubgen. Especially after the reduction of stubtest false-positives by https://github.com/python/mypy/pull/14270
1.2. Eventually most if not all stubs will have `ignore_missing_stub = false`, which could be the only entry under `[tool.stubtest]`
2. It's easier to search for `ignore_missing_stub = true` than to search for a missing line.
3. `None` would mean `False` in the metadata parser. (feels more intuitive?)
4. #8955
> I don't want to encourage new incomplete stubs
Cons:
- Have to update the metadata parser
- A commit that flips (adds/removes) the value in all metadata.toml
- Maybe wait until there's even less (if not all) "stubtest incomplete" stubs left.
</issue>
<code>
[start of scripts/create_baseline_stubs.py]
1 #!/usr/bin/env python3
2
3 """Script to generate unannotated baseline stubs using stubgen.
4
5 Basic usage:
6 $ python3 scripts/create_baseline_stubs.py <project on PyPI>
7
8 Run with -h for more help.
9 """
10
11 from __future__ import annotations
12
13 import argparse
14 import os
15 import re
16 import subprocess
17 import sys
18
19 if sys.version_info >= (3, 8):
20 from importlib.metadata import distribution
21
22 PYRIGHT_CONFIG = "pyrightconfig.stricter.json"
23
24
25 def search_pip_freeze_output(project: str, output: str) -> tuple[str, str] | None:
26 # Look for lines such as "typed-ast==1.4.2". '-' matches '_' and
27 # '_' matches '-' in project name, so that "typed_ast" matches
28 # "typed-ast", and vice versa.
29 regex = "^(" + re.sub(r"[-_]", "[-_]", project) + ")==(.*)"
30 m = re.search(regex, output, flags=re.IGNORECASE | re.MULTILINE)
31 if not m:
32 return None
33 return m.group(1), m.group(2)
34
35
36 def get_installed_package_info(project: str) -> tuple[str, str] | None:
37 """Find package information from pip freeze output.
38
39 Match project name somewhat fuzzily (case sensitive; '-' matches '_', and
40 vice versa).
41
42 Return (normalized project name, installed version) if successful.
43 """
44 r = subprocess.run(["pip", "freeze"], capture_output=True, text=True, check=True)
45 return search_pip_freeze_output(project, r.stdout)
46
47
48 def run_stubgen(package: str, output: str) -> None:
49 print(f"Running stubgen: stubgen -o {output} -p {package}")
50 subprocess.run(["stubgen", "-o", output, "-p", package, "--export-less"], check=True)
51
52
53 def run_black(stub_dir: str) -> None:
54 print(f"Running black: black {stub_dir}")
55 subprocess.run(["black", stub_dir])
56
57
58 def run_isort(stub_dir: str) -> None:
59 print(f"Running isort: isort {stub_dir}")
60 subprocess.run(["python3", "-m", "isort", stub_dir])
61
62
63 def create_metadata(stub_dir: str, version: str) -> None:
64 """Create a METADATA.toml file."""
65 match = re.match(r"[0-9]+.[0-9]+", version)
66 if match is None:
67 sys.exit(f"Error: Cannot parse version number: {version}")
68 filename = os.path.join(stub_dir, "METADATA.toml")
69 version = match.group(0)
70 if os.path.exists(filename):
71 return
72 print(f"Writing {filename}")
73 with open(filename, "w", encoding="UTF-8") as file:
74 file.write(
75 f"""\
76 version = "{version}.*"
77
78 [tool.stubtest]
79 ignore_missing_stub = false
80 """
81 )
82
83
84 def add_pyright_exclusion(stub_dir: str) -> None:
85 """Exclude stub_dir from strict pyright checks."""
86 with open(PYRIGHT_CONFIG, encoding="UTF-8") as f:
87 lines = f.readlines()
88 i = 0
89 while i < len(lines) and not lines[i].strip().startswith('"exclude": ['):
90 i += 1
91 assert i < len(lines), f"Error parsing {PYRIGHT_CONFIG}"
92 while not lines[i].strip().startswith("]"):
93 i += 1
94 # Must use forward slash in the .json file
95 line_to_add = f' "{stub_dir}",'.replace("\\", "/")
96 initial = i - 1
97 while lines[i].lower() > line_to_add.lower():
98 i -= 1
99 if lines[i + 1].strip().rstrip(",") == line_to_add.strip().rstrip(","):
100 print(f"{PYRIGHT_CONFIG} already up-to-date")
101 return
102 if i == initial:
103 # Special case: when adding to the end of the list, commas need tweaking
104 line_to_add = line_to_add.rstrip(",")
105 lines[i] = lines[i].rstrip() + ",\n"
106 lines.insert(i + 1, line_to_add + "\n")
107 print(f"Updating {PYRIGHT_CONFIG}")
108 with open(PYRIGHT_CONFIG, "w", encoding="UTF-8") as f:
109 f.writelines(lines)
110
111
112 def main() -> None:
113 parser = argparse.ArgumentParser(
114 description="""Generate baseline stubs automatically for an installed pip package
115 using stubgen. Also run black and isort. If the name of
116 the project is different from the runtime Python package name, you may
117 need to use --package (example: --package yaml PyYAML)."""
118 )
119 parser.add_argument("project", help="name of PyPI project for which to generate stubs under stubs/")
120 parser.add_argument("--package", help="generate stubs for this Python package (default is autodetected)")
121 args = parser.parse_args()
122 project = args.project
123 package = args.package
124
125 if not re.match(r"[a-zA-Z0-9-_.]+$", project):
126 sys.exit(f"Invalid character in project name: {project!r}")
127
128 if not package:
129 package = project # default
130 # Try to find which packages are provided by the project
131 # Use default if that fails or if several packages are found
132 #
133 # The importlib.metadata module is used for projects whose name is different
134 # from the runtime Python package name (example: PyYAML/yaml)
135 if sys.version_info >= (3, 8):
136 dist = distribution(project).read_text("top_level.txt")
137 if dist is not None:
138 packages = [name for name in dist.split() if not name.startswith("_")]
139 if len(packages) == 1:
140 package = packages[0]
141 print(f'Using detected package "{package}" for project "{project}"', file=sys.stderr)
142 print("Suggestion: Try again with --package argument if that's not what you wanted", file=sys.stderr)
143
144 if not os.path.isdir("stubs") or not os.path.isdir("stdlib"):
145 sys.exit("Error: Current working directory must be the root of typeshed repository")
146
147 # Get normalized project name and version of installed package.
148 info = get_installed_package_info(project)
149 if info is None:
150 print(f'Error: "{project}" is not installed', file=sys.stderr)
151 print("", file=sys.stderr)
152 print(f'Suggestion: Run "python3 -m pip install {project}" and try again', file=sys.stderr)
153 sys.exit(1)
154 project, version = info
155
156 stub_dir = os.path.join("stubs", project)
157 package_dir = os.path.join(stub_dir, package)
158 if os.path.exists(package_dir):
159 sys.exit(f"Error: {package_dir} already exists (delete it first)")
160
161 run_stubgen(package, stub_dir)
162
163 run_isort(stub_dir)
164 run_black(stub_dir)
165
166 create_metadata(stub_dir, version)
167
168 # Since the generated stubs won't have many type annotations, we
169 # have to exclude them from strict pyright checks.
170 add_pyright_exclusion(stub_dir)
171
172 print("\nDone!\n\nSuggested next steps:")
173 print(f" 1. Manually review the generated stubs in {stub_dir}")
174 print(" 2. Optionally run tests and autofixes (see tests/README.md for details)")
175 print(" 3. Commit the changes on a new branch and create a typeshed PR (don't force-push!)")
176
177
178 if __name__ == "__main__":
179 main()
180
[end of scripts/create_baseline_stubs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/create_baseline_stubs.py b/scripts/create_baseline_stubs.py
--- a/scripts/create_baseline_stubs.py
+++ b/scripts/create_baseline_stubs.py
@@ -71,14 +71,7 @@
return
print(f"Writing {filename}")
with open(filename, "w", encoding="UTF-8") as file:
- file.write(
- f"""\
-version = "{version}.*"
-
-[tool.stubtest]
-ignore_missing_stub = false
-"""
- )
+ file.write(f'version = "{version}.*"')
def add_pyright_exclusion(stub_dir: str) -> None:
| {"golden_diff": "diff --git a/scripts/create_baseline_stubs.py b/scripts/create_baseline_stubs.py\n--- a/scripts/create_baseline_stubs.py\n+++ b/scripts/create_baseline_stubs.py\n@@ -71,14 +71,7 @@\n return\n print(f\"Writing {filename}\")\n with open(filename, \"w\", encoding=\"UTF-8\") as file:\n- file.write(\n- f\"\"\"\\\n-version = \"{version}.*\"\n-\n-[tool.stubtest]\n-ignore_missing_stub = false\n-\"\"\"\n- )\n+ file.write(f'version = \"{version}.*\"')\n \n \n def add_pyright_exclusion(stub_dir: str) -> None:\n", "issue": "Change `ignore_missing_stub` default to `false`?\nI've been wondering about this.\r\n\r\nPossible reasons:\r\n1. More and more stubs are \"stubtest\" complete (not counting pyright and mypy)\r\n\t1.1. It should be relatively easy to \"minimally complete\" missing ones using stugben. Especially after the reduction of stubtest false-positives by https://github.com/python/mypy/pull/14270\r\n\t1.2. Eventually most if not all stubs will have `ignore_missing_stub = false`, which could be the only entry under `[tool.stubtest]`\r\n2. It's easier to search for `ignore_missing_stub = true` than to search for a missing line.\r\n3. `None` would mean `False` in the metadata parser. (feels more intuitive?)\r\n4. #8955\r\n > I don't want to encourage new incomplete stubs\r\n \r\nCons:\r\n- Have to update the metadata parser\r\n- A commit that flips (adds/removes) the value in all metadata.toml\r\n- Maybe wait until there's even less (if not all) \"stubtest incomplete\" stubs left.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Script to generate unannotated baseline stubs using stubgen.\n\nBasic usage:\n$ python3 scripts/create_baseline_stubs.py <project on PyPI>\n\nRun with -h for more help.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport os\nimport re\nimport subprocess\nimport sys\n\nif sys.version_info >= (3, 8):\n from importlib.metadata import distribution\n\nPYRIGHT_CONFIG = \"pyrightconfig.stricter.json\"\n\n\ndef search_pip_freeze_output(project: str, output: str) -> tuple[str, str] | None:\n # Look for lines such as \"typed-ast==1.4.2\". 
'-' matches '_' and\n # '_' matches '-' in project name, so that \"typed_ast\" matches\n # \"typed-ast\", and vice versa.\n regex = \"^(\" + re.sub(r\"[-_]\", \"[-_]\", project) + \")==(.*)\"\n m = re.search(regex, output, flags=re.IGNORECASE | re.MULTILINE)\n if not m:\n return None\n return m.group(1), m.group(2)\n\n\ndef get_installed_package_info(project: str) -> tuple[str, str] | None:\n \"\"\"Find package information from pip freeze output.\n\n Match project name somewhat fuzzily (case sensitive; '-' matches '_', and\n vice versa).\n\n Return (normalized project name, installed version) if successful.\n \"\"\"\n r = subprocess.run([\"pip\", \"freeze\"], capture_output=True, text=True, check=True)\n return search_pip_freeze_output(project, r.stdout)\n\n\ndef run_stubgen(package: str, output: str) -> None:\n print(f\"Running stubgen: stubgen -o {output} -p {package}\")\n subprocess.run([\"stubgen\", \"-o\", output, \"-p\", package, \"--export-less\"], check=True)\n\n\ndef run_black(stub_dir: str) -> None:\n print(f\"Running black: black {stub_dir}\")\n subprocess.run([\"black\", stub_dir])\n\n\ndef run_isort(stub_dir: str) -> None:\n print(f\"Running isort: isort {stub_dir}\")\n subprocess.run([\"python3\", \"-m\", \"isort\", stub_dir])\n\n\ndef create_metadata(stub_dir: str, version: str) -> None:\n \"\"\"Create a METADATA.toml file.\"\"\"\n match = re.match(r\"[0-9]+.[0-9]+\", version)\n if match is None:\n sys.exit(f\"Error: Cannot parse version number: {version}\")\n filename = os.path.join(stub_dir, \"METADATA.toml\")\n version = match.group(0)\n if os.path.exists(filename):\n return\n print(f\"Writing {filename}\")\n with open(filename, \"w\", encoding=\"UTF-8\") as file:\n file.write(\n f\"\"\"\\\nversion = \"{version}.*\"\n\n[tool.stubtest]\nignore_missing_stub = false\n\"\"\"\n )\n\n\ndef add_pyright_exclusion(stub_dir: str) -> None:\n \"\"\"Exclude stub_dir from strict pyright checks.\"\"\"\n with open(PYRIGHT_CONFIG, encoding=\"UTF-8\") as f:\n lines = f.readlines()\n i = 0\n while i < len(lines) and not lines[i].strip().startswith('\"exclude\": ['):\n i += 1\n assert i < len(lines), f\"Error parsing {PYRIGHT_CONFIG}\"\n while not lines[i].strip().startswith(\"]\"):\n i += 1\n # Must use forward slash in the .json file\n line_to_add = f' \"{stub_dir}\",'.replace(\"\\\\\", \"/\")\n initial = i - 1\n while lines[i].lower() > line_to_add.lower():\n i -= 1\n if lines[i + 1].strip().rstrip(\",\") == line_to_add.strip().rstrip(\",\"):\n print(f\"{PYRIGHT_CONFIG} already up-to-date\")\n return\n if i == initial:\n # Special case: when adding to the end of the list, commas need tweaking\n line_to_add = line_to_add.rstrip(\",\")\n lines[i] = lines[i].rstrip() + \",\\n\"\n lines.insert(i + 1, line_to_add + \"\\n\")\n print(f\"Updating {PYRIGHT_CONFIG}\")\n with open(PYRIGHT_CONFIG, \"w\", encoding=\"UTF-8\") as f:\n f.writelines(lines)\n\n\ndef main() -> None:\n parser = argparse.ArgumentParser(\n description=\"\"\"Generate baseline stubs automatically for an installed pip package\n using stubgen. Also run black and isort. 
If the name of\n the project is different from the runtime Python package name, you may\n need to use --package (example: --package yaml PyYAML).\"\"\"\n )\n parser.add_argument(\"project\", help=\"name of PyPI project for which to generate stubs under stubs/\")\n parser.add_argument(\"--package\", help=\"generate stubs for this Python package (default is autodetected)\")\n args = parser.parse_args()\n project = args.project\n package = args.package\n\n if not re.match(r\"[a-zA-Z0-9-_.]+$\", project):\n sys.exit(f\"Invalid character in project name: {project!r}\")\n\n if not package:\n package = project # default\n # Try to find which packages are provided by the project\n # Use default if that fails or if several packages are found\n #\n # The importlib.metadata module is used for projects whose name is different\n # from the runtime Python package name (example: PyYAML/yaml)\n if sys.version_info >= (3, 8):\n dist = distribution(project).read_text(\"top_level.txt\")\n if dist is not None:\n packages = [name for name in dist.split() if not name.startswith(\"_\")]\n if len(packages) == 1:\n package = packages[0]\n print(f'Using detected package \"{package}\" for project \"{project}\"', file=sys.stderr)\n print(\"Suggestion: Try again with --package argument if that's not what you wanted\", file=sys.stderr)\n\n if not os.path.isdir(\"stubs\") or not os.path.isdir(\"stdlib\"):\n sys.exit(\"Error: Current working directory must be the root of typeshed repository\")\n\n # Get normalized project name and version of installed package.\n info = get_installed_package_info(project)\n if info is None:\n print(f'Error: \"{project}\" is not installed', file=sys.stderr)\n print(\"\", file=sys.stderr)\n print(f'Suggestion: Run \"python3 -m pip install {project}\" and try again', file=sys.stderr)\n sys.exit(1)\n project, version = info\n\n stub_dir = os.path.join(\"stubs\", project)\n package_dir = os.path.join(stub_dir, package)\n if os.path.exists(package_dir):\n sys.exit(f\"Error: {package_dir} already exists (delete it first)\")\n\n run_stubgen(package, stub_dir)\n\n run_isort(stub_dir)\n run_black(stub_dir)\n\n create_metadata(stub_dir, version)\n\n # Since the generated stubs won't have many type annotations, we\n # have to exclude them from strict pyright checks.\n add_pyright_exclusion(stub_dir)\n\n print(\"\\nDone!\\n\\nSuggested next steps:\")\n print(f\" 1. Manually review the generated stubs in {stub_dir}\")\n print(\" 2. Optionally run tests and autofixes (see tests/README.md for details)\")\n print(\" 3. Commit the changes on a new branch and create a typeshed PR (don't force-push!)\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "scripts/create_baseline_stubs.py"}]} | 2,895 | 142 |
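The typeshed record above also mentions that the metadata parser would have to treat an absent `ignore_missing_stub` key as `false`. A rough sketch of such a lookup, assuming METADATA.toml is read with the standard library's TOML parser (the helper name is invented):

```
import tomllib  # Python 3.11+; the tomli package offers the same API on older versions


def ignore_missing_stub(metadata_path: str) -> bool:
    """Return the stubtest 'ignore_missing_stub' flag, defaulting to False."""
    with open(metadata_path, "rb") as f:
        metadata = tomllib.load(f)
    # A missing [tool.stubtest] table or key now means "do not ignore".
    return metadata.get("tool", {}).get("stubtest", {}).get("ignore_missing_stub", False)
```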
gh_patches_debug_15480 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-4194 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
aioredis raises CancelledError in _finish_span
### Which version of dd-trace-py are you using?
~~0.53.0~~ 0.58.0
### Which version of pip are you using?
21.3.1
### Which version of the libraries are you using?
django==3.2.11
django-redis==5.0.0
channels==3.0.4
daphne==3.0.2
### How can we reproduce your problem?
I am using code similar to the following:
asgi.py
```
import django
from channels.routing import get_default_application
from ddtrace.contrib.asgi import TraceMiddleware
django.setup()
application = TraceMiddleware(get_default_application())
```
routing.py
```
from django.urls import re_path
import my_app.consumers
websocket_urlpatterns = [
re_path(r"^ws/test/$", consumers.TestConsumer.as_asgi()),
]
```
my_app/consumers.py
```
from channels.generic.websocket import WebsocketConsumer
class TestConsumer(WebsocketConsumer):
groups = ["broadcast"]
def connect(self):
self.accept()
def receive(self, text_data=None, bytes_data=None):
raise Exception("An test exception")
```
I am running the application with: `ddtrace-run daphne asgi:application --bind 0.0.0.0 --port 8001`
### What is the result that you get?
I don't get any traces at all, and my logs show this:
```
handle: <Handle traced_13_execute_command.<locals>._finish_span(<Future cancelled>) at /usr/local/lib/python3.10/site-packages/ddtrace/contrib/aioredis/patch.py:140>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.10/site-packages/ddtrace/contrib/aioredis/patch.py", line 146, in _finish_span
future.result()
asyncio.exceptions.CancelledError
```
### What is the result that you expected?
No errors
</issue>
<code>
[start of ddtrace/contrib/aioredis/patch.py]
1 import asyncio
2 import sys
3
4 import aioredis
5
6 from ddtrace import config
7 from ddtrace.internal.utils.wrappers import unwrap as _u
8 from ddtrace.pin import Pin
9 from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
10
11 from .. import trace_utils
12 from ...constants import ANALYTICS_SAMPLE_RATE_KEY
13 from ...constants import SPAN_MEASURED_KEY
14 from ...ext import SpanTypes
15 from ...ext import net
16 from ...ext import redis as redisx
17 from ...internal.utils.formats import stringify_cache_args
18 from ..redis.util import _trace_redis_cmd
19 from ..redis.util import _trace_redis_execute_pipeline
20
21
22 try:
23 from aioredis.commands.transaction import _RedisBuffer
24 except ImportError:
25 _RedisBuffer = None
26
27 config._add("aioredis", dict(_default_service="redis"))
28
29 aioredis_version_str = getattr(aioredis, "__version__", "0.0.0")
30 aioredis_version = tuple([int(i) for i in aioredis_version_str.split(".")])
31
32
33 def patch():
34 if getattr(aioredis, "_datadog_patch", False):
35 return
36 setattr(aioredis, "_datadog_patch", True)
37 pin = Pin()
38 if aioredis_version >= (2, 0):
39 _w("aioredis.client", "Redis.execute_command", traced_execute_command)
40 _w("aioredis.client", "Redis.pipeline", traced_pipeline)
41 _w("aioredis.client", "Pipeline.execute", traced_execute_pipeline)
42 pin.onto(aioredis.client.Redis)
43 else:
44 _w("aioredis", "Redis.execute", traced_13_execute_command)
45 _w("aioredis", "Redis.pipeline", traced_13_pipeline)
46 _w("aioredis.commands.transaction", "Pipeline.execute", traced_13_execute_pipeline)
47 pin.onto(aioredis.Redis)
48
49
50 def unpatch():
51 if not getattr(aioredis, "_datadog_patch", False):
52 return
53
54 setattr(aioredis, "_datadog_patch", False)
55 if aioredis_version >= (2, 0):
56 _u(aioredis.client.Redis, "execute_command")
57 _u(aioredis.client.Redis, "pipeline")
58 _u(aioredis.client.Pipeline, "execute")
59 else:
60 _u(aioredis.Redis, "execute")
61 _u(aioredis.Redis, "pipeline")
62 _u(aioredis.commands.transaction.Pipeline, "execute")
63
64
65 async def traced_execute_command(func, instance, args, kwargs):
66 pin = Pin.get_from(instance)
67 if not pin or not pin.enabled():
68 return await func(*args, **kwargs)
69
70 with _trace_redis_cmd(pin, config.aioredis, instance, args):
71 return await func(*args, **kwargs)
72
73
74 def traced_pipeline(func, instance, args, kwargs):
75 pipeline = func(*args, **kwargs)
76 pin = Pin.get_from(instance)
77 if pin:
78 pin.onto(pipeline)
79 return pipeline
80
81
82 async def traced_execute_pipeline(func, instance, args, kwargs):
83 pin = Pin.get_from(instance)
84 if not pin or not pin.enabled():
85 return await func(*args, **kwargs)
86
87 cmds = [stringify_cache_args(c) for c, _ in instance.command_stack]
88 resource = "\n".join(cmds)
89 with _trace_redis_execute_pipeline(pin, config.aioredis, resource, instance):
90 return await func(*args, **kwargs)
91
92
93 def traced_13_pipeline(func, instance, args, kwargs):
94 pipeline = func(*args, **kwargs)
95 pin = Pin.get_from(instance)
96 if pin:
97 pin.onto(pipeline)
98 return pipeline
99
100
101 def traced_13_execute_command(func, instance, args, kwargs):
102 # If we have a _RedisBuffer then we are in a pipeline
103 if isinstance(instance.connection, _RedisBuffer):
104 return func(*args, **kwargs)
105
106 pin = Pin.get_from(instance)
107 if not pin or not pin.enabled():
108 return func(*args, **kwargs)
109
110 # Don't activate the span since this operation is performed as a future which concludes sometime later on in
111 # execution so subsequent operations in the stack are not necessarily semantically related
112 # (we don't want this span to be the parent of all other spans created before the future is resolved)
113 parent = pin.tracer.current_span()
114 span = pin.tracer.start_span(
115 redisx.CMD,
116 service=trace_utils.ext_service(pin, config.aioredis),
117 span_type=SpanTypes.REDIS,
118 activate=False,
119 child_of=parent,
120 )
121
122 span.set_tag(SPAN_MEASURED_KEY)
123 query = stringify_cache_args(args)
124 span.resource = query
125 span.set_tag(redisx.RAWCMD, query)
126 if pin.tags:
127 span.set_tags(pin.tags)
128
129 span.set_tags(
130 {
131 net.TARGET_HOST: instance.address[0],
132 net.TARGET_PORT: instance.address[1],
133 redisx.DB: instance.db or 0,
134 }
135 )
136 span.set_metric(redisx.ARGS_LEN, len(args))
137 # set analytics sample rate if enabled
138 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())
139
140 def _finish_span(future):
141 try:
142 # Accessing the result will raise an exception if:
143 # - The future was cancelled
144 # - There was an error executing the future (`future.exception()`)
145 # - The future is in an invalid state
146 future.result()
147 except Exception:
148 span.set_exc_info(*sys.exc_info())
149 finally:
150 span.finish()
151
152 task = func(*args, **kwargs)
153 # Execute command returns a coroutine when no free connections are available
154 # https://github.com/aio-libs/aioredis-py/blob/v1.3.1/aioredis/pool.py#L191
155 task = asyncio.ensure_future(task)
156 task.add_done_callback(_finish_span)
157 return task
158
159
160 async def traced_13_execute_pipeline(func, instance, args, kwargs):
161 pin = Pin.get_from(instance)
162 if not pin or not pin.enabled():
163 return await func(*args, **kwargs)
164
165 cmds = []
166 for _, cmd, cmd_args, _ in instance._pipeline:
167 parts = [cmd]
168 parts.extend(cmd_args)
169 cmds.append(stringify_cache_args(parts))
170 resource = "\n".join(cmds)
171 with pin.tracer.trace(
172 redisx.CMD,
173 resource=resource,
174 service=trace_utils.ext_service(pin, config.aioredis),
175 span_type=SpanTypes.REDIS,
176 ) as span:
177
178 span.set_tags(
179 {
180 net.TARGET_HOST: instance._pool_or_conn.address[0],
181 net.TARGET_PORT: instance._pool_or_conn.address[1],
182 redisx.DB: instance._pool_or_conn.db or 0,
183 }
184 )
185
186 span.set_tag(SPAN_MEASURED_KEY)
187 span.set_tag(redisx.RAWCMD, resource)
188 span.set_metric(redisx.PIPELINE_LEN, len(instance._pipeline))
189 # set analytics sample rate if enabled
190 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())
191
192 return await func(*args, **kwargs)
193
[end of ddtrace/contrib/aioredis/patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/aioredis/patch.py b/ddtrace/contrib/aioredis/patch.py
--- a/ddtrace/contrib/aioredis/patch.py
+++ b/ddtrace/contrib/aioredis/patch.py
@@ -140,11 +140,12 @@
def _finish_span(future):
try:
# Accessing the result will raise an exception if:
- # - The future was cancelled
+ # - The future was cancelled (CancelledError)
# - There was an error executing the future (`future.exception()`)
# - The future is in an invalid state
future.result()
- except Exception:
+ # CancelledError exceptions extend from BaseException as of Python 3.8, instead of usual Exception
+ except BaseException:
span.set_exc_info(*sys.exc_info())
finally:
span.finish()
| {"golden_diff": "diff --git a/ddtrace/contrib/aioredis/patch.py b/ddtrace/contrib/aioredis/patch.py\n--- a/ddtrace/contrib/aioredis/patch.py\n+++ b/ddtrace/contrib/aioredis/patch.py\n@@ -140,11 +140,12 @@\n def _finish_span(future):\n try:\n # Accessing the result will raise an exception if:\n- # - The future was cancelled\n+ # - The future was cancelled (CancelledError)\n # - There was an error executing the future (`future.exception()`)\n # - The future is in an invalid state\n future.result()\n- except Exception:\n+ # CancelledError exceptions extend from BaseException as of Python 3.8, instead of usual Exception\n+ except BaseException:\n span.set_exc_info(*sys.exc_info())\n finally:\n span.finish()\n", "issue": "aioredis raises CancelledError in _finish_span \n### Which version of dd-trace-py are you using?\r\n\r\n~~0.53.0~~ 0.58.0\r\n\r\n### Which version of pip are you using?\r\n\r\n21.3.1\r\n\r\n### Which version of the libraries are you using?\r\n\r\ndjango==3.2.11\r\ndjango-redis==5.0.0\r\nchannels==3.0.4\r\ndaphne==3.0.2\r\n\r\n### How can we reproduce your problem?\r\n\r\nI am using code similar to the following:\r\n\r\nasgi.py\r\n\r\n```\r\nimport django\r\nfrom channels.routing import get_default_application\r\nfrom ddtrace.contrib.asgi import TraceMiddleware\r\n\r\ndjango.setup()\r\napplication = TraceMiddleware(get_default_application())\r\n```\r\n\r\nrouting.py\r\n\r\n```\r\nfrom django.urls import re_path\r\nimport my_app.consumers\r\n\r\nwebsocket_urlpatterns = [\r\n re_path(r\"^ws/test/$\", consumers.TestConsumer.as_asgi()),\r\n]\r\n```\r\n\r\nmy_app/consumers.py\r\n\r\n```\r\nfrom channels.generic.websocket import WebsocketConsumer\r\n\r\nclass TestConsumer(WebsocketConsumer):\r\n groups = [\"broadcast\"]\r\n\r\n def connect(self):\r\n self.accept()\r\n\r\n def receive(self, text_data=None, bytes_data=None):\r\n raise Exception(\"An test exception\")\r\n```\r\n\r\nI am running the application with: `ddtrace-run daphne asgi:application --bind 0.0.0.0 --port 8001`\r\n\r\n### What is the result that you get?\r\n\r\nI don't get any traces at all, and my logs show this:\r\n\r\n```\r\nhandle: <Handle traced_13_execute_command.<locals>._finish_span(<Future cancelled>) at /usr/local/lib/python3.10/site-packages/ddtrace/contrib/aioredis/patch.py:140>\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/asyncio/events.py\", line 80, in _run\r\n self._context.run(self._callback, *self._args)\r\n File \"/usr/local/lib/python3.10/site-packages/ddtrace/contrib/aioredis/patch.py\", line 146, in _finish_span\r\n future.result()\r\nasyncio.exceptions.CancelledError\r\n```\r\n\r\n\r\n### What is the result that you expected?\r\n\r\nNo errors\r\n\n", "before_files": [{"content": "import asyncio\nimport sys\n\nimport aioredis\n\nfrom ddtrace import config\nfrom ddtrace.internal.utils.wrappers import unwrap as _u\nfrom ddtrace.pin import Pin\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\n\nfrom .. 
import trace_utils\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ...constants import SPAN_MEASURED_KEY\nfrom ...ext import SpanTypes\nfrom ...ext import net\nfrom ...ext import redis as redisx\nfrom ...internal.utils.formats import stringify_cache_args\nfrom ..redis.util import _trace_redis_cmd\nfrom ..redis.util import _trace_redis_execute_pipeline\n\n\ntry:\n from aioredis.commands.transaction import _RedisBuffer\nexcept ImportError:\n _RedisBuffer = None\n\nconfig._add(\"aioredis\", dict(_default_service=\"redis\"))\n\naioredis_version_str = getattr(aioredis, \"__version__\", \"0.0.0\")\naioredis_version = tuple([int(i) for i in aioredis_version_str.split(\".\")])\n\n\ndef patch():\n if getattr(aioredis, \"_datadog_patch\", False):\n return\n setattr(aioredis, \"_datadog_patch\", True)\n pin = Pin()\n if aioredis_version >= (2, 0):\n _w(\"aioredis.client\", \"Redis.execute_command\", traced_execute_command)\n _w(\"aioredis.client\", \"Redis.pipeline\", traced_pipeline)\n _w(\"aioredis.client\", \"Pipeline.execute\", traced_execute_pipeline)\n pin.onto(aioredis.client.Redis)\n else:\n _w(\"aioredis\", \"Redis.execute\", traced_13_execute_command)\n _w(\"aioredis\", \"Redis.pipeline\", traced_13_pipeline)\n _w(\"aioredis.commands.transaction\", \"Pipeline.execute\", traced_13_execute_pipeline)\n pin.onto(aioredis.Redis)\n\n\ndef unpatch():\n if not getattr(aioredis, \"_datadog_patch\", False):\n return\n\n setattr(aioredis, \"_datadog_patch\", False)\n if aioredis_version >= (2, 0):\n _u(aioredis.client.Redis, \"execute_command\")\n _u(aioredis.client.Redis, \"pipeline\")\n _u(aioredis.client.Pipeline, \"execute\")\n else:\n _u(aioredis.Redis, \"execute\")\n _u(aioredis.Redis, \"pipeline\")\n _u(aioredis.commands.transaction.Pipeline, \"execute\")\n\n\nasync def traced_execute_command(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n with _trace_redis_cmd(pin, config.aioredis, instance, args):\n return await func(*args, **kwargs)\n\n\ndef traced_pipeline(func, instance, args, kwargs):\n pipeline = func(*args, **kwargs)\n pin = Pin.get_from(instance)\n if pin:\n pin.onto(pipeline)\n return pipeline\n\n\nasync def traced_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n cmds = [stringify_cache_args(c) for c, _ in instance.command_stack]\n resource = \"\\n\".join(cmds)\n with _trace_redis_execute_pipeline(pin, config.aioredis, resource, instance):\n return await func(*args, **kwargs)\n\n\ndef traced_13_pipeline(func, instance, args, kwargs):\n pipeline = func(*args, **kwargs)\n pin = Pin.get_from(instance)\n if pin:\n pin.onto(pipeline)\n return pipeline\n\n\ndef traced_13_execute_command(func, instance, args, kwargs):\n # If we have a _RedisBuffer then we are in a pipeline\n if isinstance(instance.connection, _RedisBuffer):\n return func(*args, **kwargs)\n\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n # Don't activate the span since this operation is performed as a future which concludes sometime later on in\n # execution so subsequent operations in the stack are not necessarily semantically related\n # (we don't want this span to be the parent of all other spans created before the future is resolved)\n parent = pin.tracer.current_span()\n span = pin.tracer.start_span(\n redisx.CMD,\n service=trace_utils.ext_service(pin, 
config.aioredis),\n span_type=SpanTypes.REDIS,\n activate=False,\n child_of=parent,\n )\n\n span.set_tag(SPAN_MEASURED_KEY)\n query = stringify_cache_args(args)\n span.resource = query\n span.set_tag(redisx.RAWCMD, query)\n if pin.tags:\n span.set_tags(pin.tags)\n\n span.set_tags(\n {\n net.TARGET_HOST: instance.address[0],\n net.TARGET_PORT: instance.address[1],\n redisx.DB: instance.db or 0,\n }\n )\n span.set_metric(redisx.ARGS_LEN, len(args))\n # set analytics sample rate if enabled\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())\n\n def _finish_span(future):\n try:\n # Accessing the result will raise an exception if:\n # - The future was cancelled\n # - There was an error executing the future (`future.exception()`)\n # - The future is in an invalid state\n future.result()\n except Exception:\n span.set_exc_info(*sys.exc_info())\n finally:\n span.finish()\n\n task = func(*args, **kwargs)\n # Execute command returns a coroutine when no free connections are available\n # https://github.com/aio-libs/aioredis-py/blob/v1.3.1/aioredis/pool.py#L191\n task = asyncio.ensure_future(task)\n task.add_done_callback(_finish_span)\n return task\n\n\nasync def traced_13_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n cmds = []\n for _, cmd, cmd_args, _ in instance._pipeline:\n parts = [cmd]\n parts.extend(cmd_args)\n cmds.append(stringify_cache_args(parts))\n resource = \"\\n\".join(cmds)\n with pin.tracer.trace(\n redisx.CMD,\n resource=resource,\n service=trace_utils.ext_service(pin, config.aioredis),\n span_type=SpanTypes.REDIS,\n ) as span:\n\n span.set_tags(\n {\n net.TARGET_HOST: instance._pool_or_conn.address[0],\n net.TARGET_PORT: instance._pool_or_conn.address[1],\n redisx.DB: instance._pool_or_conn.db or 0,\n }\n )\n\n span.set_tag(SPAN_MEASURED_KEY)\n span.set_tag(redisx.RAWCMD, resource)\n span.set_metric(redisx.PIPELINE_LEN, len(instance._pipeline))\n # set analytics sample rate if enabled\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())\n\n return await func(*args, **kwargs)\n", "path": "ddtrace/contrib/aioredis/patch.py"}]} | 3,106 | 202 |
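The aioredis record above hinges on an exception-hierarchy detail: since Python 3.8, `asyncio.CancelledError` derives from `BaseException`, so the original `except Exception` in the done-callback never sees a cancelled future. A short self-contained check of that behaviour:

```
import asyncio

print(issubclass(asyncio.CancelledError, Exception))      # False on Python 3.8+
print(issubclass(asyncio.CancelledError, BaseException))  # True


async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.cancel()
    try:
        fut.result()
    except Exception:
        print("not reached on 3.8+: except Exception misses the cancellation")
    except BaseException:
        print("CancelledError is caught here instead")


asyncio.run(main())
```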
gh_patches_debug_33397 | rasdani/github-patches | git_diff | meltano__meltano-7188 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feature: Add Meltano command information to Hub API request
### Feature scope
CLI (options, error messages, logging, etc.)
### Description
When Meltano sends requests to the Hub API, currently sets the `X-Project-ID` and `User-Agent` headers: https://github.com/meltano/meltano/blob/193a3ae4c7adfa29e102a65d31b7e865d9eea349/src/meltano/core/hub/client.py#L113-L125
However, it'd be good to include the CLI command where the request is coming from.
</issue>
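A sketch of how such command context could be forwarded alongside the existing headers; the header name and the argv-based lookup below are invented for illustration and are not taken from Meltano's actual change:

```
from __future__ import annotations

import sys

import requests


def build_hub_session(project_id: str | None, version: str) -> requests.Session:
    session = requests.Session()
    session.headers.update(
        {
            "Accept": "application/json",
            "User-Agent": f"Meltano/{version}",
        }
    )
    if project_id:
        session.headers["X-Project-ID"] = project_id
    # Hypothetical header: expose which CLI command triggered the request,
    # derived here from argv purely for illustration.
    if len(sys.argv) > 1:
        session.headers["X-Meltano-Command"] = " ".join(sys.argv[1:3])
    return session
```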
<code>
[start of src/meltano/core/hub/client.py]
1 """Meltano Hub Client."""
2
3 from __future__ import annotations
4
5 from http import HTTPStatus
6 from typing import Any
7
8 import requests
9 from requests.adapters import HTTPAdapter
10 from structlog.stdlib import get_logger
11 from urllib3 import Retry
12
13 import meltano
14 from meltano.core.plugin import (
15 BasePlugin,
16 PluginDefinition,
17 PluginRef,
18 PluginType,
19 Variant,
20 )
21 from meltano.core.plugin.error import PluginNotFoundError
22 from meltano.core.plugin.factory import base_plugin_factory
23 from meltano.core.plugin_discovery_service import PluginRepository
24 from meltano.core.project import Project
25 from meltano.core.project_settings_service import ProjectSettingsService
26
27 from .schema import IndexedPlugin, VariantRef
28
29 logger = get_logger(__name__)
30
31
32 class HubPluginTypeNotFoundError(Exception):
33 """Raised when a Hub plugin type is not found."""
34
35 def __init__(self, plugin_type: PluginType):
36 """Create a new HubPluginVariantNotFound.
37
38 Args:
39 plugin_type: The type of the plugin.
40 """
41 self.plugin_type = plugin_type
42
43 def __str__(self) -> str:
44 """Return a string representation of the error.
45
46 Returns:
47 The string representation of the error.
48 """
49 return "{type} is not supported in Meltano Hub. Available plugin types: {types}".format(
50 type=self.plugin_type.descriptor.capitalize(),
51 types=PluginType.plurals(),
52 )
53
54
55 class HubConnectionError(Exception):
56 """Raised when a Hub connection error occurs."""
57
58 def __init__(self, reason: str):
59 """Create a new HubConnectionError.
60
61 Args:
62 reason: The reason for the error.
63 """
64 message = f"Could not connect to Meltano Hub. {reason}"
65 super().__init__(message)
66
67
68 class HubPluginVariantNotFoundError(Exception):
69 """Raised when a Hub plugin variant is not found."""
70
71 def __init__(
72 self,
73 plugin_type: PluginType,
74 plugin: IndexedPlugin,
75 variant_name: str,
76 ):
77 """Create a new HubPluginVariantNotFound.
78
79 Args:
80 plugin_type: The type of the plugin.
81 plugin: The indexed plugin.
82 variant_name: The name of the variant that was not found.
83 """
84 self.plugin_type = plugin_type
85 self.plugin = plugin
86 self.variant_name = variant_name
87
88 def __str__(self) -> str:
89 """Return a string representation of the error.
90
91 Returns:
92 The string representation of the error.
93 """
94 return "{type} '{name}' variant '{variant}' is not known to Meltano. Variants: {variant_labels}".format(
95 type=self.plugin_type.descriptor.capitalize(),
96 name=self.plugin.name,
97 variant=self.variant_name,
98 variant_labels=self.plugin.variant_labels,
99 )
100
101
102 class MeltanoHubService(PluginRepository): # noqa: WPS214
103 """PluginRepository implementation for the Meltano Hub."""
104
105 def __init__(self, project: Project) -> None:
106 """Initialize the service.
107
108 Args:
109 project: The Meltano project.
110 """
111 self.project = project
112 self.session = requests.Session()
113 self.session.headers.update(
114 {
115 "Accept": "application/json",
116 "User-Agent": f"Meltano/{meltano.__version__}",
117 }
118 )
119
120 self.settings_service = ProjectSettingsService(self.project)
121
122 if self.settings_service.get("send_anonymous_usage_stats"):
123 project_id = self.settings_service.get("project_id")
124
125 self.session.headers["X-Project-ID"] = project_id
126
127 if self.hub_url_auth:
128 self.session.headers.update({"Authorization": self.hub_url_auth})
129
130 adapter = HTTPAdapter(
131 max_retries=Retry(
132 total=3,
133 backoff_factor=0,
134 status_forcelist=[
135 HTTPStatus.TOO_MANY_REQUESTS,
136 HTTPStatus.INTERNAL_SERVER_ERROR,
137 HTTPStatus.BAD_GATEWAY,
138 HTTPStatus.SERVICE_UNAVAILABLE,
139 HTTPStatus.GATEWAY_TIMEOUT,
140 ],
141 raise_on_status=False,
142 ),
143 )
144 self.session.mount("http://", adapter)
145 self.session.mount("https://", adapter)
146
147 @property
148 def hub_api_url(self) -> str:
149 """Return the URL of the Hub API.
150
151 Returns:
152 The URL of the Hub API.
153 """
154 hub_api_root = self.settings_service.get("hub_api_root")
155 hub_url = self.settings_service.get("hub_url")
156
157 return hub_api_root or f"{hub_url}/meltano/api/v1"
158
159 @property
160 def hub_url_auth(self):
161 """Return the `hub_url_auth` setting.
162
163 Returns:
164 The `hub_url_auth` setting.
165 """
166 return self.settings_service.get("hub_url_auth")
167
168 def plugin_type_endpoint(self, plugin_type: PluginType) -> str:
169 """Return the list endpoint for the given plugin type.
170
171 Args:
172 plugin_type: The plugin type.
173
174 Returns:
175 The endpoint for the given plugin type.
176 """
177 return f"{self.hub_api_url}/plugins/{plugin_type.value}/index"
178
179 def plugin_endpoint(
180 self,
181 plugin_type: PluginType,
182 plugin_name: str,
183 variant_name: str | None = None,
184 ) -> str:
185 """Return the resource endpoint for the given plugin.
186
187 Args:
188 plugin_type: The plugin type.
189 plugin_name: The plugin name.
190 variant_name: The plugin variant name.
191
192 Returns:
193 The endpoint for the given plugin type.
194 """
195 url = f"{self.hub_api_url}/plugins/{plugin_type.value}/{plugin_name}"
196 if variant_name:
197 url = f"{url}--{variant_name}"
198
199 return url
200
201 def find_definition(
202 self,
203 plugin_type: PluginType,
204 plugin_name: str,
205 variant_name: str | None = None,
206 ) -> PluginDefinition:
207 """Find a locked plugin definition.
208
209 Args:
210 plugin_type: The plugin type.
211 plugin_name: The plugin name.
212 variant_name: The plugin variant name.
213
214 Returns:
215 The plugin definition.
216
217 Raises:
218 PluginNotFoundError: If the plugin definition could not be found.
219 HubPluginVariantNotFoundError: If the plugin variant could not be found.
220 HubConnectionError: If the Hub API could not be reached.
221 """
222 plugins = self.get_plugins_of_type(plugin_type)
223
224 try:
225 plugin = plugins[plugin_name]
226 except KeyError as plugins_key_err:
227 raise PluginNotFoundError(
228 PluginRef(plugin_type, plugin_name)
229 ) from plugins_key_err
230
231 if variant_name is None or variant_name in {
232 Variant.DEFAULT_NAME,
233 Variant.ORIGINAL_NAME,
234 }:
235 variant_name = plugin.default_variant
236
237 try:
238 url = plugin.variants[variant_name].ref
239 except KeyError as variant_key_err:
240 raise HubPluginVariantNotFoundError(
241 plugin_type, plugin, variant_name
242 ) from variant_key_err
243
244 response = self.session.get(url)
245
246 try:
247 response.raise_for_status()
248 except requests.HTTPError as http_err:
249 logger.error(
250 "Can not retrieve plugin",
251 status_code=http_err.response.status_code,
252 error=http_err,
253 )
254 raise HubConnectionError(str(http_err)) from http_err
255
256 return PluginDefinition(**response.json(), plugin_type=plugin_type)
257
258 def find_base_plugin(
259 self,
260 plugin_type: PluginType,
261 plugin_name: str,
262 variant: str | None = None,
263 ) -> BasePlugin:
264 """Get the base plugin for a project plugin.
265
266 Args:
267 plugin_type: The plugin type.
268 plugin_name: The plugin name.
269 variant: The plugin variant.
270
271 Returns:
272 The base plugin.
273 """
274 plugin = self.find_definition(
275 plugin_type,
276 plugin_name,
277 variant_name=variant,
278 )
279
280 return base_plugin_factory(plugin, plugin.variants[0])
281
282 def get_plugins_of_type( # noqa: WPS210
283 self,
284 plugin_type: PluginType,
285 ) -> dict[str, IndexedPlugin]:
286 """Get all plugins of a given type.
287
288 Args:
289 plugin_type: The plugin type.
290
291 Returns:
292 The plugin definitions.
293
294 Raises:
295 HubPluginTypeNotFoundError: If the plugin type is not supported.
296 HubConnectionError: If the Hub API could not be reached.
297 """
298 if not plugin_type.discoverable:
299 return {}
300
301 url = self.plugin_type_endpoint(plugin_type)
302 response = self.session.get(url)
303
304 try:
305 response.raise_for_status()
306 except requests.HTTPError as err:
307 logger.error(
308 "Can not retrieve plugin type",
309 status_code=err.response.status_code,
310 error=err,
311 )
312 if err.response.status_code < HTTPStatus.TOO_MANY_REQUESTS:
313 raise HubPluginTypeNotFoundError(plugin_type) from err
314 raise HubConnectionError(err.response.reason) from err
315
316 plugins: dict[str, dict[str, Any]] = response.json()
317 return {
318 name: IndexedPlugin(
319 name,
320 logo_url=plugin["logo_url"],
321 default_variant=plugin["default_variant"],
322 variants={
323 variant_name: VariantRef(variant_name, ref=variant["ref"])
324 for variant_name, variant in plugin["variants"].items()
325 },
326 )
327 for name, plugin in plugins.items()
328 }
329
[end of src/meltano/core/hub/client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/meltano/core/hub/client.py b/src/meltano/core/hub/client.py
--- a/src/meltano/core/hub/client.py
+++ b/src/meltano/core/hub/client.py
@@ -5,6 +5,7 @@
from http import HTTPStatus
from typing import Any
+import click
import requests
from requests.adapters import HTTPAdapter
from structlog.stdlib import get_logger
@@ -198,6 +199,41 @@
return url
+ def _build_request(self, method: str, url: str) -> requests.PreparedRequest:
+ """Build a request to the Hub API.
+
+ Args:
+ method: The HTTP method.
+ url: The URL to request.
+
+ Returns:
+ The prepared request.
+ """
+ request = requests.Request(method, url)
+ click_context = click.get_current_context(silent=True)
+
+ if click_context:
+ request.headers["X-Meltano-Command"] = click_context.command_path
+
+ return self.session.prepare_request(request)
+
+ def _get(self, url: str) -> requests.Response:
+ """Make a GET request to the Hub API.
+
+ Args:
+ url: The URL to request.
+
+ Returns:
+ The response.
+
+ Raises:
+ HubConnectionError: If the Hub API could not be reached.
+ """
+ try:
+ return self.session.send(self._build_request("GET", url))
+ except requests.exceptions.ConnectionError as connection_err:
+ raise HubConnectionError("Could not reach Meltano Hub.") from connection_err
+
def find_definition(
self,
plugin_type: PluginType,
@@ -241,7 +277,7 @@
plugin_type, plugin, variant_name
) from variant_key_err
- response = self.session.get(url)
+ response = self._get(url)
try:
response.raise_for_status()
@@ -299,7 +335,7 @@
return {}
url = self.plugin_type_endpoint(plugin_type)
- response = self.session.get(url)
+ response = self._get(url)
try:
response.raise_for_status()
| {"golden_diff": "diff --git a/src/meltano/core/hub/client.py b/src/meltano/core/hub/client.py\n--- a/src/meltano/core/hub/client.py\n+++ b/src/meltano/core/hub/client.py\n@@ -5,6 +5,7 @@\n from http import HTTPStatus\n from typing import Any\n \n+import click\n import requests\n from requests.adapters import HTTPAdapter\n from structlog.stdlib import get_logger\n@@ -198,6 +199,41 @@\n \n return url\n \n+ def _build_request(self, method: str, url: str) -> requests.PreparedRequest:\n+ \"\"\"Build a request to the Hub API.\n+\n+ Args:\n+ method: The HTTP method.\n+ url: The URL to request.\n+\n+ Returns:\n+ The prepared request.\n+ \"\"\"\n+ request = requests.Request(method, url)\n+ click_context = click.get_current_context(silent=True)\n+\n+ if click_context:\n+ request.headers[\"X-Meltano-Command\"] = click_context.command_path\n+\n+ return self.session.prepare_request(request)\n+\n+ def _get(self, url: str) -> requests.Response:\n+ \"\"\"Make a GET request to the Hub API.\n+\n+ Args:\n+ url: The URL to request.\n+\n+ Returns:\n+ The response.\n+\n+ Raises:\n+ HubConnectionError: If the Hub API could not be reached.\n+ \"\"\"\n+ try:\n+ return self.session.send(self._build_request(\"GET\", url))\n+ except requests.exceptions.ConnectionError as connection_err:\n+ raise HubConnectionError(\"Could not reach Meltano Hub.\") from connection_err\n+\n def find_definition(\n self,\n plugin_type: PluginType,\n@@ -241,7 +277,7 @@\n plugin_type, plugin, variant_name\n ) from variant_key_err\n \n- response = self.session.get(url)\n+ response = self._get(url)\n \n try:\n response.raise_for_status()\n@@ -299,7 +335,7 @@\n return {}\n \n url = self.plugin_type_endpoint(plugin_type)\n- response = self.session.get(url)\n+ response = self._get(url)\n \n try:\n response.raise_for_status()\n", "issue": "feature: Add Meltano command information to Hub API request\n### Feature scope\r\n\r\nCLI (options, error messages, logging, etc.)\r\n\r\n### Description\r\n\r\nWhen Meltano sends requests to the Hub API, currently sets the `X-Project-ID` and `User-Agent` headers: https://github.com/meltano/meltano/blob/193a3ae4c7adfa29e102a65d31b7e865d9eea349/src/meltano/core/hub/client.py#L113-L125\r\n\r\nHowever, it'd be good to include the CLI command where the request is coming from.\n", "before_files": [{"content": "\"\"\"Meltano Hub Client.\"\"\"\n\nfrom __future__ import annotations\n\nfrom http import HTTPStatus\nfrom typing import Any\n\nimport requests\nfrom requests.adapters import HTTPAdapter\nfrom structlog.stdlib import get_logger\nfrom urllib3 import Retry\n\nimport meltano\nfrom meltano.core.plugin import (\n BasePlugin,\n PluginDefinition,\n PluginRef,\n PluginType,\n Variant,\n)\nfrom meltano.core.plugin.error import PluginNotFoundError\nfrom meltano.core.plugin.factory import base_plugin_factory\nfrom meltano.core.plugin_discovery_service import PluginRepository\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import ProjectSettingsService\n\nfrom .schema import IndexedPlugin, VariantRef\n\nlogger = get_logger(__name__)\n\n\nclass HubPluginTypeNotFoundError(Exception):\n \"\"\"Raised when a Hub plugin type is not found.\"\"\"\n\n def __init__(self, plugin_type: PluginType):\n \"\"\"Create a new HubPluginVariantNotFound.\n\n Args:\n plugin_type: The type of the plugin.\n \"\"\"\n self.plugin_type = plugin_type\n\n def __str__(self) -> str:\n \"\"\"Return a string representation of the error.\n\n Returns:\n The string representation of the error.\n \"\"\"\n return 
\"{type} is not supported in Meltano Hub. Available plugin types: {types}\".format(\n type=self.plugin_type.descriptor.capitalize(),\n types=PluginType.plurals(),\n )\n\n\nclass HubConnectionError(Exception):\n \"\"\"Raised when a Hub connection error occurs.\"\"\"\n\n def __init__(self, reason: str):\n \"\"\"Create a new HubConnectionError.\n\n Args:\n reason: The reason for the error.\n \"\"\"\n message = f\"Could not connect to Meltano Hub. {reason}\"\n super().__init__(message)\n\n\nclass HubPluginVariantNotFoundError(Exception):\n \"\"\"Raised when a Hub plugin variant is not found.\"\"\"\n\n def __init__(\n self,\n plugin_type: PluginType,\n plugin: IndexedPlugin,\n variant_name: str,\n ):\n \"\"\"Create a new HubPluginVariantNotFound.\n\n Args:\n plugin_type: The type of the plugin.\n plugin: The indexed plugin.\n variant_name: The name of the variant that was not found.\n \"\"\"\n self.plugin_type = plugin_type\n self.plugin = plugin\n self.variant_name = variant_name\n\n def __str__(self) -> str:\n \"\"\"Return a string representation of the error.\n\n Returns:\n The string representation of the error.\n \"\"\"\n return \"{type} '{name}' variant '{variant}' is not known to Meltano. Variants: {variant_labels}\".format(\n type=self.plugin_type.descriptor.capitalize(),\n name=self.plugin.name,\n variant=self.variant_name,\n variant_labels=self.plugin.variant_labels,\n )\n\n\nclass MeltanoHubService(PluginRepository): # noqa: WPS214\n \"\"\"PluginRepository implementation for the Meltano Hub.\"\"\"\n\n def __init__(self, project: Project) -> None:\n \"\"\"Initialize the service.\n\n Args:\n project: The Meltano project.\n \"\"\"\n self.project = project\n self.session = requests.Session()\n self.session.headers.update(\n {\n \"Accept\": \"application/json\",\n \"User-Agent\": f\"Meltano/{meltano.__version__}\",\n }\n )\n\n self.settings_service = ProjectSettingsService(self.project)\n\n if self.settings_service.get(\"send_anonymous_usage_stats\"):\n project_id = self.settings_service.get(\"project_id\")\n\n self.session.headers[\"X-Project-ID\"] = project_id\n\n if self.hub_url_auth:\n self.session.headers.update({\"Authorization\": self.hub_url_auth})\n\n adapter = HTTPAdapter(\n max_retries=Retry(\n total=3,\n backoff_factor=0,\n status_forcelist=[\n HTTPStatus.TOO_MANY_REQUESTS,\n HTTPStatus.INTERNAL_SERVER_ERROR,\n HTTPStatus.BAD_GATEWAY,\n HTTPStatus.SERVICE_UNAVAILABLE,\n HTTPStatus.GATEWAY_TIMEOUT,\n ],\n raise_on_status=False,\n ),\n )\n self.session.mount(\"http://\", adapter)\n self.session.mount(\"https://\", adapter)\n\n @property\n def hub_api_url(self) -> str:\n \"\"\"Return the URL of the Hub API.\n\n Returns:\n The URL of the Hub API.\n \"\"\"\n hub_api_root = self.settings_service.get(\"hub_api_root\")\n hub_url = self.settings_service.get(\"hub_url\")\n\n return hub_api_root or f\"{hub_url}/meltano/api/v1\"\n\n @property\n def hub_url_auth(self):\n \"\"\"Return the `hub_url_auth` setting.\n\n Returns:\n The `hub_url_auth` setting.\n \"\"\"\n return self.settings_service.get(\"hub_url_auth\")\n\n def plugin_type_endpoint(self, plugin_type: PluginType) -> str:\n \"\"\"Return the list endpoint for the given plugin type.\n\n Args:\n plugin_type: The plugin type.\n\n Returns:\n The endpoint for the given plugin type.\n \"\"\"\n return f\"{self.hub_api_url}/plugins/{plugin_type.value}/index\"\n\n def plugin_endpoint(\n self,\n plugin_type: PluginType,\n plugin_name: str,\n variant_name: str | None = None,\n ) -> str:\n \"\"\"Return the resource endpoint for the given 
plugin.\n\n Args:\n plugin_type: The plugin type.\n plugin_name: The plugin name.\n variant_name: The plugin variant name.\n\n Returns:\n The endpoint for the given plugin type.\n \"\"\"\n url = f\"{self.hub_api_url}/plugins/{plugin_type.value}/{plugin_name}\"\n if variant_name:\n url = f\"{url}--{variant_name}\"\n\n return url\n\n def find_definition(\n self,\n plugin_type: PluginType,\n plugin_name: str,\n variant_name: str | None = None,\n ) -> PluginDefinition:\n \"\"\"Find a locked plugin definition.\n\n Args:\n plugin_type: The plugin type.\n plugin_name: The plugin name.\n variant_name: The plugin variant name.\n\n Returns:\n The plugin definition.\n\n Raises:\n PluginNotFoundError: If the plugin definition could not be found.\n HubPluginVariantNotFoundError: If the plugin variant could not be found.\n HubConnectionError: If the Hub API could not be reached.\n \"\"\"\n plugins = self.get_plugins_of_type(plugin_type)\n\n try:\n plugin = plugins[plugin_name]\n except KeyError as plugins_key_err:\n raise PluginNotFoundError(\n PluginRef(plugin_type, plugin_name)\n ) from plugins_key_err\n\n if variant_name is None or variant_name in {\n Variant.DEFAULT_NAME,\n Variant.ORIGINAL_NAME,\n }:\n variant_name = plugin.default_variant\n\n try:\n url = plugin.variants[variant_name].ref\n except KeyError as variant_key_err:\n raise HubPluginVariantNotFoundError(\n plugin_type, plugin, variant_name\n ) from variant_key_err\n\n response = self.session.get(url)\n\n try:\n response.raise_for_status()\n except requests.HTTPError as http_err:\n logger.error(\n \"Can not retrieve plugin\",\n status_code=http_err.response.status_code,\n error=http_err,\n )\n raise HubConnectionError(str(http_err)) from http_err\n\n return PluginDefinition(**response.json(), plugin_type=plugin_type)\n\n def find_base_plugin(\n self,\n plugin_type: PluginType,\n plugin_name: str,\n variant: str | None = None,\n ) -> BasePlugin:\n \"\"\"Get the base plugin for a project plugin.\n\n Args:\n plugin_type: The plugin type.\n plugin_name: The plugin name.\n variant: The plugin variant.\n\n Returns:\n The base plugin.\n \"\"\"\n plugin = self.find_definition(\n plugin_type,\n plugin_name,\n variant_name=variant,\n )\n\n return base_plugin_factory(plugin, plugin.variants[0])\n\n def get_plugins_of_type( # noqa: WPS210\n self,\n plugin_type: PluginType,\n ) -> dict[str, IndexedPlugin]:\n \"\"\"Get all plugins of a given type.\n\n Args:\n plugin_type: The plugin type.\n\n Returns:\n The plugin definitions.\n\n Raises:\n HubPluginTypeNotFoundError: If the plugin type is not supported.\n HubConnectionError: If the Hub API could not be reached.\n \"\"\"\n if not plugin_type.discoverable:\n return {}\n\n url = self.plugin_type_endpoint(plugin_type)\n response = self.session.get(url)\n\n try:\n response.raise_for_status()\n except requests.HTTPError as err:\n logger.error(\n \"Can not retrieve plugin type\",\n status_code=err.response.status_code,\n error=err,\n )\n if err.response.status_code < HTTPStatus.TOO_MANY_REQUESTS:\n raise HubPluginTypeNotFoundError(plugin_type) from err\n raise HubConnectionError(err.response.reason) from err\n\n plugins: dict[str, dict[str, Any]] = response.json()\n return {\n name: IndexedPlugin(\n name,\n logo_url=plugin[\"logo_url\"],\n default_variant=plugin[\"default_variant\"],\n variants={\n variant_name: VariantRef(variant_name, ref=variant[\"ref\"])\n for variant_name, variant in plugin[\"variants\"].items()\n },\n )\n for name, plugin in plugins.items()\n }\n", "path": 
"src/meltano/core/hub/client.py"}]} | 3,583 | 493 |
gh_patches_debug_47463 | rasdani/github-patches | git_diff | bokeh__bokeh-5968 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Transform docstring ends abruptly
```
Bases: bokeh.model.Model
Base class for Transform models that represent a computation to be carried out on the client-side.
JavaScript implementations should implement the following methods:
```
<img width="879" alt="screen shot 2017-02-17 at 2 43 31 am" src="https://cloud.githubusercontent.com/assets/1796208/23058499/e52042e8-f4ba-11e6-8f8a-596498e00084.png">
The docstring should list the methods that JavaScript implementations need to provide, so the rendered documentation does not end abruptly after "implement the following methods:".
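For orientation, a hedged sketch of a docstring that lists the two client-side methods named in the truncated text. The indentation and double-colon directive follow standard reST conventions; the exact wording of the eventual upstream fix may differ:

```python
class Transform:
    """Base class for ``Transform`` models that represent a computation
    to be carried out on the client-side.

    JavaScript implementations should implement the following methods:

    .. code-block:: coffeescript

        compute: (x) ->
            # compute the transform of a single value

        v_compute: (xs) ->
            # compute the transform of an array of values
    """
```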
</issue>
<code>
[start of bokeh/models/transforms.py]
1 '''
2
3 '''
4 from __future__ import absolute_import
5
6 from ..core.enums import StepMode, JitterRandomDistribution
7 from ..core.has_props import abstract
8 from ..core.properties import Bool, Either, Enum, Float, Instance, Seq, String
9 from ..model import Model
10
11 from .sources import ColumnarDataSource
12
13 @abstract
14 class Transform(Model):
15 ''' Base class for ``Transform`` models that represent a computation
16 to be carried out on the client-side.
17
18 JavaScript implementations should implement the following methods:
19
20 .. code-block: coffeescript
21
22 compute: (x) ->
23 # compute the transform of a single value
24
25 v_compute: (xs) ->
26 # compute the transform of an array of values
27
28 '''
29 pass
30
31
32 class Jitter(Transform):
33 ''' Apply either a uniform or normally sampled random jitter to data.
34
35 '''
36
37
38 mean = Float(default=0, help="""
39 The central value for the random sample
40 """)
41
42 width = Float(default=1, help="""
43 The width (absolute for uniform distribution and sigma for the normal distribution) of the random sample.
44 """)
45
46 distribution = Enum(JitterRandomDistribution, default='uniform', help="""
47 The random distribution upon which to pull the random scatter
48 """)
49
50 @abstract
51 class Interpolator(Transform):
52 ''' Base class for interpolator transforms.
53
54 Interpolators return the value of a function which has been evaluated
55 between specified (x, y) pairs of data. As an example, if two control
56 point pairs were provided to the interpolator, a linear interpolaction
57 at a specific value of 'x' would result in the value of 'y' which existed
58 on the line conneting the two control points.
59
60 The control point pairs for the interpolators can be specified through either
61
62 * A literal sequence of values:
63
64 .. code-block: python
65
66 interp = Interpolator(x=[1, 2, 3, 4, 5], y=[2, 5, 10, 12, 16])
67
68 * or a pair of columns defined in a `ColumnDataSource` object:
69
70 .. code-block: python
71
72 interp = Interpolator(x="year", y="earnings", data=jewlery_prices))
73
74
75 This is the base class and is not intended to end use. Please see the
76 documentation for the final derived classes (Jitter, LineraInterpolator,
77 StepInterpolator) for mor information on their specific methods of
78 interpolation.
79
80 '''
81 x = Either(String, Seq(Float), help="""
82 Independant coordiante denoting the location of a point.
83 """)
84
85 y = Either(String, Seq(Float), help="""
86 Dependant coordinate denoting the value of a point at a location.
87 """)
88
89 data = Instance(ColumnarDataSource, help="""
90 Data which defines the source for the named columns if a string is passed to either the ``x`` or ``y`` parameters.
91 """)
92
93 clip = Bool(True, help="""
94 Determine if the interpolation should clip the result to include only values inside its predefined range.
95 If this is set to False, it will return the most value of the closest point.
96 """)
97
98 # Define an initialization routine to do some cross checking of input values
99 def __init__(self, **kwargs):
100 super(Interpolator, self).__init__(**kwargs)
101
102
103 class LinearInterpolator(Interpolator):
104 ''' Compute a linear interpolation between the control points provided through
105 the ``x``, ``y``, and ``data`` parameters.
106
107 '''
108 pass
109
110
111 class StepInterpolator(Interpolator):
112 ''' Compute a step-wise interpolation between the points provided through
113 the ``x``, ``y``, and ``data`` parameters.
114
115 '''
116
117 mode = Enum(StepMode, default="after", help="""
118 Adjust the behavior of the returned value in relation to the control points. The parameter can assume one of three values:
119
120 * ``after`` (default): Assume the y-value associated with the nearest x-value which is less than or equal to the point to transform.
121 * ``before``: Assume the y-value associated with the nearest x-value which is greater than the point to transform.
122 * ``center``: Assume the y-value associated with the nearest x-value to the point to transform.
123 """)
124
[end of bokeh/models/transforms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bokeh/models/transforms.py b/bokeh/models/transforms.py
--- a/bokeh/models/transforms.py
+++ b/bokeh/models/transforms.py
@@ -19,11 +19,11 @@
.. code-block: coffeescript
- compute: (x) ->
- # compute the transform of a single value
+ compute: (x) ->
+ # compute the transform of a single value
- v_compute: (xs) ->
- # compute the transform of an array of values
+ v_compute: (xs) ->
+ # compute the transform of an array of values
'''
pass
| {"golden_diff": "diff --git a/bokeh/models/transforms.py b/bokeh/models/transforms.py\n--- a/bokeh/models/transforms.py\n+++ b/bokeh/models/transforms.py\n@@ -19,11 +19,11 @@\n \n .. code-block: coffeescript\n \n- compute: (x) ->\n- # compute the transform of a single value\n+ compute: (x) ->\n+ # compute the transform of a single value\n \n- v_compute: (xs) ->\n- # compute the transform of an array of values\n+ v_compute: (xs) ->\n+ # compute the transform of an array of values\n \n '''\n pass\n", "issue": "Transform docstring ends abruptly\n```\r\n Bases: bokeh.model.Model\r\n Base class for Transform models that represent a computation to be carried out on the client-side.\r\n JavaScript implementations should implement the following methods:\r\n```\r\n<img width=\"879\" alt=\"screen shot 2017-02-17 at 2 43 31 am\" src=\"https://cloud.githubusercontent.com/assets/1796208/23058499/e52042e8-f4ba-11e6-8f8a-596498e00084.png\">\r\n\r\nShould add the methods that need to be implemented.\r\n\n", "before_files": [{"content": "'''\n\n'''\nfrom __future__ import absolute_import\n\nfrom ..core.enums import StepMode, JitterRandomDistribution\nfrom ..core.has_props import abstract\nfrom ..core.properties import Bool, Either, Enum, Float, Instance, Seq, String\nfrom ..model import Model\n\nfrom .sources import ColumnarDataSource\n\n@abstract\nclass Transform(Model):\n ''' Base class for ``Transform`` models that represent a computation\n to be carried out on the client-side.\n\n JavaScript implementations should implement the following methods:\n\n .. code-block: coffeescript\n\n compute: (x) ->\n # compute the transform of a single value\n\n v_compute: (xs) ->\n # compute the transform of an array of values\n\n '''\n pass\n\n\nclass Jitter(Transform):\n ''' Apply either a uniform or normally sampled random jitter to data.\n\n '''\n\n\n mean = Float(default=0, help=\"\"\"\n The central value for the random sample\n \"\"\")\n\n width = Float(default=1, help=\"\"\"\n The width (absolute for uniform distribution and sigma for the normal distribution) of the random sample.\n \"\"\")\n\n distribution = Enum(JitterRandomDistribution, default='uniform', help=\"\"\"\n The random distribution upon which to pull the random scatter\n \"\"\")\n\n@abstract\nclass Interpolator(Transform):\n ''' Base class for interpolator transforms.\n\n Interpolators return the value of a function which has been evaluated\n between specified (x, y) pairs of data. As an example, if two control\n point pairs were provided to the interpolator, a linear interpolaction\n at a specific value of 'x' would result in the value of 'y' which existed\n on the line conneting the two control points.\n\n The control point pairs for the interpolators can be specified through either\n\n * A literal sequence of values:\n\n .. code-block: python\n\n interp = Interpolator(x=[1, 2, 3, 4, 5], y=[2, 5, 10, 12, 16])\n\n * or a pair of columns defined in a `ColumnDataSource` object:\n\n .. code-block: python\n\n interp = Interpolator(x=\"year\", y=\"earnings\", data=jewlery_prices))\n\n\n This is the base class and is not intended to end use. 
Please see the\n documentation for the final derived classes (Jitter, LineraInterpolator,\n StepInterpolator) for mor information on their specific methods of\n interpolation.\n\n '''\n x = Either(String, Seq(Float), help=\"\"\"\n Independant coordiante denoting the location of a point.\n \"\"\")\n\n y = Either(String, Seq(Float), help=\"\"\"\n Dependant coordinate denoting the value of a point at a location.\n \"\"\")\n\n data = Instance(ColumnarDataSource, help=\"\"\"\n Data which defines the source for the named columns if a string is passed to either the ``x`` or ``y`` parameters.\n \"\"\")\n\n clip = Bool(True, help=\"\"\"\n Determine if the interpolation should clip the result to include only values inside its predefined range.\n If this is set to False, it will return the most value of the closest point.\n \"\"\")\n\n # Define an initialization routine to do some cross checking of input values\n def __init__(self, **kwargs):\n super(Interpolator, self).__init__(**kwargs)\n\n\nclass LinearInterpolator(Interpolator):\n ''' Compute a linear interpolation between the control points provided through\n the ``x``, ``y``, and ``data`` parameters.\n\n '''\n pass\n\n\nclass StepInterpolator(Interpolator):\n ''' Compute a step-wise interpolation between the points provided through\n the ``x``, ``y``, and ``data`` parameters.\n\n '''\n\n mode = Enum(StepMode, default=\"after\", help=\"\"\"\n Adjust the behavior of the returned value in relation to the control points. The parameter can assume one of three values:\n\n * ``after`` (default): Assume the y-value associated with the nearest x-value which is less than or equal to the point to transform.\n * ``before``: Assume the y-value associated with the nearest x-value which is greater than the point to transform.\n * ``center``: Assume the y-value associated with the nearest x-value to the point to transform.\n \"\"\")\n", "path": "bokeh/models/transforms.py"}]} | 1,902 | 149 |
gh_patches_debug_13774 | rasdani/github-patches | git_diff | xonsh__xonsh-3428 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[color] in .gitconfig should be stripped in the prompt branch functions
In a clean install with no `.xonshrc`, when I change into a git repo, my prompt looks like:

Is there some setting I'm missing somewhere?
## xonfig
<details>
```
+------------------+----------------------+
| xonsh | 0.9.13.dev1 |
| Git SHA | 9f7ccc65 |
| Commit Date | Oct 15 17:14:50 2019 |
| Python | 3.7.6 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.0 |
| shell type | prompt_toolkit2 |
| pygments | 2.5.2 |
| on posix | True |
| on linux | True |
| distro | ubuntu |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
+------------------+----------------------+
```
</details>
## Expected Behavior
I think something like this (without the ANSI escape codes):
```
mcrowe@mike-XPS-15-9560 ~/.dotfiles/bash_it [master] $
```
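One way to get that result is to strip ANSI escape sequences from the value returned by `git branch` before it reaches the prompt formatter. A minimal sketch follows; the regex is deliberately simple and only illustrative, and a real fix may need to cover more escape forms:

```python
import re

# Matches simple CSI color sequences such as "\x1b[32m" or "\x1b[m", which git
# emits when color output is forced via [color] settings in .gitconfig.
ANSI_CSI = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")


def strip_ansi(text: str) -> str:
    """Return *text* with ANSI color/formatting sequences removed."""
    return ANSI_CSI.sub("", text)


# strip_ansi("\x1b[32mmaster\x1b[m") -> "master"
```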
</issue>
<code>
[start of xonsh/prompt/vc.py]
1 # -*- coding: utf-8 -*-
2 """Prompt formatter for simple version control branches"""
3 # pylint:disable=no-member, invalid-name
4
5 import os
6 import sys
7 import queue
8 import builtins
9 import threading
10 import subprocess
11
12 import xonsh.tools as xt
13
14
15 def _get_git_branch(q):
16 denv = builtins.__xonsh__.env.detype()
17 try:
18 branches = xt.decode_bytes(
19 subprocess.check_output(
20 ["git", "branch"], env=denv, stderr=subprocess.DEVNULL
21 )
22 ).splitlines()
23 except (subprocess.CalledProcessError, OSError, FileNotFoundError):
24 q.put(None)
25 else:
26 for branch in branches:
27 if not branch.startswith("* "):
28 continue
29 elif branch.endswith(")"):
30 branch = branch.split()[-1][:-1]
31 else:
32 branch = branch.split()[-1]
33
34 q.put(branch)
35 break
36 else:
37 q.put(None)
38
39
40 def get_git_branch():
41 """Attempts to find the current git branch. If this could not
42 be determined (timeout, not in a git repo, etc.) then this returns None.
43 """
44 branch = None
45 timeout = builtins.__xonsh__.env.get("VC_BRANCH_TIMEOUT")
46 q = queue.Queue()
47
48 t = threading.Thread(target=_get_git_branch, args=(q,))
49 t.start()
50 t.join(timeout=timeout)
51 try:
52 branch = q.get_nowait()
53 except queue.Empty:
54 branch = None
55 return branch
56
57
58 def _get_hg_root(q):
59 _curpwd = builtins.__xonsh__.env["PWD"]
60 while True:
61 if not os.path.isdir(_curpwd):
62 return False
63 try:
64 dot_hg_is_in_curwd = any([b.name == ".hg" for b in xt.scandir(_curpwd)])
65 except OSError:
66 return False
67 if dot_hg_is_in_curwd:
68 q.put(_curpwd)
69 break
70 else:
71 _oldpwd = _curpwd
72 _curpwd = os.path.split(_curpwd)[0]
73 if _oldpwd == _curpwd:
74 return False
75
76
77 def get_hg_branch(root=None):
78 """Try to get the mercurial branch of the current directory,
79 return None if not in a repo or subprocess.TimeoutExpired if timed out.
80 """
81 env = builtins.__xonsh__.env
82 timeout = env["VC_BRANCH_TIMEOUT"]
83 q = queue.Queue()
84 t = threading.Thread(target=_get_hg_root, args=(q,))
85 t.start()
86 t.join(timeout=timeout)
87 try:
88 root = q.get_nowait()
89 except queue.Empty:
90 return None
91 if env.get("VC_HG_SHOW_BRANCH"):
92 # get branch name
93 branch_path = os.path.sep.join([root, ".hg", "branch"])
94 if os.path.exists(branch_path):
95 with open(branch_path, "r") as branch_file:
96 branch = branch_file.read()
97 else:
98 branch = "default"
99 else:
100 branch = ""
101 # add bookmark, if we can
102 bookmark_path = os.path.sep.join([root, ".hg", "bookmarks.current"])
103 if os.path.exists(bookmark_path):
104 with open(bookmark_path, "r") as bookmark_file:
105 active_bookmark = bookmark_file.read()
106 if env.get("VC_HG_SHOW_BRANCH") is True:
107 branch = "{0}, {1}".format(
108 *(b.strip(os.linesep) for b in (branch, active_bookmark))
109 )
110 else:
111 branch = active_bookmark.strip(os.linesep)
112 else:
113 branch = branch.strip(os.linesep)
114 return branch
115
116
117 _FIRST_BRANCH_TIMEOUT = True
118
119
120 def _first_branch_timeout_message():
121 global _FIRST_BRANCH_TIMEOUT
122 sbtm = builtins.__xonsh__.env["SUPPRESS_BRANCH_TIMEOUT_MESSAGE"]
123 if not _FIRST_BRANCH_TIMEOUT or sbtm:
124 return
125 _FIRST_BRANCH_TIMEOUT = False
126 print(
127 "xonsh: branch timeout: computing the branch name, color, or both "
128 "timed out while formatting the prompt. You may avoid this by "
129 "increasing the value of $VC_BRANCH_TIMEOUT or by removing branch "
130 "fields, like {curr_branch}, from your $PROMPT. See the FAQ "
131 "for more details. This message will be suppressed for the remainder "
132 "of this session. To suppress this message permanently, set "
133 "$SUPPRESS_BRANCH_TIMEOUT_MESSAGE = True in your xonshrc file.",
134 file=sys.stderr,
135 )
136
137
138 def current_branch():
139 """Gets the branch for a current working directory. Returns an empty string
140 if the cwd is not a repository. This currently only works for git and hg
141 and should be extended in the future. If a timeout occurred, the string
142 '<branch-timeout>' is returned.
143 """
144 branch = None
145 cmds = builtins.__xonsh__.commands_cache
146 # check for binary only once
147 if cmds.is_empty():
148 has_git = bool(cmds.locate_binary("git", ignore_alias=True))
149 has_hg = bool(cmds.locate_binary("hg", ignore_alias=True))
150 else:
151 has_git = bool(cmds.lazy_locate_binary("git", ignore_alias=True))
152 has_hg = bool(cmds.lazy_locate_binary("hg", ignore_alias=True))
153 if has_git:
154 branch = get_git_branch()
155 if not branch and has_hg:
156 branch = get_hg_branch()
157 if isinstance(branch, subprocess.TimeoutExpired):
158 branch = "<branch-timeout>"
159 _first_branch_timeout_message()
160 return branch or None
161
162
163 def _git_dirty_working_directory(q, include_untracked):
164 status = None
165 denv = builtins.__xonsh__.env.detype()
166 try:
167 cmd = ["git", "status", "--porcelain"]
168 if include_untracked:
169 cmd.append("--untracked-files=normal")
170 else:
171 cmd.append("--untracked-files=no")
172 status = subprocess.check_output(cmd, stderr=subprocess.DEVNULL, env=denv)
173 except (subprocess.CalledProcessError, OSError, FileNotFoundError):
174 q.put(None)
175 if status is not None:
176 return q.put(bool(status))
177
178
179 def git_dirty_working_directory(include_untracked=False):
180 """Returns whether or not the git directory is dirty. If this could not
181 be determined (timeout, file not found, etc.) then this returns None.
182 """
183 timeout = builtins.__xonsh__.env.get("VC_BRANCH_TIMEOUT")
184 q = queue.Queue()
185 t = threading.Thread(
186 target=_git_dirty_working_directory, args=(q, include_untracked)
187 )
188 t.start()
189 t.join(timeout=timeout)
190 try:
191 return q.get_nowait()
192 except queue.Empty:
193 return None
194
195
196 def hg_dirty_working_directory():
197 """Computes whether or not the mercurial working directory is dirty or not.
198 If this cannot be determined, None is returned.
199 """
200 env = builtins.__xonsh__.env
201 cwd = env["PWD"]
202 denv = env.detype()
203 vcbt = env["VC_BRANCH_TIMEOUT"]
204 # Override user configurations settings and aliases
205 denv["HGRCPATH"] = ""
206 try:
207 s = subprocess.check_output(
208 ["hg", "identify", "--id"],
209 stderr=subprocess.PIPE,
210 cwd=cwd,
211 timeout=vcbt,
212 universal_newlines=True,
213 env=denv,
214 )
215 return s.strip(os.linesep).endswith("+")
216 except (
217 subprocess.CalledProcessError,
218 subprocess.TimeoutExpired,
219 FileNotFoundError,
220 ):
221 return None
222
223
224 def dirty_working_directory():
225 """Returns a boolean as to whether there are uncommitted files in version
226 control repository we are inside. If this cannot be determined, returns
227 None. Currently supports git and hg.
228 """
229 dwd = None
230 cmds = builtins.__xonsh__.commands_cache
231 if cmds.lazy_locate_binary("git", ignore_alias=True):
232 dwd = git_dirty_working_directory()
233 if cmds.lazy_locate_binary("hg", ignore_alias=True) and dwd is None:
234 dwd = hg_dirty_working_directory()
235 return dwd
236
237
238 def branch_color():
239 """Return red if the current branch is dirty, yellow if the dirtiness can
240 not be determined, and green if it clean. These are bold, intense colors
241 for the foreground.
242 """
243 dwd = dirty_working_directory()
244 if dwd is None:
245 color = "{BOLD_INTENSE_YELLOW}"
246 elif dwd:
247 color = "{BOLD_INTENSE_RED}"
248 else:
249 color = "{BOLD_INTENSE_GREEN}"
250 return color
251
252
253 def branch_bg_color():
254 """Return red if the current branch is dirty, yellow if the dirtiness can
255 not be determined, and green if it clean. These are background colors.
256 """
257 dwd = dirty_working_directory()
258 if dwd is None:
259 color = "{BACKGROUND_YELLOW}"
260 elif dwd:
261 color = "{BACKGROUND_RED}"
262 else:
263 color = "{BACKGROUND_GREEN}"
264 return color
265
[end of xonsh/prompt/vc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xonsh/prompt/vc.py b/xonsh/prompt/vc.py
--- a/xonsh/prompt/vc.py
+++ b/xonsh/prompt/vc.py
@@ -8,8 +8,16 @@
import builtins
import threading
import subprocess
+import re
import xonsh.tools as xt
+from xonsh.lazyasd import LazyObject
+
+RE_REMOVE_ANSI = LazyObject(
+ lambda: re.compile(r"(?:\x1B[@-_]|[\x80-\x9F])[0-?]*[ -/]*[@-~]"),
+ globals(),
+ "RE_REMOVE_ANSI",
+)
def _get_git_branch(q):
@@ -50,6 +58,9 @@
t.join(timeout=timeout)
try:
branch = q.get_nowait()
+ # branch = RE_REMOVE_ANSI.sub("", branch or "")
+ if branch:
+ branch = RE_REMOVE_ANSI.sub("", branch)
except queue.Empty:
branch = None
return branch
| {"golden_diff": "diff --git a/xonsh/prompt/vc.py b/xonsh/prompt/vc.py\n--- a/xonsh/prompt/vc.py\n+++ b/xonsh/prompt/vc.py\n@@ -8,8 +8,16 @@\n import builtins\n import threading\n import subprocess\n+import re\n \n import xonsh.tools as xt\n+from xonsh.lazyasd import LazyObject\n+\n+RE_REMOVE_ANSI = LazyObject(\n+ lambda: re.compile(r\"(?:\\x1B[@-_]|[\\x80-\\x9F])[0-?]*[ -/]*[@-~]\"),\n+ globals(),\n+ \"RE_REMOVE_ANSI\",\n+)\n \n \n def _get_git_branch(q):\n@@ -50,6 +58,9 @@\n t.join(timeout=timeout)\n try:\n branch = q.get_nowait()\n+ # branch = RE_REMOVE_ANSI.sub(\"\", branch or \"\")\n+ if branch:\n+ branch = RE_REMOVE_ANSI.sub(\"\", branch)\n except queue.Empty:\n branch = None\n return branch\n", "issue": "[color] in .gitconfig should be stripped in the prompt branch functions\nIn clean install with no `.xonshrc`, when I change into a git repo, my prompt looks like:\r\n\r\n\r\n\r\nIs there some setting I'm missing somewhere?\r\n\r\n## xonfig\r\n\r\n<details>\r\n\r\n```\r\n+------------------+----------------------+\r\n| xonsh | 0.9.13.dev1 |\r\n| Git SHA | 9f7ccc65 |\r\n| Commit Date | Oct 15 17:14:50 2019 |\r\n| Python | 3.7.6 |\r\n| PLY | 3.11 |\r\n| have readline | True |\r\n| prompt toolkit | 3.0.0 |\r\n| shell type | prompt_toolkit2 |\r\n| pygments | 2.5.2 |\r\n| on posix | True |\r\n| on linux | True |\r\n| distro | ubuntu |\r\n| on darwin | False |\r\n| on windows | False |\r\n| on cygwin | False |\r\n| on msys2 | False |\r\n| is superuser | False |\r\n| default encoding | utf-8 |\r\n| xonsh encoding | utf-8 |\r\n| encoding errors | surrogateescape |\r\n+------------------+----------------------+\r\n```\r\n\r\n\r\n</details>\r\n\r\n## Expected Behavior\r\nI think something like this (without the ansi escape codes):\r\n```\r\nmcrowe@mike-XPS-15-9560 ~/.dotfiles/bash_it [master] $\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Prompt formatter for simple version control branches\"\"\"\n# pylint:disable=no-member, invalid-name\n\nimport os\nimport sys\nimport queue\nimport builtins\nimport threading\nimport subprocess\n\nimport xonsh.tools as xt\n\n\ndef _get_git_branch(q):\n denv = builtins.__xonsh__.env.detype()\n try:\n branches = xt.decode_bytes(\n subprocess.check_output(\n [\"git\", \"branch\"], env=denv, stderr=subprocess.DEVNULL\n )\n ).splitlines()\n except (subprocess.CalledProcessError, OSError, FileNotFoundError):\n q.put(None)\n else:\n for branch in branches:\n if not branch.startswith(\"* \"):\n continue\n elif branch.endswith(\")\"):\n branch = branch.split()[-1][:-1]\n else:\n branch = branch.split()[-1]\n\n q.put(branch)\n break\n else:\n q.put(None)\n\n\ndef get_git_branch():\n \"\"\"Attempts to find the current git branch. If this could not\n be determined (timeout, not in a git repo, etc.) 
then this returns None.\n \"\"\"\n branch = None\n timeout = builtins.__xonsh__.env.get(\"VC_BRANCH_TIMEOUT\")\n q = queue.Queue()\n\n t = threading.Thread(target=_get_git_branch, args=(q,))\n t.start()\n t.join(timeout=timeout)\n try:\n branch = q.get_nowait()\n except queue.Empty:\n branch = None\n return branch\n\n\ndef _get_hg_root(q):\n _curpwd = builtins.__xonsh__.env[\"PWD\"]\n while True:\n if not os.path.isdir(_curpwd):\n return False\n try:\n dot_hg_is_in_curwd = any([b.name == \".hg\" for b in xt.scandir(_curpwd)])\n except OSError:\n return False\n if dot_hg_is_in_curwd:\n q.put(_curpwd)\n break\n else:\n _oldpwd = _curpwd\n _curpwd = os.path.split(_curpwd)[0]\n if _oldpwd == _curpwd:\n return False\n\n\ndef get_hg_branch(root=None):\n \"\"\"Try to get the mercurial branch of the current directory,\n return None if not in a repo or subprocess.TimeoutExpired if timed out.\n \"\"\"\n env = builtins.__xonsh__.env\n timeout = env[\"VC_BRANCH_TIMEOUT\"]\n q = queue.Queue()\n t = threading.Thread(target=_get_hg_root, args=(q,))\n t.start()\n t.join(timeout=timeout)\n try:\n root = q.get_nowait()\n except queue.Empty:\n return None\n if env.get(\"VC_HG_SHOW_BRANCH\"):\n # get branch name\n branch_path = os.path.sep.join([root, \".hg\", \"branch\"])\n if os.path.exists(branch_path):\n with open(branch_path, \"r\") as branch_file:\n branch = branch_file.read()\n else:\n branch = \"default\"\n else:\n branch = \"\"\n # add bookmark, if we can\n bookmark_path = os.path.sep.join([root, \".hg\", \"bookmarks.current\"])\n if os.path.exists(bookmark_path):\n with open(bookmark_path, \"r\") as bookmark_file:\n active_bookmark = bookmark_file.read()\n if env.get(\"VC_HG_SHOW_BRANCH\") is True:\n branch = \"{0}, {1}\".format(\n *(b.strip(os.linesep) for b in (branch, active_bookmark))\n )\n else:\n branch = active_bookmark.strip(os.linesep)\n else:\n branch = branch.strip(os.linesep)\n return branch\n\n\n_FIRST_BRANCH_TIMEOUT = True\n\n\ndef _first_branch_timeout_message():\n global _FIRST_BRANCH_TIMEOUT\n sbtm = builtins.__xonsh__.env[\"SUPPRESS_BRANCH_TIMEOUT_MESSAGE\"]\n if not _FIRST_BRANCH_TIMEOUT or sbtm:\n return\n _FIRST_BRANCH_TIMEOUT = False\n print(\n \"xonsh: branch timeout: computing the branch name, color, or both \"\n \"timed out while formatting the prompt. You may avoid this by \"\n \"increasing the value of $VC_BRANCH_TIMEOUT or by removing branch \"\n \"fields, like {curr_branch}, from your $PROMPT. See the FAQ \"\n \"for more details. This message will be suppressed for the remainder \"\n \"of this session. To suppress this message permanently, set \"\n \"$SUPPRESS_BRANCH_TIMEOUT_MESSAGE = True in your xonshrc file.\",\n file=sys.stderr,\n )\n\n\ndef current_branch():\n \"\"\"Gets the branch for a current working directory. Returns an empty string\n if the cwd is not a repository. This currently only works for git and hg\n and should be extended in the future. 
If a timeout occurred, the string\n '<branch-timeout>' is returned.\n \"\"\"\n branch = None\n cmds = builtins.__xonsh__.commands_cache\n # check for binary only once\n if cmds.is_empty():\n has_git = bool(cmds.locate_binary(\"git\", ignore_alias=True))\n has_hg = bool(cmds.locate_binary(\"hg\", ignore_alias=True))\n else:\n has_git = bool(cmds.lazy_locate_binary(\"git\", ignore_alias=True))\n has_hg = bool(cmds.lazy_locate_binary(\"hg\", ignore_alias=True))\n if has_git:\n branch = get_git_branch()\n if not branch and has_hg:\n branch = get_hg_branch()\n if isinstance(branch, subprocess.TimeoutExpired):\n branch = \"<branch-timeout>\"\n _first_branch_timeout_message()\n return branch or None\n\n\ndef _git_dirty_working_directory(q, include_untracked):\n status = None\n denv = builtins.__xonsh__.env.detype()\n try:\n cmd = [\"git\", \"status\", \"--porcelain\"]\n if include_untracked:\n cmd.append(\"--untracked-files=normal\")\n else:\n cmd.append(\"--untracked-files=no\")\n status = subprocess.check_output(cmd, stderr=subprocess.DEVNULL, env=denv)\n except (subprocess.CalledProcessError, OSError, FileNotFoundError):\n q.put(None)\n if status is not None:\n return q.put(bool(status))\n\n\ndef git_dirty_working_directory(include_untracked=False):\n \"\"\"Returns whether or not the git directory is dirty. If this could not\n be determined (timeout, file not found, etc.) then this returns None.\n \"\"\"\n timeout = builtins.__xonsh__.env.get(\"VC_BRANCH_TIMEOUT\")\n q = queue.Queue()\n t = threading.Thread(\n target=_git_dirty_working_directory, args=(q, include_untracked)\n )\n t.start()\n t.join(timeout=timeout)\n try:\n return q.get_nowait()\n except queue.Empty:\n return None\n\n\ndef hg_dirty_working_directory():\n \"\"\"Computes whether or not the mercurial working directory is dirty or not.\n If this cannot be determined, None is returned.\n \"\"\"\n env = builtins.__xonsh__.env\n cwd = env[\"PWD\"]\n denv = env.detype()\n vcbt = env[\"VC_BRANCH_TIMEOUT\"]\n # Override user configurations settings and aliases\n denv[\"HGRCPATH\"] = \"\"\n try:\n s = subprocess.check_output(\n [\"hg\", \"identify\", \"--id\"],\n stderr=subprocess.PIPE,\n cwd=cwd,\n timeout=vcbt,\n universal_newlines=True,\n env=denv,\n )\n return s.strip(os.linesep).endswith(\"+\")\n except (\n subprocess.CalledProcessError,\n subprocess.TimeoutExpired,\n FileNotFoundError,\n ):\n return None\n\n\ndef dirty_working_directory():\n \"\"\"Returns a boolean as to whether there are uncommitted files in version\n control repository we are inside. If this cannot be determined, returns\n None. Currently supports git and hg.\n \"\"\"\n dwd = None\n cmds = builtins.__xonsh__.commands_cache\n if cmds.lazy_locate_binary(\"git\", ignore_alias=True):\n dwd = git_dirty_working_directory()\n if cmds.lazy_locate_binary(\"hg\", ignore_alias=True) and dwd is None:\n dwd = hg_dirty_working_directory()\n return dwd\n\n\ndef branch_color():\n \"\"\"Return red if the current branch is dirty, yellow if the dirtiness can\n not be determined, and green if it clean. These are bold, intense colors\n for the foreground.\n \"\"\"\n dwd = dirty_working_directory()\n if dwd is None:\n color = \"{BOLD_INTENSE_YELLOW}\"\n elif dwd:\n color = \"{BOLD_INTENSE_RED}\"\n else:\n color = \"{BOLD_INTENSE_GREEN}\"\n return color\n\n\ndef branch_bg_color():\n \"\"\"Return red if the current branch is dirty, yellow if the dirtiness can\n not be determined, and green if it clean. 
These are background colors.\n \"\"\"\n dwd = dirty_working_directory()\n if dwd is None:\n color = \"{BACKGROUND_YELLOW}\"\n elif dwd:\n color = \"{BACKGROUND_RED}\"\n else:\n color = \"{BACKGROUND_GREEN}\"\n return color\n", "path": "xonsh/prompt/vc.py"}]} | 3,565 | 233 |
gh_patches_debug_575 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1921 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unexpected behaviour with page.is_homepage
Starting a new site and rolling my own theme. Came across some slightly odd behaviour.
Mkdocs version 1.0.4
Python version 3.7.1
**Expected:**
`page.is_homepage` evaluates to True on the home (index.md) of the site, and False on all other pages.
**Actual:**
`page.is_homepage` evaluates to True on the home (index.md), and on any other index.md that is included in the nav object without nesting.
**Examples:**
The unexpected result:
```
nav:
- Home: index.md <--- page.is_homepage evaluates to True
- About: about.md <--- page.is_homepage evaluates to False
- Projects: projects/index.md <--- page.is_homepage evaluates to True
```
Changing the filename causes it to evaluate to false:
```
nav:
- Home: index.md <--- page.is_homepage evaluates to True
- About: about.md <--- page.is_homepage evaluates to False
- Projects: projects/test.md <--- page.is_homepage evaluates to False
```
If I tweak it a bit, so that the sections are nested, then it evaluates to false as I'd expect:
```
nav:
- About:
- About: about.md <--- page.is_homepage evaluates to False
- Projects:
- Project home: projects/index.md <--- page.is_homepage evaluates to False
```
This feels like a bug - especially as simply changing the markdown file name causes the behaviour to change.
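The behaviour follows from how the property is defined in `mkdocs/structure/pages.py` (see the listing below): `is_homepage` is simply `is_top_level and is_index`, so any un-nested nav entry whose source file is named `index.md` qualifies. The sketch below reproduces the effect with stand-in objects and shows one possible tightening that also checks the page URL; it is illustrative only and not necessarily the fix the maintainers chose:

```python
class FakeFile:
    """Stand-in for mkdocs' File with just the attributes the property needs."""
    def __init__(self, src_path: str, url: str):
        self.name = src_path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
        self.url = url


class FakePage:
    """Stand-in for mkdocs' Page, reproducing the is_homepage logic."""
    def __init__(self, file: FakeFile, parent=None):
        self.file = file
        self.parent = parent

    @property
    def is_index(self) -> bool:
        return self.file.name == "index"

    @property
    def is_top_level(self) -> bool:
        return self.parent is None

    @property
    def is_homepage(self) -> bool:
        # current behaviour: True for *any* un-nested index.md
        return self.is_top_level and self.is_index

    @property
    def is_homepage_strict(self) -> bool:
        # illustrative tightening: additionally require the page to sit at the site root
        return self.is_top_level and self.is_index and self.file.url in ("", ".", "./", "index.html")


home = FakePage(FakeFile("index.md", "."))
projects = FakePage(FakeFile("projects/index.md", "projects/"))
print(home.is_homepage, projects.is_homepage)                # True True   <- the reported bug
print(home.is_homepage_strict, projects.is_homepage_strict)  # True False  <- expected behaviour
```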
</issue>
<code>
[start of mkdocs/structure/pages.py]
1 # coding: utf-8
2
3 from __future__ import unicode_literals
4
5 import os
6 import io
7 import datetime
8 import logging
9
10 import markdown
11 from markdown.extensions import Extension
12 from markdown.treeprocessors import Treeprocessor
13 from markdown.util import AMP_SUBSTITUTE
14
15 from mkdocs.structure.toc import get_toc
16 from mkdocs.utils import meta, urlparse, urlunparse, urljoin, urlunquote, get_markdown_title, warning_filter
17
18 log = logging.getLogger(__name__)
19 log.addFilter(warning_filter)
20
21
22 class Page(object):
23 def __init__(self, title, file, config):
24 file.page = self
25 self.file = file
26 self.title = title
27
28 # Navigation attributes
29 self.parent = None
30 self.children = None
31 self.previous_page = None
32 self.next_page = None
33 self.active = False
34
35 self.is_section = False
36 self.is_page = True
37 self.is_link = False
38
39 # Support SOURCE_DATE_EPOCH environment variable for "reproducible" builds.
40 # See https://reproducible-builds.org/specs/source-date-epoch/
41 if 'SOURCE_DATE_EPOCH' in os.environ:
42 self.update_date = datetime.datetime.utcfromtimestamp(
43 int(os.environ['SOURCE_DATE_EPOCH'])
44 ).strftime("%Y-%m-%d")
45 else:
46 self.update_date = datetime.datetime.now().strftime("%Y-%m-%d")
47
48 self._set_canonical_url(config.get('site_url', None))
49 self._set_edit_url(config.get('repo_url', None), config.get('edit_uri', None))
50
51 # Placeholders to be filled in later in the build process.
52 self.markdown = None
53 self.content = None
54 self.toc = []
55 self.meta = {}
56
57 def __eq__(self, other):
58
59 def sub_dict(d):
60 return dict((key, value) for key, value in d.items() if key in ['title', 'file'])
61
62 return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))
63
64 def __ne__(self, other):
65 return not self.__eq__(other)
66
67 def __repr__(self):
68 title = "'{}'".format(self.title) if (self.title is not None) else '[blank]'
69 return "Page(title={}, url='{}')".format(title, self.abs_url or self.file.url)
70
71 def _indent_print(self, depth=0):
72 return '{}{}'.format(' ' * depth, repr(self))
73
74 def _get_active(self):
75 """ Return active status of page. """
76 return self.__active
77
78 def _set_active(self, value):
79 """ Set active status of page and ancestors. """
80 self.__active = bool(value)
81 if self.parent is not None:
82 self.parent.active = bool(value)
83
84 active = property(_get_active, _set_active)
85
86 @property
87 def is_index(self):
88 return self.file.name == 'index'
89
90 @property
91 def is_top_level(self):
92 return self.parent is None
93
94 @property
95 def is_homepage(self):
96 return self.is_top_level and self.is_index
97
98 @property
99 def url(self):
100 return '' if self.file.url == '.' else self.file.url
101
102 @property
103 def ancestors(self):
104 if self.parent is None:
105 return []
106 return [self.parent] + self.parent.ancestors
107
108 def _set_canonical_url(self, base):
109 if base:
110 if not base.endswith('/'):
111 base += '/'
112 self.canonical_url = urljoin(base, self.url)
113 self.abs_url = urlparse(self.canonical_url).path
114 else:
115 self.canonical_url = None
116 self.abs_url = None
117
118 def _set_edit_url(self, repo_url, edit_uri):
119 if repo_url and edit_uri:
120 src_path = self.file.src_path.replace('\\', '/')
121 self.edit_url = urljoin(repo_url, edit_uri + src_path)
122 else:
123 self.edit_url = None
124
125 def read_source(self, config):
126 source = config['plugins'].run_event(
127 'page_read_source', page=self, config=config
128 )
129 if source is None:
130 try:
131 with io.open(self.file.abs_src_path, 'r', encoding='utf-8-sig', errors='strict') as f:
132 source = f.read()
133 except IOError:
134 log.error('File not found: {}'.format(self.file.src_path))
135 raise
136 except ValueError:
137 log.error('Encoding error reading file: {}'.format(self.file.src_path))
138 raise
139
140 self.markdown, self.meta = meta.get_data(source)
141 self._set_title()
142
143 def _set_title(self):
144 """
145 Set the title for a Markdown document.
146
147 Check these in order and use the first that returns a valid title:
148 - value provided on init (passed in from config)
149 - value of metadata 'title'
150 - content of the first H1 in Markdown content
151 - convert filename to title
152 """
153 if self.title is not None:
154 return
155
156 if 'title' in self.meta:
157 self.title = self.meta['title']
158 return
159
160 title = get_markdown_title(self.markdown)
161
162 if title is None:
163 if self.is_homepage:
164 title = 'Home'
165 else:
166 title = self.file.name.replace('-', ' ').replace('_', ' ')
167 # Capitalize if the filename was all lowercase, otherwise leave it as-is.
168 if title.lower() == title:
169 title = title.capitalize()
170
171 self.title = title
172
173 def render(self, config, files):
174 """
175 Convert the Markdown source file to HTML as per the config.
176 """
177
178 extensions = [
179 _RelativePathExtension(self.file, files)
180 ] + config['markdown_extensions']
181
182 md = markdown.Markdown(
183 extensions=extensions,
184 extension_configs=config['mdx_configs'] or {}
185 )
186 self.content = md.convert(self.markdown)
187 self.toc = get_toc(getattr(md, 'toc', ''))
188
189
190 class _RelativePathTreeprocessor(Treeprocessor):
191 def __init__(self, file, files):
192 self.file = file
193 self.files = files
194
195 def run(self, root):
196 """
197 Update urls on anchors and images to make them relative
198
199 Iterates through the full document tree looking for specific
200 tags and then makes them relative based on the site navigation
201 """
202 for element in root.iter():
203 if element.tag == 'a':
204 key = 'href'
205 elif element.tag == 'img':
206 key = 'src'
207 else:
208 continue
209
210 url = element.get(key)
211 new_url = self.path_to_url(url)
212 element.set(key, new_url)
213
214 return root
215
216 def path_to_url(self, url):
217 scheme, netloc, path, params, query, fragment = urlparse(url)
218
219 if (scheme or netloc or not path or url.startswith('/')
220 or AMP_SUBSTITUTE in url or '.' not in os.path.split(path)[-1]):
221 # Ignore URLs unless they are a relative link to a source file.
222 # AMP_SUBSTITUTE is used internally by Markdown only for email.
223 # No '.' in the last part of a path indicates path does not point to a file.
224 return url
225
226 # Determine the filepath of the target.
227 target_path = os.path.join(os.path.dirname(self.file.src_path), urlunquote(path))
228 target_path = os.path.normpath(target_path).lstrip(os.sep)
229
230 # Validate that the target exists in files collection.
231 if target_path not in self.files:
232 log.warning(
233 "Documentation file '{}' contains a link to '{}' which is not found "
234 "in the documentation files.".format(self.file.src_path, target_path)
235 )
236 return url
237 target_file = self.files.get_file_from_path(target_path)
238 path = target_file.url_relative_to(self.file)
239 components = (scheme, netloc, path, params, query, fragment)
240 return urlunparse(components)
241
242
243 class _RelativePathExtension(Extension):
244 """
245 The Extension class is what we pass to markdown, it then
246 registers the Treeprocessor.
247 """
248
249 def __init__(self, file, files):
250 self.file = file
251 self.files = files
252
253 def extendMarkdown(self, md, md_globals):
254 relpath = _RelativePathTreeprocessor(self.file, self.files)
255 md.treeprocessors.add("relpath", relpath, "_end")
256
[end of mkdocs/structure/pages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/structure/pages.py b/mkdocs/structure/pages.py
--- a/mkdocs/structure/pages.py
+++ b/mkdocs/structure/pages.py
@@ -93,7 +93,7 @@
@property
def is_homepage(self):
- return self.is_top_level and self.is_index
+ return self.is_top_level and self.is_index and self.file.url == '.'
@property
def url(self):
| {"golden_diff": "diff --git a/mkdocs/structure/pages.py b/mkdocs/structure/pages.py\n--- a/mkdocs/structure/pages.py\n+++ b/mkdocs/structure/pages.py\n@@ -93,7 +93,7 @@\n \n @property\n def is_homepage(self):\n- return self.is_top_level and self.is_index\n+ return self.is_top_level and self.is_index and self.file.url == '.'\n \n @property\n def url(self):\n", "issue": "Unexpected behaviour with page.is_homepage\nStarting a new site and rolling my own theme. Came across some slightly odd behaviour.\r\n\r\nMkdocs version 1.0.4\r\nPython version 3.7.1\r\n\r\n**Expected:**\r\n`page.is_homepage` evaluates to True on the home (index.md) of the site, and false on all other pages.\r\n\r\n**Actual:**\r\n`page.is_homepage` evaluates to True on the home (index.md), and on any other index.md that is included in the nav object without nesting.\r\n\r\n**Examples:**\r\n\r\nThe unexpected result:\r\n\r\n```\r\nnav:\r\n - Home: index.md <--- page.is_homepage evaluates to True\r\n - About: about.md <--- page.is_homepage evaluates to False\r\n - Projects: projects/index.md <--- page.is_homepage evaluates to True\r\n```\r\n\r\nChanging the filename causes it to evaluate to false:\r\n\r\n```\r\nnav:\r\n - Home: index.md <--- page.is_homepage evaluates to True\r\n - About: about.md <--- page.is_homepage evaluates to False\r\n - Projects: projects/test.md <--- page.is_homepage evaluates to False\r\n```\r\n\r\nIf I tweak it a bit, so that the sections are nested, then it evaluates to false as I'd expect:\r\n\r\n```\r\nnav:\r\n - About: \r\n - About: about.md <--- page.is_homepage evaluates to False\r\n - Projects: \r\n - Project home: projects/index.md <--- page.is_homepage evaluates to False\r\n```\r\n\r\nThis feels like a bug - especially as simply changing the markdown file name causes the behaviour to change.\r\n\r\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom __future__ import unicode_literals\n\nimport os\nimport io\nimport datetime\nimport logging\n\nimport markdown\nfrom markdown.extensions import Extension\nfrom markdown.treeprocessors import Treeprocessor\nfrom markdown.util import AMP_SUBSTITUTE\n\nfrom mkdocs.structure.toc import get_toc\nfrom mkdocs.utils import meta, urlparse, urlunparse, urljoin, urlunquote, get_markdown_title, warning_filter\n\nlog = logging.getLogger(__name__)\nlog.addFilter(warning_filter)\n\n\nclass Page(object):\n def __init__(self, title, file, config):\n file.page = self\n self.file = file\n self.title = title\n\n # Navigation attributes\n self.parent = None\n self.children = None\n self.previous_page = None\n self.next_page = None\n self.active = False\n\n self.is_section = False\n self.is_page = True\n self.is_link = False\n\n # Support SOURCE_DATE_EPOCH environment variable for \"reproducible\" builds.\n # See https://reproducible-builds.org/specs/source-date-epoch/\n if 'SOURCE_DATE_EPOCH' in os.environ:\n self.update_date = datetime.datetime.utcfromtimestamp(\n int(os.environ['SOURCE_DATE_EPOCH'])\n ).strftime(\"%Y-%m-%d\")\n else:\n self.update_date = datetime.datetime.now().strftime(\"%Y-%m-%d\")\n\n self._set_canonical_url(config.get('site_url', None))\n self._set_edit_url(config.get('repo_url', None), config.get('edit_uri', None))\n\n # Placeholders to be filled in later in the build process.\n self.markdown = None\n self.content = None\n self.toc = []\n self.meta = {}\n\n def __eq__(self, other):\n\n def sub_dict(d):\n return dict((key, value) for key, value in d.items() if key in ['title', 'file'])\n\n return (isinstance(other, self.__class__) and 
sub_dict(self.__dict__) == sub_dict(other.__dict__))\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def __repr__(self):\n title = \"'{}'\".format(self.title) if (self.title is not None) else '[blank]'\n return \"Page(title={}, url='{}')\".format(title, self.abs_url or self.file.url)\n\n def _indent_print(self, depth=0):\n return '{}{}'.format(' ' * depth, repr(self))\n\n def _get_active(self):\n \"\"\" Return active status of page. \"\"\"\n return self.__active\n\n def _set_active(self, value):\n \"\"\" Set active status of page and ancestors. \"\"\"\n self.__active = bool(value)\n if self.parent is not None:\n self.parent.active = bool(value)\n\n active = property(_get_active, _set_active)\n\n @property\n def is_index(self):\n return self.file.name == 'index'\n\n @property\n def is_top_level(self):\n return self.parent is None\n\n @property\n def is_homepage(self):\n return self.is_top_level and self.is_index\n\n @property\n def url(self):\n return '' if self.file.url == '.' else self.file.url\n\n @property\n def ancestors(self):\n if self.parent is None:\n return []\n return [self.parent] + self.parent.ancestors\n\n def _set_canonical_url(self, base):\n if base:\n if not base.endswith('/'):\n base += '/'\n self.canonical_url = urljoin(base, self.url)\n self.abs_url = urlparse(self.canonical_url).path\n else:\n self.canonical_url = None\n self.abs_url = None\n\n def _set_edit_url(self, repo_url, edit_uri):\n if repo_url and edit_uri:\n src_path = self.file.src_path.replace('\\\\', '/')\n self.edit_url = urljoin(repo_url, edit_uri + src_path)\n else:\n self.edit_url = None\n\n def read_source(self, config):\n source = config['plugins'].run_event(\n 'page_read_source', page=self, config=config\n )\n if source is None:\n try:\n with io.open(self.file.abs_src_path, 'r', encoding='utf-8-sig', errors='strict') as f:\n source = f.read()\n except IOError:\n log.error('File not found: {}'.format(self.file.src_path))\n raise\n except ValueError:\n log.error('Encoding error reading file: {}'.format(self.file.src_path))\n raise\n\n self.markdown, self.meta = meta.get_data(source)\n self._set_title()\n\n def _set_title(self):\n \"\"\"\n Set the title for a Markdown document.\n\n Check these in order and use the first that returns a valid title:\n - value provided on init (passed in from config)\n - value of metadata 'title'\n - content of the first H1 in Markdown content\n - convert filename to title\n \"\"\"\n if self.title is not None:\n return\n\n if 'title' in self.meta:\n self.title = self.meta['title']\n return\n\n title = get_markdown_title(self.markdown)\n\n if title is None:\n if self.is_homepage:\n title = 'Home'\n else:\n title = self.file.name.replace('-', ' ').replace('_', ' ')\n # Capitalize if the filename was all lowercase, otherwise leave it as-is.\n if title.lower() == title:\n title = title.capitalize()\n\n self.title = title\n\n def render(self, config, files):\n \"\"\"\n Convert the Markdown source file to HTML as per the config.\n \"\"\"\n\n extensions = [\n _RelativePathExtension(self.file, files)\n ] + config['markdown_extensions']\n\n md = markdown.Markdown(\n extensions=extensions,\n extension_configs=config['mdx_configs'] or {}\n )\n self.content = md.convert(self.markdown)\n self.toc = get_toc(getattr(md, 'toc', ''))\n\n\nclass _RelativePathTreeprocessor(Treeprocessor):\n def __init__(self, file, files):\n self.file = file\n self.files = files\n\n def run(self, root):\n \"\"\"\n Update urls on anchors and images to make them relative\n\n Iterates through 
the full document tree looking for specific\n tags and then makes them relative based on the site navigation\n \"\"\"\n for element in root.iter():\n if element.tag == 'a':\n key = 'href'\n elif element.tag == 'img':\n key = 'src'\n else:\n continue\n\n url = element.get(key)\n new_url = self.path_to_url(url)\n element.set(key, new_url)\n\n return root\n\n def path_to_url(self, url):\n scheme, netloc, path, params, query, fragment = urlparse(url)\n\n if (scheme or netloc or not path or url.startswith('/')\n or AMP_SUBSTITUTE in url or '.' not in os.path.split(path)[-1]):\n # Ignore URLs unless they are a relative link to a source file.\n # AMP_SUBSTITUTE is used internally by Markdown only for email.\n # No '.' in the last part of a path indicates path does not point to a file.\n return url\n\n # Determine the filepath of the target.\n target_path = os.path.join(os.path.dirname(self.file.src_path), urlunquote(path))\n target_path = os.path.normpath(target_path).lstrip(os.sep)\n\n # Validate that the target exists in files collection.\n if target_path not in self.files:\n log.warning(\n \"Documentation file '{}' contains a link to '{}' which is not found \"\n \"in the documentation files.\".format(self.file.src_path, target_path)\n )\n return url\n target_file = self.files.get_file_from_path(target_path)\n path = target_file.url_relative_to(self.file)\n components = (scheme, netloc, path, params, query, fragment)\n return urlunparse(components)\n\n\nclass _RelativePathExtension(Extension):\n \"\"\"\n The Extension class is what we pass to markdown, it then\n registers the Treeprocessor.\n \"\"\"\n\n def __init__(self, file, files):\n self.file = file\n self.files = files\n\n def extendMarkdown(self, md, md_globals):\n relpath = _RelativePathTreeprocessor(self.file, self.files)\n md.treeprocessors.add(\"relpath\", relpath, \"_end\")\n", "path": "mkdocs/structure/pages.py"}]} | 3,399 | 104 |
gh_patches_debug_14281 | rasdani/github-patches | git_diff | litestar-org__litestar-3179 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: CORS Middleware not setting all headers as per spec
### Description
Right now, only a handful of headers are being set, and only for the preflight request. They must be set for both the preflight and the actual request.
https://fetch.spec.whatwg.org/#http-responses
Only `Access-Control-Allow-Origin` is being set here.
https://github.com/litestar-org/litestar/blob/1fb981da4b6171cd3fa348c9ffe1c575c5bc862f/litestar/middleware/cors.py#L61-L73
Only `Access-Control-Allow-Credentials` and `Access-Control-Expose-Headers` get set here, and this is what the above code uses to update headers
https://github.com/litestar-org/litestar/blob/1fb981da4b6171cd3fa348c9ffe1c575c5bc862f/litestar/config/cors.py#L123-L136
This still doesn't account for:
- Access-Control-Allow-Methods
- Access-Control-Allow-Headers
which are only set on preflight, but should also be set on the actual request.
### Litestar Version
2.2.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
</issue>
<code>
[start of litestar/middleware/cors.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from litestar.datastructures import Headers, MutableScopeHeaders
6 from litestar.enums import ScopeType
7 from litestar.middleware.base import AbstractMiddleware
8
9 __all__ = ("CORSMiddleware",)
10
11
12 if TYPE_CHECKING:
13 from litestar.config.cors import CORSConfig
14 from litestar.types import ASGIApp, Message, Receive, Scope, Send
15
16
17 class CORSMiddleware(AbstractMiddleware):
18 """CORS Middleware."""
19
20 __slots__ = ("config",)
21
22 def __init__(self, app: ASGIApp, config: CORSConfig) -> None:
23 """Middleware that adds CORS validation to the application.
24
25 Args:
26 app: The ``next`` ASGI app to call.
27 config: An instance of :class:`CORSConfig <litestar.config.cors.CORSConfig>`
28 """
29 super().__init__(app=app, scopes={ScopeType.HTTP})
30 self.config = config
31
32 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
33 """ASGI callable.
34
35 Args:
36 scope: The ASGI connection scope.
37 receive: The ASGI receive function.
38 send: The ASGI send function.
39
40 Returns:
41 None
42 """
43 headers = Headers.from_scope(scope=scope)
44 if origin := headers.get("origin"):
45 await self.app(scope, receive, self.send_wrapper(send=send, origin=origin, has_cookie="cookie" in headers))
46 else:
47 await self.app(scope, receive, send)
48
49 def send_wrapper(self, send: Send, origin: str, has_cookie: bool) -> Send:
50 """Wrap ``send`` to ensure that state is not disconnected.
51
52 Args:
53 has_cookie: Boolean flag dictating if the connection has a cookie set.
54 origin: The value of the ``Origin`` header.
55 send: The ASGI send function.
56
57 Returns:
58 An ASGI send function.
59 """
60
61 async def wrapped_send(message: Message) -> None:
62 if message["type"] == "http.response.start":
63 message.setdefault("headers", [])
64 headers = MutableScopeHeaders.from_message(message=message)
65 headers.update(self.config.simple_headers)
66
67 if (self.config.is_allow_all_origins and has_cookie) or (
68 not self.config.is_allow_all_origins and self.config.is_origin_allowed(origin=origin)
69 ):
70 headers["Access-Control-Allow-Origin"] = origin
71 headers["Vary"] = "Origin"
72
73 await send(message)
74
75 return wrapped_send
76
[end of litestar/middleware/cors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/middleware/cors.py b/litestar/middleware/cors.py
--- a/litestar/middleware/cors.py
+++ b/litestar/middleware/cors.py
@@ -70,6 +70,15 @@
headers["Access-Control-Allow-Origin"] = origin
headers["Vary"] = "Origin"
+ # We don't want to overwrite this for preflight requests.
+ allow_headers = headers.get("Access-Control-Allow-Headers")
+ if not allow_headers and self.config.allow_headers:
+ headers["Access-Control-Allow-Headers"] = ", ".join(sorted(set(self.config.allow_headers)))
+
+ allow_methods = headers.get("Access-Control-Allow-Methods")
+ if not allow_methods and self.config.allow_methods:
+ headers["Access-Control-Allow-Methods"] = ", ".join(sorted(set(self.config.allow_methods)))
+
await send(message)
return wrapped_send
| {"golden_diff": "diff --git a/litestar/middleware/cors.py b/litestar/middleware/cors.py\n--- a/litestar/middleware/cors.py\n+++ b/litestar/middleware/cors.py\n@@ -70,6 +70,15 @@\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers[\"Vary\"] = \"Origin\"\n \n+ # We don't want to overwrite this for preflight requests.\n+ allow_headers = headers.get(\"Access-Control-Allow-Headers\")\n+ if not allow_headers and self.config.allow_headers:\n+ headers[\"Access-Control-Allow-Headers\"] = \", \".join(sorted(set(self.config.allow_headers)))\n+\n+ allow_methods = headers.get(\"Access-Control-Allow-Methods\")\n+ if not allow_methods and self.config.allow_methods:\n+ headers[\"Access-Control-Allow-Methods\"] = \", \".join(sorted(set(self.config.allow_methods)))\n+\n await send(message)\n \n return wrapped_send\n", "issue": "Bug: CORS Middleware not setting all headers as per spec\n### Description\r\n\r\nRight now, there's only a handful of headers that are only being set for the preflight request. They must be set for both the preflight and actual request. \r\nhttps://fetch.spec.whatwg.org/#http-responses\r\n\r\nOnly `Access-Control-Allow-Origin` is being set here.\r\nhttps://github.com/litestar-org/litestar/blob/1fb981da4b6171cd3fa348c9ffe1c575c5bc862f/litestar/middleware/cors.py#L61-L73\r\n\r\nOnly `Access-Control-Allow-Credentials` and `Access-Control-Expose-Headers` get set here, and this is what the above code uses to update headers\r\nhttps://github.com/litestar-org/litestar/blob/1fb981da4b6171cd3fa348c9ffe1c575c5bc862f/litestar/config/cors.py#L123-L136\r\n\r\nThis still doesn't account for:\r\n- Access-Control-Allow-Methods\r\n- Access-Control-Allow-Headers\r\n\r\nwhich are only set on preflight, but should also be set to the actual request.\r\n\r\n### Litestar Version\r\n\r\n2.2.1\r\n\r\n### Platform\r\n\r\n- [X] Linux\r\n- [ ] Mac\r\n- [ ] Windows\r\n- [ ] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n---\r\n> [!NOTE] \r\n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \r\n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\r\n>\r\n> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/3178\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/3178/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/3178/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom litestar.datastructures import Headers, MutableScopeHeaders\nfrom litestar.enums import ScopeType\nfrom litestar.middleware.base import AbstractMiddleware\n\n__all__ = (\"CORSMiddleware\",)\n\n\nif TYPE_CHECKING:\n from litestar.config.cors import CORSConfig\n from litestar.types import ASGIApp, Message, Receive, Scope, Send\n\n\nclass CORSMiddleware(AbstractMiddleware):\n \"\"\"CORS 
Middleware.\"\"\"\n\n __slots__ = (\"config\",)\n\n def __init__(self, app: ASGIApp, config: CORSConfig) -> None:\n \"\"\"Middleware that adds CORS validation to the application.\n\n Args:\n app: The ``next`` ASGI app to call.\n config: An instance of :class:`CORSConfig <litestar.config.cors.CORSConfig>`\n \"\"\"\n super().__init__(app=app, scopes={ScopeType.HTTP})\n self.config = config\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"ASGI callable.\n\n Args:\n scope: The ASGI connection scope.\n receive: The ASGI receive function.\n send: The ASGI send function.\n\n Returns:\n None\n \"\"\"\n headers = Headers.from_scope(scope=scope)\n if origin := headers.get(\"origin\"):\n await self.app(scope, receive, self.send_wrapper(send=send, origin=origin, has_cookie=\"cookie\" in headers))\n else:\n await self.app(scope, receive, send)\n\n def send_wrapper(self, send: Send, origin: str, has_cookie: bool) -> Send:\n \"\"\"Wrap ``send`` to ensure that state is not disconnected.\n\n Args:\n has_cookie: Boolean flag dictating if the connection has a cookie set.\n origin: The value of the ``Origin`` header.\n send: The ASGI send function.\n\n Returns:\n An ASGI send function.\n \"\"\"\n\n async def wrapped_send(message: Message) -> None:\n if message[\"type\"] == \"http.response.start\":\n message.setdefault(\"headers\", [])\n headers = MutableScopeHeaders.from_message(message=message)\n headers.update(self.config.simple_headers)\n\n if (self.config.is_allow_all_origins and has_cookie) or (\n not self.config.is_allow_all_origins and self.config.is_origin_allowed(origin=origin)\n ):\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers[\"Vary\"] = \"Origin\"\n\n await send(message)\n\n return wrapped_send\n", "path": "litestar/middleware/cors.py"}]} | 1,836 | 197 |
gh_patches_debug_30165 | rasdani/github-patches | git_diff | pytorch__ignite-281 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature Request] More general metrics
I find the metrics to be a bit limited as one might want to pass additional options (even tensors) to the loss.
For instance in recurrent models with different sequence lengths, one would use a mask to avoid counting errors on padded time steps.
The mask is necessary in the loss to know which outputs to use in the final averaging/ loss.
</issue>
<code>
[start of ignite/metrics/loss.py]
1 from __future__ import division
2
3 from ignite.exceptions import NotComputableError
4 from ignite.metrics.metric import Metric
5
6
7 class Loss(Metric):
8 """
9 Calculates the average loss according to the passed loss_fn.
10
11 - `loss_fn` must return the average loss over all observations in the batch.
12 - `update` must receive output of the form `(y_pred, y)`.
13 """
14 def __init__(self, loss_fn, output_transform=lambda x: x):
15 super(Loss, self).__init__(output_transform)
16 self._loss_fn = loss_fn
17
18 def reset(self):
19 self._sum = 0
20 self._num_examples = 0
21
22 def update(self, output):
23 y_pred, y = output
24 average_loss = self._loss_fn(y_pred, y)
25 assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'
26 self._sum += average_loss.item() * y.shape[0]
27 self._num_examples += y.shape[0]
28
29 def compute(self):
30 if self._num_examples == 0:
31 raise NotComputableError(
32 'Loss must have at least one example before it can be computed')
33 return self._sum / self._num_examples
34
[end of ignite/metrics/loss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py
--- a/ignite/metrics/loss.py
+++ b/ignite/metrics/loss.py
@@ -8,9 +8,21 @@
"""
Calculates the average loss according to the passed loss_fn.
- - `loss_fn` must return the average loss over all observations in the batch.
- - `update` must receive output of the form `(y_pred, y)`.
+ Args:
+ loss_fn (callable): a callable taking a prediction tensor, a target
+ tensor, optionally other arguments, and returns the average loss
+ over all observations in the batch.
+ output_transform (callable): a callable that is used to transform the
+ :class:`ignite.engine.Engine`'s `process_function`'s output into the
+ form expected by the metric.
+ This can be useful if, for example, you have a multi-output model and
+ you want to compute the metric with respect to one of the outputs.
+ The output is is expected to be a tuple (prediction, target) or
+ (prediction, target, kwargs) where kwargs is a dictionary of extra
+ keywords arguments.
+
"""
+
def __init__(self, loss_fn, output_transform=lambda x: x):
super(Loss, self).__init__(output_transform)
self._loss_fn = loss_fn
@@ -20,8 +32,12 @@
self._num_examples = 0
def update(self, output):
- y_pred, y = output
- average_loss = self._loss_fn(y_pred, y)
+ if len(output) == 2:
+ y_pred, y = output
+ kwargs = {}
+ else:
+ y_pred, y, kwargs = output
+ average_loss = self._loss_fn(y_pred, y, **kwargs)
assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'
self._sum += average_loss.item() * y.shape[0]
self._num_examples += y.shape[0]
| {"golden_diff": "diff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py\n--- a/ignite/metrics/loss.py\n+++ b/ignite/metrics/loss.py\n@@ -8,9 +8,21 @@\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n \n- - `loss_fn` must return the average loss over all observations in the batch.\n- - `update` must receive output of the form `(y_pred, y)`.\n+ Args:\n+ loss_fn (callable): a callable taking a prediction tensor, a target\n+ tensor, optionally other arguments, and returns the average loss\n+ over all observations in the batch.\n+ output_transform (callable): a callable that is used to transform the\n+ :class:`ignite.engine.Engine`'s `process_function`'s output into the\n+ form expected by the metric.\n+ This can be useful if, for example, you have a multi-output model and\n+ you want to compute the metric with respect to one of the outputs.\n+ The output is is expected to be a tuple (prediction, target) or\n+ (prediction, target, kwargs) where kwargs is a dictionary of extra\n+ keywords arguments.\n+\n \"\"\"\n+\n def __init__(self, loss_fn, output_transform=lambda x: x):\n super(Loss, self).__init__(output_transform)\n self._loss_fn = loss_fn\n@@ -20,8 +32,12 @@\n self._num_examples = 0\n \n def update(self, output):\n- y_pred, y = output\n- average_loss = self._loss_fn(y_pred, y)\n+ if len(output) == 2:\n+ y_pred, y = output\n+ kwargs = {}\n+ else:\n+ y_pred, y, kwargs = output\n+ average_loss = self._loss_fn(y_pred, y, **kwargs)\n assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n", "issue": "[Feature Request] More general metrics\nI find the metrics to be a bit limited as one might want to pass additional options (even tensors) to the loss.\r\nFor instance in recurrent models with different sequence lengths, one would use a mask to avoid counting errors on padded time steps.\r\nThe mask is necessary in the loss to know which outputs to use in the final averaging/ loss.\n[Feature Request] More general metrics\nI find the metrics to be a bit limited as one might want to pass additional options (even tensors) to the loss.\r\nFor instance in recurrent models with different sequence lengths, one would use a mask to avoid counting errors on padded time steps.\r\nThe mask is necessary in the loss to know which outputs to use in the final averaging/ loss.\n", "before_files": [{"content": "from __future__ import division\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric\n\n\nclass Loss(Metric):\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n\n - `loss_fn` must return the average loss over all observations in the batch.\n - `update` must receive output of the form `(y_pred, y)`.\n \"\"\"\n def __init__(self, loss_fn, output_transform=lambda x: x):\n super(Loss, self).__init__(output_transform)\n self._loss_fn = loss_fn\n\n def reset(self):\n self._sum = 0\n self._num_examples = 0\n\n def update(self, output):\n y_pred, y = output\n average_loss = self._loss_fn(y_pred, y)\n assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError(\n 'Loss must have at least one example before it can be computed')\n return self._sum / self._num_examples\n", "path": "ignite/metrics/loss.py"}]} | 1,022 | 467 |
gh_patches_debug_165 | rasdani/github-patches | git_diff | biopython__biopython-4577 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'SeqFeature' object has no attribute 'strand'
## Setup
- **Biopython Version:** 1.82
- **Python Version:** 3.11.5 (main, Sep 27 2023, 11:42:37) [GCC 11.4.0]
- **Operating System:** Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- **Python Implementation:** CPython
## Expected Behaviour
When processing GenBank files using Biopython, each gene feature should have a 'strand' attribute.
## Actual Behaviour
The 'strand' attributes are missing for gene features in GenBank files.
## Steps to Reproduce
1. Run the following script to parse a GenBank file and check for the presence of 'strand' attributes in gene features:
```python
from Bio import SeqIO
import sys; print(sys.version)
import platform; print(platform.python_implementation()); print(platform.platform())
import Bio; print(Bio.__version__)
def check_strand_in_genes(genome_file):
count = 0
for record in SeqIO.parse(genome_file, "genbank"):
for feature in record.features:
if feature.type == "gene":
gene_name = feature.qualifiers.get('gene', ['Unknown'])[0]
if hasattr(feature, 'strand'):
print(f"'strand' attribute exists for {gene_name} in record {record.id}")
count += 1
else:
print(f"'strand' attribute does not exist for {gene_name} in record {record.id}")
count += 1
if count >= 5:
return
# Replace 'genome_file.gb' with the path to your GenBank file
# Example file: https://www.ncbi.nlm.nih.gov/nuccore/U00096.3/
check_strand_in_genes('sequence.gb')
```
2. Note that the output indicates the absence of 'strand' attributes for the genes.
## Observed Output
```bash
3.11.5 (main, Sep 27 2023, 11:42:37) [GCC 11.4.0]
CPython
Linux-5.15.0-91-generic-x86_64-with-glibc2.35
1.82
'strand' attribute does not exist for thrL in record U00096.3
'strand' attribute does not exist for thrA in record U00096.3
'strand' attribute does not exist for thrB in record U00096.3
'strand' attribute does not exist for thrC in record U00096.3
'strand' attribute does not exist for yaaX in record U00096.3
```
</issue>
<code>
[start of Bio/__init__.py]
1 # Copyright 1999-2003 by Jeffrey Chang. All rights reserved.
2 #
3 # This file is part of the Biopython distribution and governed by your
4 # choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
5 # Please see the LICENSE file that should have been included as part of this
6 # package.
7 """Collection of modules for dealing with biological data in Python.
8
9 The Biopython Project is an international association of developers
10 of freely available Python tools for computational molecular biology.
11
12 https://biopython.org
13 """
14
15 import os
16 import warnings
17
18 __version__ = "1.83.dev0"
19
20
21 class MissingExternalDependencyError(Exception):
22 """Missing an external dependency.
23
24 Used for things like missing command line tools. Important for our unit
25 tests to allow skipping tests with missing external dependencies.
26 """
27
28
29 class MissingPythonDependencyError(MissingExternalDependencyError, ImportError):
30 """Missing an external python dependency (subclass of ImportError).
31
32 Used for missing Python modules (rather than just a typical ImportError).
33 Important for our unit tests to allow skipping tests with missing external
34 python dependencies, while also allowing the exception to be caught as an
35 ImportError.
36 """
37
38
39 class StreamModeError(ValueError):
40 """Incorrect stream mode (text vs binary).
41
42 This error should be raised when a stream (file or file-like object)
43 argument is in text mode while the receiving function expects binary mode,
44 or vice versa.
45 """
46
47
48 class BiopythonWarning(Warning):
49 """Biopython warning.
50
51 Biopython should use this warning (or subclasses of it), making it easy to
52 silence all our warning messages should you wish to:
53
54 >>> import warnings
55 >>> from Bio import BiopythonWarning
56 >>> warnings.simplefilter('ignore', BiopythonWarning)
57
58 Consult the warnings module documentation for more details.
59 """
60
61
62 class BiopythonParserWarning(BiopythonWarning):
63 """Biopython parser warning.
64
65 Some in-valid data files cannot be parsed and will trigger an exception.
66 Where a reasonable interpretation is possible, Biopython will issue this
67 warning to indicate a potential problem. To silence these warnings, use:
68
69 >>> import warnings
70 >>> from Bio import BiopythonParserWarning
71 >>> warnings.simplefilter('ignore', BiopythonParserWarning)
72
73 Consult the warnings module documentation for more details.
74 """
75
76
77 class BiopythonDeprecationWarning(BiopythonWarning):
78 """Biopython deprecation warning.
79
80 Biopython uses this warning instead of the built in DeprecationWarning
81 since those are ignored by default since Python 2.7.
82
83 To silence all our deprecation warning messages, use:
84
85 >>> import warnings
86 >>> from Bio import BiopythonDeprecationWarning
87 >>> warnings.simplefilter('ignore', BiopythonDeprecationWarning)
88
89 Code marked as deprecated is likely to be removed in a future version
90 of Biopython. To avoid removal of this code, please contact the Biopython
91 developers via the mailing list or GitHub.
92 """
93
94
95 class BiopythonExperimentalWarning(BiopythonWarning):
96 """Biopython experimental code warning.
97
98 Biopython uses this warning for experimental code ('alpha' or 'beta'
99 level code) which is released as part of the standard releases to mark
100 sub-modules or functions for early adopters to test & give feedback.
101
102 Code issuing this warning is likely to change (or even be removed) in
103 a subsequent release of Biopython. Such code should NOT be used for
104 production/stable code. It should only be used if:
105
106 - You are running the latest release of Biopython, or ideally the
107 latest code from our repository.
108 - You are subscribed to the biopython-dev mailing list to provide
109 feedback on this code, and to be alerted of changes to it.
110
111 If all goes well, experimental code would be promoted to stable in
112 a subsequent release, and this warning removed from it.
113 """
114
115
116 _parent_dir = os.path.dirname(os.path.dirname(__file__))
117 if os.path.exists(os.path.join(_parent_dir, "setup.py")):
118 # Looks like we are running from our source directory,
119 # a bad idea except if installed in development mode.
120 #
121 # See https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html
122 # Do we have .../site-packages/biopython.egg-link present?
123 #
124 # Note "pip install -e ." currently calls setuptools internally
125 import site
126
127 _dev_mode = False
128 for _p in site.getsitepackages():
129 if os.path.isfile(os.path.join(_p, "biopython.egg-link")):
130 _dev_mode = True
131 break
132 # Also check the user specific site packages
133 if not _dev_mode and os.path.isfile(
134 os.path.join(site.getusersitepackages(), "biopython.egg-link")
135 ):
136 _dev_mode = True
137 if not _dev_mode:
138 warnings.warn(
139 "You may be importing Biopython from inside the source tree."
140 " This is bad practice and might lead to downstream issues."
141 " In particular, you might encounter ImportErrors due to"
142 " missing compiled C extensions. We recommend that you"
143 " try running your code from outside the source tree."
144 " If you are outside the source tree then you have a"
145 " setup.py file in an unexpected directory: " + _parent_dir,
146 BiopythonWarning,
147 )
148
[end of Bio/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Bio/__init__.py b/Bio/__init__.py
--- a/Bio/__init__.py
+++ b/Bio/__init__.py
@@ -15,7 +15,7 @@
import os
import warnings
-__version__ = "1.83.dev0"
+__version__ = "1.84.dev0"
class MissingExternalDependencyError(Exception):
| {"golden_diff": "diff --git a/Bio/__init__.py b/Bio/__init__.py\n--- a/Bio/__init__.py\n+++ b/Bio/__init__.py\n@@ -15,7 +15,7 @@\n import os\n import warnings\n \n-__version__ = \"1.83.dev0\"\n+__version__ = \"1.84.dev0\"\n \n \n class MissingExternalDependencyError(Exception):\n", "issue": "AttributeError: 'SeqFeature' object has no attribute 'strand'\n## Setup\r\n- **Biopython Version:** 1.82\r\n- **Python Version:** 3.11.5 (main, Sep 27 2023, 11:42:37) [GCC 11.4.0]\r\n- **Operating System:** Linux-5.15.0-91-generic-x86_64-with-glibc2.35\r\n- **Python Implementation:** CPython\r\n\r\n## Expected Behaviour\r\nWhen processing GenBank files using Biopython, each gene feature should have a 'strand' attribute.\r\n\r\n## Actual Behaviour\r\nThe 'strand' attributes are missing for gene features in GenBank files.\r\n\r\n## Steps to Reproduce\r\n1. Run the following script to parse a GenBank file and check for the presence of 'strand' attributes in gene features:\r\n ```python\r\n from Bio import SeqIO\r\n\r\n import sys; print(sys.version)\r\n import platform; print(platform.python_implementation()); print(platform.platform())\r\n import Bio; print(Bio.__version__)\r\n\r\n def check_strand_in_genes(genome_file):\r\n count = 0\r\n for record in SeqIO.parse(genome_file, \"genbank\"):\r\n for feature in record.features:\r\n if feature.type == \"gene\":\r\n gene_name = feature.qualifiers.get('gene', ['Unknown'])[0]\r\n if hasattr(feature, 'strand'):\r\n print(f\"'strand' attribute exists for {gene_name} in record {record.id}\")\r\n count += 1\r\n else:\r\n print(f\"'strand' attribute does not exist for {gene_name} in record {record.id}\")\r\n count += 1\r\n if count >= 5:\r\n return\r\n\r\n # Replace 'genome_file.gb' with the path to your GenBank file\r\n # Example file: https://www.ncbi.nlm.nih.gov/nuccore/U00096.3/\r\n check_strand_in_genes('sequence.gb')\r\n ```\r\n\r\n2. Note that the output indicates the absence of 'strand' attributes for the genes.\r\n\r\n## Observed Output\r\n```bash\r\n3.11.5 (main, Sep 27 2023, 11:42:37) [GCC 11.4.0]\r\nCPython\r\nLinux-5.15.0-91-generic-x86_64-with-glibc2.35\r\n1.82\r\n'strand' attribute does not exist for thrL in record U00096.3\r\n'strand' attribute does not exist for thrA in record U00096.3\r\n'strand' attribute does not exist for thrB in record U00096.3\r\n'strand' attribute does not exist for thrC in record U00096.3\r\n'strand' attribute does not exist for yaaX in record U00096.3\r\n```\n", "before_files": [{"content": "# Copyright 1999-2003 by Jeffrey Chang. All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\"\"\"Collection of modules for dealing with biological data in Python.\n\nThe Biopython Project is an international association of developers\nof freely available Python tools for computational molecular biology.\n\nhttps://biopython.org\n\"\"\"\n\nimport os\nimport warnings\n\n__version__ = \"1.83.dev0\"\n\n\nclass MissingExternalDependencyError(Exception):\n \"\"\"Missing an external dependency.\n\n Used for things like missing command line tools. 
Important for our unit\n tests to allow skipping tests with missing external dependencies.\n \"\"\"\n\n\nclass MissingPythonDependencyError(MissingExternalDependencyError, ImportError):\n \"\"\"Missing an external python dependency (subclass of ImportError).\n\n Used for missing Python modules (rather than just a typical ImportError).\n Important for our unit tests to allow skipping tests with missing external\n python dependencies, while also allowing the exception to be caught as an\n ImportError.\n \"\"\"\n\n\nclass StreamModeError(ValueError):\n \"\"\"Incorrect stream mode (text vs binary).\n\n This error should be raised when a stream (file or file-like object)\n argument is in text mode while the receiving function expects binary mode,\n or vice versa.\n \"\"\"\n\n\nclass BiopythonWarning(Warning):\n \"\"\"Biopython warning.\n\n Biopython should use this warning (or subclasses of it), making it easy to\n silence all our warning messages should you wish to:\n\n >>> import warnings\n >>> from Bio import BiopythonWarning\n >>> warnings.simplefilter('ignore', BiopythonWarning)\n\n Consult the warnings module documentation for more details.\n \"\"\"\n\n\nclass BiopythonParserWarning(BiopythonWarning):\n \"\"\"Biopython parser warning.\n\n Some in-valid data files cannot be parsed and will trigger an exception.\n Where a reasonable interpretation is possible, Biopython will issue this\n warning to indicate a potential problem. To silence these warnings, use:\n\n >>> import warnings\n >>> from Bio import BiopythonParserWarning\n >>> warnings.simplefilter('ignore', BiopythonParserWarning)\n\n Consult the warnings module documentation for more details.\n \"\"\"\n\n\nclass BiopythonDeprecationWarning(BiopythonWarning):\n \"\"\"Biopython deprecation warning.\n\n Biopython uses this warning instead of the built in DeprecationWarning\n since those are ignored by default since Python 2.7.\n\n To silence all our deprecation warning messages, use:\n\n >>> import warnings\n >>> from Bio import BiopythonDeprecationWarning\n >>> warnings.simplefilter('ignore', BiopythonDeprecationWarning)\n\n Code marked as deprecated is likely to be removed in a future version\n of Biopython. To avoid removal of this code, please contact the Biopython\n developers via the mailing list or GitHub.\n \"\"\"\n\n\nclass BiopythonExperimentalWarning(BiopythonWarning):\n \"\"\"Biopython experimental code warning.\n\n Biopython uses this warning for experimental code ('alpha' or 'beta'\n level code) which is released as part of the standard releases to mark\n sub-modules or functions for early adopters to test & give feedback.\n\n Code issuing this warning is likely to change (or even be removed) in\n a subsequent release of Biopython. Such code should NOT be used for\n production/stable code. 
It should only be used if:\n\n - You are running the latest release of Biopython, or ideally the\n latest code from our repository.\n - You are subscribed to the biopython-dev mailing list to provide\n feedback on this code, and to be alerted of changes to it.\n\n If all goes well, experimental code would be promoted to stable in\n a subsequent release, and this warning removed from it.\n \"\"\"\n\n\n_parent_dir = os.path.dirname(os.path.dirname(__file__))\nif os.path.exists(os.path.join(_parent_dir, \"setup.py\")):\n # Looks like we are running from our source directory,\n # a bad idea except if installed in development mode.\n #\n # See https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html\n # Do we have .../site-packages/biopython.egg-link present?\n #\n # Note \"pip install -e .\" currently calls setuptools internally\n import site\n\n _dev_mode = False\n for _p in site.getsitepackages():\n if os.path.isfile(os.path.join(_p, \"biopython.egg-link\")):\n _dev_mode = True\n break\n # Also check the user specific site packages\n if not _dev_mode and os.path.isfile(\n os.path.join(site.getusersitepackages(), \"biopython.egg-link\")\n ):\n _dev_mode = True\n if not _dev_mode:\n warnings.warn(\n \"You may be importing Biopython from inside the source tree.\"\n \" This is bad practice and might lead to downstream issues.\"\n \" In particular, you might encounter ImportErrors due to\"\n \" missing compiled C extensions. We recommend that you\"\n \" try running your code from outside the source tree.\"\n \" If you are outside the source tree then you have a\"\n \" setup.py file in an unexpected directory: \" + _parent_dir,\n BiopythonWarning,\n )\n", "path": "Bio/__init__.py"}]} | 2,706 | 88 |
gh_patches_debug_34786 | rasdani/github-patches | git_diff | crytic__slither-813 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update solc-version detector with Solidity 0.7
Currently, the recommended versions are:
```
- 0.5.11 - 0.5.13,
- 0.5.15 - 0.5.17,
- 0.6.8,
- 0.6.10 - 0.6.11. Use a simple pragma version that allows any of these versions.
```
We need to review the 0.7.x branch and update the detector (including 0.6/0.8)
</issue>
<code>
[start of slither/detectors/attributes/incorrect_solc.py]
1 """
2 Check if an incorrect version of solc is used
3 """
4
5 import re
6 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
7 from slither.formatters.attributes.incorrect_solc import custom_format
8
9 # group:
10 # 0: ^ > >= < <= (optional)
11 # 1: ' ' (optional)
12 # 2: version number
13 # 3: version number
14 # 4: version number
15
16 # pylint: disable=anomalous-backslash-in-string
17 PATTERN = re.compile("(\^|>|>=|<|<=)?([ ]+)?(\d+)\.(\d+)\.(\d+)")
18
19
20 class IncorrectSolc(AbstractDetector):
21 """
22 Check if an old version of solc is used
23 """
24
25 ARGUMENT = "solc-version"
26 HELP = "Incorrect Solidity version"
27 IMPACT = DetectorClassification.INFORMATIONAL
28 CONFIDENCE = DetectorClassification.HIGH
29
30 WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity"
31
32 WIKI_TITLE = "Incorrect versions of Solidity"
33 WIKI_DESCRIPTION = """
34 `solc` frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.
35 We also recommend avoiding complex `pragma` statement."""
36 WIKI_RECOMMENDATION = """
37 Deploy with any of the following Solidity versions:
38 - 0.5.11 - 0.5.13,
39 - 0.5.15 - 0.5.17,
40 - 0.6.8,
41 - 0.6.10 - 0.6.11.
42 Use a simple pragma version that allows any of these versions.
43 Consider using the latest version of Solidity for testing."""
44
45 COMPLEX_PRAGMA_TXT = "is too complex"
46 OLD_VERSION_TXT = "allows old versions"
47 LESS_THAN_TXT = "uses lesser than"
48
49 TOO_RECENT_VERSION_TXT = (
50 "necessitates a version too recent to be trusted. Consider deploying with 0.6.11"
51 )
52 BUGGY_VERSION_TXT = (
53 "is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)"
54 )
55
56 # Indicates the allowed versions. Must be formatted in increasing order.
57 ALLOWED_VERSIONS = [
58 "0.5.11",
59 "0.5.12",
60 "0.5.13",
61 "0.5.15",
62 "0.5.16",
63 "0.5.17",
64 "0.6.8",
65 "0.6.10",
66 "0.6.11",
67 ]
68
69 # Indicates the versions that should not be used.
70 BUGGY_VERSIONS = [
71 "0.4.22",
72 "^0.4.22",
73 "0.5.5",
74 "^0.5.5",
75 "0.5.6",
76 "^0.5.6",
77 "0.5.14",
78 "^0.5.14",
79 "0.6.9",
80 "^0.6.9",
81 ]
82
83 def _check_version(self, version):
84 op = version[0]
85 if op and op not in [">", ">=", "^"]:
86 return self.LESS_THAN_TXT
87 version_number = ".".join(version[2:])
88 if version_number not in self.ALLOWED_VERSIONS:
89 if list(map(int, version[2:])) > list(map(int, self.ALLOWED_VERSIONS[-1].split("."))):
90 return self.TOO_RECENT_VERSION_TXT
91 return self.OLD_VERSION_TXT
92 return None
93
94 def _check_pragma(self, version):
95 if version in self.BUGGY_VERSIONS:
96 return self.BUGGY_VERSION_TXT
97 versions = PATTERN.findall(version)
98 if len(versions) == 1:
99 version = versions[0]
100 return self._check_version(version)
101 if len(versions) == 2:
102 version_left = versions[0]
103 version_right = versions[1]
104 # Only allow two elements if the second one is
105 # <0.5.0 or <0.6.0
106 if version_right not in [
107 ("<", "", "0", "5", "0"),
108 ("<", "", "0", "6", "0"),
109 ("<", "", "0", "7", "0"),
110 ]:
111 return self.COMPLEX_PRAGMA_TXT
112 return self._check_version(version_left)
113 return self.COMPLEX_PRAGMA_TXT
114
115 def _detect(self):
116 """
117 Detects pragma statements that allow for outdated solc versions.
118 :return: Returns the relevant JSON data for the findings.
119 """
120 # Detect all version related pragmas and check if they are disallowed.
121 results = []
122 pragma = self.slither.pragma_directives
123 disallowed_pragmas = []
124
125 for p in pragma:
126 # Skip any pragma directives which do not refer to version
127 if len(p.directive) < 1 or p.directive[0] != "solidity":
128 continue
129
130 # This is version, so we test if this is disallowed.
131 reason = self._check_pragma(p.version)
132 if reason:
133 disallowed_pragmas.append((reason, p))
134
135 # If we found any disallowed pragmas, we output our findings.
136 if disallowed_pragmas:
137 for (reason, p) in disallowed_pragmas:
138 info = ["Pragma version", p, f" {reason}\n"]
139
140 json = self.generate_result(info)
141
142 results.append(json)
143
144 if self.slither.crytic_compile:
145 if self.slither.crytic_compile.compiler_version:
146 if (
147 self.slither.crytic_compile.compiler_version.version
148 not in self.ALLOWED_VERSIONS
149 ):
150 info = [
151 "solc-",
152 self.slither.crytic_compile.compiler_version.version,
153 " is not recommended for deployment\n",
154 ]
155
156 json = self.generate_result(info)
157
158 # TODO: Once crytic-compile adds config file info, add a source mapping element pointing to
159 # the line in the config that specifies the problematic version of solc
160
161 results.append(json)
162
163 return results
164
165 @staticmethod
166 def _format(slither, result):
167 custom_format(slither, result)
168
[end of slither/detectors/attributes/incorrect_solc.py]
[start of slither/detectors/statements/array_length_assignment.py]
1 """
2 Module detecting assignment of array length
3 """
4
5 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
6 from slither.core.cfg.node import NodeType
7 from slither.slithir.operations import Assignment, Length
8 from slither.slithir.variables.reference import ReferenceVariable
9 from slither.slithir.operations.binary import Binary
10 from slither.analyses.data_dependency.data_dependency import is_tainted
11
12
13 def detect_array_length_assignment(contract):
14 """
15 Detects and returns all nodes which assign array length.
16 :param contract: Contract to detect assignment within.
17 :return: A list of tuples with (Variable, node) where Variable references an array whose length was set by node.
18 """
19
20 # Create our result set.
21 results = set()
22
23 # Loop for each function and modifier.
24 # pylint: disable=too-many-nested-blocks
25 for function in contract.functions_and_modifiers_declared:
26 # Define a set of reference variables which refer to array length.
27 array_length_refs = set()
28
29 # Loop for every node in this function, looking for expressions where array length references are made,
30 # and subsequent expressions where array length references are assigned to.
31 for node in function.nodes:
32 if node.type == NodeType.EXPRESSION:
33 for ir in node.irs:
34
35 # First we look for the member access for 'length', for which a reference is created.
36 # We add the reference to our list of array length references.
37 if isinstance(ir, Length): # a
38 # if ir.variable_right == "length":
39 array_length_refs.add(ir.lvalue)
40
41 # If we have an assignment/binary operation, verify the left side refers to a reference variable
42 # which is in our list or array length references. (Array length is being assigned to).
43 elif isinstance(ir, (Assignment, Binary)):
44 if isinstance(ir.lvalue, ReferenceVariable):
45 if ir.lvalue in array_length_refs and any(
46 is_tainted(v, contract) for v in ir.read
47 ):
48 # the taint is not precise enough yet
49 # as a result, REF_0 = REF_0 + 1
50 # where REF_0 points to a LENGTH operation
51 # is considered as tainted
52 if ir.lvalue in ir.read:
53 continue
54 results.add(node)
55 break
56
57 # Return the resulting set of nodes which set array length.
58 return results
59
60
61 class ArrayLengthAssignment(AbstractDetector):
62 """
63 Array length assignment
64 """
65
66 ARGUMENT = "controlled-array-length"
67 HELP = "Tainted array length assignment"
68 IMPACT = DetectorClassification.HIGH
69 CONFIDENCE = DetectorClassification.MEDIUM
70
71 WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation#array-length-assignment"
72
73 WIKI_TITLE = "Array Length Assignment"
74 WIKI_DESCRIPTION = """Detects the direct assignment of an array's length."""
75 WIKI_EXPLOIT_SCENARIO = """
76 ```solidity
77 contract A {
78 uint[] testArray; // dynamic size array
79
80 function f(uint usersCount) public {
81 // ...
82 testArray.length = usersCount;
83 // ...
84 }
85
86 function g(uint userIndex, uint val) public {
87 // ...
88 testArray[userIndex] = val;
89 // ...
90 }
91 }
92 ```
93 Contract storage/state-variables are indexed by a 256-bit integer.
94 The user can set the array length to `2**256-1` in order to index all storage slots.
95 In the example above, one could call the function `f` to set the array length, then call the function `g` to control any storage slot desired.
96 Note that storage slots here are indexed via a hash of the indexers; nonetheless, all storage will still be accessible and could be controlled by the attacker."""
97
98 WIKI_RECOMMENDATION = """Do not allow array lengths to be set directly set; instead, opt to add values as needed.
99 Otherwise, thoroughly review the contract to ensure a user-controlled variable cannot reach an array length assignment."""
100
101 def _detect(self):
102 """
103 Detect array length assignments
104 """
105 results = []
106 for contract in self.contracts:
107 array_length_assignments = detect_array_length_assignment(contract)
108 if array_length_assignments:
109 contract_info = [
110 contract,
111 " contract sets array length with a user-controlled value:\n",
112 ]
113 for node in array_length_assignments:
114 node_info = contract_info + ["\t- ", node, "\n"]
115 res = self.generate_result(node_info)
116 results.append(res)
117
118 return results
119
[end of slither/detectors/statements/array_length_assignment.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py
--- a/slither/detectors/attributes/incorrect_solc.py
+++ b/slither/detectors/attributes/incorrect_solc.py
@@ -35,10 +35,9 @@
We also recommend avoiding complex `pragma` statement."""
WIKI_RECOMMENDATION = """
Deploy with any of the following Solidity versions:
-- 0.5.11 - 0.5.13,
-- 0.5.15 - 0.5.17,
-- 0.6.8,
-- 0.6.10 - 0.6.11.
+- 0.5.16 - 0.5.17
+- 0.6.11 - 0.6.12
+- 0.7.5 - 0.7.6
Use a simple pragma version that allows any of these versions.
Consider using the latest version of Solidity for testing."""
@@ -47,7 +46,7 @@
LESS_THAN_TXT = "uses lesser than"
TOO_RECENT_VERSION_TXT = (
- "necessitates a version too recent to be trusted. Consider deploying with 0.6.11"
+ "necessitates a version too recent to be trusted. Consider deploying with 0.6.12/0.7.6"
)
BUGGY_VERSION_TXT = (
"is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)"
@@ -55,15 +54,12 @@
# Indicates the allowed versions. Must be formatted in increasing order.
ALLOWED_VERSIONS = [
- "0.5.11",
- "0.5.12",
- "0.5.13",
- "0.5.15",
"0.5.16",
"0.5.17",
- "0.6.8",
- "0.6.10",
"0.6.11",
+ "0.6.12",
+ "0.7.5",
+ "0.7.6",
]
# Indicates the versions that should not be used.
diff --git a/slither/detectors/statements/array_length_assignment.py b/slither/detectors/statements/array_length_assignment.py
--- a/slither/detectors/statements/array_length_assignment.py
+++ b/slither/detectors/statements/array_length_assignment.py
@@ -103,6 +103,9 @@
Detect array length assignments
"""
results = []
+ # Starting from 0.6 .length is read only
+ if self.slither.solc_version >= "0.6.":
+ return results
for contract in self.contracts:
array_length_assignments = detect_array_length_assignment(contract)
if array_length_assignments:
| {"golden_diff": "diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py\n--- a/slither/detectors/attributes/incorrect_solc.py\n+++ b/slither/detectors/attributes/incorrect_solc.py\n@@ -35,10 +35,9 @@\n We also recommend avoiding complex `pragma` statement.\"\"\"\n WIKI_RECOMMENDATION = \"\"\"\n Deploy with any of the following Solidity versions:\n-- 0.5.11 - 0.5.13,\n-- 0.5.15 - 0.5.17,\n-- 0.6.8,\n-- 0.6.10 - 0.6.11.\n+- 0.5.16 - 0.5.17\n+- 0.6.11 - 0.6.12\n+- 0.7.5 - 0.7.6\n Use a simple pragma version that allows any of these versions.\n Consider using the latest version of Solidity for testing.\"\"\"\n \n@@ -47,7 +46,7 @@\n LESS_THAN_TXT = \"uses lesser than\"\n \n TOO_RECENT_VERSION_TXT = (\n- \"necessitates a version too recent to be trusted. Consider deploying with 0.6.11\"\n+ \"necessitates a version too recent to be trusted. Consider deploying with 0.6.12/0.7.6\"\n )\n BUGGY_VERSION_TXT = (\n \"is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)\"\n@@ -55,15 +54,12 @@\n \n # Indicates the allowed versions. Must be formatted in increasing order.\n ALLOWED_VERSIONS = [\n- \"0.5.11\",\n- \"0.5.12\",\n- \"0.5.13\",\n- \"0.5.15\",\n \"0.5.16\",\n \"0.5.17\",\n- \"0.6.8\",\n- \"0.6.10\",\n \"0.6.11\",\n+ \"0.6.12\",\n+ \"0.7.5\",\n+ \"0.7.6\",\n ]\n \n # Indicates the versions that should not be used.\ndiff --git a/slither/detectors/statements/array_length_assignment.py b/slither/detectors/statements/array_length_assignment.py\n--- a/slither/detectors/statements/array_length_assignment.py\n+++ b/slither/detectors/statements/array_length_assignment.py\n@@ -103,6 +103,9 @@\n Detect array length assignments\n \"\"\"\n results = []\n+ # Starting from 0.6 .length is read only\n+ if self.slither.solc_version >= \"0.6.\":\n+ return results\n for contract in self.contracts:\n array_length_assignments = detect_array_length_assignment(contract)\n if array_length_assignments:\n", "issue": "Update solc-version detector with Solidity 0.7\nCurrently, the recommended versions are:\r\n```\r\n- 0.5.11 - 0.5.13,\r\n- 0.5.15 - 0.5.17,\r\n- 0.6.8,\r\n- 0.6.10 - 0.6.11. Use a simple pragma version that allows any of these versions. \r\n```\r\n\r\nWe need to review the 0.7.x branch and update the detector (including 0.6/0.8)\n", "before_files": [{"content": "\"\"\"\n Check if an incorrect version of solc is used\n\"\"\"\n\nimport re\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nfrom slither.formatters.attributes.incorrect_solc import custom_format\n\n# group:\n# 0: ^ > >= < <= (optional)\n# 1: ' ' (optional)\n# 2: version number\n# 3: version number\n# 4: version number\n\n# pylint: disable=anomalous-backslash-in-string\nPATTERN = re.compile(\"(\\^|>|>=|<|<=)?([ ]+)?(\\d+)\\.(\\d+)\\.(\\d+)\")\n\n\nclass IncorrectSolc(AbstractDetector):\n \"\"\"\n Check if an old version of solc is used\n \"\"\"\n\n ARGUMENT = \"solc-version\"\n HELP = \"Incorrect Solidity version\"\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity\"\n\n WIKI_TITLE = \"Incorrect versions of Solidity\"\n WIKI_DESCRIPTION = \"\"\"\n`solc` frequently releases new compiler versions. 
Using an old version prevents access to new Solidity security checks.\nWe also recommend avoiding complex `pragma` statement.\"\"\"\n WIKI_RECOMMENDATION = \"\"\"\nDeploy with any of the following Solidity versions:\n- 0.5.11 - 0.5.13,\n- 0.5.15 - 0.5.17,\n- 0.6.8,\n- 0.6.10 - 0.6.11.\nUse a simple pragma version that allows any of these versions.\nConsider using the latest version of Solidity for testing.\"\"\"\n\n COMPLEX_PRAGMA_TXT = \"is too complex\"\n OLD_VERSION_TXT = \"allows old versions\"\n LESS_THAN_TXT = \"uses lesser than\"\n\n TOO_RECENT_VERSION_TXT = (\n \"necessitates a version too recent to be trusted. Consider deploying with 0.6.11\"\n )\n BUGGY_VERSION_TXT = (\n \"is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)\"\n )\n\n # Indicates the allowed versions. Must be formatted in increasing order.\n ALLOWED_VERSIONS = [\n \"0.5.11\",\n \"0.5.12\",\n \"0.5.13\",\n \"0.5.15\",\n \"0.5.16\",\n \"0.5.17\",\n \"0.6.8\",\n \"0.6.10\",\n \"0.6.11\",\n ]\n\n # Indicates the versions that should not be used.\n BUGGY_VERSIONS = [\n \"0.4.22\",\n \"^0.4.22\",\n \"0.5.5\",\n \"^0.5.5\",\n \"0.5.6\",\n \"^0.5.6\",\n \"0.5.14\",\n \"^0.5.14\",\n \"0.6.9\",\n \"^0.6.9\",\n ]\n\n def _check_version(self, version):\n op = version[0]\n if op and op not in [\">\", \">=\", \"^\"]:\n return self.LESS_THAN_TXT\n version_number = \".\".join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n if list(map(int, version[2:])) > list(map(int, self.ALLOWED_VERSIONS[-1].split(\".\"))):\n return self.TOO_RECENT_VERSION_TXT\n return self.OLD_VERSION_TXT\n return None\n\n def _check_pragma(self, version):\n if version in self.BUGGY_VERSIONS:\n return self.BUGGY_VERSION_TXT\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n return self._check_version(version)\n if len(versions) == 2:\n version_left = versions[0]\n version_right = versions[1]\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [\n (\"<\", \"\", \"0\", \"5\", \"0\"),\n (\"<\", \"\", \"0\", \"6\", \"0\"),\n (\"<\", \"\", \"0\", \"7\", \"0\"),\n ]:\n return self.COMPLEX_PRAGMA_TXT\n return self._check_version(version_left)\n return self.COMPLEX_PRAGMA_TXT\n\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n :return: Returns the relevant JSON data for the findings.\n \"\"\"\n # Detect all version related pragmas and check if they are disallowed.\n results = []\n pragma = self.slither.pragma_directives\n disallowed_pragmas = []\n\n for p in pragma:\n # Skip any pragma directives which do not refer to version\n if len(p.directive) < 1 or p.directive[0] != \"solidity\":\n continue\n\n # This is version, so we test if this is disallowed.\n reason = self._check_pragma(p.version)\n if reason:\n disallowed_pragmas.append((reason, p))\n\n # If we found any disallowed pragmas, we output our findings.\n if disallowed_pragmas:\n for (reason, p) in disallowed_pragmas:\n info = [\"Pragma version\", p, f\" {reason}\\n\"]\n\n json = self.generate_result(info)\n\n results.append(json)\n\n if self.slither.crytic_compile:\n if self.slither.crytic_compile.compiler_version:\n if (\n self.slither.crytic_compile.compiler_version.version\n not in self.ALLOWED_VERSIONS\n ):\n info = [\n \"solc-\",\n self.slither.crytic_compile.compiler_version.version,\n \" is not recommended for deployment\\n\",\n ]\n\n json = self.generate_result(info)\n\n # TODO: Once crytic-compile adds config file info, 
add a source mapping element pointing to\n # the line in the config that specifies the problematic version of solc\n\n results.append(json)\n\n return results\n\n @staticmethod\n def _format(slither, result):\n custom_format(slither, result)\n", "path": "slither/detectors/attributes/incorrect_solc.py"}, {"content": "\"\"\"\nModule detecting assignment of array length\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nfrom slither.core.cfg.node import NodeType\nfrom slither.slithir.operations import Assignment, Length\nfrom slither.slithir.variables.reference import ReferenceVariable\nfrom slither.slithir.operations.binary import Binary\nfrom slither.analyses.data_dependency.data_dependency import is_tainted\n\n\ndef detect_array_length_assignment(contract):\n \"\"\"\n Detects and returns all nodes which assign array length.\n :param contract: Contract to detect assignment within.\n :return: A list of tuples with (Variable, node) where Variable references an array whose length was set by node.\n \"\"\"\n\n # Create our result set.\n results = set()\n\n # Loop for each function and modifier.\n # pylint: disable=too-many-nested-blocks\n for function in contract.functions_and_modifiers_declared:\n # Define a set of reference variables which refer to array length.\n array_length_refs = set()\n\n # Loop for every node in this function, looking for expressions where array length references are made,\n # and subsequent expressions where array length references are assigned to.\n for node in function.nodes:\n if node.type == NodeType.EXPRESSION:\n for ir in node.irs:\n\n # First we look for the member access for 'length', for which a reference is created.\n # We add the reference to our list of array length references.\n if isinstance(ir, Length): # a\n # if ir.variable_right == \"length\":\n array_length_refs.add(ir.lvalue)\n\n # If we have an assignment/binary operation, verify the left side refers to a reference variable\n # which is in our list or array length references. (Array length is being assigned to).\n elif isinstance(ir, (Assignment, Binary)):\n if isinstance(ir.lvalue, ReferenceVariable):\n if ir.lvalue in array_length_refs and any(\n is_tainted(v, contract) for v in ir.read\n ):\n # the taint is not precise enough yet\n # as a result, REF_0 = REF_0 + 1\n # where REF_0 points to a LENGTH operation\n # is considered as tainted\n if ir.lvalue in ir.read:\n continue\n results.add(node)\n break\n\n # Return the resulting set of nodes which set array length.\n return results\n\n\nclass ArrayLengthAssignment(AbstractDetector):\n \"\"\"\n Array length assignment\n \"\"\"\n\n ARGUMENT = \"controlled-array-length\"\n HELP = \"Tainted array length assignment\"\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation#array-length-assignment\"\n\n WIKI_TITLE = \"Array Length Assignment\"\n WIKI_DESCRIPTION = \"\"\"Detects the direct assignment of an array's length.\"\"\"\n WIKI_EXPLOIT_SCENARIO = \"\"\"\n```solidity\ncontract A {\n\tuint[] testArray; // dynamic size array\n\n\tfunction f(uint usersCount) public {\n\t\t// ...\n\t\ttestArray.length = usersCount;\n\t\t// ...\n\t}\n\n\tfunction g(uint userIndex, uint val) public {\n\t\t// ...\n\t\ttestArray[userIndex] = val;\n\t\t// ...\n\t}\n}\n```\nContract storage/state-variables are indexed by a 256-bit integer.\nThe user can set the array length to `2**256-1` in order to index all storage slots. 
\nIn the example above, one could call the function `f` to set the array length, then call the function `g` to control any storage slot desired. \nNote that storage slots here are indexed via a hash of the indexers; nonetheless, all storage will still be accessible and could be controlled by the attacker.\"\"\"\n\n WIKI_RECOMMENDATION = \"\"\"Do not allow array lengths to be set directly set; instead, opt to add values as needed.\nOtherwise, thoroughly review the contract to ensure a user-controlled variable cannot reach an array length assignment.\"\"\"\n\n def _detect(self):\n \"\"\"\n Detect array length assignments\n \"\"\"\n results = []\n for contract in self.contracts:\n array_length_assignments = detect_array_length_assignment(contract)\n if array_length_assignments:\n contract_info = [\n contract,\n \" contract sets array length with a user-controlled value:\\n\",\n ]\n for node in array_length_assignments:\n node_info = contract_info + [\"\\t- \", node, \"\\n\"]\n res = self.generate_result(node_info)\n results.append(res)\n\n return results\n", "path": "slither/detectors/statements/array_length_assignment.py"}]} | 3,744 | 663 |
gh_patches_debug_12175 | rasdani/github-patches | git_diff | liqd__a4-opin-529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wording change for accept page
The page for accepting invites to private projects currently is a bit too straight-forward. ;) Let’s add some information
The headline should be changed to: Do you want to join “<project name>” ?
Then there should be another line underneath the headline set in our standard paragraph text style:
You were invited by the initiator of the project. If you accept you will be able to participate in the project. If you decline the invitation, you can also ask for membership at a later time.
The English label for the reject button should be changed to “decline”
The reject button looks strange. I think the button should be styled as a regular red button. Or is the small font-size on purpose?

</issue>
<code>
[start of euth/projects/rules.py]
1 import rules
2 from rules.predicates import is_superuser
3
4 from euth.organisations.predicates import is_initiator
5
6 from .predicates import is_live, is_member, is_public
7
8 rules.add_perm('euth_projects.edit_project',
9 is_superuser | is_initiator)
10
11
12 rules.add_perm('projects.view_project',
13 is_superuser | is_initiator |
14 ((is_public | is_member) & is_live))
15
[end of euth/projects/rules.py]
[start of euth/projects/views.py]
1 from django.shortcuts import redirect
2 from django.views import generic
3 from rules.contrib import views as rules_views
4
5 from . import mixins, models
6
7
8 class ProjectDetailView(rules_views.PermissionRequiredMixin,
9 mixins.PhaseDispatchMixin,
10 generic.DetailView):
11
12 model = models.Project
13 permission_required = 'projects.view_project'
14
15 @property
16 def raise_exception(self):
17 return self.request.user.is_authenticated()
18
19 def handle_no_permission(self):
20 """
21 Check if user clould join
22 """
23 membership_impossible = (
24 not self.request.user.is_authenticated()
25 or self.project.is_draft
26 or self.project.has_member(self.request.user)
27 )
28
29 if membership_impossible:
30 return super().handle_no_permission()
31 else:
32 return self._redirect_membership_request()
33
34 def _redirect_membership_request(self):
35 return redirect('memberships-request',
36 project_slug=self.project.slug)
37
38 @property
39 def project(self):
40 """
41 Emulate ProjectMixin interface for template sharing.
42 """
43 return self.get_object()
44
[end of euth/projects/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/projects/rules.py b/euth/projects/rules.py
--- a/euth/projects/rules.py
+++ b/euth/projects/rules.py
@@ -9,6 +9,6 @@
is_superuser | is_initiator)
-rules.add_perm('projects.view_project',
+rules.add_perm('euth_projects.view_project',
is_superuser | is_initiator |
((is_public | is_member) & is_live))
diff --git a/euth/projects/views.py b/euth/projects/views.py
--- a/euth/projects/views.py
+++ b/euth/projects/views.py
@@ -10,7 +10,7 @@
generic.DetailView):
model = models.Project
- permission_required = 'projects.view_project'
+ permission_required = 'euth_projects.view_project'
@property
def raise_exception(self):
| {"golden_diff": "diff --git a/euth/projects/rules.py b/euth/projects/rules.py\n--- a/euth/projects/rules.py\n+++ b/euth/projects/rules.py\n@@ -9,6 +9,6 @@\n is_superuser | is_initiator)\n \n \n-rules.add_perm('projects.view_project',\n+rules.add_perm('euth_projects.view_project',\n is_superuser | is_initiator |\n ((is_public | is_member) & is_live))\ndiff --git a/euth/projects/views.py b/euth/projects/views.py\n--- a/euth/projects/views.py\n+++ b/euth/projects/views.py\n@@ -10,7 +10,7 @@\n generic.DetailView):\n \n model = models.Project\n- permission_required = 'projects.view_project'\n+ permission_required = 'euth_projects.view_project'\n \n @property\n def raise_exception(self):\n", "issue": "Wording change for accept page\nThe page for accepting invites to private projects currently is a bit too straight-forward. ;) Let\u2019s add some information\r\n\r\nThe headline should be changed to: Do you want to join \u201c<project name>\u201d ?\r\nThen there should be another line underneath the headline set in our standard paragraph text style:\r\nYou were invited by the initiator of the project. If you accept you will be able to participate in the project. If you decline the invitation, you can also ask for membership at a later time.\r\n\r\nThe English label for the reject button should be changed to \u201cdecline\u201d\r\nThe reject button looks strange. I think the button should be styled as a regular red button. Or is the small font-size on purpose?\r\n\r\n\r\n\n", "before_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom euth.organisations.predicates import is_initiator\n\nfrom .predicates import is_live, is_member, is_public\n\nrules.add_perm('euth_projects.edit_project',\n is_superuser | is_initiator)\n\n\nrules.add_perm('projects.view_project',\n is_superuser | is_initiator |\n ((is_public | is_member) & is_live))\n", "path": "euth/projects/rules.py"}, {"content": "from django.shortcuts import redirect\nfrom django.views import generic\nfrom rules.contrib import views as rules_views\n\nfrom . import mixins, models\n\n\nclass ProjectDetailView(rules_views.PermissionRequiredMixin,\n mixins.PhaseDispatchMixin,\n generic.DetailView):\n\n model = models.Project\n permission_required = 'projects.view_project'\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n def handle_no_permission(self):\n \"\"\"\n Check if user clould join\n \"\"\"\n membership_impossible = (\n not self.request.user.is_authenticated()\n or self.project.is_draft\n or self.project.has_member(self.request.user)\n )\n\n if membership_impossible:\n return super().handle_no_permission()\n else:\n return self._redirect_membership_request()\n\n def _redirect_membership_request(self):\n return redirect('memberships-request',\n project_slug=self.project.slug)\n\n @property\n def project(self):\n \"\"\"\n Emulate ProjectMixin interface for template sharing.\n \"\"\"\n return self.get_object()\n", "path": "euth/projects/views.py"}]} | 1,195 | 180 |
gh_patches_debug_43471 | rasdani/github-patches | git_diff | google__mobly-361 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong msg in snippet client's exception
Expected device id, but got snippet client `repr` in an error msg.
```
File "mobly/controllers/android_device_lib/callback_handler.py", line 94, in waitAndGet
self._id, event_name, timeout_ms)
File "mobly/controllers/android_device_lib/jsonrpc_client_base.py", line 295, in rpc_call
return self._rpc(name, *args)
File "mobly/controllers/android_device_lib/jsonrpc_client_base.py", line 274, in _rpc
ProtocolError.NO_RESPONSE_FROM_SERVER)
ProtocolError: <mobly.controllers.android_device_lib.snippet_client.SnippetClient object at 0x20c4490> No response from server.
```
</issue>
<code>
[start of mobly/controllers/android_device_lib/snippet_client.py]
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """JSON RPC interface to Mobly Snippet Lib."""
15 import logging
16 import re
17 import time
18
19 from mobly import utils
20 from mobly.controllers.android_device_lib import adb
21 from mobly.controllers.android_device_lib import errors
22 from mobly.controllers.android_device_lib import jsonrpc_client_base
23
24 _INSTRUMENTATION_RUNNER_PACKAGE = (
25 'com.google.android.mobly.snippet.SnippetRunner')
26
27 # Major version of the launch and communication protocol being used by this
28 # client.
29 # Incrementing this means that compatibility with clients using the older
30 # version is broken. Avoid breaking compatibility unless there is no other
31 # choice.
32 _PROTOCOL_MAJOR_VERSION = 1
33
34 # Minor version of the launch and communication protocol.
35 # Increment this when new features are added to the launch and communication
36 # protocol that are backwards compatible with the old protocol and don't break
37 # existing clients.
38 _PROTOCOL_MINOR_VERSION = 0
39
40 _LAUNCH_CMD = ('%s am instrument -w -e action start %s/' +
41 _INSTRUMENTATION_RUNNER_PACKAGE)
42
43 _STOP_CMD = (
44 'am instrument -w -e action stop %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)
45
46 # Test that uses UiAutomation requires the shell session to be maintained while
47 # test is in progress. However, this requirement does not hold for the test that
48 # deals with device USB disconnection (Once device disconnects, the shell
49 # session that started the instrument ends, and UiAutomation fails with error:
50 # "UiAutomation not connected"). To keep the shell session and redirect
51 # stdin/stdout/stderr, use "setsid" or "nohup" while launching the
52 # instrumentation test. Because these commands may not be available in every
53 # android system, try to use them only if exists.
54 _SETSID_COMMAND = 'setsid'
55
56 _NOHUP_COMMAND = 'nohup'
57
58
59 class ProtocolVersionError(jsonrpc_client_base.AppStartError):
60 """Raised when the protocol reported by the snippet is unknown."""
61
62
63 class SnippetClient(jsonrpc_client_base.JsonRpcClientBase):
64 """A client for interacting with snippet APKs using Mobly Snippet Lib.
65
66 See superclass documentation for a list of public attributes.
67
68 For a description of the launch protocols, see the documentation in
69 mobly-snippet-lib, SnippetRunner.java.
70 """
71
72 def __init__(self, package, ad):
73 """Initializes a SnippetClient.
74
75 Args:
76 package: (str) The package name of the apk where the snippets are
77 defined.
78 ad: (AndroidDevice) the device object associated with this client.
79 """
80 super(SnippetClient, self).__init__(app_name=package, ad=ad)
81 self.package = package
82 self._ad = ad
83 self._adb = ad.adb
84 self._proc = None
85
86 def start_app_and_connect(self):
87 """Overrides superclass. Launches a snippet app and connects to it."""
88 self._check_app_installed()
89
90 persists_shell_cmd = self._get_persist_command()
91 # Use info here so people can follow along with the snippet startup
92 # process. Starting snippets can be slow, especially if there are
93 # multiple, and this avoids the perception that the framework is hanging
94 # for a long time doing nothing.
95 self.log.info('Launching snippet apk %s with protocol %d.%d',
96 self.package, _PROTOCOL_MAJOR_VERSION,
97 _PROTOCOL_MINOR_VERSION)
98 cmd = _LAUNCH_CMD % (persists_shell_cmd, self.package)
99 start_time = time.time()
100 self._proc = self._do_start_app(cmd)
101
102 # Check protocol version and get the device port
103 line = self._read_protocol_line()
104 match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$', line)
105 if not match or match.group(1) != '1':
106 raise ProtocolVersionError(self._ad, line)
107
108 line = self._read_protocol_line()
109 match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)
110 if not match:
111 raise ProtocolVersionError(self._ad, line)
112 self.device_port = int(match.group(1))
113
114 # Forward the device port to a new host port, and connect to that port
115 self.host_port = utils.get_available_host_port()
116 self._adb.forward(
117 ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
118 self.connect()
119
120 # Yaaay! We're done!
121 self.log.debug('Snippet %s started after %.1fs on host port %s',
122 self.package, time.time() - start_time, self.host_port)
123
124 def restore_app_connection(self, port=None):
125 """Restores the app after device got reconnected.
126
127 Instead of creating new instance of the client:
128 - Uses the given port (or find a new available host_port if none is
129 given).
130 - Tries to connect to remote server with selected port.
131
132 Args:
133 port: If given, this is the host port from which to connect to remote
134 device port. If not provided, find a new available port as host
135 port.
136
137 Raises:
138 AppRestoreConnectionError: When the app was not able to be started.
139 """
140 self.host_port = port or utils.get_available_host_port()
141 self._adb.forward(
142 ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
143 try:
144 self.connect()
145 except:
146 # Failed to connect to app, something went wrong.
147 raise jsonrpc_client_base.AppRestoreConnectionError(self._ad
148 ('Failed to restore app connection for %s at host port %s, '
149 'device port %s'), self.package, self.host_port,
150 self.device_port)
151
152 # Because the previous connection was lost, update self._proc
153 self._proc = None
154 self._restore_event_client()
155
156 def stop_app(self):
157 # Kill the pending 'adb shell am instrument -w' process if there is one.
158 # Although killing the snippet apk would abort this process anyway, we
159 # want to call stop_standing_subprocess() to perform a health check,
160 # print the failure stack trace if there was any, and reap it from the
161 # process table.
162 self.log.debug('Stopping snippet apk %s', self.package)
163 try:
164 # Close the socket connection.
165 self.disconnect()
166 if self._proc:
167 utils.stop_standing_subprocess(self._proc)
168 out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')
169 if 'OK (0 tests)' not in out:
170 raise errors.DeviceError(self._ad,
171 'Failed to stop existing apk. Unexpected output: %s' % out)
172 finally:
173 # Always clean up the adb port
174 if self.host_port:
175 self._adb.forward(['--remove', 'tcp:%d' % self.host_port])
176
177 def _start_event_client(self):
178 """Overrides superclass."""
179 event_client = SnippetClient(package=self.package, ad=self)
180 event_client.host_port = self.host_port
181 event_client.device_port = self.device_port
182 event_client.connect(self.uid,
183 jsonrpc_client_base.JsonRpcCommand.CONTINUE)
184 return event_client
185
186 def _restore_event_client(self):
187 """Restores previously created event client."""
188 if not self._event_client:
189 self._event_client = self._start_event_client()
190 return
191 self._event_client.host_port = self.host_port
192 self._event_client.device_port = self.device_port
193 self._event_client.connect()
194
195 def _check_app_installed(self):
196 # Check that the Mobly Snippet app is installed.
197 out = self._adb.shell('pm list package')
198 if not utils.grep('^package:%s$' % self.package, out):
199 raise jsonrpc_client_base.AppStartError(
200 self._ad, '%s is not installed.' % self.package)
201 # Check that the app is instrumented.
202 out = self._adb.shell('pm list instrumentation')
203 matched_out = utils.grep('^instrumentation:%s/%s' %
204 (self.package,
205 _INSTRUMENTATION_RUNNER_PACKAGE), out)
206 if not matched_out:
207 raise jsonrpc_client_base.AppStartError(self._ad,
208 '%s is installed, but it is not instrumented.' % self.package)
209 match = re.search('^instrumentation:(.*)\/(.*) \(target=(.*)\)$',
210 matched_out[0])
211 target_name = match.group(3)
212 # Check that the instrumentation target is installed if it's not the
213 # same as the snippet package.
214 if target_name != self.package:
215 out = self._adb.shell('pm list package')
216 if not utils.grep('^package:%s$' % target_name, out):
217 raise jsonrpc_client_base.AppStartError(self._ad,
218 'Instrumentation target %s is not installed.' %
219 target_name)
220
221 def _do_start_app(self, launch_cmd):
222 adb_cmd = [adb.ADB]
223 if self._adb.serial:
224 adb_cmd += ['-s', self._adb.serial]
225 adb_cmd += ['shell', launch_cmd]
226 return utils.start_standing_subprocess(adb_cmd, shell=False)
227
228 def _read_protocol_line(self):
229 """Reads the next line of instrumentation output relevant to snippets.
230
231 This method will skip over lines that don't start with 'SNIPPET' or
232 'INSTRUMENTATION_RESULT'.
233
234 Returns:
235 (str) Next line of snippet-related instrumentation output, stripped.
236
237 Raises:
238 jsonrpc_client_base.AppStartError: If EOF is reached without any
239 protocol lines being read.
240 """
241 while True:
242 line = self._proc.stdout.readline().decode('utf-8')
243 if not line:
244 raise jsonrpc_client_base.AppStartError(self._ad,
245 'Unexpected EOF waiting for app to start')
246 # readline() uses an empty string to mark EOF, and a single newline
247 # to mark regular empty lines in the output. Don't move the strip()
248 # call above the truthiness check, or this method will start
249 # considering any blank output line to be EOF.
250 line = line.strip()
251 if (line.startswith('INSTRUMENTATION_RESULT:') or
252 line.startswith('SNIPPET ')):
253 self.log.debug(
254 'Accepted line from instrumentation output: "%s"', line)
255 return line
256 self.log.debug('Discarded line from instrumentation output: "%s"',
257 line)
258
259 def _get_persist_command(self):
260 """Check availability and return path of command if available."""
261 for command in [_SETSID_COMMAND, _NOHUP_COMMAND]:
262 try:
263 if command in self._adb.shell(['which',
264 command]).decode('utf-8'):
265 return command
266 except adb.AdbError:
267 continue
268 self.log.warning(
269 'No %s and %s commands available to launch instrument '
270 'persistently, tests that depend on UiAutomator and '
271 'at the same time performs USB disconnection may fail',
272 _SETSID_COMMAND, _NOHUP_COMMAND)
273 return ''
274
[end of mobly/controllers/android_device_lib/snippet_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mobly/controllers/android_device_lib/snippet_client.py b/mobly/controllers/android_device_lib/snippet_client.py
--- a/mobly/controllers/android_device_lib/snippet_client.py
+++ b/mobly/controllers/android_device_lib/snippet_client.py
@@ -144,9 +144,10 @@
self.connect()
except:
# Failed to connect to app, something went wrong.
- raise jsonrpc_client_base.AppRestoreConnectionError(self._ad
- ('Failed to restore app connection for %s at host port %s, '
- 'device port %s'), self.package, self.host_port,
+ raise jsonrpc_client_base.AppRestoreConnectionError(
+ self._ad(
+ 'Failed to restore app connection for %s at host port %s, '
+ 'device port %s'), self.package, self.host_port,
self.device_port)
# Because the previous connection was lost, update self._proc
@@ -167,7 +168,8 @@
utils.stop_standing_subprocess(self._proc)
out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')
if 'OK (0 tests)' not in out:
- raise errors.DeviceError(self._ad,
+ raise errors.DeviceError(
+ self._ad,
'Failed to stop existing apk. Unexpected output: %s' % out)
finally:
# Always clean up the adb port
@@ -176,7 +178,7 @@
def _start_event_client(self):
"""Overrides superclass."""
- event_client = SnippetClient(package=self.package, ad=self)
+ event_client = SnippetClient(package=self.package, ad=self._ad)
event_client.host_port = self.host_port
event_client.device_port = self.device_port
event_client.connect(self.uid,
@@ -204,7 +206,8 @@
(self.package,
_INSTRUMENTATION_RUNNER_PACKAGE), out)
if not matched_out:
- raise jsonrpc_client_base.AppStartError(self._ad,
+ raise jsonrpc_client_base.AppStartError(
+ self._ad,
'%s is installed, but it is not instrumented.' % self.package)
match = re.search('^instrumentation:(.*)\/(.*) \(target=(.*)\)$',
matched_out[0])
@@ -214,8 +217,8 @@
if target_name != self.package:
out = self._adb.shell('pm list package')
if not utils.grep('^package:%s$' % target_name, out):
- raise jsonrpc_client_base.AppStartError(self._ad,
- 'Instrumentation target %s is not installed.' %
+ raise jsonrpc_client_base.AppStartError(
+ self._ad, 'Instrumentation target %s is not installed.' %
target_name)
def _do_start_app(self, launch_cmd):
@@ -241,8 +244,8 @@
while True:
line = self._proc.stdout.readline().decode('utf-8')
if not line:
- raise jsonrpc_client_base.AppStartError(self._ad,
- 'Unexpected EOF waiting for app to start')
+ raise jsonrpc_client_base.AppStartError(
+ self._ad, 'Unexpected EOF waiting for app to start')
# readline() uses an empty string to mark EOF, and a single newline
# to mark regular empty lines in the output. Don't move the strip()
# call above the truthiness check, or this method will start
| {"golden_diff": "diff --git a/mobly/controllers/android_device_lib/snippet_client.py b/mobly/controllers/android_device_lib/snippet_client.py\n--- a/mobly/controllers/android_device_lib/snippet_client.py\n+++ b/mobly/controllers/android_device_lib/snippet_client.py\n@@ -144,9 +144,10 @@\n self.connect()\n except:\n # Failed to connect to app, something went wrong.\n- raise jsonrpc_client_base.AppRestoreConnectionError(self._ad\n- ('Failed to restore app connection for %s at host port %s, '\n- 'device port %s'), self.package, self.host_port,\n+ raise jsonrpc_client_base.AppRestoreConnectionError(\n+ self._ad(\n+ 'Failed to restore app connection for %s at host port %s, '\n+ 'device port %s'), self.package, self.host_port,\n self.device_port)\n \n # Because the previous connection was lost, update self._proc\n@@ -167,7 +168,8 @@\n utils.stop_standing_subprocess(self._proc)\n out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')\n if 'OK (0 tests)' not in out:\n- raise errors.DeviceError(self._ad,\n+ raise errors.DeviceError(\n+ self._ad,\n 'Failed to stop existing apk. Unexpected output: %s' % out)\n finally:\n # Always clean up the adb port\n@@ -176,7 +178,7 @@\n \n def _start_event_client(self):\n \"\"\"Overrides superclass.\"\"\"\n- event_client = SnippetClient(package=self.package, ad=self)\n+ event_client = SnippetClient(package=self.package, ad=self._ad)\n event_client.host_port = self.host_port\n event_client.device_port = self.device_port\n event_client.connect(self.uid,\n@@ -204,7 +206,8 @@\n (self.package,\n _INSTRUMENTATION_RUNNER_PACKAGE), out)\n if not matched_out:\n- raise jsonrpc_client_base.AppStartError(self._ad,\n+ raise jsonrpc_client_base.AppStartError(\n+ self._ad,\n '%s is installed, but it is not instrumented.' % self.package)\n match = re.search('^instrumentation:(.*)\\/(.*) \\(target=(.*)\\)$',\n matched_out[0])\n@@ -214,8 +217,8 @@\n if target_name != self.package:\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % target_name, out):\n- raise jsonrpc_client_base.AppStartError(self._ad,\n- 'Instrumentation target %s is not installed.' %\n+ raise jsonrpc_client_base.AppStartError(\n+ self._ad, 'Instrumentation target %s is not installed.' %\n target_name)\n \n def _do_start_app(self, launch_cmd):\n@@ -241,8 +244,8 @@\n while True:\n line = self._proc.stdout.readline().decode('utf-8')\n if not line:\n- raise jsonrpc_client_base.AppStartError(self._ad,\n- 'Unexpected EOF waiting for app to start')\n+ raise jsonrpc_client_base.AppStartError(\n+ self._ad, 'Unexpected EOF waiting for app to start')\n # readline() uses an empty string to mark EOF, and a single newline\n # to mark regular empty lines in the output. 
Don't move the strip()\n # call above the truthiness check, or this method will start\n", "issue": "Wrong msg in snippet client's exception\nExpected device id, but got snippet client `repr` in an error msg.\r\n\r\n```\r\n File \"mobly/controllers/android_device_lib/callback_handler.py\", line 94, in waitAndGet\r\n self._id, event_name, timeout_ms)\r\n File \"mobly/controllers/android_device_lib/jsonrpc_client_base.py\", line 295, in rpc_call\r\n return self._rpc(name, *args)\r\n File \"mobly/controllers/android_device_lib/jsonrpc_client_base.py\", line 274, in _rpc\r\n ProtocolError.NO_RESPONSE_FROM_SERVER)\r\nProtocolError: <mobly.controllers.android_device_lib.snippet_client.SnippetClient object at 0x20c4490> No response from server.\r\n```\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"JSON RPC interface to Mobly Snippet Lib.\"\"\"\nimport logging\nimport re\nimport time\n\nfrom mobly import utils\nfrom mobly.controllers.android_device_lib import adb\nfrom mobly.controllers.android_device_lib import errors\nfrom mobly.controllers.android_device_lib import jsonrpc_client_base\n\n_INSTRUMENTATION_RUNNER_PACKAGE = (\n 'com.google.android.mobly.snippet.SnippetRunner')\n\n# Major version of the launch and communication protocol being used by this\n# client.\n# Incrementing this means that compatibility with clients using the older\n# version is broken. Avoid breaking compatibility unless there is no other\n# choice.\n_PROTOCOL_MAJOR_VERSION = 1\n\n# Minor version of the launch and communication protocol.\n# Increment this when new features are added to the launch and communication\n# protocol that are backwards compatible with the old protocol and don't break\n# existing clients.\n_PROTOCOL_MINOR_VERSION = 0\n\n_LAUNCH_CMD = ('%s am instrument -w -e action start %s/' +\n _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_STOP_CMD = (\n 'am instrument -w -e action stop %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n# Test that uses UiAutomation requires the shell session to be maintained while\n# test is in progress. However, this requirement does not hold for the test that\n# deals with device USB disconnection (Once device disconnects, the shell\n# session that started the instrument ends, and UiAutomation fails with error:\n# \"UiAutomation not connected\"). To keep the shell session and redirect\n# stdin/stdout/stderr, use \"setsid\" or \"nohup\" while launching the\n# instrumentation test. 
Because these commands may not be available in every\n# android system, try to use them only if exists.\n_SETSID_COMMAND = 'setsid'\n\n_NOHUP_COMMAND = 'nohup'\n\n\nclass ProtocolVersionError(jsonrpc_client_base.AppStartError):\n \"\"\"Raised when the protocol reported by the snippet is unknown.\"\"\"\n\n\nclass SnippetClient(jsonrpc_client_base.JsonRpcClientBase):\n \"\"\"A client for interacting with snippet APKs using Mobly Snippet Lib.\n\n See superclass documentation for a list of public attributes.\n\n For a description of the launch protocols, see the documentation in\n mobly-snippet-lib, SnippetRunner.java.\n \"\"\"\n\n def __init__(self, package, ad):\n \"\"\"Initializes a SnippetClient.\n\n Args:\n package: (str) The package name of the apk where the snippets are\n defined.\n ad: (AndroidDevice) the device object associated with this client.\n \"\"\"\n super(SnippetClient, self).__init__(app_name=package, ad=ad)\n self.package = package\n self._ad = ad\n self._adb = ad.adb\n self._proc = None\n\n def start_app_and_connect(self):\n \"\"\"Overrides superclass. Launches a snippet app and connects to it.\"\"\"\n self._check_app_installed()\n\n persists_shell_cmd = self._get_persist_command()\n # Use info here so people can follow along with the snippet startup\n # process. Starting snippets can be slow, especially if there are\n # multiple, and this avoids the perception that the framework is hanging\n # for a long time doing nothing.\n self.log.info('Launching snippet apk %s with protocol %d.%d',\n self.package, _PROTOCOL_MAJOR_VERSION,\n _PROTOCOL_MINOR_VERSION)\n cmd = _LAUNCH_CMD % (persists_shell_cmd, self.package)\n start_time = time.time()\n self._proc = self._do_start_app(cmd)\n\n # Check protocol version and get the device port\n line = self._read_protocol_line()\n match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$', line)\n if not match or match.group(1) != '1':\n raise ProtocolVersionError(self._ad, line)\n\n line = self._read_protocol_line()\n match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)\n if not match:\n raise ProtocolVersionError(self._ad, line)\n self.device_port = int(match.group(1))\n\n # Forward the device port to a new host port, and connect to that port\n self.host_port = utils.get_available_host_port()\n self._adb.forward(\n ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n self.connect()\n\n # Yaaay! We're done!\n self.log.debug('Snippet %s started after %.1fs on host port %s',\n self.package, time.time() - start_time, self.host_port)\n\n def restore_app_connection(self, port=None):\n \"\"\"Restores the app after device got reconnected.\n\n Instead of creating new instance of the client:\n - Uses the given port (or find a new available host_port if none is\n given).\n - Tries to connect to remote server with selected port.\n\n Args:\n port: If given, this is the host port from which to connect to remote\n device port. 
If not provided, find a new available port as host\n port.\n\n Raises:\n AppRestoreConnectionError: When the app was not able to be started.\n \"\"\"\n self.host_port = port or utils.get_available_host_port()\n self._adb.forward(\n ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n try:\n self.connect()\n except:\n # Failed to connect to app, something went wrong.\n raise jsonrpc_client_base.AppRestoreConnectionError(self._ad\n ('Failed to restore app connection for %s at host port %s, '\n 'device port %s'), self.package, self.host_port,\n self.device_port)\n\n # Because the previous connection was lost, update self._proc\n self._proc = None\n self._restore_event_client()\n\n def stop_app(self):\n # Kill the pending 'adb shell am instrument -w' process if there is one.\n # Although killing the snippet apk would abort this process anyway, we\n # want to call stop_standing_subprocess() to perform a health check,\n # print the failure stack trace if there was any, and reap it from the\n # process table.\n self.log.debug('Stopping snippet apk %s', self.package)\n try:\n # Close the socket connection.\n self.disconnect()\n if self._proc:\n utils.stop_standing_subprocess(self._proc)\n out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')\n if 'OK (0 tests)' not in out:\n raise errors.DeviceError(self._ad,\n 'Failed to stop existing apk. Unexpected output: %s' % out)\n finally:\n # Always clean up the adb port\n if self.host_port:\n self._adb.forward(['--remove', 'tcp:%d' % self.host_port])\n\n def _start_event_client(self):\n \"\"\"Overrides superclass.\"\"\"\n event_client = SnippetClient(package=self.package, ad=self)\n event_client.host_port = self.host_port\n event_client.device_port = self.device_port\n event_client.connect(self.uid,\n jsonrpc_client_base.JsonRpcCommand.CONTINUE)\n return event_client\n\n def _restore_event_client(self):\n \"\"\"Restores previously created event client.\"\"\"\n if not self._event_client:\n self._event_client = self._start_event_client()\n return\n self._event_client.host_port = self.host_port\n self._event_client.device_port = self.device_port\n self._event_client.connect()\n\n def _check_app_installed(self):\n # Check that the Mobly Snippet app is installed.\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % self.package, out):\n raise jsonrpc_client_base.AppStartError(\n self._ad, '%s is not installed.' % self.package)\n # Check that the app is instrumented.\n out = self._adb.shell('pm list instrumentation')\n matched_out = utils.grep('^instrumentation:%s/%s' %\n (self.package,\n _INSTRUMENTATION_RUNNER_PACKAGE), out)\n if not matched_out:\n raise jsonrpc_client_base.AppStartError(self._ad,\n '%s is installed, but it is not instrumented.' % self.package)\n match = re.search('^instrumentation:(.*)\\/(.*) \\(target=(.*)\\)$',\n matched_out[0])\n target_name = match.group(3)\n # Check that the instrumentation target is installed if it's not the\n # same as the snippet package.\n if target_name != self.package:\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % target_name, out):\n raise jsonrpc_client_base.AppStartError(self._ad,\n 'Instrumentation target %s is not installed.' 
%\n target_name)\n\n def _do_start_app(self, launch_cmd):\n adb_cmd = [adb.ADB]\n if self._adb.serial:\n adb_cmd += ['-s', self._adb.serial]\n adb_cmd += ['shell', launch_cmd]\n return utils.start_standing_subprocess(adb_cmd, shell=False)\n\n def _read_protocol_line(self):\n \"\"\"Reads the next line of instrumentation output relevant to snippets.\n\n This method will skip over lines that don't start with 'SNIPPET' or\n 'INSTRUMENTATION_RESULT'.\n\n Returns:\n (str) Next line of snippet-related instrumentation output, stripped.\n\n Raises:\n jsonrpc_client_base.AppStartError: If EOF is reached without any\n protocol lines being read.\n \"\"\"\n while True:\n line = self._proc.stdout.readline().decode('utf-8')\n if not line:\n raise jsonrpc_client_base.AppStartError(self._ad,\n 'Unexpected EOF waiting for app to start')\n # readline() uses an empty string to mark EOF, and a single newline\n # to mark regular empty lines in the output. Don't move the strip()\n # call above the truthiness check, or this method will start\n # considering any blank output line to be EOF.\n line = line.strip()\n if (line.startswith('INSTRUMENTATION_RESULT:') or\n line.startswith('SNIPPET ')):\n self.log.debug(\n 'Accepted line from instrumentation output: \"%s\"', line)\n return line\n self.log.debug('Discarded line from instrumentation output: \"%s\"',\n line)\n\n def _get_persist_command(self):\n \"\"\"Check availability and return path of command if available.\"\"\"\n for command in [_SETSID_COMMAND, _NOHUP_COMMAND]:\n try:\n if command in self._adb.shell(['which',\n command]).decode('utf-8'):\n return command\n except adb.AdbError:\n continue\n self.log.warning(\n 'No %s and %s commands available to launch instrument '\n 'persistently, tests that depend on UiAutomator and '\n 'at the same time performs USB disconnection may fail',\n _SETSID_COMMAND, _NOHUP_COMMAND)\n return ''\n", "path": "mobly/controllers/android_device_lib/snippet_client.py"}]} | 3,976 | 777 |
gh_patches_debug_27470 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1530 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
New keys causing problems in US-SPP
US-SPP is missing from the map due to changes in their data keys.
Error
```
Traceback (most recent call last):
File "/home/feeder/lib/fetch_data.py", line 131, in launch_parsers
**parser_kwargs)
File "/home/contrib/parsers/US_SPP.py", line 122, in fetch_production
processed_data = data_processor(raw_data, logger)
File "/home/contrib/parsers/US_SPP.py", line 71, in data_processor
production['coal'] = production['Coal Market'] + production['Coal Self']
KeyError: 'Coal Market'
```
Warning
```
New column 'Coal' present in US-SPP data source.
```
</issue>
<code>
[start of parsers/US_SPP.py]
1 #!usr/bin/env python3
2
3 """Parser for the Southwest Power Pool area of the United States."""
4
5 from dateutil import parser
6 from io import StringIO
7 from logging import getLogger
8 import pandas as pd
9 import requests
10
11 GENERATION_URL = 'https://marketplace.spp.org/public-data-api/gen-mix/asFile'
12
13 EXCHANGE_URL = 'https://marketplace.spp.org/public-data-api/interchange-trend/asFile'
14
15 MAPPING = {'Wind': 'wind',
16 'Nuclear': 'nuclear',
17 'Hydro': 'hydro',
18 'Solar': 'solar',
19 'Natural Gas': 'gas',
20 'Diesel Fuel Oil': 'oil',
21 'Waste Disposal Services': 'biomass'
22 }
23
24 TIE_MAPPING = {'US-MISO->US-SPP': ['AMRN', 'DPC', 'GRE', 'MDU', 'MEC', 'NSP', 'OTP']}
25
26 # NOTE
27 # Data sources return timestamps in GMT.
28 # Energy storage situation unclear as of 16/03/2018, likely to change quickly in future.
29
30
31 def get_data(url, session=None):
32 """Returns a pandas dataframe."""
33
34 s=session or requests.Session()
35 req = s.get(url, verify=False)
36 df = pd.read_csv(StringIO(req.text))
37
38 return df
39
40
41 def data_processor(df, logger):
42 """
43 Takes a dataframe and logging instance as input.
44     Checks for new generation types and logs a warning if any are found.
45 Parses the dataframe row by row removing unneeded keys.
46 Returns a list of 2 element tuples, each containing a datetime object
47 and production dictionary.
48 """
49
50 # Remove leading whitespace in column headers.
51 df.columns = df.columns.str.strip()
52
53 keys_to_remove = {'Coal Market', 'Coal Self', 'GMT MKT Interval', 'Average Actual Load',
54 'Other', 'Waste Heat'}
55
56 # Check for new generation columns.
57 known_keys = MAPPING.keys() | keys_to_remove
58 column_headers = set(df.columns)
59
60 unknown_keys = column_headers - known_keys
61
62 for heading in unknown_keys:
63 logger.warning('New column \'{}\' present in US-SPP data source.'.format(
64 heading), extra={'key': 'US-SPP'})
65
66 keys_to_remove = keys_to_remove | unknown_keys
67
68 processed_data = []
69 for index, row in df.iterrows():
70 production = row.to_dict()
71 production['coal'] = production['Coal Market'] + production['Coal Self']
72
73 extra_unknowns = sum([production[k] for k in unknown_keys])
74 production['unknown'] = production['Other'] + production['Waste Heat'] + extra_unknowns
75
76 dt_aware = parser.parse(production['GMT MKT Interval'])
77
78 for k in keys_to_remove:
79 production.pop(k, None)
80
81 mapped_production = {MAPPING.get(k,k):v for k,v in production.items()}
82
83 processed_data.append((dt_aware, mapped_production))
84
85 return processed_data
86
87
88 def fetch_production(zone_key = 'US-SPP', session=None, target_datetime=None, logger=getLogger(__name__)):
89 """
90 Requests the last known production mix (in MW) of a given zone
91 Arguments:
92 zone_key (optional) -- used in case a parser is able to fetch multiple zones
93 session (optional) -- request session passed in order to re-use an existing session
94 Return:
95 A dictionary in the form:
96 {
97 'zoneKey': 'FR',
98 'datetime': '2017-01-01T00:00:00Z',
99 'production': {
100 'biomass': 0.0,
101 'coal': 0.0,
102 'gas': 0.0,
103 'hydro': 0.0,
104 'nuclear': null,
105 'oil': 0.0,
106 'solar': 0.0,
107 'wind': 0.0,
108 'geothermal': 0.0,
109 'unknown': 0.0
110 },
111 'storage': {
112 'hydro': -10.0,
113 },
114 'source': 'mysource.com'
115 }
116 """
117
118 if target_datetime is not None:
119 raise NotImplementedError('This parser is not yet able to parse past dates')
120
121 raw_data = get_data(GENERATION_URL, session=session)
122 processed_data = data_processor(raw_data, logger)
123
124 data = []
125 for item in processed_data:
126 datapoint = {
127 'zoneKey': zone_key,
128 'datetime': item[0],
129 'production': item[1],
130 'storage': {},
131 'source': 'spp.org'
132 }
133 data.append(datapoint)
134
135 return data
136
137
138 # NOTE disabled until discrepancy in MISO SPP flows is resolved.
139 def fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=getLogger(__name__)):
140 """
141 Requests the last 24 hours of power exchange (in MW) between two zones
142 Arguments:
143 zone_key1 -- the first zone
144 zone_key2 -- the second zone; order of the two zones in params doesn't matter
145 session (optional) -- request session passed in order to re-use an existing session
146 Return:
147 A list of dictionaries in the form:
148 {
149 'sortedZoneKeys': 'DK->NO',
150 'datetime': '2017-01-01T00:00:00Z',
151 'netFlow': 0.0,
152 'source': 'mysource.com'
153 }
154 where net flow is from DK into NO
155 """
156
157 if target_datetime:
158 raise NotImplementedError('This parser is not yet able to parse past dates')
159
160 raw_data = get_data(EXCHANGE_URL, session=session)
161 sorted_codes = '->'.join(sorted([zone_key1, zone_key2]))
162
163 try:
164 exchange_ties = TIE_MAPPING[sorted_codes]
165 except KeyError as e:
166 raise NotImplementedError('The exchange {} is not implemented'.format(sorted_codes))
167
168 # TODO check glossary for flow direction.
169
170 exchange_data = []
171 for index, row in raw_data.iterrows():
172 all_exchanges = row.to_dict()
173
174 dt_aware = parser.parse(all_exchanges['GMTTime'])
175
176 flows = [all_exchanges[tie] for tie in exchange_ties]
177 netflow = sum(flows)
178
179 exchange = {
180 'sortedZoneKeys': sorted_codes,
181 'datetime': dt_aware,
182 'netFlow': netflow,
183 'source': 'spp.org'
184 }
185
186 exchange_data.append(exchange)
187
188 return exchange_data
189
190
191 if __name__ == '__main__':
192 print('fetch_production() -> ')
193 print(fetch_production())
194 # print('fetch_exchange() -> ')
195 # print(fetch_exchange('US-MISO', 'US-SPP'))
196
[end of parsers/US_SPP.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsers/US_SPP.py b/parsers/US_SPP.py
--- a/parsers/US_SPP.py
+++ b/parsers/US_SPP.py
@@ -18,7 +18,8 @@
'Solar': 'solar',
'Natural Gas': 'gas',
'Diesel Fuel Oil': 'oil',
- 'Waste Disposal Services': 'biomass'
+ 'Waste Disposal Services': 'biomass',
+ 'Coal': 'coal'
}
TIE_MAPPING = {'US-MISO->US-SPP': ['AMRN', 'DPC', 'GRE', 'MDU', 'MEC', 'NSP', 'OTP']}
@@ -50,8 +51,7 @@
# Remove leading whitespace in column headers.
df.columns = df.columns.str.strip()
- keys_to_remove = {'Coal Market', 'Coal Self', 'GMT MKT Interval', 'Average Actual Load',
- 'Other', 'Waste Heat'}
+ keys_to_remove = {'GMT MKT Interval', 'Average Actual Load', 'Other', 'Waste Heat'}
# Check for new generation columns.
known_keys = MAPPING.keys() | keys_to_remove
@@ -68,7 +68,6 @@
processed_data = []
for index, row in df.iterrows():
production = row.to_dict()
- production['coal'] = production['Coal Market'] + production['Coal Self']
extra_unknowns = sum([production[k] for k in unknown_keys])
production['unknown'] = production['Other'] + production['Waste Heat'] + extra_unknowns
| {"golden_diff": "diff --git a/parsers/US_SPP.py b/parsers/US_SPP.py\n--- a/parsers/US_SPP.py\n+++ b/parsers/US_SPP.py\n@@ -18,7 +18,8 @@\n 'Solar': 'solar',\n 'Natural Gas': 'gas',\n 'Diesel Fuel Oil': 'oil',\n- 'Waste Disposal Services': 'biomass'\n+ 'Waste Disposal Services': 'biomass',\n+ 'Coal': 'coal'\n }\n \n TIE_MAPPING = {'US-MISO->US-SPP': ['AMRN', 'DPC', 'GRE', 'MDU', 'MEC', 'NSP', 'OTP']}\n@@ -50,8 +51,7 @@\n # Remove leading whitespace in column headers.\n df.columns = df.columns.str.strip()\n \n- keys_to_remove = {'Coal Market', 'Coal Self', 'GMT MKT Interval', 'Average Actual Load',\n- 'Other', 'Waste Heat'}\n+ keys_to_remove = {'GMT MKT Interval', 'Average Actual Load', 'Other', 'Waste Heat'}\n \n # Check for new generation columns.\n known_keys = MAPPING.keys() | keys_to_remove\n@@ -68,7 +68,6 @@\n processed_data = []\n for index, row in df.iterrows():\n production = row.to_dict()\n- production['coal'] = production['Coal Market'] + production['Coal Self']\n \n extra_unknowns = sum([production[k] for k in unknown_keys])\n production['unknown'] = production['Other'] + production['Waste Heat'] + extra_unknowns\n", "issue": "New keys causing problems in US-SPP\nUS-SPP is missing from the map due to changes in their data keys.\r\n\r\nError\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/feeder/lib/fetch_data.py\", line 131, in launch_parsers\r\n **parser_kwargs)\r\n File \"/home/contrib/parsers/US_SPP.py\", line 122, in fetch_production\r\n processed_data = data_processor(raw_data, logger)\r\n File \"/home/contrib/parsers/US_SPP.py\", line 71, in data_processor\r\n production['coal'] = production['Coal Market'] + production['Coal Self']\r\nKeyError: 'Coal Market'\r\n```\r\nWarning\r\n```\r\nNew column 'Coal' present in US-SPP data source.\r\n```\n", "before_files": [{"content": "#!usr/bin/env python3\n\n\"\"\"Parser for the Southwest Power Pool area of the United States.\"\"\"\n\nfrom dateutil import parser\nfrom io import StringIO\nfrom logging import getLogger\nimport pandas as pd\nimport requests\n\nGENERATION_URL = 'https://marketplace.spp.org/public-data-api/gen-mix/asFile'\n\nEXCHANGE_URL = 'https://marketplace.spp.org/public-data-api/interchange-trend/asFile'\n\nMAPPING = {'Wind': 'wind',\n 'Nuclear': 'nuclear',\n 'Hydro': 'hydro',\n 'Solar': 'solar',\n 'Natural Gas': 'gas',\n 'Diesel Fuel Oil': 'oil',\n 'Waste Disposal Services': 'biomass'\n }\n\nTIE_MAPPING = {'US-MISO->US-SPP': ['AMRN', 'DPC', 'GRE', 'MDU', 'MEC', 'NSP', 'OTP']}\n\n# NOTE\n# Data sources return timestamps in GMT.\n# Energy storage situation unclear as of 16/03/2018, likely to change quickly in future.\n\n\ndef get_data(url, session=None):\n \"\"\"Returns a pandas dataframe.\"\"\"\n\n s=session or requests.Session()\n req = s.get(url, verify=False)\n df = pd.read_csv(StringIO(req.text))\n\n return df\n\n\ndef data_processor(df, logger):\n \"\"\"\n Takes a dataframe and logging instance as input.\n Checks for new generation types and logs awarning if any are found.\n Parses the dataframe row by row removing unneeded keys.\n Returns a list of 2 element tuples, each containing a datetime object\n and production dictionary.\n \"\"\"\n\n # Remove leading whitespace in column headers.\n df.columns = df.columns.str.strip()\n\n keys_to_remove = {'Coal Market', 'Coal Self', 'GMT MKT Interval', 'Average Actual Load',\n 'Other', 'Waste Heat'}\n\n # Check for new generation columns.\n known_keys = MAPPING.keys() | keys_to_remove\n column_headers = set(df.columns)\n\n unknown_keys = column_headers - 
known_keys\n\n for heading in unknown_keys:\n logger.warning('New column \\'{}\\' present in US-SPP data source.'.format(\n heading), extra={'key': 'US-SPP'})\n\n keys_to_remove = keys_to_remove | unknown_keys\n\n processed_data = []\n for index, row in df.iterrows():\n production = row.to_dict()\n production['coal'] = production['Coal Market'] + production['Coal Self']\n\n extra_unknowns = sum([production[k] for k in unknown_keys])\n production['unknown'] = production['Other'] + production['Waste Heat'] + extra_unknowns\n\n dt_aware = parser.parse(production['GMT MKT Interval'])\n\n for k in keys_to_remove:\n production.pop(k, None)\n\n mapped_production = {MAPPING.get(k,k):v for k,v in production.items()}\n\n processed_data.append((dt_aware, mapped_production))\n\n return processed_data\n\n\ndef fetch_production(zone_key = 'US-SPP', session=None, target_datetime=None, logger=getLogger(__name__)):\n \"\"\"\n Requests the last known production mix (in MW) of a given zone\n Arguments:\n zone_key (optional) -- used in case a parser is able to fetch multiple zones\n session (optional) -- request session passed in order to re-use an existing session\n Return:\n A dictionary in the form:\n {\n 'zoneKey': 'FR',\n 'datetime': '2017-01-01T00:00:00Z',\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': null,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {\n 'hydro': -10.0,\n },\n 'source': 'mysource.com'\n }\n \"\"\"\n\n if target_datetime is not None:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n raw_data = get_data(GENERATION_URL, session=session)\n processed_data = data_processor(raw_data, logger)\n\n data = []\n for item in processed_data:\n datapoint = {\n 'zoneKey': zone_key,\n 'datetime': item[0],\n 'production': item[1],\n 'storage': {},\n 'source': 'spp.org'\n }\n data.append(datapoint)\n\n return data\n\n\n# NOTE disabled until discrepancy in MISO SPP flows is resolved.\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=getLogger(__name__)):\n \"\"\"\n Requests the last 24 hours of power exchange (in MW) between two zones\n Arguments:\n zone_key1 -- the first zone\n zone_key2 -- the second zone; order of the two zones in params doesn't matter\n session (optional) -- request session passed in order to re-use an existing session\n Return:\n A list of dictionaries in the form:\n {\n 'sortedZoneKeys': 'DK->NO',\n 'datetime': '2017-01-01T00:00:00Z',\n 'netFlow': 0.0,\n 'source': 'mysource.com'\n }\n where net flow is from DK into NO\n \"\"\"\n\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n raw_data = get_data(EXCHANGE_URL, session=session)\n sorted_codes = '->'.join(sorted([zone_key1, zone_key2]))\n\n try:\n exchange_ties = TIE_MAPPING[sorted_codes]\n except KeyError as e:\n raise NotImplementedError('The exchange {} is not implemented'.format(sorted_codes))\n\n # TODO check glossary for flow direction.\n\n exchange_data = []\n for index, row in raw_data.iterrows():\n all_exchanges = row.to_dict()\n\n dt_aware = parser.parse(all_exchanges['GMTTime'])\n\n flows = [all_exchanges[tie] for tie in exchange_ties]\n netflow = sum(flows)\n\n exchange = {\n 'sortedZoneKeys': sorted_codes,\n 'datetime': dt_aware,\n 'netFlow': netflow,\n 'source': 'spp.org'\n }\n\n exchange_data.append(exchange)\n\n return exchange_data\n\n\nif __name__ == '__main__':\n print('fetch_production() -> 
')\n print(fetch_production())\n # print('fetch_exchange() -> ')\n # print(fetch_exchange('US-MISO', 'US-SPP'))\n", "path": "parsers/US_SPP.py"}]} | 2,691 | 360 |
gh_patches_debug_18497 | rasdani/github-patches | git_diff | scikit-hep__pyhf-2357 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
jsonschema cannot find path specified \\1.0.0\\def.json
### Summary
When I try to install pyhf in a venv and then create a simple model, it gives me an error
```
<urlopen error [WinError 3] The system cannot find the path specified: '\\1.0.0\\defs.json'>
```
When I check the path to the schema with
```
>>> pyhf.schema.path
WindowsPath('C:/Users/alexh/Downloads/pyhf_test/pyhf_venv/Lib/site-packages/pyhf/schemas')
```
Following this path then into the 1.0.0 folder, defs.json exists.
### OS / Environment
```console
# Windows 11
# Python 3.11.6
```
### Steps to Reproduce
```
$ python -m venv venv_pyhf
$ venv_pyhf/Scripts/activate.ps1
$ pip install pyhf
$ python
>>> import pyhf
>>> model = pyhf.simplemodels.uncorrelated_background(signal=[1,2,3], bkg=[4,5,6], bkg_uncertainty=[2,3,4])
_RefResolutionError: <urlopen error [WinError 3] The system cannot find the path specified: '\\1.0.0\\defs.json'>
```
### File Upload (optional)
_No response_
### Expected Results
Expected to get a pyhf.pdf.Model.
### Actual Results
```console
Traceback (most recent call last):
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\validators.py", line 1097, in resolve_from_url
document = self.store[url]
~~~~~~~~~~^^^^^
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\_utils.py", line 20, in __getitem__
return self.store[self.normalize(uri)]
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
KeyError: 'file://C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\pyhf\\schemas/1.0.0/defs.json'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\urllib\request.py", line 1505, in open_local_file
stats = os.stat(localfile)
^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 3] The system cannot find the path specified: '\\1.0.0\\defs.json'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\validators.py", line 1100, in resolve_from_url
document = self.resolve_remote(url)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\validators.py", line 1204, in resolve_remote
with urlopen(uri) as url:
^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\urllib\request.py", line 519, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\urllib\request.py", line 536, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\urllib\request.py", line 1483, in file_open
return self.open_local_file(req)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\urllib\request.py", line 1522, in open_local_file
raise URLError(exp)
urllib.error.URLError: <urlopen error [WinError 3] The system cannot find the path specified: '\\1.0.0\\defs.json'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\pyhf\simplemodels.py", line 149, in uncorrelated_background
return Model(spec, batch_size=batch_size, validate=validate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\pyhf\pdf.py", line 769, in __init__
schema.validate(self.spec, self.schema, version=self.version)
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\pyhf\schema\validator.py", line 93, in validate
return validator.validate(spec)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\validators.py", line 434, in validate
for error in self.iter_errors(*args, **kwargs):
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\validators.py", line 368, in iter_errors
for error in errors:
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\_keywords.py", line 284, in ref
yield from validator._validate_reference(ref=ref, instance=instance)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\validators.py", line 461, in _validate_reference
scope, resolved = resolve(ref)
^^^^^^^^^^^^
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\validators.py", line 1086, in resolve
return url, self._remote_cache(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\alexh\Downloads\pyhf_test\pyhf_venv\Lib\site-packages\jsonschema\validators.py", line 1102, in resolve_from_url
raise exceptions._RefResolutionError(exc)
jsonschema.exceptions._RefResolutionError: <urlopen error [WinError 3] The system cannot find the path specified: '\\1.0.0\\defs.json'>
```
### pyhf Version
```console
pyhf, version 0.7.4
```
### Code of Conduct
- [X] I agree to follow the Code of Conduct
</issue>
<code>
[start of src/pyhf/schema/validator.py]
1 import numbers
2 from typing import Mapping, Union
3
4 import jsonschema
5
6 import pyhf.exceptions
7 from pyhf import tensor
8 from pyhf.schema import variables
9 from pyhf.schema.loader import load_schema
10
11
12 def _is_array_or_tensor(checker, instance):
13 """
14 A helper function for allowing the validation of tensors as list types in schema validation.
15
16 .. warning:
17
18 This will check for valid array types using any backends that have been loaded so far.
19 """
20 return isinstance(instance, (list, *tensor.array_types))
21
22
23 def _is_number_or_tensor_subtype(checker, instance):
24 """
25 A helper function for allowing the validation of tensor contents as number types in schema validation.
26
27 .. warning:
28 This will check for valid array subtypes using any backends that have been loaded so far.
29 """
30 is_number = jsonschema._types.is_number(checker, instance)
31 if is_number:
32 return True
33 return isinstance(instance, (numbers.Number, *tensor.array_subtypes))
34
35
36 def validate(
37 spec: Mapping,
38 schema_name: str,
39 *,
40 version: Union[str, None] = None,
41 allow_tensors: bool = True,
42 ):
43 """
44 Validate the provided instance, ``spec``, against the schema associated with ``schema_name``.
45
46 Args:
47 spec (:obj:`object`): An object instance to validate against a schema.
48 schema_name (:obj:`string`): The name of a schema to validate against.
49 See :func:`pyhf.schema.load_schema` for more details.
50 version (:obj:`string`): The version of the schema to use.
51 See :func:`pyhf.schema.load_schema` for more details.
52 allow_tensors (:obj:`bool`): A flag to enable or disable tensors as part of schema validation.
53 If enabled, tensors in the ``spec`` will be treated like python :obj:`list`.
54 Default: ``True``.
55
56 Raises:
57 ~pyhf.exceptions.InvalidSpecification: if the provided instance does not validate against the schema.
58
59 Returns:
60 None: if there are no errors with the provided instance.
61
62 Example:
63 >>> import pyhf
64 >>> model = pyhf.simplemodels.uncorrelated_background(
65 ... signal=[12.0, 11.0], bkg=[50.0, 52.0], bkg_uncertainty=[3.0, 7.0]
66 ... )
67 >>> pyhf.schema.validate(model.spec, "model.json")
68 >>>
69 """
70
71 version = version or variables.SCHEMA_VERSION
72
73 schema = load_schema(f'{version}/{schema_name}')
74
75 # note: trailing slash needed for RefResolver to resolve correctly
76 resolver = jsonschema.RefResolver(
77 base_uri=f"file://{variables.schemas}/{version}/",
78 referrer=f"{schema_name}",
79 store=variables.SCHEMA_CACHE,
80 )
81
82 Validator = jsonschema.Draft6Validator
83
84 if allow_tensors:
85 type_checker = Validator.TYPE_CHECKER.redefine(
86 "array", _is_array_or_tensor
87 ).redefine("number", _is_number_or_tensor_subtype)
88 Validator = jsonschema.validators.extend(Validator, type_checker=type_checker)
89
90 validator = Validator(schema, resolver=resolver, format_checker=None)
91
92 try:
93 return validator.validate(spec)
94 except jsonschema.ValidationError as err:
95 raise pyhf.exceptions.InvalidSpecification(err, schema_name)
96
[end of src/pyhf/schema/validator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pyhf/schema/validator.py b/src/pyhf/schema/validator.py
--- a/src/pyhf/schema/validator.py
+++ b/src/pyhf/schema/validator.py
@@ -1,4 +1,5 @@
import numbers
+from pathlib import Path
from typing import Mapping, Union
import jsonschema
@@ -70,12 +71,15 @@
version = version or variables.SCHEMA_VERSION
- schema = load_schema(f'{version}/{schema_name}')
+ schema = load_schema(str(Path(version).joinpath(schema_name)))
- # note: trailing slash needed for RefResolver to resolve correctly
+ # note: trailing slash needed for RefResolver to resolve correctly and by
+ # design, pathlib strips trailing slashes. See ref below:
+ # * https://bugs.python.org/issue21039
+ # * https://github.com/python/cpython/issues/65238
resolver = jsonschema.RefResolver(
- base_uri=f"file://{variables.schemas}/{version}/",
- referrer=f"{schema_name}",
+ base_uri=f"{Path(variables.schemas).joinpath(version).as_uri()}/",
+ referrer=schema_name,
store=variables.SCHEMA_CACHE,
)
| {"golden_diff": "diff --git a/src/pyhf/schema/validator.py b/src/pyhf/schema/validator.py\n--- a/src/pyhf/schema/validator.py\n+++ b/src/pyhf/schema/validator.py\n@@ -1,4 +1,5 @@\n import numbers\n+from pathlib import Path\n from typing import Mapping, Union\n \n import jsonschema\n@@ -70,12 +71,15 @@\n \n version = version or variables.SCHEMA_VERSION\n \n- schema = load_schema(f'{version}/{schema_name}')\n+ schema = load_schema(str(Path(version).joinpath(schema_name)))\n \n- # note: trailing slash needed for RefResolver to resolve correctly\n+ # note: trailing slash needed for RefResolver to resolve correctly and by\n+ # design, pathlib strips trailing slashes. See ref below:\n+ # * https://bugs.python.org/issue21039\n+ # * https://github.com/python/cpython/issues/65238\n resolver = jsonschema.RefResolver(\n- base_uri=f\"file://{variables.schemas}/{version}/\",\n- referrer=f\"{schema_name}\",\n+ base_uri=f\"{Path(variables.schemas).joinpath(version).as_uri()}/\",\n+ referrer=schema_name,\n store=variables.SCHEMA_CACHE,\n )\n", "issue": "jsonschema cannot find path specified \\\\1.0.0\\\\def.json\n### Summary\r\n\r\nWhen I try to install pyhf in a venv and then create a simple model, it gives me a n error \r\n```\r\n<urlopen error [WinError 3] The system cannot find the path specified: '\\\\1.0.0\\\\defs.json'>\r\n```\r\n\r\nWhen i check the path to the schema with \r\n```\r\n>>> pyhf.schema.path\r\nWindowsPath('C:/Users/alexh/Downloads/pyhf_test/pyhf_venv/Lib/site-packages/pyhf/schemas')\r\n```\r\nFollowing this path then into the 1.0.0 folder, defs.json exists.\r\n\r\n### OS / Environment\r\n\r\n```console\r\n# Windows 11\r\n# Python 3.11.6\r\n```\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n\r\n```\r\n$ python -m venv venv_pyhf\r\n$ venv_pyhf/Scripts/activate.ps1\r\n$ pip install pyhf\r\n$ python\r\n>>> import pyhf\r\n>>> model = pyhf.simplemodels.uncorrelated_background(signal=[1,2,3], bkg=[4,5,6], bkg_uncertainty=[2,3,4])\r\n_RefResolutionError: <urlopen error [WinError 3] The system cannot find the path specified: '\\\\1.0.0\\\\defs.json'>\r\n```\r\n\r\n\r\n### File Upload (optional)\r\n\r\n_No response_\r\n\r\n### Expected Results\r\n\r\nExpected to get a pyhf.pdf.Model.\r\n\r\n### Actual Results\r\n\r\n```console\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\validators.py\", line 1097, in resolve_from_url\r\n document = self.store[url]\r\n ~~~~~~~~~~^^^^^\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\_utils.py\", line 20, in __getitem__\r\n return self.store[self.normalize(uri)]\r\n ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^\r\nKeyError: 'file://C:\\\\Users\\\\alexh\\\\Downloads\\\\pyhf_test\\\\pyhf_venv\\\\Lib\\\\site-packages\\\\pyhf\\\\schemas/1.0.0/defs.json'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\\Lib\\urllib\\request.py\", line 1505, in open_local_file\r\n stats = os.stat(localfile)\r\n ^^^^^^^^^^^^^^^^^^\r\nFileNotFoundError: [WinError 3] The system cannot find the path specified: '\\\\1.0.0\\\\defs.json'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\validators.py\", line 1100, in resolve_from_url\r\n 
document = self.resolve_remote(url)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\validators.py\", line 1204, in resolve_remote\r\n with urlopen(uri) as url:\r\n ^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\\Lib\\urllib\\request.py\", line 216, in urlopen\r\n return opener.open(url, data, timeout)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\\Lib\\urllib\\request.py\", line 519, in open\r\n response = self._open(req, data)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\\Lib\\urllib\\request.py\", line 536, in _open\r\n result = self._call_chain(self.handle_open, protocol, protocol +\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\\Lib\\urllib\\request.py\", line 496, in _call_chain\r\n result = func(*args)\r\n ^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\\Lib\\urllib\\request.py\", line 1483, in file_open\r\n return self.open_local_file(req)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\\Lib\\urllib\\request.py\", line 1522, in open_local_file\r\n raise URLError(exp)\r\nurllib.error.URLError: <urlopen error [WinError 3] The system cannot find the path specified: '\\\\1.0.0\\\\defs.json'>\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\pyhf\\simplemodels.py\", line 149, in uncorrelated_background\r\n return Model(spec, batch_size=batch_size, validate=validate)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\pyhf\\pdf.py\", line 769, in __init__\r\n schema.validate(self.spec, self.schema, version=self.version)\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\pyhf\\schema\\validator.py\", line 93, in validate\r\n return validator.validate(spec)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\validators.py\", line 434, in validate\r\n for error in self.iter_errors(*args, **kwargs):\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\validators.py\", line 368, in iter_errors\r\n for error in errors:\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\_keywords.py\", line 284, in ref\r\n yield from validator._validate_reference(ref=ref, instance=instance)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\validators.py\", line 461, in _validate_reference\r\n scope, resolved = resolve(ref)\r\n ^^^^^^^^^^^^\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\validators.py\", line 1086, in resolve\r\n return url, self._remote_cache(url)\r\n 
^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\alexh\\Downloads\\pyhf_test\\pyhf_venv\\Lib\\site-packages\\jsonschema\\validators.py\", line 1102, in resolve_from_url\r\n raise exceptions._RefResolutionError(exc)\r\njsonschema.exceptions._RefResolutionError: <urlopen error [WinError 3] The system cannot find the path specified: '\\\\1.0.0\\\\defs.json'>\r\n```\r\n\r\n\r\n### pyhf Version\r\n\r\n```console\r\npyhf, version 0.7.4\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Code of Conduct\n", "before_files": [{"content": "import numbers\nfrom typing import Mapping, Union\n\nimport jsonschema\n\nimport pyhf.exceptions\nfrom pyhf import tensor\nfrom pyhf.schema import variables\nfrom pyhf.schema.loader import load_schema\n\n\ndef _is_array_or_tensor(checker, instance):\n \"\"\"\n A helper function for allowing the validation of tensors as list types in schema validation.\n\n .. warning:\n\n This will check for valid array types using any backends that have been loaded so far.\n \"\"\"\n return isinstance(instance, (list, *tensor.array_types))\n\n\ndef _is_number_or_tensor_subtype(checker, instance):\n \"\"\"\n A helper function for allowing the validation of tensor contents as number types in schema validation.\n\n .. warning:\n This will check for valid array subtypes using any backends that have been loaded so far.\n \"\"\"\n is_number = jsonschema._types.is_number(checker, instance)\n if is_number:\n return True\n return isinstance(instance, (numbers.Number, *tensor.array_subtypes))\n\n\ndef validate(\n spec: Mapping,\n schema_name: str,\n *,\n version: Union[str, None] = None,\n allow_tensors: bool = True,\n):\n \"\"\"\n Validate the provided instance, ``spec``, against the schema associated with ``schema_name``.\n\n Args:\n spec (:obj:`object`): An object instance to validate against a schema.\n schema_name (:obj:`string`): The name of a schema to validate against.\n See :func:`pyhf.schema.load_schema` for more details.\n version (:obj:`string`): The version of the schema to use.\n See :func:`pyhf.schema.load_schema` for more details.\n allow_tensors (:obj:`bool`): A flag to enable or disable tensors as part of schema validation.\n If enabled, tensors in the ``spec`` will be treated like python :obj:`list`.\n Default: ``True``.\n\n Raises:\n ~pyhf.exceptions.InvalidSpecification: if the provided instance does not validate against the schema.\n\n Returns:\n None: if there are no errors with the provided instance.\n\n Example:\n >>> import pyhf\n >>> model = pyhf.simplemodels.uncorrelated_background(\n ... signal=[12.0, 11.0], bkg=[50.0, 52.0], bkg_uncertainty=[3.0, 7.0]\n ... 
)\n >>> pyhf.schema.validate(model.spec, \"model.json\")\n >>>\n \"\"\"\n\n version = version or variables.SCHEMA_VERSION\n\n schema = load_schema(f'{version}/{schema_name}')\n\n # note: trailing slash needed for RefResolver to resolve correctly\n resolver = jsonschema.RefResolver(\n base_uri=f\"file://{variables.schemas}/{version}/\",\n referrer=f\"{schema_name}\",\n store=variables.SCHEMA_CACHE,\n )\n\n Validator = jsonschema.Draft6Validator\n\n if allow_tensors:\n type_checker = Validator.TYPE_CHECKER.redefine(\n \"array\", _is_array_or_tensor\n ).redefine(\"number\", _is_number_or_tensor_subtype)\n Validator = jsonschema.validators.extend(Validator, type_checker=type_checker)\n\n validator = Validator(schema, resolver=resolver, format_checker=None)\n\n try:\n return validator.validate(spec)\n except jsonschema.ValidationError as err:\n raise pyhf.exceptions.InvalidSpecification(err, schema_name)\n", "path": "src/pyhf/schema/validator.py"}]} | 3,385 | 277 |
gh_patches_debug_13911 | rasdani/github-patches | git_diff | streamlit__streamlit-2100 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Video on main API page is broken
See: https://docs.streamlit.io/en/stable/api.html#streamlit.video

</issue>
<code>
[start of docs/api-examples-source/charts.video.py]
1 import streamlit as st
2
3 VIDEO_URL = "https://vod-progressive.akamaized.net/exp=1595241944~acl=%2A%2F664785003.mp4%2A~hmac=2a26d355839498b80bbb2c43e6808bd12bc0d2e7920616a8226a1e017c270217/vimeo-prod-skyfire-std-us/01/4526/7/197634410/664785003.mp4?filename=Star+-+6962.mp4"
4
5 st.video(VIDEO_URL)
6
7 st.write(
8 """
9 #### Video credit:
10
11 Creator: User _fxxu_ from _Pixabay_.
12
13 License: Free for commercial use. No attribution required.
14 https://pixabay.com/en/service/license/
15
16 URL:
17 https://pixabay.com/en/videos/star-long-exposure-starry-sky-sky-6962/
18
19 """
20 )
21
[end of docs/api-examples-source/charts.video.py]
[start of lib/streamlit/elements/media_proto.py]
1 # Copyright 2018-2020 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from streamlit import type_util
16
17 import io
18 import re
19
20 from validators import url
21
22 from streamlit import type_util
23 from streamlit.proto.Audio_pb2 import Audio as AudioProto
24 from streamlit.proto.Video_pb2 import Video as VideoProto
25 from streamlit.media_file_manager import media_file_manager
26
27
28 class MediaMixin:
29 def audio(dg, data, format="audio/wav", start_time=0):
30 """Display an audio player.
31
32 Parameters
33 ----------
34 data : str, bytes, BytesIO, numpy.ndarray, or file opened with
35 io.open().
36 Raw audio data, filename, or a URL pointing to the file to load.
37 Numpy arrays and raw data formats must include all necessary file
38 headers to match specified file format.
39 start_time: int
40 The time from which this element should start playing.
41 format : str
42 The mime type for the audio file. Defaults to 'audio/wav'.
43 See https://tools.ietf.org/html/rfc4281 for more info.
44
45 Example
46 -------
47 >>> audio_file = open('myaudio.ogg', 'rb')
48 >>> audio_bytes = audio_file.read()
49 >>>
50 >>> st.audio(audio_bytes, format='audio/ogg')
51
52 .. output::
53 https://static.streamlit.io/0.25.0-2JkNY/index.html?id=Dv3M9sA7Cg8gwusgnVNTHb
54 height: 400px
55
56 """
57 audio_proto = AudioProto()
58 coordinates = dg._get_coordinates() # type: ignore
59 marshall_audio(coordinates, audio_proto, data, format, start_time)
60 return dg._enqueue("audio", audio_proto) # type: ignore
61
62 def video(dg, data, format="video/mp4", start_time=0):
63 """Display a video player.
64
65 Parameters
66 ----------
67 data : str, bytes, BytesIO, numpy.ndarray, or file opened with
68 io.open().
69 Raw video data, filename, or URL pointing to a video to load.
70 Includes support for YouTube URLs.
71 Numpy arrays and raw data formats must include all necessary file
72 headers to match specified file format.
73 format : str
74 The mime type for the video file. Defaults to 'video/mp4'.
75 See https://tools.ietf.org/html/rfc4281 for more info.
76 start_time: int
77 The time from which this element should start playing.
78
79 Example
80 -------
81 >>> video_file = open('myvideo.mp4', 'rb')
82 >>> video_bytes = video_file.read()
83 >>>
84 >>> st.video(video_bytes)
85
86 .. output::
87 https://static.streamlit.io/0.61.0-yRE1/index.html?id=LZLtVFFTf1s41yfPExzRu8
88 height: 600px
89
90 .. note::
91 Some videos may not display if they are encoded using MP4V (which is an export option in OpenCV), as this codec is
92 not widely supported by browsers. Converting your video to H.264 will allow the video to be displayed in Streamlit.
93 See this `StackOverflow post <https://stackoverflow.com/a/49535220/2394542>`_ or this
94 `Streamlit forum post <https://discuss.streamlit.io/t/st-video-doesnt-show-opencv-generated-mp4/3193/2>`_
95 for more information.
96
97 """
98 video_proto = VideoProto()
99 coordinates = dg._get_coordinates() # type: ignore
100 marshall_video(coordinates, video_proto, data, format, start_time)
101 return dg._enqueue("video", video_proto) # type: ignore
102
103
104 # Regular expression explained at https://regexr.com/4n2l2 Covers any youtube
105 # URL (incl. shortlinks and embed links) and extracts its code.
106 YOUTUBE_RE = re.compile(
107 # Protocol
108 r"http(?:s?):\/\/"
109 # Domain
110 r"(?:www\.)?youtu(?:be\.com|\.be)\/"
111 # Path and query string
112 r"(?P<watch>(watch\?v=)|embed\/)?(?P<code>[\w\-\_]*)(&(amp;)?[\w\?=]*)?"
113 )
114
115
116 def _reshape_youtube_url(url):
117 """Return whether URL is any kind of YouTube embed or watch link. If so,
118 reshape URL into an embed link suitable for use in an iframe.
119
120 If not a YouTube URL, return None.
121
122 Parameters
123 ----------
124 url : str or bytes
125
126 Example
127 -------
128 >>> print(_reshape_youtube_url('https://youtu.be/_T8LGqJtuGc'))
129
130 .. output::
131 https://www.youtube.com/embed/_T8LGqJtuGc
132 """
133 match = YOUTUBE_RE.match(url)
134 if match:
135 return "https://www.youtube.com/embed/{code}".format(**match.groupdict())
136 return None
137
138
139 def _marshall_av_media(coordinates, proto, data, mimetype):
140 """Fill audio or video proto based on contents of data.
141
142 Given a string, check if it's a url; if so, send it out without modification.
143 Otherwise assume strings are filenames and let any OS errors raise.
144
145 Load data either from file or through bytes-processing methods into a
146 MediaFile object. Pack proto with generated Tornado-based URL.
147 """
148 # Audio and Video methods have already checked if this is a URL by this point.
149
150 if isinstance(data, str):
151 # Assume it's a filename or blank. Allow OS-based file errors.
152 with open(data, "rb") as fh:
153 this_file = media_file_manager.add(fh.read(), mimetype, coordinates)
154 proto.url = this_file.url
155 return
156
157 if data is None:
158 # Allow empty values so media players can be shown without media.
159 return
160
161 # Assume bytes; try methods until we run out.
162 if isinstance(data, bytes):
163 pass
164 elif isinstance(data, io.BytesIO):
165 data.seek(0)
166 data = data.getvalue()
167 elif isinstance(data, io.RawIOBase) or isinstance(data, io.BufferedReader):
168 data.seek(0)
169 data = data.read()
170 elif type_util.is_type(data, "numpy.ndarray"):
171 data = data.tobytes()
172 else:
173 raise RuntimeError("Invalid binary data format: %s" % type(data))
174
175 this_file = media_file_manager.add(data, mimetype, coordinates)
176 proto.url = this_file.url
177
178
179 def marshall_video(coordinates, proto, data, mimetype="video/mp4", start_time=0):
180 """Marshalls a video proto, using url processors as needed.
181
182 Parameters
183 ----------
184 coordinates : str
185 proto : the proto to fill. Must have a string field called "data".
186 data : str, bytes, BytesIO, numpy.ndarray, or file opened with
187 io.open().
188 Raw video data or a string with a URL pointing to the video
189 to load. Includes support for YouTube URLs.
190 If passing the raw data, this must include headers and any other
191 bytes required in the actual file.
192 mimetype : str
193 The mime type for the video file. Defaults to 'video/mp4'.
194 See https://tools.ietf.org/html/rfc4281 for more info.
195 start_time : int
196 The time from which this element should start playing. (default: 0)
197 """
198
199 proto.start_time = start_time
200
201 # "type" distinguishes between YouTube and non-YouTube links
202 proto.type = VideoProto.Type.NATIVE
203
204 if isinstance(data, str) and url(data):
205 youtube_url = _reshape_youtube_url(data)
206 if youtube_url:
207 proto.url = youtube_url
208 proto.type = VideoProto.Type.YOUTUBE_IFRAME
209 else:
210 proto.url = data
211
212 else:
213 _marshall_av_media(coordinates, proto, data, mimetype)
214
215
216 def marshall_audio(coordinates, proto, data, mimetype="audio/wav", start_time=0):
217 """Marshalls an audio proto, using data and url processors as needed.
218
219 Parameters
220 ----------
221 coordinates : str
222 proto : The proto to fill. Must have a string field called "url".
223 data : str, bytes, BytesIO, numpy.ndarray, or file opened with
224 io.open()
225 Raw audio data or a string with a URL pointing to the file to load.
226 If passing the raw data, this must include headers and any other bytes
227 required in the actual file.
228 mimetype : str
229 The mime type for the audio file. Defaults to "audio/wav".
230 See https://tools.ietf.org/html/rfc4281 for more info.
231 start_time : int
232 The time from which this element should start playing. (default: 0)
233 """
234
235 proto.start_time = start_time
236
237 if isinstance(data, str) and url(data):
238 proto.url = data
239
240 else:
241 _marshall_av_media(coordinates, proto, data, mimetype)
242
[end of lib/streamlit/elements/media_proto.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/api-examples-source/charts.video.py b/docs/api-examples-source/charts.video.py
--- a/docs/api-examples-source/charts.video.py
+++ b/docs/api-examples-source/charts.video.py
@@ -1,6 +1,6 @@
import streamlit as st
-VIDEO_URL = "https://vod-progressive.akamaized.net/exp=1595241944~acl=%2A%2F664785003.mp4%2A~hmac=2a26d355839498b80bbb2c43e6808bd12bc0d2e7920616a8226a1e017c270217/vimeo-prod-skyfire-std-us/01/4526/7/197634410/664785003.mp4?filename=Star+-+6962.mp4"
+VIDEO_URL = "https://static.streamlit.io/examples/star.mp4"
st.video(VIDEO_URL)
diff --git a/lib/streamlit/elements/media_proto.py b/lib/streamlit/elements/media_proto.py
--- a/lib/streamlit/elements/media_proto.py
+++ b/lib/streamlit/elements/media_proto.py
@@ -84,7 +84,7 @@
>>> st.video(video_bytes)
.. output::
- https://static.streamlit.io/0.61.0-yRE1/index.html?id=LZLtVFFTf1s41yfPExzRu8
+ https://static.streamlit.io/0.66.0-2BLtg/index.html?id=DzAouvizGRAyuLjkPpR894
height: 600px
.. note::
| {"golden_diff": "diff --git a/docs/api-examples-source/charts.video.py b/docs/api-examples-source/charts.video.py\n--- a/docs/api-examples-source/charts.video.py\n+++ b/docs/api-examples-source/charts.video.py\n@@ -1,6 +1,6 @@\n import streamlit as st\n \n-VIDEO_URL = \"https://vod-progressive.akamaized.net/exp=1595241944~acl=%2A%2F664785003.mp4%2A~hmac=2a26d355839498b80bbb2c43e6808bd12bc0d2e7920616a8226a1e017c270217/vimeo-prod-skyfire-std-us/01/4526/7/197634410/664785003.mp4?filename=Star+-+6962.mp4\"\n+VIDEO_URL = \"https://static.streamlit.io/examples/star.mp4\"\n \n st.video(VIDEO_URL)\n \ndiff --git a/lib/streamlit/elements/media_proto.py b/lib/streamlit/elements/media_proto.py\n--- a/lib/streamlit/elements/media_proto.py\n+++ b/lib/streamlit/elements/media_proto.py\n@@ -84,7 +84,7 @@\n >>> st.video(video_bytes)\n \n .. output::\n- https://static.streamlit.io/0.61.0-yRE1/index.html?id=LZLtVFFTf1s41yfPExzRu8\n+ https://static.streamlit.io/0.66.0-2BLtg/index.html?id=DzAouvizGRAyuLjkPpR894\n height: 600px\n \n .. note::\n", "issue": "Video on main API page is broken\nSee: https://docs.streamlit.io/en/stable/api.html#streamlit.video\r\n\r\n\n", "before_files": [{"content": "import streamlit as st\n\nVIDEO_URL = \"https://vod-progressive.akamaized.net/exp=1595241944~acl=%2A%2F664785003.mp4%2A~hmac=2a26d355839498b80bbb2c43e6808bd12bc0d2e7920616a8226a1e017c270217/vimeo-prod-skyfire-std-us/01/4526/7/197634410/664785003.mp4?filename=Star+-+6962.mp4\"\n\nst.video(VIDEO_URL)\n\nst.write(\n \"\"\"\n #### Video credit:\n\n Creator: User _fxxu_ from _Pixabay_.\n\n License: Free for commercial use. No attribution required.\n https://pixabay.com/en/service/license/\n\n URL:\n https://pixabay.com/en/videos/star-long-exposure-starry-sky-sky-6962/\n\n\"\"\"\n)\n", "path": "docs/api-examples-source/charts.video.py"}, {"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom streamlit import type_util\n\nimport io\nimport re\n\nfrom validators import url\n\nfrom streamlit import type_util\nfrom streamlit.proto.Audio_pb2 import Audio as AudioProto\nfrom streamlit.proto.Video_pb2 import Video as VideoProto\nfrom streamlit.media_file_manager import media_file_manager\n\n\nclass MediaMixin:\n def audio(dg, data, format=\"audio/wav\", start_time=0):\n \"\"\"Display an audio player.\n\n Parameters\n ----------\n data : str, bytes, BytesIO, numpy.ndarray, or file opened with\n io.open().\n Raw audio data, filename, or a URL pointing to the file to load.\n Numpy arrays and raw data formats must include all necessary file\n headers to match specified file format.\n start_time: int\n The time from which this element should start playing.\n format : str\n The mime type for the audio file. Defaults to 'audio/wav'.\n See https://tools.ietf.org/html/rfc4281 for more info.\n\n Example\n -------\n >>> audio_file = open('myaudio.ogg', 'rb')\n >>> audio_bytes = audio_file.read()\n >>>\n >>> st.audio(audio_bytes, format='audio/ogg')\n\n .. 
output::\n https://static.streamlit.io/0.25.0-2JkNY/index.html?id=Dv3M9sA7Cg8gwusgnVNTHb\n height: 400px\n\n \"\"\"\n audio_proto = AudioProto()\n coordinates = dg._get_coordinates() # type: ignore\n marshall_audio(coordinates, audio_proto, data, format, start_time)\n return dg._enqueue(\"audio\", audio_proto) # type: ignore\n\n def video(dg, data, format=\"video/mp4\", start_time=0):\n \"\"\"Display a video player.\n\n Parameters\n ----------\n data : str, bytes, BytesIO, numpy.ndarray, or file opened with\n io.open().\n Raw video data, filename, or URL pointing to a video to load.\n Includes support for YouTube URLs.\n Numpy arrays and raw data formats must include all necessary file\n headers to match specified file format.\n format : str\n The mime type for the video file. Defaults to 'video/mp4'.\n See https://tools.ietf.org/html/rfc4281 for more info.\n start_time: int\n The time from which this element should start playing.\n\n Example\n -------\n >>> video_file = open('myvideo.mp4', 'rb')\n >>> video_bytes = video_file.read()\n >>>\n >>> st.video(video_bytes)\n\n .. output::\n https://static.streamlit.io/0.61.0-yRE1/index.html?id=LZLtVFFTf1s41yfPExzRu8\n height: 600px\n\n .. note::\n Some videos may not display if they are encoded using MP4V (which is an export option in OpenCV), as this codec is\n not widely supported by browsers. Converting your video to H.264 will allow the video to be displayed in Streamlit.\n See this `StackOverflow post <https://stackoverflow.com/a/49535220/2394542>`_ or this\n `Streamlit forum post <https://discuss.streamlit.io/t/st-video-doesnt-show-opencv-generated-mp4/3193/2>`_\n for more information.\n\n \"\"\"\n video_proto = VideoProto()\n coordinates = dg._get_coordinates() # type: ignore\n marshall_video(coordinates, video_proto, data, format, start_time)\n return dg._enqueue(\"video\", video_proto) # type: ignore\n\n\n# Regular expression explained at https://regexr.com/4n2l2 Covers any youtube\n# URL (incl. shortlinks and embed links) and extracts its code.\nYOUTUBE_RE = re.compile(\n # Protocol\n r\"http(?:s?):\\/\\/\"\n # Domain\n r\"(?:www\\.)?youtu(?:be\\.com|\\.be)\\/\"\n # Path and query string\n r\"(?P<watch>(watch\\?v=)|embed\\/)?(?P<code>[\\w\\-\\_]*)(&(amp;)?[\\w\\?=]*)?\"\n)\n\n\ndef _reshape_youtube_url(url):\n \"\"\"Return whether URL is any kind of YouTube embed or watch link. If so,\n reshape URL into an embed link suitable for use in an iframe.\n\n If not a YouTube URL, return None.\n\n Parameters\n ----------\n url : str or bytes\n\n Example\n -------\n >>> print(_reshape_youtube_url('https://youtu.be/_T8LGqJtuGc'))\n\n .. output::\n https://www.youtube.com/embed/_T8LGqJtuGc\n \"\"\"\n match = YOUTUBE_RE.match(url)\n if match:\n return \"https://www.youtube.com/embed/{code}\".format(**match.groupdict())\n return None\n\n\ndef _marshall_av_media(coordinates, proto, data, mimetype):\n \"\"\"Fill audio or video proto based on contents of data.\n\n Given a string, check if it's a url; if so, send it out without modification.\n Otherwise assume strings are filenames and let any OS errors raise.\n\n Load data either from file or through bytes-processing methods into a\n MediaFile object. Pack proto with generated Tornado-based URL.\n \"\"\"\n # Audio and Video methods have already checked if this is a URL by this point.\n\n if isinstance(data, str):\n # Assume it's a filename or blank. 
Allow OS-based file errors.\n with open(data, \"rb\") as fh:\n this_file = media_file_manager.add(fh.read(), mimetype, coordinates)\n proto.url = this_file.url\n return\n\n if data is None:\n # Allow empty values so media players can be shown without media.\n return\n\n # Assume bytes; try methods until we run out.\n if isinstance(data, bytes):\n pass\n elif isinstance(data, io.BytesIO):\n data.seek(0)\n data = data.getvalue()\n elif isinstance(data, io.RawIOBase) or isinstance(data, io.BufferedReader):\n data.seek(0)\n data = data.read()\n elif type_util.is_type(data, \"numpy.ndarray\"):\n data = data.tobytes()\n else:\n raise RuntimeError(\"Invalid binary data format: %s\" % type(data))\n\n this_file = media_file_manager.add(data, mimetype, coordinates)\n proto.url = this_file.url\n\n\ndef marshall_video(coordinates, proto, data, mimetype=\"video/mp4\", start_time=0):\n \"\"\"Marshalls a video proto, using url processors as needed.\n\n Parameters\n ----------\n coordinates : str\n proto : the proto to fill. Must have a string field called \"data\".\n data : str, bytes, BytesIO, numpy.ndarray, or file opened with\n io.open().\n Raw video data or a string with a URL pointing to the video\n to load. Includes support for YouTube URLs.\n If passing the raw data, this must include headers and any other\n bytes required in the actual file.\n mimetype : str\n The mime type for the video file. Defaults to 'video/mp4'.\n See https://tools.ietf.org/html/rfc4281 for more info.\n start_time : int\n The time from which this element should start playing. (default: 0)\n \"\"\"\n\n proto.start_time = start_time\n\n # \"type\" distinguishes between YouTube and non-YouTube links\n proto.type = VideoProto.Type.NATIVE\n\n if isinstance(data, str) and url(data):\n youtube_url = _reshape_youtube_url(data)\n if youtube_url:\n proto.url = youtube_url\n proto.type = VideoProto.Type.YOUTUBE_IFRAME\n else:\n proto.url = data\n\n else:\n _marshall_av_media(coordinates, proto, data, mimetype)\n\n\ndef marshall_audio(coordinates, proto, data, mimetype=\"audio/wav\", start_time=0):\n \"\"\"Marshalls an audio proto, using data and url processors as needed.\n\n Parameters\n ----------\n coordinates : str\n proto : The proto to fill. Must have a string field called \"url\".\n data : str, bytes, BytesIO, numpy.ndarray, or file opened with\n io.open()\n Raw audio data or a string with a URL pointing to the file to load.\n If passing the raw data, this must include headers and any other bytes\n required in the actual file.\n mimetype : str\n The mime type for the audio file. Defaults to \"audio/wav\".\n See https://tools.ietf.org/html/rfc4281 for more info.\n start_time : int\n The time from which this element should start playing. (default: 0)\n \"\"\"\n\n proto.start_time = start_time\n\n if isinstance(data, str) and url(data):\n proto.url = data\n\n else:\n _marshall_av_media(coordinates, proto, data, mimetype)\n", "path": "lib/streamlit/elements/media_proto.py"}]} | 3,640 | 413 |
gh_patches_debug_1300 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-368 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Better check for codec names
currently, codec name argument is not checked. A typo would result in worker interpreting encoded data.
</issue>
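A quick illustration of the standard fix for this class of problem: argparse can reject unknown values at parse time via `choices`. The sketch below is self-contained and only borrows the `--codec-type` flag name from the code that follows; the accepted value list is illustrative.
```python
import argparse

parser = argparse.ArgumentParser(description="codec-type validation sketch")
parser.add_argument(
    "--codec-type",
    default=None,
    choices=["tf_example"],  # illustrative; list whatever codecs are actually supported
    help="Type of codec (tf_example or None)",
)

# A valid value parses normally.
args = parser.parse_args(["--codec-type", "tf_example"])
print(args.codec_type)

# A typo such as "tf_exmaple" would make parse_args() exit with:
#   error: argument --codec-type: invalid choice: 'tf_exmaple'
#   (choose from 'tf_example')
```
With `choices` set, a typo fails immediately with a clear parser error instead of propagating an unusable codec name to the workers.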
<code>
[start of elasticdl/master/main.py]
1 import logging
2 import time
3 import argparse
4 import os
5
6 import grpc
7 import tensorflow as tf
8
9 tf.enable_eager_execution()
10
11 from concurrent import futures
12 from recordio import File
13 from elasticdl.proto import master_pb2_grpc
14 from elasticdl.master.servicer import MasterServicer
15 from elasticdl.master.task_queue import _TaskQueue
16 from elasticdl.master.k8s_worker_manager import WorkerManager
17 from elasticdl.common.model_helper import load_user_model, build_model
18
19
20 def _make_task_queue(data_dir, record_per_task, num_epoch):
21 f_records = {}
22 for f in os.listdir(data_dir):
23 p = os.path.join(data_dir, f)
24 with File(p, "r") as rio:
25 f_records[p] = rio.count()
26 return _TaskQueue(f_records, record_per_task, num_epoch)
27
28
29 def _parse_args():
30 parser = argparse.ArgumentParser(description="ElasticDL Master")
31 parser.add_argument(
32 "--model_file",
33 help="Full file path of user defined neural model",
34 required=True,
35 )
36 parser.add_argument(
37 "--train_data_dir",
38 help="Training data directory. Files should be in RecordIO format",
39 required=True,
40 )
41 parser.add_argument("--record_per_task", type=int, required=True)
42 parser.add_argument("--num_epoch", type=int, required=True)
43 parser.add_argument(
44 "--grads_to_wait",
45 type=int,
46 help="Number of gradients to wait before updating model",
47 required=True,
48 )
49 parser.add_argument(
50 "--minibatch_size",
51 type=int,
52 help="Minibatch size used by workers to compute gradients",
53 required=True,
54 )
55 parser.add_argument(
56 "--num_worker",
57 type=int,
58 help="the number of workers used in training",
59 default=0,
60 )
61 parser.add_argument(
62 "--worker_image", help="docker image for worker", default=None
63 )
64 parser.add_argument("--job_name", help="job name", required=True)
65 parser.add_argument(
66 "--codec-type",
67 default=None,
68 help="Type of codec(tf_example or None)",
69 )
70 return parser.parse_args()
71
72
73 def main():
74 # TODO: pass port via flags.
75 PORT = 50001
76 logger = logging.getLogger("master")
77 args = _parse_args()
78 task_q = _make_task_queue(
79 args.train_data_dir, args.record_per_task, args.num_epoch
80 )
81 model_module = load_user_model(args.model_file)
82 model_inst = model_module.model
83 build_model(model_inst, model_module.feature_columns())
84 optimizer = model_module.optimizer()
85
86 server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))
87 master_pb2_grpc.add_MasterServicer_to_server(
88 MasterServicer(
89 logger,
90 args.grads_to_wait,
91 args.minibatch_size,
92 optimizer,
93 task_q,
94 init_var=model_inst.trainable_variables,
95 ),
96 server,
97 )
98 server.add_insecure_port("[::]:{}".format(PORT))
99 server.start()
100 logger.warning("Server started at port: %d", PORT)
101
102 if args.num_worker:
103 master_addr = "%s:%d" % (os.getenv("MY_POD_IP", "localhost"), PORT)
104 worker_command = ["python"]
105 worker_args = [
106 "-m",
107 "elasticdl.worker.main",
108 "--model_file",
109 args.model_file,
110 "--master_addr",
111 master_addr,
112 "--codec-type",
113 args.codec_type
114 ]
115
116 worker_manager = WorkerManager(
117 job_name=args.job_name,
118 worker_image=args.worker_image,
119 command=worker_command,
120 args=worker_args,
121 namespace="default",
122 num_worker=args.num_worker,
123 )
124 worker_manager.start_workers(restart_policy="Never")
125
126 try:
127 while True:
128 if task_q.finished():
129 break
130 time.sleep(30)
131 except KeyboardInterrupt:
132 logger.warning("Server stopping")
133
134 if args.num_worker:
135 # TODO: worker_manager.remove_workers supports synchronized call
136 worker_manager.remove_workers()
137 # wait for worker pod to be deleted
138 max_check_num = 10
139 for _ in range(max_check_num):
140 time.sleep(3)
141 counters = worker_manager.get_counters()
142 if not counters:
143 break
144 server.stop(0)
145
146
147 if __name__ == "__main__":
148 logging.basicConfig()
149 main()
150
[end of elasticdl/master/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/master/main.py b/elasticdl/master/main.py
--- a/elasticdl/master/main.py
+++ b/elasticdl/master/main.py
@@ -65,6 +65,7 @@
parser.add_argument(
"--codec-type",
default=None,
+ choices=["tf_example"],
help="Type of codec(tf_example or None)",
)
return parser.parse_args()
| {"golden_diff": "diff --git a/elasticdl/master/main.py b/elasticdl/master/main.py\n--- a/elasticdl/master/main.py\n+++ b/elasticdl/master/main.py\n@@ -65,6 +65,7 @@\n parser.add_argument(\n \"--codec-type\",\n default=None,\n+ choices=[\"tf_example\"],\n help=\"Type of codec(tf_example or None)\",\n )\n return parser.parse_args()\n", "issue": "Better check for codec names\ncurrently, codec name argument is not checked. A typo would result in worker interpreting encoded data.\n", "before_files": [{"content": "import logging\nimport time\nimport argparse\nimport os\n\nimport grpc\nimport tensorflow as tf\n\ntf.enable_eager_execution()\n\nfrom concurrent import futures\nfrom recordio import File\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.master.servicer import MasterServicer\nfrom elasticdl.master.task_queue import _TaskQueue\nfrom elasticdl.master.k8s_worker_manager import WorkerManager\nfrom elasticdl.common.model_helper import load_user_model, build_model\n\n\ndef _make_task_queue(data_dir, record_per_task, num_epoch):\n f_records = {}\n for f in os.listdir(data_dir):\n p = os.path.join(data_dir, f)\n with File(p, \"r\") as rio:\n f_records[p] = rio.count()\n return _TaskQueue(f_records, record_per_task, num_epoch)\n\n\ndef _parse_args():\n parser = argparse.ArgumentParser(description=\"ElasticDL Master\")\n parser.add_argument(\n \"--model_file\",\n help=\"Full file path of user defined neural model\",\n required=True,\n )\n parser.add_argument(\n \"--train_data_dir\",\n help=\"Training data directory. Files should be in RecordIO format\",\n required=True,\n )\n parser.add_argument(\"--record_per_task\", type=int, required=True)\n parser.add_argument(\"--num_epoch\", type=int, required=True)\n parser.add_argument(\n \"--grads_to_wait\",\n type=int,\n help=\"Number of gradients to wait before updating model\",\n required=True,\n )\n parser.add_argument(\n \"--minibatch_size\",\n type=int,\n help=\"Minibatch size used by workers to compute gradients\",\n required=True,\n )\n parser.add_argument(\n \"--num_worker\",\n type=int,\n help=\"the number of workers used in training\",\n default=0,\n )\n parser.add_argument(\n \"--worker_image\", help=\"docker image for worker\", default=None\n )\n parser.add_argument(\"--job_name\", help=\"job name\", required=True)\n parser.add_argument(\n \"--codec-type\",\n default=None,\n help=\"Type of codec(tf_example or None)\",\n )\n return parser.parse_args()\n\n\ndef main():\n # TODO: pass port via flags.\n PORT = 50001\n logger = logging.getLogger(\"master\")\n args = _parse_args()\n task_q = _make_task_queue(\n args.train_data_dir, args.record_per_task, args.num_epoch\n )\n model_module = load_user_model(args.model_file)\n model_inst = model_module.model\n build_model(model_inst, model_module.feature_columns())\n optimizer = model_module.optimizer()\n\n server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))\n master_pb2_grpc.add_MasterServicer_to_server(\n MasterServicer(\n logger,\n args.grads_to_wait,\n args.minibatch_size,\n optimizer,\n task_q,\n init_var=model_inst.trainable_variables,\n ),\n server,\n )\n server.add_insecure_port(\"[::]:{}\".format(PORT))\n server.start()\n logger.warning(\"Server started at port: %d\", PORT)\n\n if args.num_worker:\n master_addr = \"%s:%d\" % (os.getenv(\"MY_POD_IP\", \"localhost\"), PORT)\n worker_command = [\"python\"]\n worker_args = [\n \"-m\",\n \"elasticdl.worker.main\",\n \"--model_file\",\n args.model_file,\n \"--master_addr\",\n master_addr,\n \"--codec-type\",\n 
args.codec_type\n ]\n\n worker_manager = WorkerManager(\n job_name=args.job_name,\n worker_image=args.worker_image,\n command=worker_command,\n args=worker_args,\n namespace=\"default\",\n num_worker=args.num_worker,\n )\n worker_manager.start_workers(restart_policy=\"Never\")\n\n try:\n while True:\n if task_q.finished():\n break\n time.sleep(30)\n except KeyboardInterrupt:\n logger.warning(\"Server stopping\")\n\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n # wait for worker pod to be deleted\n max_check_num = 10\n for _ in range(max_check_num):\n time.sleep(3)\n counters = worker_manager.get_counters()\n if not counters:\n break\n server.stop(0)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig()\n main()\n", "path": "elasticdl/master/main.py"}]} | 1,845 | 88 |
gh_patches_debug_30780 | rasdani/github-patches | git_diff | instadeepai__Mava-767 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Single Process Example Fails
### Describe the bug
Error:
```
Traceback (most recent call last):
File "/home/kaleab/Documents/Code/Mava/Mava/examples/jax/debugging/simple_spread/feedforward/decentralised/run_ippo_single_process.py", line 110, in <module>
app.run(main)
File "/home/kaleab/anaconda3/envs/mava/lib/python3.9/site-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/home/kaleab/anaconda3/envs/mava/lib/python3.9/site-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "/home/kaleab/Documents/Code/Mava/Mava/examples/jax/debugging/simple_spread/feedforward/decentralised/run_ippo_single_process.py", line 106, in main
system.launch()
File "/home/kaleab/Documents/Code/Mava/Mava/mava/systems/jax/system.py", line 185, in launch
self._builder.launch()
File "/home/kaleab/Documents/Code/Mava/Mava/mava/systems/jax/builder.py", line 244, in launch
self.on_building_launch()
File "/home/kaleab/Documents/Code/Mava/Mava/mava/callbacks/builder_mixin.py", line 190, in on_building_launch
callback.on_building_launch(self)
File "/home/kaleab/Documents/Code/Mava/Mava/mava/components/jax/building/distributor.py", line 147, in on_building_launch
builder.store.program.launch()
File "/home/kaleab/Documents/Code/Mava/Mava/mava/systems/jax/launcher.py", line 197, in launch
queue_threshold = data_server.server_info()["trainer"].max_size
KeyError: 'trainer'
[reverb/cc/platform/default/server.cc:84] Shutting down replay server
```
### To Reproduce
Steps to reproduce the behavior:
1. `python examples/jax/debugging/simple_spread/feedforward/decentralised/run_ippo_single_process.py`
### Expected behavior
Example to run.
### Context (Environment)
- Mava `dev`.
### Additional context
-
### Possible Solution
-
</issue>
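For orientation, the KeyError at the bottom of the traceback comes from indexing the dict returned by `reverb.Client.server_info()` (keyed by table name) with a name the server never registered. A small, hedged sketch of a more defensive lookup — the `server_info()` call is the real reverb client API, while the prefix-matching policy and helper name are only an illustration:
```python
def get_table_info(data_server, table_name):
    """Look up a reverb TableInfo, tolerating suffixed names such as 'trainer_0'."""
    info = data_server.server_info()  # dict: table name -> TableInfo
    if table_name in info:
        return info[table_name]
    # Fall back to the first table whose name starts with the requested prefix.
    matches = [name for name in info if name.startswith(table_name)]
    if matches:
        return info[matches[0]]
    raise KeyError(f"No table matching '{table_name}'; available tables: {list(info)}")

# Example use in the single-process loop sketched here:
# queue_threshold = get_table_info(data_server, "trainer").max_size
```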
<code>
[start of mava/systems/jax/launcher.py]
1 # python3
2 # Copyright 2021 InstaDeep Ltd. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """General launcher for systems"""
17 from typing import Any, Dict, List, Optional, Union
18
19 import launchpad as lp
20 import reverb
21
22 from mava.utils import lp_utils
23 from mava.utils.builder_utils import copy_node_fn
24
25
26 class NodeType:
27 """Specify launchpad node types that systems can use."""
28
29 reverb = lp.ReverbNode
30 courier = lp.CourierNode
31
32
33 class Launcher:
34 """This mava launcher can be used to launch multi-node systems using either single \
35 or distributed computation."""
36
37 def __init__(
38 self,
39 multi_process: bool,
40 nodes_on_gpu: List = [],
41 single_process_trainer_period: int = 1,
42 single_process_evaluator_period: int = 10,
43 single_process_max_episodes: Optional[int] = None,
44 name: str = "System",
45 terminal: str = "current_terminal",
46 is_test: Optional[bool] = False,
47 ) -> None:
48 """Initialise the launcher.
49
50 If multi-process, set up the launchpad program.
51 Otherwise, create a dictionary for the nodes in the system.
52
53 Args:
54 multi_process : whether to use launchpad to run nodes on separate processes.
55 nodes_on_gpu : which nodes should be run on the GPU.
56 single_process_trainer_period : number of episodes between single process
57 trainer steps.
58 single_process_evaluator_period : num episodes between single process
59 evaluator steps.
60 single_process_max_episodes: maximum number of episodes to run
61 before termination.
62 name : launchpad program name.
63 terminal : terminal for launchpad processes to be shown on.
64 is_test : whether to set testing launchpad launch_type.
65 """
66 self._is_test = is_test
67 self._multi_process = multi_process
68 self._name = name
69 self._single_process_trainer_period = single_process_trainer_period
70 self._single_process_evaluator_period = single_process_evaluator_period
71 self._single_process_max_episodes = single_process_max_episodes
72 self._terminal = terminal
73 if multi_process:
74 self._program = lp.Program(name=name)
75 self._nodes_on_gpu = nodes_on_gpu
76 else:
77 self._nodes: List = []
78 self._node_dict: Dict = {
79 "data_server": None,
80 "parameter_server": None,
81 "executor": None,
82 "evaluator": None,
83 "trainer": None,
84 }
85
86 def add(
87 self,
88 node_fn: Any,
89 arguments: Any = [],
90 node_type: Union[lp.ReverbNode, lp.CourierNode] = NodeType.courier,
91 name: str = "Node",
92 ) -> Any:
93 """Add a node to the system.
94
95 If multi-processing, add a node to the existing launchpad program,
96 grouped under the given name.
97 This means that when multi-processing,
98 you can have multiple nodes of the same name (e.g. executor).
99 If system is single-process, only one node per name is allowed in the system.
100
101 Args:
102 node_fn : Function returning the system process that will run on the node.
103 arguments : Arguments used when initialising the system process.
104 node_type : Type of launchpad node to use.
105 name : Node name (e.g. executor).
106
107 Raises:
108 ValueError: if single-process and node name is not supported.
109 ValueError: if single-process and trying to init a node more than once.
110
111 Returns:
112 The system process or launchpad node.
113 """
114 # Create a list of arguments
115 if type(arguments) is not list:
116 arguments = [arguments]
117
118 if self._multi_process:
119 with self._program.group(name):
120 if self._is_test:
121 node_fn = copy_node_fn(node_fn)
122 node = self._program.add_node(node_type(node_fn, *arguments))
123 return node
124 else:
125 if name not in self._node_dict:
126 raise ValueError(
127 f"{name} is not a valid node name."
128 + "Single process currently only supports "
129 + "nodes named: {list(self._node_dict.keys())}"
130 )
131 elif self._node_dict[name] is not None:
132 raise ValueError(
133 f"Node named {name} initialised more than once."
134 + "Single process currently only supports one node per type."
135 )
136
137 node_fn = copy_node_fn(node_fn)
138 process = node_fn(*arguments)
139 if node_type == lp.ReverbNode:
140 # Assigning server to self to keep it alive.
141 self._replay_server = reverb.Server(process, port=None)
142 process = reverb.Client(f"localhost:{self._replay_server.port}")
143 self._nodes.append(process)
144 self._node_dict[name] = process
145 return process
146
147 def get_nodes(self) -> List[Any]:
148 """Get the nodes of a single-process system.
149
150 Raises:
151 ValueError: if system is multi-process.
152
153 Returns:
154 System nodes.
155 """
156 if self._multi_process:
157 raise ValueError("Get nodes only implemented for single process setups.")
158
159 return self._nodes
160
161 def launch(self) -> None:
162 """Launch the launchpad program or start the single-process system loop.
163
164 Returns:
165 None.
166 """
167 if self._multi_process:
168 if self._is_test:
169 launch_type = lp.LaunchType.TEST_MULTI_THREADING
170 else:
171 launch_type = lp.LaunchType.LOCAL_MULTI_PROCESSING
172
173 local_resources = lp_utils.to_device(
174 program_nodes=self._program.groups.keys(),
175 nodes_on_gpu=self._nodes_on_gpu,
176 )
177
178 lp.launch(
179 self._program,
180 launch_type=launch_type,
181 terminal=self._terminal,
182 local_resources=local_resources,
183 )
184
185 else:
186 episode = 1
187 step = 1
188 executor_steps = 0
189
190 data_server = self._node_dict["data_server"]
191 _ = self._node_dict["parameter_server"]
192 executor = self._node_dict["executor"]
193 evaluator = self._node_dict["evaluator"]
194 trainer = self._node_dict["trainer"]
195
196 # getting the maximum queue size
197 queue_threshold = data_server.server_info()["trainer"].max_size
198
199 while (
200 self._single_process_max_episodes is None
201 or episode <= self._single_process_max_episodes
202 ):
203 # if the queue is too full we skip the executor to ensure that the
204 # executor won't hang when trying to push experience
205 if data_server.server_info()["trainer"].current_size < int(
206 queue_threshold * 0.75
207 ):
208 executor_stats = executor.run_episode_and_log()
209 executor_steps += executor_stats["episode_length"]
210
211 print(f"Episode {episode} completed.")
212 episode += 1
213
214 # if the queue has less than sample_batch_size samples in it we skip
215 # the trainer to ensure that the trainer won't hang
216 if (
217 data_server.server_info()["trainer"].current_size
218 >= trainer.store.global_config.sample_batch_size
219 and step % self._single_process_trainer_period == 0
220 ):
221 _ = trainer.step() # logging done in trainer
222 print("Performed trainer step.")
223 if step % self._single_process_evaluator_period == 0:
224 _ = evaluator.run_episode_and_log()
225 print("Performed evaluator run.")
226
227 step += 1
228
[end of mava/systems/jax/launcher.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mava/systems/jax/launcher.py b/mava/systems/jax/launcher.py
--- a/mava/systems/jax/launcher.py
+++ b/mava/systems/jax/launcher.py
@@ -194,7 +194,7 @@
trainer = self._node_dict["trainer"]
# getting the maximum queue size
- queue_threshold = data_server.server_info()["trainer"].max_size
+ queue_threshold = data_server.server_info()["trainer_0"].max_size
while (
self._single_process_max_episodes is None
@@ -202,7 +202,7 @@
):
# if the queue is too full we skip the executor to ensure that the
# executor won't hang when trying to push experience
- if data_server.server_info()["trainer"].current_size < int(
+ if data_server.server_info()["trainer_0"].current_size < int(
queue_threshold * 0.75
):
executor_stats = executor.run_episode_and_log()
@@ -214,7 +214,7 @@
# if the queue has less than sample_batch_size samples in it we skip
# the trainer to ensure that the trainer won't hang
if (
- data_server.server_info()["trainer"].current_size
+ data_server.server_info()["trainer_0"].current_size
>= trainer.store.global_config.sample_batch_size
and step % self._single_process_trainer_period == 0
):
| {"golden_diff": "diff --git a/mava/systems/jax/launcher.py b/mava/systems/jax/launcher.py\n--- a/mava/systems/jax/launcher.py\n+++ b/mava/systems/jax/launcher.py\n@@ -194,7 +194,7 @@\n trainer = self._node_dict[\"trainer\"]\n \n # getting the maximum queue size\n- queue_threshold = data_server.server_info()[\"trainer\"].max_size\n+ queue_threshold = data_server.server_info()[\"trainer_0\"].max_size\n \n while (\n self._single_process_max_episodes is None\n@@ -202,7 +202,7 @@\n ):\n # if the queue is too full we skip the executor to ensure that the\n # executor won't hang when trying to push experience\n- if data_server.server_info()[\"trainer\"].current_size < int(\n+ if data_server.server_info()[\"trainer_0\"].current_size < int(\n queue_threshold * 0.75\n ):\n executor_stats = executor.run_episode_and_log()\n@@ -214,7 +214,7 @@\n # if the queue has less than sample_batch_size samples in it we skip\n # the trainer to ensure that the trainer won't hang\n if (\n- data_server.server_info()[\"trainer\"].current_size\n+ data_server.server_info()[\"trainer_0\"].current_size\n >= trainer.store.global_config.sample_batch_size\n and step % self._single_process_trainer_period == 0\n ):\n", "issue": "[BUG] Single Process Example Fails\n### Describe the bug\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/kaleab/Documents/Code/Mava/Mava/examples/jax/debugging/simple_spread/feedforward/decentralised/run_ippo_single_process.py\", line 110, in <module>\r\n app.run(main)\r\n File \"/home/kaleab/anaconda3/envs/mava/lib/python3.9/site-packages/absl/app.py\", line 308, in run\r\n _run_main(main, args)\r\n File \"/home/kaleab/anaconda3/envs/mava/lib/python3.9/site-packages/absl/app.py\", line 254, in _run_main\r\n sys.exit(main(argv))\r\n File \"/home/kaleab/Documents/Code/Mava/Mava/examples/jax/debugging/simple_spread/feedforward/decentralised/run_ippo_single_process.py\", line 106, in main\r\n system.launch()\r\n File \"/home/kaleab/Documents/Code/Mava/Mava/mava/systems/jax/system.py\", line 185, in launch\r\n self._builder.launch()\r\n File \"/home/kaleab/Documents/Code/Mava/Mava/mava/systems/jax/builder.py\", line 244, in launch\r\n self.on_building_launch()\r\n File \"/home/kaleab/Documents/Code/Mava/Mava/mava/callbacks/builder_mixin.py\", line 190, in on_building_launch\r\n callback.on_building_launch(self)\r\n File \"/home/kaleab/Documents/Code/Mava/Mava/mava/components/jax/building/distributor.py\", line 147, in on_building_launch\r\n builder.store.program.launch()\r\n File \"/home/kaleab/Documents/Code/Mava/Mava/mava/systems/jax/launcher.py\", line 197, in launch\r\n queue_threshold = data_server.server_info()[\"trainer\"].max_size\r\nKeyError: 'trainer'\r\n[reverb/cc/platform/default/server.cc:84] Shutting down replay server\r\n```\r\n\r\n### To Reproduce\r\nSteps to reproduce the behavior:\r\n1. `python examples/jax/debugging/simple_spread/feedforward/decentralised/run_ippo_single_process.py`\r\n\r\n\r\n### Expected behavior\r\nExample to run. \r\n\r\n### Context (Environment)\r\n - Mava `dev`.\r\n\r\n### Additional context\r\n-\r\n\r\n### Possible Solution\r\n-\r\n\n", "before_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"General launcher for systems\"\"\"\nfrom typing import Any, Dict, List, Optional, Union\n\nimport launchpad as lp\nimport reverb\n\nfrom mava.utils import lp_utils\nfrom mava.utils.builder_utils import copy_node_fn\n\n\nclass NodeType:\n \"\"\"Specify launchpad node types that systems can use.\"\"\"\n\n reverb = lp.ReverbNode\n courier = lp.CourierNode\n\n\nclass Launcher:\n \"\"\"This mava launcher can be used to launch multi-node systems using either single \\\n or distributed computation.\"\"\"\n\n def __init__(\n self,\n multi_process: bool,\n nodes_on_gpu: List = [],\n single_process_trainer_period: int = 1,\n single_process_evaluator_period: int = 10,\n single_process_max_episodes: Optional[int] = None,\n name: str = \"System\",\n terminal: str = \"current_terminal\",\n is_test: Optional[bool] = False,\n ) -> None:\n \"\"\"Initialise the launcher.\n\n If multi-process, set up the launchpad program.\n Otherwise, create a dictionary for the nodes in the system.\n\n Args:\n multi_process : whether to use launchpad to run nodes on separate processes.\n nodes_on_gpu : which nodes should be run on the GPU.\n single_process_trainer_period : number of episodes between single process\n trainer steps.\n single_process_evaluator_period : num episodes between single process\n evaluator steps.\n single_process_max_episodes: maximum number of episodes to run\n before termination.\n name : launchpad program name.\n terminal : terminal for launchpad processes to be shown on.\n is_test : whether to set testing launchpad launch_type.\n \"\"\"\n self._is_test = is_test\n self._multi_process = multi_process\n self._name = name\n self._single_process_trainer_period = single_process_trainer_period\n self._single_process_evaluator_period = single_process_evaluator_period\n self._single_process_max_episodes = single_process_max_episodes\n self._terminal = terminal\n if multi_process:\n self._program = lp.Program(name=name)\n self._nodes_on_gpu = nodes_on_gpu\n else:\n self._nodes: List = []\n self._node_dict: Dict = {\n \"data_server\": None,\n \"parameter_server\": None,\n \"executor\": None,\n \"evaluator\": None,\n \"trainer\": None,\n }\n\n def add(\n self,\n node_fn: Any,\n arguments: Any = [],\n node_type: Union[lp.ReverbNode, lp.CourierNode] = NodeType.courier,\n name: str = \"Node\",\n ) -> Any:\n \"\"\"Add a node to the system.\n\n If multi-processing, add a node to the existing launchpad program,\n grouped under the given name.\n This means that when multi-processing,\n you can have multiple nodes of the same name (e.g. executor).\n If system is single-process, only one node per name is allowed in the system.\n\n Args:\n node_fn : Function returning the system process that will run on the node.\n arguments : Arguments used when initialising the system process.\n node_type : Type of launchpad node to use.\n name : Node name (e.g. 
executor).\n\n Raises:\n ValueError: if single-process and node name is not supported.\n ValueError: if single-process and trying to init a node more than once.\n\n Returns:\n The system process or launchpad node.\n \"\"\"\n # Create a list of arguments\n if type(arguments) is not list:\n arguments = [arguments]\n\n if self._multi_process:\n with self._program.group(name):\n if self._is_test:\n node_fn = copy_node_fn(node_fn)\n node = self._program.add_node(node_type(node_fn, *arguments))\n return node\n else:\n if name not in self._node_dict:\n raise ValueError(\n f\"{name} is not a valid node name.\"\n + \"Single process currently only supports \"\n + \"nodes named: {list(self._node_dict.keys())}\"\n )\n elif self._node_dict[name] is not None:\n raise ValueError(\n f\"Node named {name} initialised more than once.\"\n + \"Single process currently only supports one node per type.\"\n )\n\n node_fn = copy_node_fn(node_fn)\n process = node_fn(*arguments)\n if node_type == lp.ReverbNode:\n # Assigning server to self to keep it alive.\n self._replay_server = reverb.Server(process, port=None)\n process = reverb.Client(f\"localhost:{self._replay_server.port}\")\n self._nodes.append(process)\n self._node_dict[name] = process\n return process\n\n def get_nodes(self) -> List[Any]:\n \"\"\"Get the nodes of a single-process system.\n\n Raises:\n ValueError: if system is multi-process.\n\n Returns:\n System nodes.\n \"\"\"\n if self._multi_process:\n raise ValueError(\"Get nodes only implemented for single process setups.\")\n\n return self._nodes\n\n def launch(self) -> None:\n \"\"\"Launch the launchpad program or start the single-process system loop.\n\n Returns:\n None.\n \"\"\"\n if self._multi_process:\n if self._is_test:\n launch_type = lp.LaunchType.TEST_MULTI_THREADING\n else:\n launch_type = lp.LaunchType.LOCAL_MULTI_PROCESSING\n\n local_resources = lp_utils.to_device(\n program_nodes=self._program.groups.keys(),\n nodes_on_gpu=self._nodes_on_gpu,\n )\n\n lp.launch(\n self._program,\n launch_type=launch_type,\n terminal=self._terminal,\n local_resources=local_resources,\n )\n\n else:\n episode = 1\n step = 1\n executor_steps = 0\n\n data_server = self._node_dict[\"data_server\"]\n _ = self._node_dict[\"parameter_server\"]\n executor = self._node_dict[\"executor\"]\n evaluator = self._node_dict[\"evaluator\"]\n trainer = self._node_dict[\"trainer\"]\n\n # getting the maximum queue size\n queue_threshold = data_server.server_info()[\"trainer\"].max_size\n\n while (\n self._single_process_max_episodes is None\n or episode <= self._single_process_max_episodes\n ):\n # if the queue is too full we skip the executor to ensure that the\n # executor won't hang when trying to push experience\n if data_server.server_info()[\"trainer\"].current_size < int(\n queue_threshold * 0.75\n ):\n executor_stats = executor.run_episode_and_log()\n executor_steps += executor_stats[\"episode_length\"]\n\n print(f\"Episode {episode} completed.\")\n episode += 1\n\n # if the queue has less than sample_batch_size samples in it we skip\n # the trainer to ensure that the trainer won't hang\n if (\n data_server.server_info()[\"trainer\"].current_size\n >= trainer.store.global_config.sample_batch_size\n and step % self._single_process_trainer_period == 0\n ):\n _ = trainer.step() # logging done in trainer\n print(\"Performed trainer step.\")\n if step % self._single_process_evaluator_period == 0:\n _ = evaluator.run_episode_and_log()\n print(\"Performed evaluator run.\")\n\n step += 1\n", "path": 
"mava/systems/jax/launcher.py"}]} | 3,358 | 325 |
gh_patches_debug_17319 | rasdani/github-patches | git_diff | elastic__elasticsearch-py-210 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for custom authentication objects for requests module
Hi,
Several transport classes are available, one of them is "requests".
Requests supports basic-authentication but far more than that ([0](http://docs.python-requests.org/en/latest/user/advanced/#custom-authentication)). In order to support this a few lines would need to be changed to allow for providing an authentication object ([1](https://github.com/elastic/elasticsearch-py/compare/master...sim0nx:requests_custom_authentication)).
I have the code ready ([1](https://github.com/elastic/elasticsearch-py/compare/master...sim0nx:requests_custom_authentication)) for this and am actively using it.
Would you be willing to accept this contribution ?
</issue>
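For readers who have not used the feature being referenced: requests accepts any object deriving from `requests.auth.AuthBase` as the `auth` value, which is exactly what the proposed change would forward to `session.auth`. A minimal sketch — the token header scheme below is illustrative, not something elasticsearch-py defines:
```python
import requests
from requests.auth import AuthBase

class TokenAuth(AuthBase):
    """Attach a custom Authorization header to every outgoing request."""
    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        # requests invokes this with the prepared request; mutate and return it.
        r.headers["Authorization"] = "Token " + self.token
        return r

session = requests.session()
session.auth = TokenAuth("s3cr3t")  # what RequestsHttpConnection would forward unchanged
# session.get("http://localhost:9200/")  # every request now carries the header
```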
<code>
[start of elasticsearch/connection/http_requests.py]
1 import time
2 import warnings
3 try:
4 import requests
5 REQUESTS_AVAILABLE = True
6 except ImportError:
7 REQUESTS_AVAILABLE = False
8
9 from .base import Connection
10 from ..exceptions import ConnectionError, ImproperlyConfigured, ConnectionTimeout, SSLError
11 from ..compat import urlencode
12
13 class RequestsHttpConnection(Connection):
14 """
15 Connection using the `requests` library.
16
17 :arg http_auth: optional http auth information as either ':' separated
18 string or a tuple
19 :arg use_ssl: use ssl for the connection if `True`
20 :arg verify_certs: whether to verify SSL certificates
21 :arg ca_certs: optional path to CA bundle. By default standard requests'
22 bundle will be used.
23 :arg client_cert: path to the file containing the private key and the
24 certificate
25 """
26 def __init__(self, host='localhost', port=9200, http_auth=None,
27 use_ssl=False, verify_certs=False, ca_certs=None, client_cert=None,
28 **kwargs):
29 if not REQUESTS_AVAILABLE:
30 raise ImproperlyConfigured("Please install requests to use RequestsHttpConnection.")
31
32 super(RequestsHttpConnection, self).__init__(host= host, port=port, **kwargs)
33 self.session = requests.session()
34 if http_auth is not None:
35 if not isinstance(http_auth, (tuple, list)):
36 http_auth = http_auth.split(':', 1)
37 http_auth = tuple(http_auth)
38 self.session.auth = http_auth
39 self.base_url = 'http%s://%s:%d%s' % (
40 's' if use_ssl else '',
41 host, port, self.url_prefix
42 )
43 self.session.verify = verify_certs
44 self.session.cert = client_cert
45 if ca_certs:
46 if not verify_certs:
47 raise ImproperlyConfigured("You cannot pass CA certificates when verify SSL is off.")
48 self.session.verify = ca_certs
49
50 if use_ssl and not verify_certs:
51 warnings.warn(
52 'Connecting to %s using SSL with verify_certs=False is insecure.' % self.base_url)
53
54 def perform_request(self, method, url, params=None, body=None, timeout=None, ignore=()):
55 url = self.base_url + url
56 if params:
57 url = '%s?%s' % (url, urlencode(params or {}))
58
59 start = time.time()
60 try:
61 response = self.session.request(method, url, data=body, timeout=timeout or self.timeout)
62 duration = time.time() - start
63 raw_data = response.text
64 except requests.exceptions.SSLError as e:
65 self.log_request_fail(method, url, body, time.time() - start, exception=e)
66 raise SSLError('N/A', str(e), e)
67 except requests.Timeout as e:
68 self.log_request_fail(method, url, body, time.time() - start, exception=e)
69 raise ConnectionTimeout('TIMEOUT', str(e), e)
70 except requests.ConnectionError as e:
71 self.log_request_fail(method, url, body, time.time() - start, exception=e)
72 raise ConnectionError('N/A', str(e), e)
73
74 # raise errors based on http status codes, let the client handle those if needed
75 if not (200 <= response.status_code < 300) and response.status_code not in ignore:
76 self.log_request_fail(method, url, body, duration, response.status_code)
77 self._raise_error(response.status_code, raw_data)
78
79 self.log_request_success(method, url, response.request.path_url, body, response.status_code, raw_data, duration)
80
81 return response.status_code, response.headers, raw_data
82
[end of elasticsearch/connection/http_requests.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticsearch/connection/http_requests.py b/elasticsearch/connection/http_requests.py
--- a/elasticsearch/connection/http_requests.py
+++ b/elasticsearch/connection/http_requests.py
@@ -1,3 +1,4 @@
+import six
import time
import warnings
try:
@@ -32,8 +33,10 @@
super(RequestsHttpConnection, self).__init__(host= host, port=port, **kwargs)
self.session = requests.session()
if http_auth is not None:
- if not isinstance(http_auth, (tuple, list)):
- http_auth = http_auth.split(':', 1)
+ if isinstance(http_auth, (tuple, list)):
+ http_auth = tuple(http_auth)
+ elif isinstance(http_auth, six.string_types):
+ http_auth = tuple(http_auth.split(':', 1))
http_auth = tuple(http_auth)
self.session.auth = http_auth
self.base_url = 'http%s://%s:%d%s' % (
| {"golden_diff": "diff --git a/elasticsearch/connection/http_requests.py b/elasticsearch/connection/http_requests.py\n--- a/elasticsearch/connection/http_requests.py\n+++ b/elasticsearch/connection/http_requests.py\n@@ -1,3 +1,4 @@\n+import six\n import time\n import warnings\n try:\n@@ -32,8 +33,10 @@\n super(RequestsHttpConnection, self).__init__(host= host, port=port, **kwargs)\n self.session = requests.session()\n if http_auth is not None:\n- if not isinstance(http_auth, (tuple, list)):\n- http_auth = http_auth.split(':', 1)\n+ if isinstance(http_auth, (tuple, list)):\n+ http_auth = tuple(http_auth)\n+ elif isinstance(http_auth, six.string_types):\n+ http_auth = tuple(http_auth.split(':', 1))\n http_auth = tuple(http_auth)\n self.session.auth = http_auth\n self.base_url = 'http%s://%s:%d%s' % (\n", "issue": "Support for custom authentication objects for requests module\nHi,\n\nSeveral transport classes are available, one of them is \"requests\".\nRequests supports basic-authentication but far more than that ([0](http://docs.python-requests.org/en/latest/user/advanced/#custom-authentication)). In order to support this a few lines would need to be changed to allow for providing an authentication object ([1](https://github.com/elastic/elasticsearch-py/compare/master...sim0nx:requests_custom_authentication)).\n\nI have the code ready ([1](https://github.com/elastic/elasticsearch-py/compare/master...sim0nx:requests_custom_authentication)) for this and am actively using it.\n\nWould you be willing to accept this contribution ?\n\n", "before_files": [{"content": "import time\nimport warnings\ntry:\n import requests\n REQUESTS_AVAILABLE = True\nexcept ImportError:\n REQUESTS_AVAILABLE = False\n\nfrom .base import Connection\nfrom ..exceptions import ConnectionError, ImproperlyConfigured, ConnectionTimeout, SSLError\nfrom ..compat import urlencode\n\nclass RequestsHttpConnection(Connection):\n \"\"\"\n Connection using the `requests` library.\n\n :arg http_auth: optional http auth information as either ':' separated\n string or a tuple\n :arg use_ssl: use ssl for the connection if `True`\n :arg verify_certs: whether to verify SSL certificates\n :arg ca_certs: optional path to CA bundle. By default standard requests'\n bundle will be used.\n :arg client_cert: path to the file containing the private key and the\n certificate\n \"\"\"\n def __init__(self, host='localhost', port=9200, http_auth=None,\n use_ssl=False, verify_certs=False, ca_certs=None, client_cert=None,\n **kwargs):\n if not REQUESTS_AVAILABLE:\n raise ImproperlyConfigured(\"Please install requests to use RequestsHttpConnection.\")\n\n super(RequestsHttpConnection, self).__init__(host= host, port=port, **kwargs)\n self.session = requests.session()\n if http_auth is not None:\n if not isinstance(http_auth, (tuple, list)):\n http_auth = http_auth.split(':', 1)\n http_auth = tuple(http_auth)\n self.session.auth = http_auth\n self.base_url = 'http%s://%s:%d%s' % (\n 's' if use_ssl else '',\n host, port, self.url_prefix\n )\n self.session.verify = verify_certs\n self.session.cert = client_cert\n if ca_certs:\n if not verify_certs:\n raise ImproperlyConfigured(\"You cannot pass CA certificates when verify SSL is off.\")\n self.session.verify = ca_certs\n\n if use_ssl and not verify_certs:\n warnings.warn(\n 'Connecting to %s using SSL with verify_certs=False is insecure.' 
% self.base_url)\n\n def perform_request(self, method, url, params=None, body=None, timeout=None, ignore=()):\n url = self.base_url + url\n if params:\n url = '%s?%s' % (url, urlencode(params or {}))\n\n start = time.time()\n try:\n response = self.session.request(method, url, data=body, timeout=timeout or self.timeout)\n duration = time.time() - start\n raw_data = response.text\n except requests.exceptions.SSLError as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise SSLError('N/A', str(e), e)\n except requests.Timeout as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise ConnectionTimeout('TIMEOUT', str(e), e)\n except requests.ConnectionError as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise ConnectionError('N/A', str(e), e)\n\n # raise errors based on http status codes, let the client handle those if needed\n if not (200 <= response.status_code < 300) and response.status_code not in ignore:\n self.log_request_fail(method, url, body, duration, response.status_code)\n self._raise_error(response.status_code, raw_data)\n\n self.log_request_success(method, url, response.request.path_url, body, response.status_code, raw_data, duration)\n\n return response.status_code, response.headers, raw_data\n", "path": "elasticsearch/connection/http_requests.py"}]} | 1,628 | 210 |
gh_patches_debug_40840 | rasdani/github-patches | git_diff | freqtrade__freqtrade-5443 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fiat-convert: coingecko symbols are not unique
## Describe your environment
* Operating system: Debian testing
* Python Version: 3.9.1 (`python -V`)
* CCXT version: doesn't matter (`pip freeze | grep ccxt`)
* Freqtrade Version: develop (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)
## Describe the problem:
While working on #5067 I've discovered that the coingecko coin list has duplicated symbols
```
In [7]: from pycoingecko import CoinGeckoAPI
...: cg = CoinGeckoAPI()
In [8]: a=cg.get_coins_list()
In [9]: b = [x['symbol'] for x in a]
In [10]: len(set([x for x in b if b.count(x) > 1]))
Out[10]: 747
```
https://github.com/freqtrade/freqtrade/blob/a0893b291a099b928ced567b9ef81f2d2795717a/freqtrade/rpc/fiat_convert.py#L54 also wrongfully assumes that the symbols are unique.
I've added a hand-hacked workaround for the symbols from binance in #5067 - but a more general solution might be needed.
</issue>
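Since one symbol can map to several coingecko ids, any robust fix has to resolve the symbol against the full coin listing and treat multiple hits as ambiguous. A condensed sketch of that lookup, mirroring the approach the patch below ends up taking (the function name and warning text are illustrative):
```python
from pycoingecko import CoinGeckoAPI

def find_gekko_id(crypto_symbol: str):
    """Return the coingecko id for a symbol, or None if it is missing or ambiguous."""
    coinlistings = CoinGeckoAPI().get_coins_list()  # [{'id': ..., 'symbol': ..., 'name': ...}, ...]
    found = [coin for coin in coinlistings if coin['symbol'] == crypto_symbol]
    if len(found) == 1:
        return found[0]['id']
    if len(found) > 1:
        print(f"Multiple coingecko mappings for {crypto_symbol}: {[coin['id'] for coin in found]}")
    return None
```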
<code>
[start of freqtrade/rpc/fiat_convert.py]
1 """
2 Module that define classes to convert Crypto-currency to FIAT
3 e.g BTC to USD
4 """
5
6 import datetime
7 import logging
8 from typing import Dict
9
10 from cachetools.ttl import TTLCache
11 from pycoingecko import CoinGeckoAPI
12 from requests.exceptions import RequestException
13
14 from freqtrade.constants import SUPPORTED_FIAT
15
16
17 logger = logging.getLogger(__name__)
18
19
20 class CryptoToFiatConverter:
21 """
22 Main class to initiate Crypto to FIAT.
23 This object contains a list of pair Crypto, FIAT
24 This object is also a Singleton
25 """
26 __instance = None
27 _coingekko: CoinGeckoAPI = None
28
29 _cryptomap: Dict = {}
30 _backoff: float = 0.0
31
32 def __new__(cls):
33 """
34 This class is a singleton - cannot be instantiated twice.
35 """
36 if CryptoToFiatConverter.__instance is None:
37 CryptoToFiatConverter.__instance = object.__new__(cls)
38 try:
39 CryptoToFiatConverter._coingekko = CoinGeckoAPI()
40 except BaseException:
41 CryptoToFiatConverter._coingekko = None
42 return CryptoToFiatConverter.__instance
43
44 def __init__(self) -> None:
45 # Timeout: 6h
46 self._pair_price: TTLCache = TTLCache(maxsize=500, ttl=6 * 60 * 60)
47
48 self._load_cryptomap()
49
50 def _load_cryptomap(self) -> None:
51 try:
52 coinlistings = self._coingekko.get_coins_list()
53 # Create mapping table from symbol to coingekko_id
54 self._cryptomap = {x['symbol']: x['id'] for x in coinlistings}
55 except RequestException as request_exception:
56 if "429" in str(request_exception):
57 logger.warning(
58 "Too many requests for Coingecko API, backing off and trying again later.")
59 # Set backoff timestamp to 60 seconds in the future
60 self._backoff = datetime.datetime.now().timestamp() + 60
61 return
62 # If the request is not a 429 error we want to raise the normal error
63 logger.error(
64 "Could not load FIAT Cryptocurrency map for the following problem: {}".format(
65 request_exception
66 )
67 )
68 except (Exception) as exception:
69 logger.error(
70 f"Could not load FIAT Cryptocurrency map for the following problem: {exception}")
71
72 def convert_amount(self, crypto_amount: float, crypto_symbol: str, fiat_symbol: str) -> float:
73 """
74 Convert an amount of crypto-currency to fiat
75 :param crypto_amount: amount of crypto-currency to convert
76 :param crypto_symbol: crypto-currency used
77 :param fiat_symbol: fiat to convert to
78 :return: float, value in fiat of the crypto-currency amount
79 """
80 if crypto_symbol == fiat_symbol:
81 return float(crypto_amount)
82 price = self.get_price(crypto_symbol=crypto_symbol, fiat_symbol=fiat_symbol)
83 return float(crypto_amount) * float(price)
84
85 def get_price(self, crypto_symbol: str, fiat_symbol: str) -> float:
86 """
87 Return the price of the Crypto-currency in Fiat
88 :param crypto_symbol: Crypto-currency you want to convert (e.g BTC)
89 :param fiat_symbol: FIAT currency you want to convert to (e.g USD)
90 :return: Price in FIAT
91 """
92 crypto_symbol = crypto_symbol.lower()
93 fiat_symbol = fiat_symbol.lower()
94 inverse = False
95
96 if crypto_symbol == 'usd':
97 # usd corresponds to "uniswap-state-dollar" for coingecko.
98 # We'll therefore need to "swap" the currencies
99 logger.info(f"reversing Rates {crypto_symbol}, {fiat_symbol}")
100 crypto_symbol = fiat_symbol
101 fiat_symbol = 'usd'
102 inverse = True
103
104 symbol = f"{crypto_symbol}/{fiat_symbol}"
105 # Check if the fiat conversion you want is supported
106 if not self._is_supported_fiat(fiat=fiat_symbol):
107 raise ValueError(f'The fiat {fiat_symbol} is not supported.')
108
109 price = self._pair_price.get(symbol, None)
110
111 if not price:
112 price = self._find_price(
113 crypto_symbol=crypto_symbol,
114 fiat_symbol=fiat_symbol
115 )
116 if inverse and price != 0.0:
117 price = 1 / price
118 self._pair_price[symbol] = price
119
120 return price
121
122 def _is_supported_fiat(self, fiat: str) -> bool:
123 """
124 Check if the FIAT your want to convert to is supported
125 :param fiat: FIAT to check (e.g USD)
126 :return: bool, True supported, False not supported
127 """
128
129 return fiat.upper() in SUPPORTED_FIAT
130
131 def _find_price(self, crypto_symbol: str, fiat_symbol: str) -> float:
132 """
133 Call CoinGekko API to retrieve the price in the FIAT
134 :param crypto_symbol: Crypto-currency you want to convert (e.g btc)
135 :param fiat_symbol: FIAT currency you want to convert to (e.g usd)
136 :return: float, price of the crypto-currency in Fiat
137 """
138 # Check if the fiat conversion you want is supported
139 if not self._is_supported_fiat(fiat=fiat_symbol):
140 raise ValueError(f'The fiat {fiat_symbol} is not supported.')
141
142 # No need to convert if both crypto and fiat are the same
143 if crypto_symbol == fiat_symbol:
144 return 1.0
145
146 if self._cryptomap == {}:
147 if self._backoff <= datetime.datetime.now().timestamp():
148 self._load_cryptomap()
149 # return 0.0 if we still don't have data to check, no reason to proceed
150 if self._cryptomap == {}:
151 return 0.0
152 else:
153 return 0.0
154
155 if crypto_symbol not in self._cryptomap:
156 # return 0 for unsupported stake currencies (fiat-convert should not break the bot)
157 logger.warning("unsupported crypto-symbol %s - returning 0.0", crypto_symbol)
158 return 0.0
159
160 try:
161 _gekko_id = self._cryptomap[crypto_symbol]
162 return float(
163 self._coingekko.get_price(
164 ids=_gekko_id,
165 vs_currencies=fiat_symbol
166 )[_gekko_id][fiat_symbol]
167 )
168 except Exception as exception:
169 logger.error("Error in _find_price: %s", exception)
170 return 0.0
171
[end of freqtrade/rpc/fiat_convert.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/freqtrade/rpc/fiat_convert.py b/freqtrade/rpc/fiat_convert.py
--- a/freqtrade/rpc/fiat_convert.py
+++ b/freqtrade/rpc/fiat_convert.py
@@ -5,7 +5,7 @@
import datetime
import logging
-from typing import Dict
+from typing import Dict, List
from cachetools.ttl import TTLCache
from pycoingecko import CoinGeckoAPI
@@ -25,8 +25,7 @@
"""
__instance = None
_coingekko: CoinGeckoAPI = None
-
- _cryptomap: Dict = {}
+ _coinlistings: List[Dict] = []
_backoff: float = 0.0
def __new__(cls):
@@ -49,9 +48,8 @@
def _load_cryptomap(self) -> None:
try:
- coinlistings = self._coingekko.get_coins_list()
- # Create mapping table from symbol to coingekko_id
- self._cryptomap = {x['symbol']: x['id'] for x in coinlistings}
+ # Use list-comprehension to ensure we get a list.
+ self._coinlistings = [x for x in self._coingekko.get_coins_list()]
except RequestException as request_exception:
if "429" in str(request_exception):
logger.warning(
@@ -69,6 +67,24 @@
logger.error(
f"Could not load FIAT Cryptocurrency map for the following problem: {exception}")
+ def _get_gekko_id(self, crypto_symbol):
+ if not self._coinlistings:
+ if self._backoff <= datetime.datetime.now().timestamp():
+ self._load_cryptomap()
+ # Still not loaded.
+ if not self._coinlistings:
+ return None
+ else:
+ return None
+ found = [x for x in self._coinlistings if x['symbol'] == crypto_symbol]
+ if len(found) == 1:
+ return found[0]['id']
+
+ if len(found) > 0:
+ # Wrong!
+ logger.warning(f"Found multiple mappings in goingekko for {crypto_symbol}.")
+ return None
+
def convert_amount(self, crypto_amount: float, crypto_symbol: str, fiat_symbol: str) -> float:
"""
Convert an amount of crypto-currency to fiat
@@ -143,22 +159,14 @@
if crypto_symbol == fiat_symbol:
return 1.0
- if self._cryptomap == {}:
- if self._backoff <= datetime.datetime.now().timestamp():
- self._load_cryptomap()
- # return 0.0 if we still don't have data to check, no reason to proceed
- if self._cryptomap == {}:
- return 0.0
- else:
- return 0.0
+ _gekko_id = self._get_gekko_id(crypto_symbol)
- if crypto_symbol not in self._cryptomap:
+ if not _gekko_id:
# return 0 for unsupported stake currencies (fiat-convert should not break the bot)
logger.warning("unsupported crypto-symbol %s - returning 0.0", crypto_symbol)
return 0.0
try:
- _gekko_id = self._cryptomap[crypto_symbol]
return float(
self._coingekko.get_price(
ids=_gekko_id,
| {"golden_diff": "diff --git a/freqtrade/rpc/fiat_convert.py b/freqtrade/rpc/fiat_convert.py\n--- a/freqtrade/rpc/fiat_convert.py\n+++ b/freqtrade/rpc/fiat_convert.py\n@@ -5,7 +5,7 @@\n \n import datetime\n import logging\n-from typing import Dict\n+from typing import Dict, List\n \n from cachetools.ttl import TTLCache\n from pycoingecko import CoinGeckoAPI\n@@ -25,8 +25,7 @@\n \"\"\"\n __instance = None\n _coingekko: CoinGeckoAPI = None\n-\n- _cryptomap: Dict = {}\n+ _coinlistings: List[Dict] = []\n _backoff: float = 0.0\n \n def __new__(cls):\n@@ -49,9 +48,8 @@\n \n def _load_cryptomap(self) -> None:\n try:\n- coinlistings = self._coingekko.get_coins_list()\n- # Create mapping table from symbol to coingekko_id\n- self._cryptomap = {x['symbol']: x['id'] for x in coinlistings}\n+ # Use list-comprehension to ensure we get a list.\n+ self._coinlistings = [x for x in self._coingekko.get_coins_list()]\n except RequestException as request_exception:\n if \"429\" in str(request_exception):\n logger.warning(\n@@ -69,6 +67,24 @@\n logger.error(\n f\"Could not load FIAT Cryptocurrency map for the following problem: {exception}\")\n \n+ def _get_gekko_id(self, crypto_symbol):\n+ if not self._coinlistings:\n+ if self._backoff <= datetime.datetime.now().timestamp():\n+ self._load_cryptomap()\n+ # Still not loaded.\n+ if not self._coinlistings:\n+ return None\n+ else:\n+ return None\n+ found = [x for x in self._coinlistings if x['symbol'] == crypto_symbol]\n+ if len(found) == 1:\n+ return found[0]['id']\n+\n+ if len(found) > 0:\n+ # Wrong!\n+ logger.warning(f\"Found multiple mappings in goingekko for {crypto_symbol}.\")\n+ return None\n+\n def convert_amount(self, crypto_amount: float, crypto_symbol: str, fiat_symbol: str) -> float:\n \"\"\"\n Convert an amount of crypto-currency to fiat\n@@ -143,22 +159,14 @@\n if crypto_symbol == fiat_symbol:\n return 1.0\n \n- if self._cryptomap == {}:\n- if self._backoff <= datetime.datetime.now().timestamp():\n- self._load_cryptomap()\n- # return 0.0 if we still don't have data to check, no reason to proceed\n- if self._cryptomap == {}:\n- return 0.0\n- else:\n- return 0.0\n+ _gekko_id = self._get_gekko_id(crypto_symbol)\n \n- if crypto_symbol not in self._cryptomap:\n+ if not _gekko_id:\n # return 0 for unsupported stake currencies (fiat-convert should not break the bot)\n logger.warning(\"unsupported crypto-symbol %s - returning 0.0\", crypto_symbol)\n return 0.0\n \n try:\n- _gekko_id = self._cryptomap[crypto_symbol]\n return float(\n self._coingekko.get_price(\n ids=_gekko_id,\n", "issue": "fiat-convert: coingecko symbols are not unique\n## Describe your environment\r\n\r\n * Operating system: Debian testing\r\n * Python Version: 3.9.1 (`python -V`)\r\n * CCXT version: doesn't matter (`pip freeze | grep ccxt`)\r\n * Freqtrade Version: develop (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)\r\n \r\n## Describe the problem:\r\n\r\nWhile working on #5067 I've discovered that the coingecko coin list has duplicated symbols\r\n\r\n```\r\nIn [7]: from pycoingecko import CoinGeckoAPI\r\n ...: cg = CoinGeckoAPI()\r\n\r\nIn [8]: a=cg.get_coins_list()\r\n\r\nIn [9]: b = [x['symbol'] for x in a]\r\n\r\nIn [10]: len(set([x for x in b if b.count(x) > 1]))\r\nOut[10]: 747\r\n```\r\n\r\nhttps://github.com/freqtrade/freqtrade/blob/a0893b291a099b928ced567b9ef81f2d2795717a/freqtrade/rpc/fiat_convert.py#L54 also wrongfully assumes that the symbols are uniqe.\r\n\r\nI've added a hand-hacked workaround for the symbols from binance in 
#5067 - but a more general solution might be needed.\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nModule that define classes to convert Crypto-currency to FIAT\ne.g BTC to USD\n\"\"\"\n\nimport datetime\nimport logging\nfrom typing import Dict\n\nfrom cachetools.ttl import TTLCache\nfrom pycoingecko import CoinGeckoAPI\nfrom requests.exceptions import RequestException\n\nfrom freqtrade.constants import SUPPORTED_FIAT\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CryptoToFiatConverter:\n \"\"\"\n Main class to initiate Crypto to FIAT.\n This object contains a list of pair Crypto, FIAT\n This object is also a Singleton\n \"\"\"\n __instance = None\n _coingekko: CoinGeckoAPI = None\n\n _cryptomap: Dict = {}\n _backoff: float = 0.0\n\n def __new__(cls):\n \"\"\"\n This class is a singleton - cannot be instantiated twice.\n \"\"\"\n if CryptoToFiatConverter.__instance is None:\n CryptoToFiatConverter.__instance = object.__new__(cls)\n try:\n CryptoToFiatConverter._coingekko = CoinGeckoAPI()\n except BaseException:\n CryptoToFiatConverter._coingekko = None\n return CryptoToFiatConverter.__instance\n\n def __init__(self) -> None:\n # Timeout: 6h\n self._pair_price: TTLCache = TTLCache(maxsize=500, ttl=6 * 60 * 60)\n\n self._load_cryptomap()\n\n def _load_cryptomap(self) -> None:\n try:\n coinlistings = self._coingekko.get_coins_list()\n # Create mapping table from symbol to coingekko_id\n self._cryptomap = {x['symbol']: x['id'] for x in coinlistings}\n except RequestException as request_exception:\n if \"429\" in str(request_exception):\n logger.warning(\n \"Too many requests for Coingecko API, backing off and trying again later.\")\n # Set backoff timestamp to 60 seconds in the future\n self._backoff = datetime.datetime.now().timestamp() + 60\n return\n # If the request is not a 429 error we want to raise the normal error\n logger.error(\n \"Could not load FIAT Cryptocurrency map for the following problem: {}\".format(\n request_exception\n )\n )\n except (Exception) as exception:\n logger.error(\n f\"Could not load FIAT Cryptocurrency map for the following problem: {exception}\")\n\n def convert_amount(self, crypto_amount: float, crypto_symbol: str, fiat_symbol: str) -> float:\n \"\"\"\n Convert an amount of crypto-currency to fiat\n :param crypto_amount: amount of crypto-currency to convert\n :param crypto_symbol: crypto-currency used\n :param fiat_symbol: fiat to convert to\n :return: float, value in fiat of the crypto-currency amount\n \"\"\"\n if crypto_symbol == fiat_symbol:\n return float(crypto_amount)\n price = self.get_price(crypto_symbol=crypto_symbol, fiat_symbol=fiat_symbol)\n return float(crypto_amount) * float(price)\n\n def get_price(self, crypto_symbol: str, fiat_symbol: str) -> float:\n \"\"\"\n Return the price of the Crypto-currency in Fiat\n :param crypto_symbol: Crypto-currency you want to convert (e.g BTC)\n :param fiat_symbol: FIAT currency you want to convert to (e.g USD)\n :return: Price in FIAT\n \"\"\"\n crypto_symbol = crypto_symbol.lower()\n fiat_symbol = fiat_symbol.lower()\n inverse = False\n\n if crypto_symbol == 'usd':\n # usd corresponds to \"uniswap-state-dollar\" for coingecko.\n # We'll therefore need to \"swap\" the currencies\n logger.info(f\"reversing Rates {crypto_symbol}, {fiat_symbol}\")\n crypto_symbol = fiat_symbol\n fiat_symbol = 'usd'\n inverse = True\n\n symbol = f\"{crypto_symbol}/{fiat_symbol}\"\n # Check if the fiat conversion you want is supported\n if not self._is_supported_fiat(fiat=fiat_symbol):\n raise ValueError(f'The fiat 
{fiat_symbol} is not supported.')\n\n price = self._pair_price.get(symbol, None)\n\n if not price:\n price = self._find_price(\n crypto_symbol=crypto_symbol,\n fiat_symbol=fiat_symbol\n )\n if inverse and price != 0.0:\n price = 1 / price\n self._pair_price[symbol] = price\n\n return price\n\n def _is_supported_fiat(self, fiat: str) -> bool:\n \"\"\"\n Check if the FIAT your want to convert to is supported\n :param fiat: FIAT to check (e.g USD)\n :return: bool, True supported, False not supported\n \"\"\"\n\n return fiat.upper() in SUPPORTED_FIAT\n\n def _find_price(self, crypto_symbol: str, fiat_symbol: str) -> float:\n \"\"\"\n Call CoinGekko API to retrieve the price in the FIAT\n :param crypto_symbol: Crypto-currency you want to convert (e.g btc)\n :param fiat_symbol: FIAT currency you want to convert to (e.g usd)\n :return: float, price of the crypto-currency in Fiat\n \"\"\"\n # Check if the fiat conversion you want is supported\n if not self._is_supported_fiat(fiat=fiat_symbol):\n raise ValueError(f'The fiat {fiat_symbol} is not supported.')\n\n # No need to convert if both crypto and fiat are the same\n if crypto_symbol == fiat_symbol:\n return 1.0\n\n if self._cryptomap == {}:\n if self._backoff <= datetime.datetime.now().timestamp():\n self._load_cryptomap()\n # return 0.0 if we still don't have data to check, no reason to proceed\n if self._cryptomap == {}:\n return 0.0\n else:\n return 0.0\n\n if crypto_symbol not in self._cryptomap:\n # return 0 for unsupported stake currencies (fiat-convert should not break the bot)\n logger.warning(\"unsupported crypto-symbol %s - returning 0.0\", crypto_symbol)\n return 0.0\n\n try:\n _gekko_id = self._cryptomap[crypto_symbol]\n return float(\n self._coingekko.get_price(\n ids=_gekko_id,\n vs_currencies=fiat_symbol\n )[_gekko_id][fiat_symbol]\n )\n except Exception as exception:\n logger.error(\"Error in _find_price: %s\", exception)\n return 0.0\n", "path": "freqtrade/rpc/fiat_convert.py"}]} | 2,738 | 802 |
gh_patches_debug_22118 | rasdani/github-patches | git_diff | deis__deis-315 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
404 on cm.destroy should be a warning.
It's all too easy to run into the case where something errored in formations:create and we have a Django database entry for the formation, but no Chef node. The recent merge makes it impossible to delete the Formation in that case. If we instead treat the Chef 404 as a warning, this will work.
</issue>
<code>
[start of cm/chef.py]
1 """
2 Deis configuration management implementation for Opscode Chef.
3 """
4
5 from __future__ import unicode_literals
6
7 import os
8 import re
9 import subprocess
10 import tempfile
11 import time
12 import socket
13
14 from celery.canvas import group
15
16 from api.ssh import exec_ssh, connect_ssh
17 from cm.chef_api import ChefAPI
18
19
20 CHEF_CONFIG_PATH = '/etc/chef'
21 CHEF_INSTALL_TYPE = 'gems'
22 CHEF_RUBY_VERSION = '1.9.1'
23 CHEF_ENVIRONMENT = '_default'
24 CHEF_CLIENT_VERSION = '11.6.2'
25
26 # load chef config using CHEF_CONFIG_PATH
27 try:
28 # parse controller's chef config for server_url and client_name
29 _client_cfg_path = os.path.join(CHEF_CONFIG_PATH, 'client.rb')
30 if not os.path.exists(_client_cfg_path):
31 raise EnvironmentError('Could not find {}'.format(_client_cfg_path))
32 with open(_client_cfg_path) as f:
33 _data = f.read()
34 # construct a dict from the ruby client.rb
35 _d = {}
36 for m in re.findall(r'''^([a-zA-Z0-9_]+)[ \t]+(.*)$''',
37 _data, re.MULTILINE):
38 _d[m[0]] = m[1].strip("'").strip('"')
39 # set global variables from client.rb
40 CHEF_SERVER_URL = _d['chef_server_url']
41 CHEF_NODE_NAME = _d.get('node_name', socket.gethostname())
42 CHEF_CLIENT_NAME = _d.get('node_name', socket.gethostname())
43 CHEF_VALIDATION_NAME = _d['validation_client_name']
44 # read the client key
45 _client_pem_path = os.path.join(CHEF_CONFIG_PATH, 'client.pem')
46 CHEF_CLIENT_KEY = subprocess.check_output(
47 ['sudo', '/bin/cat', _client_pem_path]).strip('\n')
48 # read the validation key
49 _valid_pem_path = os.path.join(CHEF_CONFIG_PATH, 'validation.pem')
50 CHEF_VALIDATION_KEY = subprocess.check_output(
51 ['sudo', '/bin/cat', _valid_pem_path]).strip('\n')
52 except Exception as err:
53 msg = "Failed to auto-configure Chef -- {}".format(err)
54 if os.environ.get('READTHEDOCS'):
55 # Just print the error if Sphinx is running
56 print(msg)
57 else:
58 raise EnvironmentError(msg)
59
60
61 def _get_client():
62 """
63 Return a new instance of a Chef API Client
64
65 :rtype: a :class:`~cm.chef_api.ChefAPI` object
66 """
67 return ChefAPI(CHEF_SERVER_URL, CHEF_CLIENT_NAME, CHEF_CLIENT_KEY)
68
69
70 def bootstrap_node(node):
71 """
72 Bootstrap the Chef configuration management tools onto a node.
73
74 :param node: a dict containing the node's fully-qualified domain name and SSH info
75 :raises: RuntimeError
76 """
77 # block until we can connect over ssh
78 ssh = connect_ssh(node['ssh_username'], node['fqdn'], node.get('ssh_port', 22),
79 node['ssh_private_key'], timeout=120)
80 # block until ubuntu cloud-init is finished
81 initializing = True
82 while initializing:
83 time.sleep(10)
84 initializing, _rc = exec_ssh(ssh, 'ps auxw | egrep "cloud-init" | grep -v egrep')
85 # write out private key and prepare to `knife bootstrap`
86 try:
87 _, pk_path = tempfile.mkstemp()
88 _, output_path = tempfile.mkstemp()
89 with open(pk_path, 'w') as f:
90 f.write(node['ssh_private_key'])
91 # build knife bootstrap command
92 args = ['knife', 'bootstrap', node['fqdn']]
93 args.extend(['--identity-file', pk_path])
94 args.extend(['--node-name', node['id']])
95 args.extend(['--sudo', '--ssh-user', node['ssh_username']])
96 args.extend(['--ssh-port', str(node.get('ssh_port', 22))])
97 args.extend(['--bootstrap-version', CHEF_CLIENT_VERSION])
98 args.extend(['--no-host-key-verify'])
99 args.extend(['--run-list', _construct_run_list(node)])
100 print(' '.join(args))
101 # tee the command's output to a tempfile
102 args.extend(['|', 'tee', output_path])
103 # TODO: figure out why home isn't being set correctly for knife exec
104 env = os.environ.copy()
105 env['HOME'] = '/opt/deis'
106 # execute knife bootstrap
107 p = subprocess.Popen(' '.join(args), env=env, shell=True)
108 rc = p.wait()
109 # always print knife output
110 with open(output_path) as f:
111 output = f.read()
112 print(output)
113 # raise an exception if bootstrap failed
114 if rc != 0:
115 raise RuntimeError('Node Bootstrap Error')
116 # remove temp files from filesystem
117 finally:
118 os.remove(pk_path)
119 os.remove(output_path)
120
121
122 def _construct_run_list(node):
123 config = node['config']
124 # if run_list override specified, use it (assumes csv)
125 run_list = config.get('run_list', [])
126 # otherwise construct a run_list using proxy/runtime flags
127 if not run_list:
128 run_list = ['recipe[deis]']
129 if node.get('runtime') is True:
130 run_list.append('recipe[deis::runtime]')
131 if node.get('proxy') is True:
132 run_list.append('recipe[deis::proxy]')
133 return ','.join(run_list)
134
135
136 def purge_node(node):
137 """
138 Purge a node and its client from Chef configuration management.
139
140 :param node: a dict containing the id of a node to purge
141 """
142 client = _get_client()
143 node_id = node['id']
144 body, status = client.delete_node(node_id)
145 if status != 200:
146 raise RuntimeError('Could not purge node {node_id}: {body}'.format(**locals()))
147 body, status = client.delete_client(node_id)
148 if status != 200:
149 raise RuntimeError('Could not purge node client {node_id}: {body}'.format(**locals()))
150
151
152 def converge_controller():
153 """
154 Converge this controller node.
155
156 "Converge" means to change a node's configuration to match that defined by
157 configuration management.
158
159 :returns: the output of the convergence command, in this case `sudo chef-client`
160 """
161 try:
162 return subprocess.check_output(['sudo', 'chef-client'])
163 except subprocess.CalledProcessError as err:
164 print(err)
165 print(err.output)
166 raise err
167
168
169 def converge_node(node):
170 """
171 Converge a node.
172
173 "Converge" means to change a node's configuration to match that defined by
174 configuration management.
175
176 :param node: a dict containing the node's fully-qualified domain name and SSH info
177 :returns: a tuple of the convergence command's (output, return_code)
178 """
179 ssh = connect_ssh(node['ssh_username'],
180 node['fqdn'], 22,
181 node['ssh_private_key'])
182 output, rc = exec_ssh(ssh, 'sudo chef-client')
183 print(output)
184 if rc != 0:
185 e = RuntimeError('Node converge error')
186 e.output = output
187 raise e
188 return output, rc
189
190
191 def run_node(node, command):
192 """
193 Run a command on a node.
194
195 :param node: a dict containing the node's fully-qualified domain name and SSH info
196 :param command: the command-line to execute on the node
197 :returns: a tuple of the command's (output, return_code)
198 """
199 ssh = connect_ssh(node['ssh_username'], node['fqdn'],
200 node['ssh_port'], node['ssh_private_key'])
201 output, rc = exec_ssh(ssh, command, pty=True)
202 return output, rc
203
204
205 def converge_formation(formation):
206 """
207 Converge all nodes in a formation.
208
209 "Converge" means to change a node's configuration to match that defined by
210 configuration management.
211
212 :param formation: a :class:`~api.models.Formation` to converge
213 :returns: the combined output of the nodes' convergence commands
214 """
215 nodes = formation.node_set.all()
216 subtasks = []
217 for n in nodes:
218 subtask = converge_node.s(n.id,
219 n.layer.flavor.ssh_username,
220 n.fqdn,
221 n.layer.flavor.ssh_private_key)
222 subtasks.append(subtask)
223 job = group(*subtasks)
224 return job.apply_async().join()
225
226
227 def publish_user(user, data):
228 """
229 Publish a user to configuration management.
230
231 :param user: a dict containing the username
232 :param data: data to store with the user
233 :returns: a tuple of (body, status) from the underlying HTTP response
234 :raises: RuntimeError
235 """
236 _publish('deis-users', user['username'], data)
237
238
239 def publish_app(app, data):
240 """
241 Publish an app to configuration management.
242
243 :param app: a dict containing the id of the app
244 :param data: data to store with the app
245 :returns: a tuple of (body, status) from the underlying HTTP response
246 :raises: RuntimeError
247 """
248 _publish('deis-apps', app['id'], data)
249
250
251 def purge_app(app):
252 """
253 Purge an app from configuration management.
254
255 :param app: a dict containing the id of the app
256 :returns: a tuple of (body, status) from the underlying HTTP response
257 :raises: RuntimeError
258 """
259 _purge('deis-apps', app['id'])
260
261
262 def publish_formation(formation, data):
263 """
264 Publish a formation to configuration management.
265
266 :param formation: a dict containing the id of the formation
267 :param data: data to store with the formation
268 :returns: a tuple of (body, status) from the underlying HTTP response
269 :raises: RuntimeError
270 """
271 _publish('deis-formations', formation['id'], data)
272
273
274 def purge_formation(formation):
275 """
276 Purge a formation from configuration management.
277
278 :param formation: a dict containing the id of the formation
279 :returns: a tuple of (body, status) from the underlying HTTP response
280 :raises: RuntimeError
281 """
282 _purge('deis-formations', formation['id'])
283
284
285 def _publish(data_bag, item_name, item_value):
286 """
287 Publish a data bag item to the Chef server.
288
289 :param data_bag: the name of a Chef data bag
290 :param item_name: the name of the item to publish
291 :param item_value: the value of the item to publish
292 :returns: a tuple of (body, status) from the underlying HTTP response
293 :raises: RuntimeError
294 """
295 client = _get_client()
296 body, status = client.update_databag_item(data_bag, item_name, item_value)
297 if status != 200:
298 body, status = client.create_databag_item(data_bag, item_name, item_value)
299 if status != 201:
300 raise RuntimeError('Could not publish {item_name}: {body}'.format(**locals()))
301 return body, status
302
303
304 def _purge(databag_name, item_name):
305 """
306 Purge a data bag item from the Chef server.
307
308 :param databag_name: the name of a Chef data bag
309 :param item_name: the name of the item to purge
310 :returns: a tuple of (body, status) from the underlying HTTP response
311 :raises: RuntimeError
312 """
313 client = _get_client()
314 body, status = client.delete_databag_item(databag_name, item_name)
315 if status == 200 or status == 404:
316 return body, status
317 raise RuntimeError('Could not purge {item_name}: {body}'.format(**locals()))
318
[end of cm/chef.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cm/chef.py b/cm/chef.py
--- a/cm/chef.py
+++ b/cm/chef.py
@@ -142,11 +142,11 @@
client = _get_client()
node_id = node['id']
body, status = client.delete_node(node_id)
- if status != 200:
- raise RuntimeError('Could not purge node {node_id}: {body}'.format(**locals()))
+ if status not in [200, 404]:
+ raise RuntimeError("Could not purge node {node_id}: {body}".format(**locals()))
body, status = client.delete_client(node_id)
- if status != 200:
- raise RuntimeError('Could not purge node client {node_id}: {body}'.format(**locals()))
+ if status not in [200, 404]:
+ raise RuntimeError("Could not purge node client {node_id}: {body}".format(**locals()))
def converge_controller():
@@ -312,6 +312,6 @@
"""
client = _get_client()
body, status = client.delete_databag_item(databag_name, item_name)
- if status == 200 or status == 404:
+ if status in [200, 404]:
return body, status
raise RuntimeError('Could not purge {item_name}: {body}'.format(**locals()))
| {"golden_diff": "diff --git a/cm/chef.py b/cm/chef.py\n--- a/cm/chef.py\n+++ b/cm/chef.py\n@@ -142,11 +142,11 @@\n client = _get_client()\n node_id = node['id']\n body, status = client.delete_node(node_id)\n- if status != 200:\n- raise RuntimeError('Could not purge node {node_id}: {body}'.format(**locals()))\n+ if status not in [200, 404]:\n+ raise RuntimeError(\"Could not purge node {node_id}: {body}\".format(**locals()))\n body, status = client.delete_client(node_id)\n- if status != 200:\n- raise RuntimeError('Could not purge node client {node_id}: {body}'.format(**locals()))\n+ if status not in [200, 404]:\n+ raise RuntimeError(\"Could not purge node client {node_id}: {body}\".format(**locals()))\n \n \n def converge_controller():\n@@ -312,6 +312,6 @@\n \"\"\"\n client = _get_client()\n body, status = client.delete_databag_item(databag_name, item_name)\n- if status == 200 or status == 404:\n+ if status in [200, 404]:\n return body, status\n raise RuntimeError('Could not purge {item_name}: {body}'.format(**locals()))\n", "issue": "404 on cm.destroy should be a warning.\nIt's all too easy to run into the case where something errored in formations:create and we have a Django database entry for the formation, but no Chef node. The recent merge makes it impossible to delete the Formation in that case. If we instead treat the Chef 404 as a warning, this will work.\n\n", "before_files": [{"content": "\"\"\"\nDeis configuration management implementation for Opscode Chef.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport re\nimport subprocess\nimport tempfile\nimport time\nimport socket\n\nfrom celery.canvas import group\n\nfrom api.ssh import exec_ssh, connect_ssh\nfrom cm.chef_api import ChefAPI\n\n\nCHEF_CONFIG_PATH = '/etc/chef'\nCHEF_INSTALL_TYPE = 'gems'\nCHEF_RUBY_VERSION = '1.9.1'\nCHEF_ENVIRONMENT = '_default'\nCHEF_CLIENT_VERSION = '11.6.2'\n\n# load chef config using CHEF_CONFIG_PATH\ntry:\n # parse controller's chef config for server_url and client_name\n _client_cfg_path = os.path.join(CHEF_CONFIG_PATH, 'client.rb')\n if not os.path.exists(_client_cfg_path):\n raise EnvironmentError('Could not find {}'.format(_client_cfg_path))\n with open(_client_cfg_path) as f:\n _data = f.read()\n # construct a dict from the ruby client.rb\n _d = {}\n for m in re.findall(r'''^([a-zA-Z0-9_]+)[ \\t]+(.*)$''',\n _data, re.MULTILINE):\n _d[m[0]] = m[1].strip(\"'\").strip('\"')\n # set global variables from client.rb\n CHEF_SERVER_URL = _d['chef_server_url']\n CHEF_NODE_NAME = _d.get('node_name', socket.gethostname())\n CHEF_CLIENT_NAME = _d.get('node_name', socket.gethostname())\n CHEF_VALIDATION_NAME = _d['validation_client_name']\n # read the client key\n _client_pem_path = os.path.join(CHEF_CONFIG_PATH, 'client.pem')\n CHEF_CLIENT_KEY = subprocess.check_output(\n ['sudo', '/bin/cat', _client_pem_path]).strip('\\n')\n # read the validation key\n _valid_pem_path = os.path.join(CHEF_CONFIG_PATH, 'validation.pem')\n CHEF_VALIDATION_KEY = subprocess.check_output(\n ['sudo', '/bin/cat', _valid_pem_path]).strip('\\n')\nexcept Exception as err:\n msg = \"Failed to auto-configure Chef -- {}\".format(err)\n if os.environ.get('READTHEDOCS'):\n # Just print the error if Sphinx is running\n print(msg)\n else:\n raise EnvironmentError(msg)\n\n\ndef _get_client():\n \"\"\"\n Return a new instance of a Chef API Client\n\n :rtype: a :class:`~cm.chef_api.ChefAPI` object\n \"\"\"\n return ChefAPI(CHEF_SERVER_URL, CHEF_CLIENT_NAME, CHEF_CLIENT_KEY)\n\n\ndef bootstrap_node(node):\n \"\"\"\n 
Bootstrap the Chef configuration management tools onto a node.\n\n :param node: a dict containing the node's fully-qualified domain name and SSH info\n :raises: RuntimeError\n \"\"\"\n # block until we can connect over ssh\n ssh = connect_ssh(node['ssh_username'], node['fqdn'], node.get('ssh_port', 22),\n node['ssh_private_key'], timeout=120)\n # block until ubuntu cloud-init is finished\n initializing = True\n while initializing:\n time.sleep(10)\n initializing, _rc = exec_ssh(ssh, 'ps auxw | egrep \"cloud-init\" | grep -v egrep')\n # write out private key and prepare to `knife bootstrap`\n try:\n _, pk_path = tempfile.mkstemp()\n _, output_path = tempfile.mkstemp()\n with open(pk_path, 'w') as f:\n f.write(node['ssh_private_key'])\n # build knife bootstrap command\n args = ['knife', 'bootstrap', node['fqdn']]\n args.extend(['--identity-file', pk_path])\n args.extend(['--node-name', node['id']])\n args.extend(['--sudo', '--ssh-user', node['ssh_username']])\n args.extend(['--ssh-port', str(node.get('ssh_port', 22))])\n args.extend(['--bootstrap-version', CHEF_CLIENT_VERSION])\n args.extend(['--no-host-key-verify'])\n args.extend(['--run-list', _construct_run_list(node)])\n print(' '.join(args))\n # tee the command's output to a tempfile\n args.extend(['|', 'tee', output_path])\n # TODO: figure out why home isn't being set correctly for knife exec\n env = os.environ.copy()\n env['HOME'] = '/opt/deis'\n # execute knife bootstrap\n p = subprocess.Popen(' '.join(args), env=env, shell=True)\n rc = p.wait()\n # always print knife output\n with open(output_path) as f:\n output = f.read()\n print(output)\n # raise an exception if bootstrap failed\n if rc != 0:\n raise RuntimeError('Node Bootstrap Error')\n # remove temp files from filesystem\n finally:\n os.remove(pk_path)\n os.remove(output_path)\n\n\ndef _construct_run_list(node):\n config = node['config']\n # if run_list override specified, use it (assumes csv)\n run_list = config.get('run_list', [])\n # otherwise construct a run_list using proxy/runtime flags\n if not run_list:\n run_list = ['recipe[deis]']\n if node.get('runtime') is True:\n run_list.append('recipe[deis::runtime]')\n if node.get('proxy') is True:\n run_list.append('recipe[deis::proxy]')\n return ','.join(run_list)\n\n\ndef purge_node(node):\n \"\"\"\n Purge a node and its client from Chef configuration management.\n\n :param node: a dict containing the id of a node to purge\n \"\"\"\n client = _get_client()\n node_id = node['id']\n body, status = client.delete_node(node_id)\n if status != 200:\n raise RuntimeError('Could not purge node {node_id}: {body}'.format(**locals()))\n body, status = client.delete_client(node_id)\n if status != 200:\n raise RuntimeError('Could not purge node client {node_id}: {body}'.format(**locals()))\n\n\ndef converge_controller():\n \"\"\"\n Converge this controller node.\n\n \"Converge\" means to change a node's configuration to match that defined by\n configuration management.\n\n :returns: the output of the convergence command, in this case `sudo chef-client`\n \"\"\"\n try:\n return subprocess.check_output(['sudo', 'chef-client'])\n except subprocess.CalledProcessError as err:\n print(err)\n print(err.output)\n raise err\n\n\ndef converge_node(node):\n \"\"\"\n Converge a node.\n\n \"Converge\" means to change a node's configuration to match that defined by\n configuration management.\n\n :param node: a dict containing the node's fully-qualified domain name and SSH info\n :returns: a tuple of the convergence command's (output, return_code)\n 
\"\"\"\n ssh = connect_ssh(node['ssh_username'],\n node['fqdn'], 22,\n node['ssh_private_key'])\n output, rc = exec_ssh(ssh, 'sudo chef-client')\n print(output)\n if rc != 0:\n e = RuntimeError('Node converge error')\n e.output = output\n raise e\n return output, rc\n\n\ndef run_node(node, command):\n \"\"\"\n Run a command on a node.\n\n :param node: a dict containing the node's fully-qualified domain name and SSH info\n :param command: the command-line to execute on the node\n :returns: a tuple of the command's (output, return_code)\n \"\"\"\n ssh = connect_ssh(node['ssh_username'], node['fqdn'],\n node['ssh_port'], node['ssh_private_key'])\n output, rc = exec_ssh(ssh, command, pty=True)\n return output, rc\n\n\ndef converge_formation(formation):\n \"\"\"\n Converge all nodes in a formation.\n\n \"Converge\" means to change a node's configuration to match that defined by\n configuration management.\n\n :param formation: a :class:`~api.models.Formation` to converge\n :returns: the combined output of the nodes' convergence commands\n \"\"\"\n nodes = formation.node_set.all()\n subtasks = []\n for n in nodes:\n subtask = converge_node.s(n.id,\n n.layer.flavor.ssh_username,\n n.fqdn,\n n.layer.flavor.ssh_private_key)\n subtasks.append(subtask)\n job = group(*subtasks)\n return job.apply_async().join()\n\n\ndef publish_user(user, data):\n \"\"\"\n Publish a user to configuration management.\n\n :param user: a dict containing the username\n :param data: data to store with the user\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _publish('deis-users', user['username'], data)\n\n\ndef publish_app(app, data):\n \"\"\"\n Publish an app to configuration management.\n\n :param app: a dict containing the id of the app\n :param data: data to store with the app\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _publish('deis-apps', app['id'], data)\n\n\ndef purge_app(app):\n \"\"\"\n Purge an app from configuration management.\n\n :param app: a dict containing the id of the app\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _purge('deis-apps', app['id'])\n\n\ndef publish_formation(formation, data):\n \"\"\"\n Publish a formation to configuration management.\n\n :param formation: a dict containing the id of the formation\n :param data: data to store with the formation\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _publish('deis-formations', formation['id'], data)\n\n\ndef purge_formation(formation):\n \"\"\"\n Purge a formation from configuration management.\n\n :param formation: a dict containing the id of the formation\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _purge('deis-formations', formation['id'])\n\n\ndef _publish(data_bag, item_name, item_value):\n \"\"\"\n Publish a data bag item to the Chef server.\n\n :param data_bag: the name of a Chef data bag\n :param item_name: the name of the item to publish\n :param item_value: the value of the item to publish\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n client = _get_client()\n body, status = client.update_databag_item(data_bag, item_name, item_value)\n if status != 200:\n body, status = client.create_databag_item(data_bag, item_name, item_value)\n if status != 201:\n raise RuntimeError('Could 
not publish {item_name}: {body}'.format(**locals()))\n return body, status\n\n\ndef _purge(databag_name, item_name):\n \"\"\"\n Purge a data bag item from the Chef server.\n\n :param databag_name: the name of a Chef data bag\n :param item_name: the name of the item to purge\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n client = _get_client()\n body, status = client.delete_databag_item(databag_name, item_name)\n if status == 200 or status == 404:\n return body, status\n raise RuntimeError('Could not purge {item_name}: {body}'.format(**locals()))\n", "path": "cm/chef.py"}]} | 4,063 | 318 |
gh_patches_debug_16514 | rasdani/github-patches | git_diff | ipython__ipython-3072 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SciPy.weave broken in IPython notebook/ qtconsole
# Problem
Compiling scipy.weave code, and running already-compiled scipy.weave code, is not possible with the IPython notebook or qtconsole. However, it works with standalone, plain IPython.
# Minimal Example
http://projects.scipy.org/scipy/browser/trunk/Lib/weave/examples/array3d.py?rev=1558
# Error
## compiling code
find the traceback here:
http://pastebin.com/FdSadJtV
final error:
AttributeError: 'OutStream' object has no attribute 'fileno'
## executing code
For both notebook and qtconsole, the output is printed in the terminal that was used to start them.
# tested version
## failure in
ipython 0.13.1
scipy '0.11.0'
numpy '1.7.0'
## working in
### compilation
Apparently it is working with numpy 1.6.0. I did not test it myself, but that was a response on IRC; it was also reproduced there using numpy 1.7.0. I could compile it in a virtualenv with numpy 1.6 using the notebook. (Still trying to get the qtconsole to run so that I can test that)
### output
Even with numpy 1.6 the output is directed to the terminal that was used to start the IPython kernel
</issue>
<code>
[start of IPython/kernel/zmq/iostream.py]
1 """wrappers for stdout/stderr forwarding over zmq
2 """
3
4 #-----------------------------------------------------------------------------
5 # Copyright (C) 2013 The IPython Development Team
6 #
7 # Distributed under the terms of the BSD License. The full license is in
8 # the file COPYING, distributed as part of this software.
9 #-----------------------------------------------------------------------------
10
11 import sys
12 import time
13 import os
14 import threading
15 import uuid
16 from io import StringIO
17
18 import zmq
19
20 from session import extract_header
21
22 from IPython.utils import py3compat
23
24 #-----------------------------------------------------------------------------
25 # Globals
26 #-----------------------------------------------------------------------------
27
28 MASTER = 0
29 CHILD = 1
30
31 #-----------------------------------------------------------------------------
32 # Stream classes
33 #-----------------------------------------------------------------------------
34
35 class OutStream(object):
36 """A file like object that publishes the stream to a 0MQ PUB socket."""
37
38 # The time interval between automatic flushes, in seconds.
39 _subprocess_flush_limit = 256
40 flush_interval = 0.05
41 topic=None
42
43 def __init__(self, session, pub_socket, name, pipe=True):
44 self.encoding = 'UTF-8'
45 self.session = session
46 self.pub_socket = pub_socket
47 self.name = name
48 self.parent_header = {}
49 self._new_buffer()
50 self._buffer_lock = threading.Lock()
51 self._master_pid = os.getpid()
52 self._master_thread = threading.current_thread().ident
53 self._pipe_pid = os.getpid()
54 self._pipe_flag = pipe
55 if pipe:
56 self._setup_pipe_in()
57
58 def _setup_pipe_in(self):
59 """setup listening pipe for subprocesses"""
60 ctx = self.pub_socket.context
61
62 # use UUID to authenticate pipe messages
63 self._pipe_uuid = uuid.uuid4().bytes
64
65 self._pipe_in = ctx.socket(zmq.PULL)
66 self._pipe_in.linger = 0
67 self._pipe_port = self._pipe_in.bind_to_random_port("tcp://127.0.0.1")
68 self._pipe_poller = zmq.Poller()
69 self._pipe_poller.register(self._pipe_in, zmq.POLLIN)
70
71 def _setup_pipe_out(self):
72 # must be new context after fork
73 ctx = zmq.Context()
74 self._pipe_pid = os.getpid()
75 self._pipe_out = ctx.socket(zmq.PUSH)
76 self._pipe_out_lock = threading.Lock()
77 self._pipe_out.connect("tcp://127.0.0.1:%i" % self._pipe_port)
78
79 def _is_master_process(self):
80 return os.getpid() == self._master_pid
81
82 def _is_master_thread(self):
83 return threading.current_thread().ident == self._master_thread
84
85 def _have_pipe_out(self):
86 return os.getpid() == self._pipe_pid
87
88 def _check_mp_mode(self):
89 """check for forks, and switch to zmq pipeline if necessary"""
90 if not self._pipe_flag or self._is_master_process():
91 return MASTER
92 else:
93 if not self._have_pipe_out():
94 self._flush_buffer()
95 # setup a new out pipe
96 self._setup_pipe_out()
97 return CHILD
98
99 def set_parent(self, parent):
100 self.parent_header = extract_header(parent)
101
102 def close(self):
103 self.pub_socket = None
104
105 def _flush_from_subprocesses(self):
106 """flush possible pub data from subprocesses into my buffer"""
107 if not self._pipe_flag or not self._is_master_process():
108 return
109 for i in range(self._subprocess_flush_limit):
110 if self._pipe_poller.poll(0):
111 msg = self._pipe_in.recv_multipart()
112 if msg[0] != self._pipe_uuid:
113 continue
114 else:
115 self._buffer.write(msg[1].decode(self.encoding, 'replace'))
116 # this always means a flush,
117 # so reset our timer
118 self._start = 0
119 else:
120 break
121
122 def flush(self):
123 """trigger actual zmq send"""
124 if self.pub_socket is None:
125 raise ValueError(u'I/O operation on closed file')
126
127 mp_mode = self._check_mp_mode()
128
129 if mp_mode != CHILD:
130 # we are master
131 if not self._is_master_thread():
132 # sub-threads must not trigger flush,
133 # but at least they can force the timer.
134 self._start = 0
135 return
136
137 self._flush_from_subprocesses()
138 data = self._flush_buffer()
139
140 if data:
141 content = {u'name':self.name, u'data':data}
142 msg = self.session.send(self.pub_socket, u'stream', content=content,
143 parent=self.parent_header, ident=self.topic)
144
145 if hasattr(self.pub_socket, 'flush'):
146 # socket itself has flush (presumably ZMQStream)
147 self.pub_socket.flush()
148 else:
149 with self._pipe_out_lock:
150 string = self._flush_buffer()
151 tracker = self._pipe_out.send_multipart([
152 self._pipe_uuid,
153 string.encode(self.encoding, 'replace'),
154 ], copy=False, track=True)
155 try:
156 tracker.wait(1)
157 except:
158 pass
159
160 def isatty(self):
161 return False
162
163 def __next__(self):
164 raise IOError('Read not supported on a write only stream.')
165
166 if not py3compat.PY3:
167 next = __next__
168
169 def read(self, size=-1):
170 raise IOError('Read not supported on a write only stream.')
171
172 def readline(self, size=-1):
173 raise IOError('Read not supported on a write only stream.')
174
175 def write(self, string):
176 if self.pub_socket is None:
177 raise ValueError('I/O operation on closed file')
178 else:
179 # Make sure that we're handling unicode
180 if not isinstance(string, unicode):
181 string = string.decode(self.encoding, 'replace')
182
183 is_child = (self._check_mp_mode() == CHILD)
184 self._buffer.write(string)
185 if is_child:
186 # newlines imply flush in subprocesses
187 # mp.Pool cannot be trusted to flush promptly (or ever),
188 # and this helps.
189 if '\n' in string:
190 self.flush()
191 # do we want to check subprocess flushes on write?
192 # self._flush_from_subprocesses()
193 current_time = time.time()
194 if self._start < 0:
195 self._start = current_time
196 elif current_time - self._start > self.flush_interval:
197 self.flush()
198
199 def writelines(self, sequence):
200 if self.pub_socket is None:
201 raise ValueError('I/O operation on closed file')
202 else:
203 for string in sequence:
204 self.write(string)
205
206 def _flush_buffer(self):
207 """clear the current buffer and return the current buffer data"""
208 data = u''
209 if self._buffer is not None:
210 data = self._buffer.getvalue()
211 self._buffer.close()
212 self._new_buffer()
213 return data
214
215 def _new_buffer(self):
216 self._buffer = StringIO()
217 self._start = -1
218
[end of IPython/kernel/zmq/iostream.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/IPython/kernel/zmq/iostream.py b/IPython/kernel/zmq/iostream.py
--- a/IPython/kernel/zmq/iostream.py
+++ b/IPython/kernel/zmq/iostream.py
@@ -8,12 +8,11 @@
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------
-import sys
-import time
import os
import threading
+import time
import uuid
-from io import StringIO
+from io import StringIO, UnsupportedOperation
import zmq
@@ -171,6 +170,9 @@
def readline(self, size=-1):
raise IOError('Read not supported on a write only stream.')
+
+ def fileno(self):
+ raise UnsupportedOperation("IOStream has no fileno.")
def write(self, string):
if self.pub_socket is None:
| {"golden_diff": "diff --git a/IPython/kernel/zmq/iostream.py b/IPython/kernel/zmq/iostream.py\n--- a/IPython/kernel/zmq/iostream.py\n+++ b/IPython/kernel/zmq/iostream.py\n@@ -8,12 +8,11 @@\n # the file COPYING, distributed as part of this software.\n #-----------------------------------------------------------------------------\n \n-import sys\n-import time\n import os\n import threading\n+import time\n import uuid\n-from io import StringIO\n+from io import StringIO, UnsupportedOperation\n \n import zmq\n \n@@ -171,6 +170,9 @@\n \n def readline(self, size=-1):\n raise IOError('Read not supported on a write only stream.')\n+ \n+ def fileno(self):\n+ raise UnsupportedOperation(\"IOStream has no fileno.\")\n \n def write(self, string):\n if self.pub_socket is None:\n", "issue": "SciPy.weave broken in IPython notebook/ qtconsole \n# Problem\n\nCompiling and running of compiled scipy.weave code is not possible with IPython notebook and IPython console. However, it works with standalone, plain IPython.\n# Minimal Example\n\nhttp://projects.scipy.org/scipy/browser/trunk/Lib/weave/examples/array3d.py?rev=1558\n# Error\n## compiling code\n\nfind the traceback here:\nhttp://pastebin.com/FdSadJtV\nfinal error:\nAttributeError: 'OutStream' object has no attribute 'fileno'\n## executing code\n\nFor both notebook and qtconsole, the output is printed in the terminal that was used to start them.\n# tested version\n## failure in\n\nipython 0.13.1\nscipy '0.11.0' \nnumpy '1.7.0'\n## working in\n### compilation\n\nApparently it is working with numpy 1.6.0. I did not test it by myself, but that was a response on IRC. It was there also reproduced using numpy 1.7.0. I could compile it in an virtualenv using numpy 1.6 using the notebook. (Still trying to get the qtconsole to run so that I can test that)\n### output\n\nEven with numpy 1.6 the output is directed to the terminal that was used to start the IPython kernel\n\n", "before_files": [{"content": "\"\"\"wrappers for stdout/stderr forwarding over zmq\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (C) 2013 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. 
The full license is in\n# the file COPYING, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\nimport sys\nimport time\nimport os\nimport threading\nimport uuid\nfrom io import StringIO\n\nimport zmq\n\nfrom session import extract_header\n\nfrom IPython.utils import py3compat\n\n#-----------------------------------------------------------------------------\n# Globals\n#-----------------------------------------------------------------------------\n\nMASTER = 0\nCHILD = 1\n\n#-----------------------------------------------------------------------------\n# Stream classes\n#-----------------------------------------------------------------------------\n\nclass OutStream(object):\n \"\"\"A file like object that publishes the stream to a 0MQ PUB socket.\"\"\"\n\n # The time interval between automatic flushes, in seconds.\n _subprocess_flush_limit = 256\n flush_interval = 0.05\n topic=None\n\n def __init__(self, session, pub_socket, name, pipe=True):\n self.encoding = 'UTF-8'\n self.session = session\n self.pub_socket = pub_socket\n self.name = name\n self.parent_header = {}\n self._new_buffer()\n self._buffer_lock = threading.Lock()\n self._master_pid = os.getpid()\n self._master_thread = threading.current_thread().ident\n self._pipe_pid = os.getpid()\n self._pipe_flag = pipe\n if pipe:\n self._setup_pipe_in()\n \n def _setup_pipe_in(self):\n \"\"\"setup listening pipe for subprocesses\"\"\"\n ctx = self.pub_socket.context\n \n # use UUID to authenticate pipe messages\n self._pipe_uuid = uuid.uuid4().bytes\n \n self._pipe_in = ctx.socket(zmq.PULL)\n self._pipe_in.linger = 0\n self._pipe_port = self._pipe_in.bind_to_random_port(\"tcp://127.0.0.1\")\n self._pipe_poller = zmq.Poller()\n self._pipe_poller.register(self._pipe_in, zmq.POLLIN)\n \n def _setup_pipe_out(self):\n # must be new context after fork\n ctx = zmq.Context()\n self._pipe_pid = os.getpid()\n self._pipe_out = ctx.socket(zmq.PUSH)\n self._pipe_out_lock = threading.Lock()\n self._pipe_out.connect(\"tcp://127.0.0.1:%i\" % self._pipe_port)\n \n def _is_master_process(self):\n return os.getpid() == self._master_pid\n \n def _is_master_thread(self):\n return threading.current_thread().ident == self._master_thread\n \n def _have_pipe_out(self):\n return os.getpid() == self._pipe_pid\n\n def _check_mp_mode(self):\n \"\"\"check for forks, and switch to zmq pipeline if necessary\"\"\"\n if not self._pipe_flag or self._is_master_process():\n return MASTER\n else:\n if not self._have_pipe_out():\n self._flush_buffer()\n # setup a new out pipe\n self._setup_pipe_out()\n return CHILD\n\n def set_parent(self, parent):\n self.parent_header = extract_header(parent)\n\n def close(self):\n self.pub_socket = None\n\n def _flush_from_subprocesses(self):\n \"\"\"flush possible pub data from subprocesses into my buffer\"\"\"\n if not self._pipe_flag or not self._is_master_process():\n return\n for i in range(self._subprocess_flush_limit):\n if self._pipe_poller.poll(0):\n msg = self._pipe_in.recv_multipart()\n if msg[0] != self._pipe_uuid:\n continue\n else:\n self._buffer.write(msg[1].decode(self.encoding, 'replace'))\n # this always means a flush,\n # so reset our timer\n self._start = 0\n else:\n break\n \n def flush(self):\n \"\"\"trigger actual zmq send\"\"\"\n if self.pub_socket is None:\n raise ValueError(u'I/O operation on closed file')\n \n mp_mode = self._check_mp_mode()\n \n if mp_mode != CHILD:\n # we are master\n if not self._is_master_thread():\n # sub-threads must not 
trigger flush,\n # but at least they can force the timer.\n self._start = 0\n return\n \n self._flush_from_subprocesses()\n data = self._flush_buffer()\n \n if data:\n content = {u'name':self.name, u'data':data}\n msg = self.session.send(self.pub_socket, u'stream', content=content,\n parent=self.parent_header, ident=self.topic)\n \n if hasattr(self.pub_socket, 'flush'):\n # socket itself has flush (presumably ZMQStream)\n self.pub_socket.flush()\n else:\n with self._pipe_out_lock:\n string = self._flush_buffer()\n tracker = self._pipe_out.send_multipart([\n self._pipe_uuid,\n string.encode(self.encoding, 'replace'),\n ], copy=False, track=True)\n try:\n tracker.wait(1)\n except:\n pass\n\n def isatty(self):\n return False\n\n def __next__(self):\n raise IOError('Read not supported on a write only stream.')\n\n if not py3compat.PY3:\n next = __next__\n\n def read(self, size=-1):\n raise IOError('Read not supported on a write only stream.')\n\n def readline(self, size=-1):\n raise IOError('Read not supported on a write only stream.')\n\n def write(self, string):\n if self.pub_socket is None:\n raise ValueError('I/O operation on closed file')\n else:\n # Make sure that we're handling unicode\n if not isinstance(string, unicode):\n string = string.decode(self.encoding, 'replace')\n \n is_child = (self._check_mp_mode() == CHILD)\n self._buffer.write(string)\n if is_child:\n # newlines imply flush in subprocesses\n # mp.Pool cannot be trusted to flush promptly (or ever),\n # and this helps.\n if '\\n' in string:\n self.flush()\n # do we want to check subprocess flushes on write?\n # self._flush_from_subprocesses()\n current_time = time.time()\n if self._start < 0:\n self._start = current_time\n elif current_time - self._start > self.flush_interval:\n self.flush()\n\n def writelines(self, sequence):\n if self.pub_socket is None:\n raise ValueError('I/O operation on closed file')\n else:\n for string in sequence:\n self.write(string)\n\n def _flush_buffer(self):\n \"\"\"clear the current buffer and return the current buffer data\"\"\"\n data = u''\n if self._buffer is not None:\n data = self._buffer.getvalue()\n self._buffer.close()\n self._new_buffer()\n return data\n \n def _new_buffer(self):\n self._buffer = StringIO()\n self._start = -1\n", "path": "IPython/kernel/zmq/iostream.py"}]} | 2,909 | 184 |
gh_patches_debug_37274 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2964 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider tiffany is broken
During the global build at 2021-05-26-14-42-23, spider **tiffany** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tiffany.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tiffany.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tiffany.geojson))
Tiffany
http://www.tiffany.com/jewelry-stores/store-list/united-states
</issue>
<code>
[start of locations/spiders/tiffany.py]
1 import scrapy
2 import re
3 import json
4 from locations.items import GeojsonPointItem
5
6 class TiffanySpider(scrapy.Spider):
7
8 name = "tiffany"
9 item_attributes = { 'brand': "Tiffany" }
10 allowed_domains = ["www.tiffany.com"]
11 download_delay = 0.5
12 start_urls = (
13 'http://www.tiffany.com/jewelry-stores/store-list/united-states',
14 )
15
16 def parse_day(self, day):
17 if re.search('-', day):
18 days = day.split('-')
19 osm_days = []
20 if len(days) == 2:
21 for day in days:
22 osm_day = day.strip()[:2]
23 osm_days.append(osm_day)
24 return "-".join(osm_days)
25 return day.strip()[:2]
26
27 def parse_times(self, times):
28 if times.strip() == 'CLOSED':
29 return 'Closed'
30 hours_to = [x.strip() for x in times.split('-')]
31 cleaned_times = []
32 for hour in hours_to:
33 if re.search('PM$', hour):
34 hour = re.sub('PM', '', hour).strip()
35 hour_min = hour.split(":")
36 if int(hour_min[0]) < 12:
37 hour_min[0] = str(12 + int(hour_min[0]))
38 cleaned_times.append(":".join(hour_min))
39
40 if re.search('AM$', hour):
41 hour = re.sub('AM', '', hour).strip()
42 hour_min = hour.split(":")
43 if len(hour_min[0]) <2:
44 hour_min[0] = hour_min[0].zfill(2)
45 else:
46 hour_min[0] = str(int(hour_min[0]))
47
48 cleaned_times.append(":".join(hour_min))
49 return "-".join(cleaned_times)
50
51 def parse_hours(self, lis):
52 hours = []
53 for li in lis:
54 if re.search(r"([0-9]{1,2}):([0-9]{1,2})([APM]{2})|CLOSED" , li):
55 day = li.split(':')[0]
56 times = li.replace(day+':','')
57 if times and day:
58 parsed_time = self.parse_times(times)
59 parsed_day = self.parse_day(day)
60 hours.append(parsed_day + ' ' + parsed_time)
61
62 return "; ".join(hours)
63
64 def parse_stores(self, response):
65 data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first())
66 properties = {
67 'addr_full': data['address']['streetAddress'],
68 'phone': data['telephone'],
69 'name': data['name'],
70 'city': data['address']['addressLocality'],
71 'state': data['address']['addressRegion'],
72 'postcode': data['address']['postalCode'],
73 'ref': data['name'].replace(' ','_'),
74 'website': response.url,
75 'lat': float(data['geo']['latitude']),
76 'lon': float(data['geo']['longitude']),
77 }
78
79 hours = self.parse_hours(response.xpath('//div[@id="divExtendedInfo"]/text()').extract())
80 if hours:
81 properties['opening_hours'] = hours
82 yield GeojsonPointItem(**properties)
83
84 def parse(self, response):
85 urls = response.xpath('//a[contains(text(),"View on Map")]/@href').extract()
86 for path in urls:
87 yield scrapy.Request(response.urljoin(path), callback=self.parse_stores)
88
[end of locations/spiders/tiffany.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/tiffany.py b/locations/spiders/tiffany.py
--- a/locations/spiders/tiffany.py
+++ b/locations/spiders/tiffany.py
@@ -6,11 +6,11 @@
class TiffanySpider(scrapy.Spider):
name = "tiffany"
- item_attributes = { 'brand': "Tiffany" }
+ item_attributes = { 'brand': "Tiffany", 'brand_wikidata': "Q1066858" }
allowed_domains = ["www.tiffany.com"]
download_delay = 0.5
start_urls = (
- 'http://www.tiffany.com/jewelry-stores/store-list/united-states',
+ 'https://www.tiffany.com/jewelry-stores/store-list/',
)
def parse_day(self, day):
@@ -61,27 +61,31 @@
return "; ".join(hours)
- def parse_stores(self, response):
- data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first())
- properties = {
- 'addr_full': data['address']['streetAddress'],
- 'phone': data['telephone'],
- 'name': data['name'],
- 'city': data['address']['addressLocality'],
- 'state': data['address']['addressRegion'],
- 'postcode': data['address']['postalCode'],
- 'ref': data['name'].replace(' ','_'),
- 'website': response.url,
- 'lat': float(data['geo']['latitude']),
- 'lon': float(data['geo']['longitude']),
- }
+ def parse(self, response):
+ for href in response.xpath('//@href[contains(., "/jewelry-stores/")]').extract():
+ yield scrapy.Request(response.urljoin(href))
- hours = self.parse_hours(response.xpath('//div[@id="divExtendedInfo"]/text()').extract())
- if hours:
- properties['opening_hours'] = hours
- yield GeojsonPointItem(**properties)
+ for ldjson in response.xpath('//script[@type="application/ld+json"]/text()').extract():
+ data = json.loads(ldjson)
+ if data["@type"] != "Store":
+ continue
+
+ properties = {
+ 'name': data['name'],
+ 'phone': data['telephone'],
+ 'addr_full': data['address']['streetAddress'],
+ 'city': data['address']['addressLocality'],
+ 'state': data['address']['addressRegion'],
+ 'postcode': data['address']['postalCode'],
+ 'country': data['address']['addressCountry'],
+ 'ref': data['name'].replace(' ','_'),
+ 'website': response.url,
+ 'lat': response.xpath('//tiffany-maps/@markeratlat').extract_first(),
+ 'lon': response.xpath('//tiffany-maps/@markeratlng').extract_first(),
+ }
+
+ hours = self.parse_hours(response.xpath('//div[@id="divExtendedInfo"]/text()').extract())
+ if hours:
+ properties['opening_hours'] = hours
+ yield GeojsonPointItem(**properties)
- def parse(self, response):
- urls = response.xpath('//a[contains(text(),"View on Map")]/@href').extract()
- for path in urls:
- yield scrapy.Request(response.urljoin(path), callback=self.parse_stores)
| {"golden_diff": "diff --git a/locations/spiders/tiffany.py b/locations/spiders/tiffany.py\n--- a/locations/spiders/tiffany.py\n+++ b/locations/spiders/tiffany.py\n@@ -6,11 +6,11 @@\n class TiffanySpider(scrapy.Spider):\n \n name = \"tiffany\"\n- item_attributes = { 'brand': \"Tiffany\" }\n+ item_attributes = { 'brand': \"Tiffany\", 'brand_wikidata': \"Q1066858\" }\n allowed_domains = [\"www.tiffany.com\"]\n download_delay = 0.5\n start_urls = (\n- 'http://www.tiffany.com/jewelry-stores/store-list/united-states',\n+ 'https://www.tiffany.com/jewelry-stores/store-list/',\n )\n \n def parse_day(self, day):\n@@ -61,27 +61,31 @@\n \n return \"; \".join(hours)\n \n- def parse_stores(self, response):\n- data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n- properties = {\n- 'addr_full': data['address']['streetAddress'],\n- 'phone': data['telephone'],\n- 'name': data['name'],\n- 'city': data['address']['addressLocality'],\n- 'state': data['address']['addressRegion'],\n- 'postcode': data['address']['postalCode'],\n- 'ref': data['name'].replace(' ','_'),\n- 'website': response.url,\n- 'lat': float(data['geo']['latitude']),\n- 'lon': float(data['geo']['longitude']),\n- }\n+ def parse(self, response):\n+ for href in response.xpath('//@href[contains(., \"/jewelry-stores/\")]').extract():\n+ yield scrapy.Request(response.urljoin(href))\n \n- hours = self.parse_hours(response.xpath('//div[@id=\"divExtendedInfo\"]/text()').extract())\n- if hours:\n- properties['opening_hours'] = hours\n- yield GeojsonPointItem(**properties)\n+ for ldjson in response.xpath('//script[@type=\"application/ld+json\"]/text()').extract():\n+ data = json.loads(ldjson)\n+ if data[\"@type\"] != \"Store\":\n+ continue\n+\n+ properties = {\n+ 'name': data['name'],\n+ 'phone': data['telephone'],\n+ 'addr_full': data['address']['streetAddress'],\n+ 'city': data['address']['addressLocality'],\n+ 'state': data['address']['addressRegion'],\n+ 'postcode': data['address']['postalCode'],\n+ 'country': data['address']['addressCountry'],\n+ 'ref': data['name'].replace(' ','_'),\n+ 'website': response.url,\n+ 'lat': response.xpath('//tiffany-maps/@markeratlat').extract_first(),\n+ 'lon': response.xpath('//tiffany-maps/@markeratlng').extract_first(),\n+ }\n+\n+ hours = self.parse_hours(response.xpath('//div[@id=\"divExtendedInfo\"]/text()').extract())\n+ if hours:\n+ properties['opening_hours'] = hours\n+ yield GeojsonPointItem(**properties)\n \n- def parse(self, response):\n- urls = response.xpath('//a[contains(text(),\"View on Map\")]/@href').extract()\n- for path in urls:\n- yield scrapy.Request(response.urljoin(path), callback=self.parse_stores)\n", "issue": "Spider tiffany is broken\nDuring the global build at 2021-05-26-14-42-23, spider **tiffany** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tiffany.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tiffany.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tiffany.geojson))\nTiffany\nhttp://www.tiffany.com/jewelry-stores/store-list/united-states\n", "before_files": [{"content": "import scrapy\nimport re\nimport json\nfrom locations.items import GeojsonPointItem\n\nclass TiffanySpider(scrapy.Spider):\n\n name = \"tiffany\"\n item_attributes = { 'brand': \"Tiffany\" }\n allowed_domains = [\"www.tiffany.com\"]\n download_delay = 0.5\n start_urls = (\n 
'http://www.tiffany.com/jewelry-stores/store-list/united-states',\n )\n\n def parse_day(self, day):\n if re.search('-', day):\n days = day.split('-')\n osm_days = []\n if len(days) == 2:\n for day in days:\n osm_day = day.strip()[:2]\n osm_days.append(osm_day)\n return \"-\".join(osm_days)\n return day.strip()[:2]\n\n def parse_times(self, times):\n if times.strip() == 'CLOSED':\n return 'Closed'\n hours_to = [x.strip() for x in times.split('-')]\n cleaned_times = []\n for hour in hours_to:\n if re.search('PM$', hour):\n hour = re.sub('PM', '', hour).strip()\n hour_min = hour.split(\":\")\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n cleaned_times.append(\":\".join(hour_min))\n\n if re.search('AM$', hour):\n hour = re.sub('AM', '', hour).strip()\n hour_min = hour.split(\":\")\n if len(hour_min[0]) <2:\n hour_min[0] = hour_min[0].zfill(2)\n else:\n hour_min[0] = str(int(hour_min[0]))\n\n cleaned_times.append(\":\".join(hour_min))\n return \"-\".join(cleaned_times)\n\n def parse_hours(self, lis):\n hours = []\n for li in lis:\n if re.search(r\"([0-9]{1,2}):([0-9]{1,2})([APM]{2})|CLOSED\" , li):\n day = li.split(':')[0]\n times = li.replace(day+':','')\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n hours.append(parsed_day + ' ' + parsed_time)\n\n return \"; \".join(hours)\n\n def parse_stores(self, response):\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n properties = {\n 'addr_full': data['address']['streetAddress'],\n 'phone': data['telephone'],\n 'name': data['name'],\n 'city': data['address']['addressLocality'],\n 'state': data['address']['addressRegion'],\n 'postcode': data['address']['postalCode'],\n 'ref': data['name'].replace(' ','_'),\n 'website': response.url,\n 'lat': float(data['geo']['latitude']),\n 'lon': float(data['geo']['longitude']),\n }\n\n hours = self.parse_hours(response.xpath('//div[@id=\"divExtendedInfo\"]/text()').extract())\n if hours:\n properties['opening_hours'] = hours\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n urls = response.xpath('//a[contains(text(),\"View on Map\")]/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_stores)\n", "path": "locations/spiders/tiffany.py"}]} | 1,652 | 756 |
gh_patches_debug_7003 | rasdani/github-patches | git_diff | dask__distributed-3035 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stop Queue from printing its name on __init__
There's a print statement here:
https://github.com/dask/distributed/blob/8d07bf162b1f6434fc982f36360c5cedbac369a6/distributed/queues.py#L50
Would be nice to replace it with logging.debug()
</issue>
<code>
[start of distributed/queues.py]
1 from collections import defaultdict
2 import datetime
3 import logging
4 import uuid
5
6 import tornado.queues
7 from tornado.locks import Event
8
9 from .client import Future, _get_global_client, Client
10 from .utils import tokey, sync, thread_state
11 from .worker import get_client
12
13 logger = logging.getLogger(__name__)
14
15
16 class QueueExtension(object):
17 """ An extension for the scheduler to manage queues
18
19 This adds the following routes to the scheduler
20
21 * queue_create
22 * queue_release
23 * queue_put
24 * queue_get
25 * queue_size
26 """
27
28 def __init__(self, scheduler):
29 self.scheduler = scheduler
30 self.queues = dict()
31 self.client_refcount = dict()
32 self.future_refcount = defaultdict(lambda: 0)
33
34 self.scheduler.handlers.update(
35 {
36 "queue_create": self.create,
37 "queue_put": self.put,
38 "queue_get": self.get,
39 "queue_qsize": self.qsize,
40 }
41 )
42
43 self.scheduler.stream_handlers.update(
44 {"queue-future-release": self.future_release, "queue_release": self.release}
45 )
46
47 self.scheduler.extensions["queues"] = self
48
49 def create(self, stream=None, name=None, client=None, maxsize=0):
50 print("name", name)
51 if name not in self.queues:
52 self.queues[name] = tornado.queues.Queue(maxsize=maxsize)
53 self.client_refcount[name] = 1
54 else:
55 self.client_refcount[name] += 1
56
57 def release(self, stream=None, name=None, client=None):
58 if name not in self.queues:
59 return
60
61 self.client_refcount[name] -= 1
62 if self.client_refcount[name] == 0:
63 del self.client_refcount[name]
64 futures = self.queues[name]._queue
65 del self.queues[name]
66 keys = [d["value"] for d in futures if d["type"] == "Future"]
67 if keys:
68 self.scheduler.client_releases_keys(keys=keys, client="queue-%s" % name)
69
70 async def put(
71 self, stream=None, name=None, key=None, data=None, client=None, timeout=None
72 ):
73 if key is not None:
74 record = {"type": "Future", "value": key}
75 self.future_refcount[name, key] += 1
76 self.scheduler.client_desires_keys(keys=[key], client="queue-%s" % name)
77 else:
78 record = {"type": "msgpack", "value": data}
79 if timeout is not None:
80 timeout = datetime.timedelta(seconds=timeout)
81 await self.queues[name].put(record, timeout=timeout)
82
83 def future_release(self, name=None, key=None, client=None):
84 self.future_refcount[name, key] -= 1
85 if self.future_refcount[name, key] == 0:
86 self.scheduler.client_releases_keys(keys=[key], client="queue-%s" % name)
87 del self.future_refcount[name, key]
88
89 async def get(self, stream=None, name=None, client=None, timeout=None, batch=False):
90 def process(record):
91 """ Add task status if known """
92 if record["type"] == "Future":
93 record = record.copy()
94 key = record["value"]
95 ts = self.scheduler.tasks.get(key)
96 state = ts.state if ts is not None else "lost"
97
98 record["state"] = state
99 if state == "erred":
100 record["exception"] = ts.exception_blame.exception
101 record["traceback"] = ts.exception_blame.traceback
102
103 return record
104
105 if batch:
106 q = self.queues[name]
107 out = []
108 if batch is True:
109 while not q.empty():
110 record = await q.get()
111 out.append(record)
112 else:
113 if timeout is not None:
114 msg = (
115 "Dask queues don't support simultaneous use of "
116 "integer batch sizes and timeouts"
117 )
118 raise NotImplementedError(msg)
119 for i in range(batch):
120 record = await q.get()
121 out.append(record)
122 out = [process(o) for o in out]
123 return out
124 else:
125 if timeout is not None:
126 timeout = datetime.timedelta(seconds=timeout)
127 record = await self.queues[name].get(timeout=timeout)
128 record = process(record)
129 return record
130
131 def qsize(self, stream=None, name=None, client=None):
132 return self.queues[name].qsize()
133
134
135 class Queue(object):
136 """ Distributed Queue
137
138 This allows multiple clients to share futures or small bits of data between
139 each other with a multi-producer/multi-consumer queue. All metadata is
140 sequentialized through the scheduler.
141
142 Elements of the Queue must be either Futures or msgpack-encodable data
143 (ints, strings, lists, dicts). All data is sent through the scheduler so
144 it is wise not to send large objects. To share large objects scatter the
145 data and share the future instead.
146
147 .. warning::
148
149 This object is experimental and has known issues in Python 2
150
151 Examples
152 --------
153 >>> from dask.distributed import Client, Queue # doctest: +SKIP
154 >>> client = Client() # doctest: +SKIP
155 >>> queue = Queue('x') # doctest: +SKIP
156 >>> future = client.submit(f, x) # doctest: +SKIP
157 >>> queue.put(future) # doctest: +SKIP
158
159 See Also
160 --------
161 Variable: shared variable between clients
162 """
163
164 def __init__(self, name=None, client=None, maxsize=0):
165 self.client = client or _get_global_client()
166 self.name = name or "queue-" + uuid.uuid4().hex
167 self._event_started = Event()
168 if self.client.asynchronous or getattr(
169 thread_state, "on_event_loop_thread", False
170 ):
171
172 async def _create_queue():
173 await self.client.scheduler.queue_create(
174 name=self.name, maxsize=maxsize
175 )
176 self._event_started.set()
177
178 self.client.loop.add_callback(_create_queue)
179 else:
180 sync(
181 self.client.loop,
182 self.client.scheduler.queue_create,
183 name=self.name,
184 maxsize=maxsize,
185 )
186 self._event_started.set()
187
188 def __await__(self):
189 async def _():
190 await self._event_started.wait()
191 return self
192
193 return _().__await__()
194
195 async def _put(self, value, timeout=None):
196 if isinstance(value, Future):
197 await self.client.scheduler.queue_put(
198 key=tokey(value.key), timeout=timeout, name=self.name
199 )
200 else:
201 await self.client.scheduler.queue_put(
202 data=value, timeout=timeout, name=self.name
203 )
204
205 def put(self, value, timeout=None, **kwargs):
206 """ Put data into the queue """
207 return self.client.sync(self._put, value, timeout=timeout, **kwargs)
208
209 def get(self, timeout=None, batch=False, **kwargs):
210 """ Get data from the queue
211
212 Parameters
213 ----------
214 timeout: Number (optional)
215 Time in seconds to wait before timing out
216 batch: boolean, int (optional)
217 If True then return all elements currently waiting in the queue.
218 If an integer than return that many elements from the queue
219 If False (default) then return one item at a time
220 """
221 return self.client.sync(self._get, timeout=timeout, batch=batch, **kwargs)
222
223 def qsize(self, **kwargs):
224 """ Current number of elements in the queue """
225 return self.client.sync(self._qsize, **kwargs)
226
227 async def _get(self, timeout=None, batch=False):
228 resp = await self.client.scheduler.queue_get(
229 timeout=timeout, name=self.name, batch=batch
230 )
231
232 def process(d):
233 if d["type"] == "Future":
234 value = Future(d["value"], self.client, inform=True, state=d["state"])
235 if d["state"] == "erred":
236 value._state.set_error(d["exception"], d["traceback"])
237 self.client._send_to_scheduler(
238 {"op": "queue-future-release", "name": self.name, "key": d["value"]}
239 )
240 else:
241 value = d["value"]
242
243 return value
244
245 if batch is False:
246 result = process(resp)
247 else:
248 result = list(map(process, resp))
249
250 return result
251
252 async def _qsize(self):
253 result = await self.client.scheduler.queue_qsize(name=self.name)
254 return result
255
256 def close(self):
257 if self.client.status == "running": # TODO: can leave zombie futures
258 self.client._send_to_scheduler({"op": "queue_release", "name": self.name})
259
260 def __getstate__(self):
261 return (self.name, self.client.scheduler.address)
262
263 def __setstate__(self, state):
264 name, address = state
265 try:
266 client = get_client(address)
267 assert client.scheduler.address == address
268 except (AttributeError, AssertionError):
269 client = Client(address, set_as_default=False)
270 self.__init__(name=name, client=client)
271
[end of distributed/queues.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/distributed/queues.py b/distributed/queues.py
--- a/distributed/queues.py
+++ b/distributed/queues.py
@@ -47,7 +47,7 @@
self.scheduler.extensions["queues"] = self
def create(self, stream=None, name=None, client=None, maxsize=0):
- print("name", name)
+ logger.debug("Queue name: {}".format(name))
if name not in self.queues:
self.queues[name] = tornado.queues.Queue(maxsize=maxsize)
self.client_refcount[name] = 1
| {"golden_diff": "diff --git a/distributed/queues.py b/distributed/queues.py\n--- a/distributed/queues.py\n+++ b/distributed/queues.py\n@@ -47,7 +47,7 @@\n self.scheduler.extensions[\"queues\"] = self\n \n def create(self, stream=None, name=None, client=None, maxsize=0):\n- print(\"name\", name)\n+ logger.debug(\"Queue name: {}\".format(name))\n if name not in self.queues:\n self.queues[name] = tornado.queues.Queue(maxsize=maxsize)\n self.client_refcount[name] = 1\n", "issue": "Stop Queue from printing it's name on __init__\nThere's a print statement here:\r\n\r\nhttps://github.com/dask/distributed/blob/8d07bf162b1f6434fc982f36360c5cedbac369a6/distributed/queues.py#L50\r\n\r\nWould be nice to replace it with logging.debug()\n", "before_files": [{"content": "from collections import defaultdict\nimport datetime\nimport logging\nimport uuid\n\nimport tornado.queues\nfrom tornado.locks import Event\n\nfrom .client import Future, _get_global_client, Client\nfrom .utils import tokey, sync, thread_state\nfrom .worker import get_client\n\nlogger = logging.getLogger(__name__)\n\n\nclass QueueExtension(object):\n \"\"\" An extension for the scheduler to manage queues\n\n This adds the following routes to the scheduler\n\n * queue_create\n * queue_release\n * queue_put\n * queue_get\n * queue_size\n \"\"\"\n\n def __init__(self, scheduler):\n self.scheduler = scheduler\n self.queues = dict()\n self.client_refcount = dict()\n self.future_refcount = defaultdict(lambda: 0)\n\n self.scheduler.handlers.update(\n {\n \"queue_create\": self.create,\n \"queue_put\": self.put,\n \"queue_get\": self.get,\n \"queue_qsize\": self.qsize,\n }\n )\n\n self.scheduler.stream_handlers.update(\n {\"queue-future-release\": self.future_release, \"queue_release\": self.release}\n )\n\n self.scheduler.extensions[\"queues\"] = self\n\n def create(self, stream=None, name=None, client=None, maxsize=0):\n print(\"name\", name)\n if name not in self.queues:\n self.queues[name] = tornado.queues.Queue(maxsize=maxsize)\n self.client_refcount[name] = 1\n else:\n self.client_refcount[name] += 1\n\n def release(self, stream=None, name=None, client=None):\n if name not in self.queues:\n return\n\n self.client_refcount[name] -= 1\n if self.client_refcount[name] == 0:\n del self.client_refcount[name]\n futures = self.queues[name]._queue\n del self.queues[name]\n keys = [d[\"value\"] for d in futures if d[\"type\"] == \"Future\"]\n if keys:\n self.scheduler.client_releases_keys(keys=keys, client=\"queue-%s\" % name)\n\n async def put(\n self, stream=None, name=None, key=None, data=None, client=None, timeout=None\n ):\n if key is not None:\n record = {\"type\": \"Future\", \"value\": key}\n self.future_refcount[name, key] += 1\n self.scheduler.client_desires_keys(keys=[key], client=\"queue-%s\" % name)\n else:\n record = {\"type\": \"msgpack\", \"value\": data}\n if timeout is not None:\n timeout = datetime.timedelta(seconds=timeout)\n await self.queues[name].put(record, timeout=timeout)\n\n def future_release(self, name=None, key=None, client=None):\n self.future_refcount[name, key] -= 1\n if self.future_refcount[name, key] == 0:\n self.scheduler.client_releases_keys(keys=[key], client=\"queue-%s\" % name)\n del self.future_refcount[name, key]\n\n async def get(self, stream=None, name=None, client=None, timeout=None, batch=False):\n def process(record):\n \"\"\" Add task status if known \"\"\"\n if record[\"type\"] == \"Future\":\n record = record.copy()\n key = record[\"value\"]\n ts = self.scheduler.tasks.get(key)\n state = ts.state 
if ts is not None else \"lost\"\n\n record[\"state\"] = state\n if state == \"erred\":\n record[\"exception\"] = ts.exception_blame.exception\n record[\"traceback\"] = ts.exception_blame.traceback\n\n return record\n\n if batch:\n q = self.queues[name]\n out = []\n if batch is True:\n while not q.empty():\n record = await q.get()\n out.append(record)\n else:\n if timeout is not None:\n msg = (\n \"Dask queues don't support simultaneous use of \"\n \"integer batch sizes and timeouts\"\n )\n raise NotImplementedError(msg)\n for i in range(batch):\n record = await q.get()\n out.append(record)\n out = [process(o) for o in out]\n return out\n else:\n if timeout is not None:\n timeout = datetime.timedelta(seconds=timeout)\n record = await self.queues[name].get(timeout=timeout)\n record = process(record)\n return record\n\n def qsize(self, stream=None, name=None, client=None):\n return self.queues[name].qsize()\n\n\nclass Queue(object):\n \"\"\" Distributed Queue\n\n This allows multiple clients to share futures or small bits of data between\n each other with a multi-producer/multi-consumer queue. All metadata is\n sequentialized through the scheduler.\n\n Elements of the Queue must be either Futures or msgpack-encodable data\n (ints, strings, lists, dicts). All data is sent through the scheduler so\n it is wise not to send large objects. To share large objects scatter the\n data and share the future instead.\n\n .. warning::\n\n This object is experimental and has known issues in Python 2\n\n Examples\n --------\n >>> from dask.distributed import Client, Queue # doctest: +SKIP\n >>> client = Client() # doctest: +SKIP\n >>> queue = Queue('x') # doctest: +SKIP\n >>> future = client.submit(f, x) # doctest: +SKIP\n >>> queue.put(future) # doctest: +SKIP\n\n See Also\n --------\n Variable: shared variable between clients\n \"\"\"\n\n def __init__(self, name=None, client=None, maxsize=0):\n self.client = client or _get_global_client()\n self.name = name or \"queue-\" + uuid.uuid4().hex\n self._event_started = Event()\n if self.client.asynchronous or getattr(\n thread_state, \"on_event_loop_thread\", False\n ):\n\n async def _create_queue():\n await self.client.scheduler.queue_create(\n name=self.name, maxsize=maxsize\n )\n self._event_started.set()\n\n self.client.loop.add_callback(_create_queue)\n else:\n sync(\n self.client.loop,\n self.client.scheduler.queue_create,\n name=self.name,\n maxsize=maxsize,\n )\n self._event_started.set()\n\n def __await__(self):\n async def _():\n await self._event_started.wait()\n return self\n\n return _().__await__()\n\n async def _put(self, value, timeout=None):\n if isinstance(value, Future):\n await self.client.scheduler.queue_put(\n key=tokey(value.key), timeout=timeout, name=self.name\n )\n else:\n await self.client.scheduler.queue_put(\n data=value, timeout=timeout, name=self.name\n )\n\n def put(self, value, timeout=None, **kwargs):\n \"\"\" Put data into the queue \"\"\"\n return self.client.sync(self._put, value, timeout=timeout, **kwargs)\n\n def get(self, timeout=None, batch=False, **kwargs):\n \"\"\" Get data from the queue\n\n Parameters\n ----------\n timeout: Number (optional)\n Time in seconds to wait before timing out\n batch: boolean, int (optional)\n If True then return all elements currently waiting in the queue.\n If an integer than return that many elements from the queue\n If False (default) then return one item at a time\n \"\"\"\n return self.client.sync(self._get, timeout=timeout, batch=batch, **kwargs)\n\n def qsize(self, **kwargs):\n 
\"\"\" Current number of elements in the queue \"\"\"\n return self.client.sync(self._qsize, **kwargs)\n\n async def _get(self, timeout=None, batch=False):\n resp = await self.client.scheduler.queue_get(\n timeout=timeout, name=self.name, batch=batch\n )\n\n def process(d):\n if d[\"type\"] == \"Future\":\n value = Future(d[\"value\"], self.client, inform=True, state=d[\"state\"])\n if d[\"state\"] == \"erred\":\n value._state.set_error(d[\"exception\"], d[\"traceback\"])\n self.client._send_to_scheduler(\n {\"op\": \"queue-future-release\", \"name\": self.name, \"key\": d[\"value\"]}\n )\n else:\n value = d[\"value\"]\n\n return value\n\n if batch is False:\n result = process(resp)\n else:\n result = list(map(process, resp))\n\n return result\n\n async def _qsize(self):\n result = await self.client.scheduler.queue_qsize(name=self.name)\n return result\n\n def close(self):\n if self.client.status == \"running\": # TODO: can leave zombie futures\n self.client._send_to_scheduler({\"op\": \"queue_release\", \"name\": self.name})\n\n def __getstate__(self):\n return (self.name, self.client.scheduler.address)\n\n def __setstate__(self, state):\n name, address = state\n try:\n client = get_client(address)\n assert client.scheduler.address == address\n except (AttributeError, AssertionError):\n client = Client(address, set_as_default=False)\n self.__init__(name=name, client=client)\n", "path": "distributed/queues.py"}]} | 3,326 | 129 |
gh_patches_debug_12792 | rasdani/github-patches | git_diff | Qiskit__qiskit-4613 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
numpy>=1.17,<=1.18
Some tests are failing recently:
<img width="1035" alt="Screen Shot 2020-06-22 at 12 46 22 PM" src="https://user-images.githubusercontent.com/766693/85316568-f4934480-b48a-11ea-8fc1-5624c8d16d18.png">
<img width="1024" alt="Screen Shot 2020-06-22 at 12 46 46 PM" src="https://user-images.githubusercontent.com/766693/85316571-f5c47180-b48a-11ea-958b-c164b55250ca.png">
@mtreinish suggested that it could be numpy 0.19, released on June 20th. While investigating the issue, pinning the version.
</issue>
<code>
[start of qiskit/pulse/pulse_lib/sample_pulse.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2020.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """A pulse that is described by complex-valued sample points."""
16 import warnings
17 from typing import Callable, Union, List, Optional
18
19 import numpy as np
20
21 from ..channels import PulseChannel
22 from ..exceptions import PulseError
23 from .pulse import Pulse
24
25
26 class SamplePulse(Pulse):
27 """A pulse specified completely by complex-valued samples; each sample is played for the
28 duration of the backend cycle-time, dt.
29 """
30
31 def __init__(self, samples: Union[np.ndarray, List[complex]],
32 name: Optional[str] = None,
33 epsilon: float = 1e-7):
34 """Create new sample pulse command.
35
36 Args:
37 samples: Complex array of the samples in the pulse envelope.
38 name: Unique name to identify the pulse.
39 epsilon: Pulse sample norm tolerance for clipping.
40 If any sample's norm exceeds unity by less than or equal to epsilon
41 it will be clipped to unit norm. If the sample
42 norm is greater than 1+epsilon an error will be raised.
43 """
44 samples = np.asarray(samples, dtype=np.complex_)
45 self.epsilon = epsilon
46 self._samples = self._clip(samples, epsilon=epsilon)
47 super().__init__(duration=len(samples), name=name)
48
49 @property
50 def samples(self) -> np.ndarray:
51 """Return sample values."""
52 return self._samples
53
54 def _clip(self, samples: np.ndarray, epsilon: float = 1e-7) -> np.ndarray:
55 """If samples are within epsilon of unit norm, clip sample by reducing norm by (1-epsilon).
56
57 If difference is greater than epsilon error is raised.
58
59 Args:
60 samples: Complex array of the samples in the pulse envelope.
61 epsilon: Pulse sample norm tolerance for clipping.
62 If any sample's norm exceeds unity by less than or equal to epsilon
63 it will be clipped to unit norm. If the sample
64 norm is greater than 1+epsilon an error will be raised.
65
66 Returns:
67 Clipped pulse samples.
68
69 Raises:
70 PulseError: If there exists a pulse sample with a norm greater than 1+epsilon.
71 """
72 samples_norm = np.abs(samples)
73 to_clip = (samples_norm > 1.) & (samples_norm <= 1. + epsilon)
74
75 if np.any(to_clip):
76 # first try normalizing by the abs value
77 clip_where = np.argwhere(to_clip)
78 clip_angle = np.angle(samples[clip_where])
79 clipped_samples = np.exp(1j*clip_angle, dtype=np.complex_)
80
81 # if norm still exceed one subtract epsilon
82 # required for some platforms
83 clipped_sample_norms = np.abs(clipped_samples)
84 to_clip_epsilon = clipped_sample_norms > 1.
85 if np.any(to_clip_epsilon):
86 clip_where_epsilon = np.argwhere(to_clip_epsilon)
87 clipped_samples_epsilon = np.exp(
88 (1-epsilon)*1j*clip_angle[clip_where_epsilon], dtype=np.complex_)
89 clipped_samples[clip_where_epsilon] = clipped_samples_epsilon
90
91 # update samples with clipped values
92 samples[clip_where] = clipped_samples
93 samples_norm[clip_where] = np.abs(clipped_samples)
94
95 if np.any(samples_norm > 1.):
96 raise PulseError('Pulse contains sample with norm greater than 1+epsilon.')
97
98 return samples
99
100 def draw(self, dt: float = 1,
101 style=None,
102 filename: Optional[str] = None,
103 interp_method: Optional[Callable] = None,
104 scale: float = 1, interactive: bool = False,
105 scaling: float = None):
106 """Plot the interpolated envelope of pulse.
107
108 Args:
109 dt: Time interval of samples.
110 style (Optional[PulseStyle]): A style sheet to configure plot appearance.
111 filename: Name required to save pulse image.
112 interp_method: A function for interpolation.
113 scale: Relative visual scaling of waveform amplitudes.
114 interactive: When set true show the circuit in a new window.
115 (This depends on the matplotlib backend being used.)
116 scaling: Deprecated, see `scale`,
117
118 Returns:
119 matplotlib.figure: A matplotlib figure object of the pulse envelope
120 """
121 # pylint: disable=invalid-name, cyclic-import
122 if scaling is not None:
123 warnings.warn(
124 'The parameter "scaling" is being replaced by "scale"',
125 DeprecationWarning, 3)
126 scale = scaling
127
128 from qiskit import visualization
129
130 return visualization.pulse_drawer(self, dt=dt, style=style, filename=filename,
131 interp_method=interp_method, scale=scale,
132 interactive=interactive)
133
134 def __eq__(self, other: Pulse) -> bool:
135 return super().__eq__(other) and self.samples.shape == other.samples.shape and \
136 np.allclose(self.samples, other.samples, rtol=0, atol=self.epsilon)
137
138 def __hash__(self) -> int:
139 return hash(self.samples.tostring())
140
141 def __repr__(self) -> str:
142 opt = np.get_printoptions()
143 np.set_printoptions(threshold=50)
144 np.set_printoptions(**opt)
145 return "{}({}{})".format(self.__class__.__name__, repr(self.samples),
146 ", name='{}'".format(self.name) if self.name is not None else "")
147
148 def __call__(self, channel: PulseChannel):
149 warnings.warn("Calling `{}` with a channel is deprecated. Instantiate the new `Play` "
150 "instruction directly with a pulse and a channel. In this case, please "
151 "use: `Play(SamplePulse(samples), {})`."
152 "".format(self.__class__.__name__, channel),
153 DeprecationWarning)
154 return super().__call__(channel)
155
[end of qiskit/pulse/pulse_lib/sample_pulse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/pulse/pulse_lib/sample_pulse.py b/qiskit/pulse/pulse_lib/sample_pulse.py
--- a/qiskit/pulse/pulse_lib/sample_pulse.py
+++ b/qiskit/pulse/pulse_lib/sample_pulse.py
@@ -84,8 +84,8 @@
to_clip_epsilon = clipped_sample_norms > 1.
if np.any(to_clip_epsilon):
clip_where_epsilon = np.argwhere(to_clip_epsilon)
- clipped_samples_epsilon = np.exp(
- (1-epsilon)*1j*clip_angle[clip_where_epsilon], dtype=np.complex_)
+ clipped_samples_epsilon = (1-epsilon)*np.exp(
+ 1j*clip_angle[clip_where_epsilon], dtype=np.complex_)
clipped_samples[clip_where_epsilon] = clipped_samples_epsilon
# update samples with clipped values
| {"golden_diff": "diff --git a/qiskit/pulse/pulse_lib/sample_pulse.py b/qiskit/pulse/pulse_lib/sample_pulse.py\n--- a/qiskit/pulse/pulse_lib/sample_pulse.py\n+++ b/qiskit/pulse/pulse_lib/sample_pulse.py\n@@ -84,8 +84,8 @@\n to_clip_epsilon = clipped_sample_norms > 1.\n if np.any(to_clip_epsilon):\n clip_where_epsilon = np.argwhere(to_clip_epsilon)\n- clipped_samples_epsilon = np.exp(\n- (1-epsilon)*1j*clip_angle[clip_where_epsilon], dtype=np.complex_)\n+ clipped_samples_epsilon = (1-epsilon)*np.exp(\n+ 1j*clip_angle[clip_where_epsilon], dtype=np.complex_)\n clipped_samples[clip_where_epsilon] = clipped_samples_epsilon\n \n # update samples with clipped values\n", "issue": "numpy>=1.17,<=1.18\nSome tests are failing recently:\r\n<img width=\"1035\" alt=\"Screen Shot 2020-06-22 at 12 46 22 PM\" src=\"https://user-images.githubusercontent.com/766693/85316568-f4934480-b48a-11ea-8fc1-5624c8d16d18.png\">\r\n<img width=\"1024\" alt=\"Screen Shot 2020-06-22 at 12 46 46 PM\" src=\"https://user-images.githubusercontent.com/766693/85316571-f5c47180-b48a-11ea-958b-c164b55250ca.png\">\r\n\r\n@mtreinish suggested that it could be numpy 0.19, released on June 20th. While investigating the issue, pinning the version.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2020.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"A pulse that is described by complex-valued sample points.\"\"\"\nimport warnings\nfrom typing import Callable, Union, List, Optional\n\nimport numpy as np\n\nfrom ..channels import PulseChannel\nfrom ..exceptions import PulseError\nfrom .pulse import Pulse\n\n\nclass SamplePulse(Pulse):\n \"\"\"A pulse specified completely by complex-valued samples; each sample is played for the\n duration of the backend cycle-time, dt.\n \"\"\"\n\n def __init__(self, samples: Union[np.ndarray, List[complex]],\n name: Optional[str] = None,\n epsilon: float = 1e-7):\n \"\"\"Create new sample pulse command.\n\n Args:\n samples: Complex array of the samples in the pulse envelope.\n name: Unique name to identify the pulse.\n epsilon: Pulse sample norm tolerance for clipping.\n If any sample's norm exceeds unity by less than or equal to epsilon\n it will be clipped to unit norm. If the sample\n norm is greater than 1+epsilon an error will be raised.\n \"\"\"\n samples = np.asarray(samples, dtype=np.complex_)\n self.epsilon = epsilon\n self._samples = self._clip(samples, epsilon=epsilon)\n super().__init__(duration=len(samples), name=name)\n\n @property\n def samples(self) -> np.ndarray:\n \"\"\"Return sample values.\"\"\"\n return self._samples\n\n def _clip(self, samples: np.ndarray, epsilon: float = 1e-7) -> np.ndarray:\n \"\"\"If samples are within epsilon of unit norm, clip sample by reducing norm by (1-epsilon).\n\n If difference is greater than epsilon error is raised.\n\n Args:\n samples: Complex array of the samples in the pulse envelope.\n epsilon: Pulse sample norm tolerance for clipping.\n If any sample's norm exceeds unity by less than or equal to epsilon\n it will be clipped to unit norm. 
If the sample\n norm is greater than 1+epsilon an error will be raised.\n\n Returns:\n Clipped pulse samples.\n\n Raises:\n PulseError: If there exists a pulse sample with a norm greater than 1+epsilon.\n \"\"\"\n samples_norm = np.abs(samples)\n to_clip = (samples_norm > 1.) & (samples_norm <= 1. + epsilon)\n\n if np.any(to_clip):\n # first try normalizing by the abs value\n clip_where = np.argwhere(to_clip)\n clip_angle = np.angle(samples[clip_where])\n clipped_samples = np.exp(1j*clip_angle, dtype=np.complex_)\n\n # if norm still exceed one subtract epsilon\n # required for some platforms\n clipped_sample_norms = np.abs(clipped_samples)\n to_clip_epsilon = clipped_sample_norms > 1.\n if np.any(to_clip_epsilon):\n clip_where_epsilon = np.argwhere(to_clip_epsilon)\n clipped_samples_epsilon = np.exp(\n (1-epsilon)*1j*clip_angle[clip_where_epsilon], dtype=np.complex_)\n clipped_samples[clip_where_epsilon] = clipped_samples_epsilon\n\n # update samples with clipped values\n samples[clip_where] = clipped_samples\n samples_norm[clip_where] = np.abs(clipped_samples)\n\n if np.any(samples_norm > 1.):\n raise PulseError('Pulse contains sample with norm greater than 1+epsilon.')\n\n return samples\n\n def draw(self, dt: float = 1,\n style=None,\n filename: Optional[str] = None,\n interp_method: Optional[Callable] = None,\n scale: float = 1, interactive: bool = False,\n scaling: float = None):\n \"\"\"Plot the interpolated envelope of pulse.\n\n Args:\n dt: Time interval of samples.\n style (Optional[PulseStyle]): A style sheet to configure plot appearance.\n filename: Name required to save pulse image.\n interp_method: A function for interpolation.\n scale: Relative visual scaling of waveform amplitudes.\n interactive: When set true show the circuit in a new window.\n (This depends on the matplotlib backend being used.)\n scaling: Deprecated, see `scale`,\n\n Returns:\n matplotlib.figure: A matplotlib figure object of the pulse envelope\n \"\"\"\n # pylint: disable=invalid-name, cyclic-import\n if scaling is not None:\n warnings.warn(\n 'The parameter \"scaling\" is being replaced by \"scale\"',\n DeprecationWarning, 3)\n scale = scaling\n\n from qiskit import visualization\n\n return visualization.pulse_drawer(self, dt=dt, style=style, filename=filename,\n interp_method=interp_method, scale=scale,\n interactive=interactive)\n\n def __eq__(self, other: Pulse) -> bool:\n return super().__eq__(other) and self.samples.shape == other.samples.shape and \\\n np.allclose(self.samples, other.samples, rtol=0, atol=self.epsilon)\n\n def __hash__(self) -> int:\n return hash(self.samples.tostring())\n\n def __repr__(self) -> str:\n opt = np.get_printoptions()\n np.set_printoptions(threshold=50)\n np.set_printoptions(**opt)\n return \"{}({}{})\".format(self.__class__.__name__, repr(self.samples),\n \", name='{}'\".format(self.name) if self.name is not None else \"\")\n\n def __call__(self, channel: PulseChannel):\n warnings.warn(\"Calling `{}` with a channel is deprecated. Instantiate the new `Play` \"\n \"instruction directly with a pulse and a channel. In this case, please \"\n \"use: `Play(SamplePulse(samples), {})`.\"\n \"\".format(self.__class__.__name__, channel),\n DeprecationWarning)\n return super().__call__(channel)\n", "path": "qiskit/pulse/pulse_lib/sample_pulse.py"}]} | 2,504 | 185 |
gh_patches_debug_8364 | rasdani/github-patches | git_diff | svthalia__concrexit-1756 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PaymentDetailView in admin API allows deleting payments unauthorized
https://github.com/svthalia/concrexit/blob/4ab37961f50e398cc52422cdc1df66f6ab8ff2ee/website/payments/api/v2/admin/views.py#L69
### Describe the bug
Payments sometimes should be undeletable. For example, TPay payments that are in a batch. The PaymentAdmin prevents such deletions. However, the rest framework DestroyAPIView does not respect that.
### How to reproduce
Steps to reproduce the behaviour:
1. Have a payment
2. Add it to a batch
3. Process the batch
4. Do the API `DELETE` request at `/api/v2/admin/payments/<id>`
### Expected behaviour
Either disable payment deletion at all from the API, or manually implement a check that the payment is not in a processed batch.
</issue>
<code>
[start of website/payments/api/v2/admin/views.py]
1 import rest_framework.filters as framework_filters
2 from django.apps import apps
3 from django.http import Http404
4 from django.utils.translation import gettext_lazy as _
5 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
6 from rest_framework import status, serializers
7 from rest_framework.exceptions import PermissionDenied, ValidationError
8 from rest_framework.generics import get_object_or_404
9 from rest_framework.permissions import IsAdminUser
10 from rest_framework.response import Response
11 from rest_framework.settings import api_settings
12 from rest_framework.views import APIView
13
14 from payments import services, payables, NotRegistered
15 from payments.api.v2 import filters
16 from payments.api.v2.admin.serializers.payable_create import (
17 PayableCreateAdminSerializer,
18 )
19 from payments.api.v2.admin.serializers.payable_detail import PayableAdminSerializer
20 from payments.api.v2.admin.serializers.payment import (
21 PaymentAdminSerializer,
22 PaymentCreateSerializer,
23 )
24 from payments.exceptions import PaymentError
25 from payments.models import Payment, PaymentUser
26 from thaliawebsite.api.v2.admin import (
27 AdminListAPIView,
28 AdminCreateAPIView,
29 AdminRetrieveAPIView,
30 AdminDestroyAPIView,
31 )
32
33
34 class PaymentListCreateView(AdminListAPIView, AdminCreateAPIView):
35 """View that allows you to create and list payments as admin."""
36
37 queryset = Payment.objects.prefetch_related(
38 "paid_by__profile",
39 "paid_by__membership_set",
40 "processed_by__profile",
41 "processed_by__membership_set",
42 )
43
44 required_scopes = ["payments:admin"]
45 filter_backends = (
46 framework_filters.OrderingFilter,
47 filters.CreatedAtFilter,
48 filters.PaymentTypeFilter,
49 )
50 ordering_fields = ("created_at",)
51
52 def get_serializer_class(self):
53 if self.request.method.lower() == "post":
54 return PaymentCreateSerializer
55 return PaymentAdminSerializer
56
57 def create(self, request, *args, **kwargs):
58 serializer = self.get_serializer(data=request.data)
59 serializer.is_valid(raise_exception=True)
60 self.perform_create(serializer)
61 return Response(
62 PaymentAdminSerializer(
63 serializer.instance, context=self.get_serializer_context()
64 ).data,
65 status=status.HTTP_201_CREATED,
66 )
67
68
69 class PaymentDetailView(AdminRetrieveAPIView, AdminDestroyAPIView):
70 """View that allows you to manage a single payment as admin."""
71
72 queryset = Payment.objects.all()
73 serializer_class = PaymentAdminSerializer
74 permission_classes = [IsAuthenticatedOrTokenHasScope]
75 required_scopes = ["payments:admin"]
76
77
78 class PayableDetailView(APIView):
79 """View that allows you to manipulate the payment for the payable.
80
81 Permissions of this view are based on the payable.
82 """
83
84 required_scopes = ["payments:admin"]
85 permission_classes = [IsAuthenticatedOrTokenHasScope, IsAdminUser]
86
87 def get_serializer_context(self):
88 return {"request": self.request, "format": self.format_kwarg, "view": self}
89
90 def get_payable(self):
91 app_label = self.kwargs["app_label"]
92 model_name = self.kwargs["model_name"]
93 payable_pk = self.kwargs["payable_pk"]
94
95 try:
96 payable_model = apps.get_model(app_label=app_label, model_name=model_name)
97 payable = payables.get_payable(
98 get_object_or_404(payable_model, pk=payable_pk)
99 )
100 except (LookupError, NotRegistered) as e:
101 raise serializers.ValidationError(
102 {api_settings.NON_FIELD_ERRORS_KEY: [_("Payable model not found")]}
103 ) from e
104
105 if not payable.can_manage_payment(self.request.member):
106 raise PermissionDenied(
107 detail=_("You do not have permission to perform this action.")
108 )
109
110 return payable
111
112 def get(self, request, *args, **kwargs):
113 """Get information about a payable."""
114 serializer = PayableAdminSerializer(
115 self.get_payable(), context=self.get_serializer_context()
116 )
117 return Response(serializer.data, status=status.HTTP_200_OK)
118
119 def delete(self, request, *args, **kwargs):
120 """Remove the current payment for a payable."""
121 payable = self.get_payable()
122
123 if not payable.model.payment:
124 raise Http404
125
126 try:
127 services.delete_payment(
128 payable.model, request.member,
129 )
130 payable.model.save()
131 except PaymentError as e:
132 raise PermissionDenied(detail=str(e))
133
134 return Response(status=status.HTTP_204_NO_CONTENT)
135
136 def patch(self, request, *args, **kwargs):
137 """Mark the payable as paid by creating a payment for it."""
138 serializer = PayableCreateAdminSerializer(
139 data=request.data, context=self.get_serializer_context()
140 )
141 serializer.is_valid(raise_exception=True)
142
143 payable = self.get_payable()
144
145 try:
146 services.create_payment(
147 payable,
148 PaymentUser.objects.get(pk=request.user.pk),
149 serializer.data["payment_type"],
150 )
151 payable.model.save()
152 except PaymentError as e:
153 raise ValidationError(detail={api_settings.NON_FIELD_ERRORS_KEY: [str(e)]})
154
155 return Response(
156 PayableAdminSerializer(payable, context=self.get_serializer_context()).data,
157 status=status.HTTP_201_CREATED,
158 )
159
[end of website/payments/api/v2/admin/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/payments/api/v2/admin/views.py b/website/payments/api/v2/admin/views.py
--- a/website/payments/api/v2/admin/views.py
+++ b/website/payments/api/v2/admin/views.py
@@ -74,6 +74,11 @@
permission_classes = [IsAuthenticatedOrTokenHasScope]
required_scopes = ["payments:admin"]
+ def delete(self, request, *args, **kwargs):
+ if self.get_object().batch and self.get_object().batch.processed:
+ raise PermissionDenied("This payment cannot be deleted.")
+ return super().delete(request, *args, **kwargs)
+
class PayableDetailView(APIView):
"""View that allows you to manipulate the payment for the payable.
| {"golden_diff": "diff --git a/website/payments/api/v2/admin/views.py b/website/payments/api/v2/admin/views.py\n--- a/website/payments/api/v2/admin/views.py\n+++ b/website/payments/api/v2/admin/views.py\n@@ -74,6 +74,11 @@\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"payments:admin\"]\n \n+ def delete(self, request, *args, **kwargs):\n+ if self.get_object().batch and self.get_object().batch.processed:\n+ raise PermissionDenied(\"This payment cannot be deleted.\")\n+ return super().delete(request, *args, **kwargs)\n+\n \n class PayableDetailView(APIView):\n \"\"\"View that allows you to manipulate the payment for the payable.\n", "issue": "PaymentDetailView in admin API allows deleting payments unauthorized\nhttps://github.com/svthalia/concrexit/blob/4ab37961f50e398cc52422cdc1df66f6ab8ff2ee/website/payments/api/v2/admin/views.py#L69\r\n\r\n### Describe the bug\r\nPayments sometimes should be undeletable. For example, TPay payments that are in a batch. The PaymentAdmin prevents such deletions. However, the rest framework DestroyAPIView does not respect that.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Have a payment \r\n2. Add it to a batch\r\n3. Process the batch\r\n4. Do the API `DELETE` request at `/api/v2/admin/payments/<id>`\r\n\r\n### Expected behaviour\r\nEither disable payment deletion at all from the API, or manually implement a check that the payment is not in a processed batch.\r\n\n", "before_files": [{"content": "import rest_framework.filters as framework_filters\nfrom django.apps import apps\nfrom django.http import Http404\nfrom django.utils.translation import gettext_lazy as _\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import status, serializers\nfrom rest_framework.exceptions import PermissionDenied, ValidationError\nfrom rest_framework.generics import get_object_or_404\nfrom rest_framework.permissions import IsAdminUser\nfrom rest_framework.response import Response\nfrom rest_framework.settings import api_settings\nfrom rest_framework.views import APIView\n\nfrom payments import services, payables, NotRegistered\nfrom payments.api.v2 import filters\nfrom payments.api.v2.admin.serializers.payable_create import (\n PayableCreateAdminSerializer,\n)\nfrom payments.api.v2.admin.serializers.payable_detail import PayableAdminSerializer\nfrom payments.api.v2.admin.serializers.payment import (\n PaymentAdminSerializer,\n PaymentCreateSerializer,\n)\nfrom payments.exceptions import PaymentError\nfrom payments.models import Payment, PaymentUser\nfrom thaliawebsite.api.v2.admin import (\n AdminListAPIView,\n AdminCreateAPIView,\n AdminRetrieveAPIView,\n AdminDestroyAPIView,\n)\n\n\nclass PaymentListCreateView(AdminListAPIView, AdminCreateAPIView):\n \"\"\"View that allows you to create and list payments as admin.\"\"\"\n\n queryset = Payment.objects.prefetch_related(\n \"paid_by__profile\",\n \"paid_by__membership_set\",\n \"processed_by__profile\",\n \"processed_by__membership_set\",\n )\n\n required_scopes = [\"payments:admin\"]\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.CreatedAtFilter,\n filters.PaymentTypeFilter,\n )\n ordering_fields = (\"created_at\",)\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"post\":\n return PaymentCreateSerializer\n return PaymentAdminSerializer\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n 
serializer.is_valid(raise_exception=True)\n self.perform_create(serializer)\n return Response(\n PaymentAdminSerializer(\n serializer.instance, context=self.get_serializer_context()\n ).data,\n status=status.HTTP_201_CREATED,\n )\n\n\nclass PaymentDetailView(AdminRetrieveAPIView, AdminDestroyAPIView):\n \"\"\"View that allows you to manage a single payment as admin.\"\"\"\n\n queryset = Payment.objects.all()\n serializer_class = PaymentAdminSerializer\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"payments:admin\"]\n\n\nclass PayableDetailView(APIView):\n \"\"\"View that allows you to manipulate the payment for the payable.\n\n Permissions of this view are based on the payable.\n \"\"\"\n\n required_scopes = [\"payments:admin\"]\n permission_classes = [IsAuthenticatedOrTokenHasScope, IsAdminUser]\n\n def get_serializer_context(self):\n return {\"request\": self.request, \"format\": self.format_kwarg, \"view\": self}\n\n def get_payable(self):\n app_label = self.kwargs[\"app_label\"]\n model_name = self.kwargs[\"model_name\"]\n payable_pk = self.kwargs[\"payable_pk\"]\n\n try:\n payable_model = apps.get_model(app_label=app_label, model_name=model_name)\n payable = payables.get_payable(\n get_object_or_404(payable_model, pk=payable_pk)\n )\n except (LookupError, NotRegistered) as e:\n raise serializers.ValidationError(\n {api_settings.NON_FIELD_ERRORS_KEY: [_(\"Payable model not found\")]}\n ) from e\n\n if not payable.can_manage_payment(self.request.member):\n raise PermissionDenied(\n detail=_(\"You do not have permission to perform this action.\")\n )\n\n return payable\n\n def get(self, request, *args, **kwargs):\n \"\"\"Get information about a payable.\"\"\"\n serializer = PayableAdminSerializer(\n self.get_payable(), context=self.get_serializer_context()\n )\n return Response(serializer.data, status=status.HTTP_200_OK)\n\n def delete(self, request, *args, **kwargs):\n \"\"\"Remove the current payment for a payable.\"\"\"\n payable = self.get_payable()\n\n if not payable.model.payment:\n raise Http404\n\n try:\n services.delete_payment(\n payable.model, request.member,\n )\n payable.model.save()\n except PaymentError as e:\n raise PermissionDenied(detail=str(e))\n\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n def patch(self, request, *args, **kwargs):\n \"\"\"Mark the payable as paid by creating a payment for it.\"\"\"\n serializer = PayableCreateAdminSerializer(\n data=request.data, context=self.get_serializer_context()\n )\n serializer.is_valid(raise_exception=True)\n\n payable = self.get_payable()\n\n try:\n services.create_payment(\n payable,\n PaymentUser.objects.get(pk=request.user.pk),\n serializer.data[\"payment_type\"],\n )\n payable.model.save()\n except PaymentError as e:\n raise ValidationError(detail={api_settings.NON_FIELD_ERRORS_KEY: [str(e)]})\n\n return Response(\n PayableAdminSerializer(payable, context=self.get_serializer_context()).data,\n status=status.HTTP_201_CREATED,\n )\n", "path": "website/payments/api/v2/admin/views.py"}]} | 2,222 | 168 |
gh_patches_debug_17103 | rasdani/github-patches | git_diff | fossasia__open-event-server-6604 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong Ticket Type for Free Tickets
**Describe the bug**
The ticket type of free tickets is shown as TICKET_TYPE_PAID.
**To Reproduce**
Try booking a ticket using attendee app.
**Expected behavior**
The ticket type of free tickets should be TICKET_TYPE_FREE.
</issue>
<code>
[start of app/api/schema/tickets.py]
1 from marshmallow import validates_schema
2 from marshmallow_jsonapi import fields
3 from marshmallow_jsonapi.flask import Relationship
4 from sqlalchemy.orm.exc import NoResultFound
5
6 from app.api.helpers.exceptions import UnprocessableEntity
7 from app.api.helpers.utilities import dasherize
8 from app.api.schema.base import SoftDeletionSchema
9 from app.models.discount_code import DiscountCode
10 from app.models.ticket import Ticket
11 from utils.common import use_defaults
12
13
14 @use_defaults()
15 class TicketSchemaPublic(SoftDeletionSchema):
16 class Meta:
17 type_ = 'ticket'
18 self_view = 'v1.ticket_detail'
19 self_view_kwargs = {'id': '<id>'}
20 inflect = dasherize
21
22 @validates_schema(pass_original=True)
23 def validate_date(self, data, original_data):
24 if 'id' in original_data['data']:
25 ticket = Ticket.query.filter_by(id=original_data['data']['id']).one()
26
27 if 'sales_starts_at' not in data:
28 data['sales_starts_at'] = ticket.sales_starts_at
29
30 if 'sales_ends_at' not in data:
31 data['sales_ends_at'] = ticket.sales_ends_at
32
33 # if 'event_ends_at' not in data:
34 # data['event_ends_at'] = ticket.event.ends_at
35
36 if data['sales_starts_at'] >= data['sales_ends_at']:
37 raise UnprocessableEntity({'pointer': '/data/attributes/sales-ends-at'},
38 "sales-ends-at should be after sales-starts-at")
39
40 # if 'event_ends_at' in data and data['sales_starts_at'] > data['event_ends_at']:
41 # raise UnprocessableEntity({'pointer': '/data/attributes/sales-starts-at'},
42 # "ticket sales-starts-at should be before event ends-at")
43
44 # if 'event_ends_at' in data and data['sales_ends_at'] > data['event_ends_at']:
45 # raise UnprocessableEntity({'pointer': '/data/attributes/sales-ends-at'},
46 # "ticket sales-ends-at should be before event ends-at")
47
48 @validates_schema
49 def validate_quantity(self, data):
50 if 'max_order' in data and 'min_order' in data:
51 if data['max_order'] < data['min_order']:
52 raise UnprocessableEntity({'pointer': '/data/attributes/max-order'},
53 "max-order should be greater than or equal to min-order")
54
55 if 'quantity' in data and 'min_order' in data:
56 if data['quantity'] < data['min_order']:
57 raise UnprocessableEntity({'pointer': '/data/attributes/quantity'},
58 "quantity should be greater than or equal to min-order")
59
60 if 'min_price' in data and 'max_price' in data and data['type'] == 'donation':
61 if data['min_price'] > data['max_price']:
62 raise UnprocessableEntity({'pointer': '/data/attributes/min-price'},
63 "minimum price should be lesser than or equal to maximum price")
64
65 if 'quantity' in data and 'max_order' in data:
66 if data['quantity'] < data['max_order']:
67 raise UnprocessableEntity({'pointer': '/data/attributes/quantity'},
68 "quantity should be greater than or equal to max-order")
69
70 @validates_schema(pass_original=True)
71 def validate_discount_code(self, data, original_data):
72 if 'relationships' in original_data and 'discount-codes' in original_data['data']['relationships']:
73 discount_codes = original_data['data']['relationships']['discount-codes']
74 for code in discount_codes['data']:
75 try:
76 DiscountCode.query.filter_by(id=code['id']).one()
77 except NoResultFound:
78 raise UnprocessableEntity(
79 {'pointer': '/data/relationships/discount-codes'}, "Discount code does not exist")
80
81 id = fields.Str(dump_only=True)
82 name = fields.Str(required=True)
83 description = fields.Str(allow_none=True)
84 type = fields.Str(required=True)
85 price = fields.Float(validate=lambda n: n >= 0, allow_none=True)
86 min_price = fields.Float(validate=lambda n: n >= 0)
87 max_price = fields.Float(validate=lambda n: n >= 0, allow_none=True)
88 quantity = fields.Integer(validate=lambda n: n >= 0, allow_none=True)
89 is_description_visible = fields.Boolean(default=False)
90 position = fields.Integer(allow_none=True)
91 is_fee_absorbed = fields.Boolean()
92 sales_starts_at = fields.DateTime(required=True)
93 sales_ends_at = fields.DateTime(required=True)
94 is_hidden = fields.Boolean(default=False)
95 min_order = fields.Integer(validate=lambda n: n >= 0, allow_none=True)
96 max_order = fields.Integer(validate=lambda n: n >= 0, allow_none=True)
97 is_checkin_restricted = fields.Boolean(default=True)
98 auto_checkin_enabled = fields.Boolean(default=False)
99 event = Relationship(attribute='event',
100 self_view='v1.ticket_event',
101 self_view_kwargs={'id': '<id>'},
102 related_view='v1.event_detail',
103 related_view_kwargs={'ticket_id': '<id>'},
104 schema='EventSchemaPublic',
105 type_='event')
106
107 ticket_tags = Relationship(attribute='tags',
108 self_view='v1.ticket_ticket_tag',
109 self_view_kwargs={'id': '<id>'},
110 related_view='v1.ticket_tag_list',
111 related_view_kwargs={'ticket_id': '<id>'},
112 schema='TicketTagSchema',
113 many=True,
114 type_='ticket-tag')
115
116 discount_codes = Relationship(
117 attribute='discount_codes',
118 self_view='v1.ticket_discount_codes',
119 self_view_kwargs={'id': '<id>'},
120 related_view='v1.discount_code_list',
121 related_view_kwargs={'ticket_id': '<id>'},
122 schema='DiscountCodeSchemaTicket',
123 many=True,
124 type_='discount-code')
125
126
127 class TicketSchema(TicketSchemaPublic):
128 class Meta:
129 type_ = 'ticket'
130 self_view = 'v1.ticket_detail'
131 self_view_kwargs = {'id': '<id>'}
132 inflect = dasherize
133
134 access_codes = Relationship(attribute='access_codes',
135 self_view='v1.ticket_access_code',
136 self_view_kwargs={'id': '<id>'},
137 related_view='v1.access_code_list',
138 related_view_kwargs={'ticket_id': '<id>'},
139 schema='AccessCodeSchema',
140 many=True,
141 type_='access-code')
142 attendees = Relationship(attribute='ticket_holders',
143 self_view='v1.ticket_attendees',
144 self_view_kwargs={'id': '<id>'},
145 related_view='v1.attendee_list_post',
146 related_view_kwargs={'ticket_id': '<id>'},
147 schema='AttendeeSchema',
148 many=True,
149 type_='attendee')
150
[end of app/api/schema/tickets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/schema/tickets.py b/app/api/schema/tickets.py
--- a/app/api/schema/tickets.py
+++ b/app/api/schema/tickets.py
@@ -67,6 +67,12 @@
raise UnprocessableEntity({'pointer': '/data/attributes/quantity'},
"quantity should be greater than or equal to max-order")
+ @validates_schema
+ def validate_price(self, data):
+ if data['type'] == 'paid' and ('price' not in data or data['price'] <= 0):
+ raise UnprocessableEntity({'pointer': 'data/attributes/price'},
+ "paid ticket price should be greater than 0")
+
@validates_schema(pass_original=True)
def validate_discount_code(self, data, original_data):
if 'relationships' in original_data and 'discount-codes' in original_data['data']['relationships']:
| {"golden_diff": "diff --git a/app/api/schema/tickets.py b/app/api/schema/tickets.py\n--- a/app/api/schema/tickets.py\n+++ b/app/api/schema/tickets.py\n@@ -67,6 +67,12 @@\n raise UnprocessableEntity({'pointer': '/data/attributes/quantity'},\n \"quantity should be greater than or equal to max-order\")\n \n+ @validates_schema\n+ def validate_price(self, data):\n+ if data['type'] == 'paid' and ('price' not in data or data['price'] <= 0):\n+ raise UnprocessableEntity({'pointer': 'data/attributes/price'},\n+ \"paid ticket price should be greater than 0\")\n+\n @validates_schema(pass_original=True)\n def validate_discount_code(self, data, original_data):\n if 'relationships' in original_data and 'discount-codes' in original_data['data']['relationships']:\n", "issue": "Wrong Ticket Type for Free Tickets\n**Describe the bug**\r\nThe ticket type of free tickets is shown as TICKET_TYPE_PAID.\r\n\r\n**To Reproduce**\r\nTry booking a ticket using attendee app.\r\n\r\n**Expected behavior**\r\nThe ticket type of free tickets should be TICKET_TYPE_FREE.\r\n\r\n\n", "before_files": [{"content": "from marshmallow import validates_schema\nfrom marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Relationship\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app.api.helpers.exceptions import UnprocessableEntity\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.schema.base import SoftDeletionSchema\nfrom app.models.discount_code import DiscountCode\nfrom app.models.ticket import Ticket\nfrom utils.common import use_defaults\n\n\n@use_defaults()\nclass TicketSchemaPublic(SoftDeletionSchema):\n class Meta:\n type_ = 'ticket'\n self_view = 'v1.ticket_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n @validates_schema(pass_original=True)\n def validate_date(self, data, original_data):\n if 'id' in original_data['data']:\n ticket = Ticket.query.filter_by(id=original_data['data']['id']).one()\n\n if 'sales_starts_at' not in data:\n data['sales_starts_at'] = ticket.sales_starts_at\n\n if 'sales_ends_at' not in data:\n data['sales_ends_at'] = ticket.sales_ends_at\n\n # if 'event_ends_at' not in data:\n # data['event_ends_at'] = ticket.event.ends_at\n\n if data['sales_starts_at'] >= data['sales_ends_at']:\n raise UnprocessableEntity({'pointer': '/data/attributes/sales-ends-at'},\n \"sales-ends-at should be after sales-starts-at\")\n\n # if 'event_ends_at' in data and data['sales_starts_at'] > data['event_ends_at']:\n # raise UnprocessableEntity({'pointer': '/data/attributes/sales-starts-at'},\n # \"ticket sales-starts-at should be before event ends-at\")\n\n # if 'event_ends_at' in data and data['sales_ends_at'] > data['event_ends_at']:\n # raise UnprocessableEntity({'pointer': '/data/attributes/sales-ends-at'},\n # \"ticket sales-ends-at should be before event ends-at\")\n\n @validates_schema\n def validate_quantity(self, data):\n if 'max_order' in data and 'min_order' in data:\n if data['max_order'] < data['min_order']:\n raise UnprocessableEntity({'pointer': '/data/attributes/max-order'},\n \"max-order should be greater than or equal to min-order\")\n\n if 'quantity' in data and 'min_order' in data:\n if data['quantity'] < data['min_order']:\n raise UnprocessableEntity({'pointer': '/data/attributes/quantity'},\n \"quantity should be greater than or equal to min-order\")\n\n if 'min_price' in data and 'max_price' in data and data['type'] == 'donation':\n if data['min_price'] > data['max_price']:\n raise UnprocessableEntity({'pointer': 
'/data/attributes/min-price'},\n \"minimum price should be lesser than or equal to maximum price\")\n\n if 'quantity' in data and 'max_order' in data:\n if data['quantity'] < data['max_order']:\n raise UnprocessableEntity({'pointer': '/data/attributes/quantity'},\n \"quantity should be greater than or equal to max-order\")\n\n @validates_schema(pass_original=True)\n def validate_discount_code(self, data, original_data):\n if 'relationships' in original_data and 'discount-codes' in original_data['data']['relationships']:\n discount_codes = original_data['data']['relationships']['discount-codes']\n for code in discount_codes['data']:\n try:\n DiscountCode.query.filter_by(id=code['id']).one()\n except NoResultFound:\n raise UnprocessableEntity(\n {'pointer': '/data/relationships/discount-codes'}, \"Discount code does not exist\")\n\n id = fields.Str(dump_only=True)\n name = fields.Str(required=True)\n description = fields.Str(allow_none=True)\n type = fields.Str(required=True)\n price = fields.Float(validate=lambda n: n >= 0, allow_none=True)\n min_price = fields.Float(validate=lambda n: n >= 0)\n max_price = fields.Float(validate=lambda n: n >= 0, allow_none=True)\n quantity = fields.Integer(validate=lambda n: n >= 0, allow_none=True)\n is_description_visible = fields.Boolean(default=False)\n position = fields.Integer(allow_none=True)\n is_fee_absorbed = fields.Boolean()\n sales_starts_at = fields.DateTime(required=True)\n sales_ends_at = fields.DateTime(required=True)\n is_hidden = fields.Boolean(default=False)\n min_order = fields.Integer(validate=lambda n: n >= 0, allow_none=True)\n max_order = fields.Integer(validate=lambda n: n >= 0, allow_none=True)\n is_checkin_restricted = fields.Boolean(default=True)\n auto_checkin_enabled = fields.Boolean(default=False)\n event = Relationship(attribute='event',\n self_view='v1.ticket_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'ticket_id': '<id>'},\n schema='EventSchemaPublic',\n type_='event')\n\n ticket_tags = Relationship(attribute='tags',\n self_view='v1.ticket_ticket_tag',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.ticket_tag_list',\n related_view_kwargs={'ticket_id': '<id>'},\n schema='TicketTagSchema',\n many=True,\n type_='ticket-tag')\n\n discount_codes = Relationship(\n attribute='discount_codes',\n self_view='v1.ticket_discount_codes',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.discount_code_list',\n related_view_kwargs={'ticket_id': '<id>'},\n schema='DiscountCodeSchemaTicket',\n many=True,\n type_='discount-code')\n\n\nclass TicketSchema(TicketSchemaPublic):\n class Meta:\n type_ = 'ticket'\n self_view = 'v1.ticket_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n access_codes = Relationship(attribute='access_codes',\n self_view='v1.ticket_access_code',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.access_code_list',\n related_view_kwargs={'ticket_id': '<id>'},\n schema='AccessCodeSchema',\n many=True,\n type_='access-code')\n attendees = Relationship(attribute='ticket_holders',\n self_view='v1.ticket_attendees',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.attendee_list_post',\n related_view_kwargs={'ticket_id': '<id>'},\n schema='AttendeeSchema',\n many=True,\n type_='attendee')\n", "path": "app/api/schema/tickets.py"}]} | 2,379 | 194 |
gh_patches_debug_644 | rasdani/github-patches | git_diff | pex-tool__pex-1864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.101
On the docket:
+ [x] Pex fails to find RECORD for python-certifi-win32 1.6.1 #1861
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.100"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.100"
+__version__ = "2.1.101"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.100\"\n+__version__ = \"2.1.101\"\n", "issue": "Release 2.1.101\nOn the docket:\r\n+ [x] Pex fails to find RECORD for python-certifi-win32 1.6.1 #1861\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.100\"\n", "path": "pex/version.py"}]} | 627 | 98 |
gh_patches_debug_32606 | rasdani/github-patches | git_diff | nextcloud__appstore-33 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API routes should be enabled for CORS
Only API routes should be whitelisted for CORS
We need to make sure that CORS and session auth are mutually exclusive
The solution is probably to integrate https://github.com/ottoyiu/django-cors-headers/
</issue>
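For orientation, the snippet below sketches the django-cors-headers wiring the issue points to. The app name, middleware class, and `CORS_*` settings come from that package; the `/api/` URL prefix and the allow-all origin policy are illustrative assumptions, not necessarily the configuration the project will adopt.

```python
# Sketch only: settings.py additions for django-cors-headers, limited to API routes.
INSTALLED_APPS = [
    # ... existing apps ...
    'corsheaders',
]

MIDDLEWARE_CLASSES = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'corsheaders.middleware.CorsMiddleware',  # must run before CommonMiddleware
    'django.middleware.common.CommonMiddleware',
    # ... remaining middleware unchanged ...
]

CORS_ORIGIN_ALLOW_ALL = True    # the API data is public and read-only
CORS_URLS_REGEX = r'^/api/.*$'  # emit CORS headers for API routes only, keeping session auth separate
```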
<code>
[start of nextcloudappstore/settings.py]
1 """
2 Django settings for nextcloudappstore project.
3
4 Generated by 'django-admin startproject' using Django 1.9.6.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.9/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.9/ref/settings/
11 """
12
13 import os
14
15 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
16 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
17
18 # Quick-start development settings - unsuitable for production
19 # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/
20
21 # Application definition
22
23 INSTALLED_APPS = [
24 'parler',
25 'rest_framework',
26 'django.contrib.admin',
27 'django.contrib.auth',
28 'django.contrib.contenttypes',
29 'django.contrib.sessions',
30 'django.contrib.messages',
31 # The Django sites framework is required by allauth
32 'django.contrib.sites',
33 'django.contrib.staticfiles',
34 'captcha',
35 'nextcloudappstore.core.apps.CoreConfig',
36 'allauth',
37 'allauth.account',
38 'allauth.socialaccount',
39 'allauth.socialaccount.providers.github',
40 'allauth.socialaccount.providers.bitbucket',
41 ]
42
43 MIDDLEWARE_CLASSES = [
44 'django.middleware.security.SecurityMiddleware',
45 'django.contrib.sessions.middleware.SessionMiddleware',
46 'django.middleware.common.CommonMiddleware',
47 'django.middleware.csrf.CsrfViewMiddleware',
48 'django.contrib.auth.middleware.AuthenticationMiddleware',
49 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
50 'django.contrib.messages.middleware.MessageMiddleware',
51 'django.middleware.clickjacking.XFrameOptionsMiddleware',
52 ]
53
54 ROOT_URLCONF = 'nextcloudappstore.urls'
55
56 TEMPLATES = [
57 {
58 'BACKEND': 'django.template.backends.django.DjangoTemplates',
59 'DIRS': [],
60 'APP_DIRS': True,
61 'OPTIONS': {
62 'context_processors': [
63 'django.template.context_processors.debug',
64 'django.template.context_processors.request',
65 'django.contrib.auth.context_processors.auth',
66 'django.contrib.messages.context_processors.messages',
67 ],
68 },
69 },
70 ]
71
72 WSGI_APPLICATION = 'nextcloudappstore.wsgi.application'
73
74 # Database
75 # https://docs.djangoproject.com/en/1.9/ref/settings/#databases
76
77 DATABASES = {
78 'default': {
79 'ENGINE': 'django.db.backends.sqlite3',
80 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
81 'TEST': {
82 'NAME': os.path.join(BASE_DIR, 'test.sqlite3'),
83 }
84 }
85 }
86
87 AUTHENTICATION_BACKENDS = (
88 # Needed to login by username in Django admin, regardless of `allauth`
89 'django.contrib.auth.backends.ModelBackend',
90
91 # `allauth` specific authentication methods, such as login by e-mail
92 'allauth.account.auth_backends.AuthenticationBackend',
93 )
94
95 # Password validation
96 # https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators
97
98 AUTH_PASSWORD_VALIDATORS = [
99 {
100 'NAME': 'django.contrib.auth.password_validation'
101 '.UserAttributeSimilarityValidator',
102 },
103 {
104 'NAME': 'django.contrib.auth.password_validation'
105 '.MinimumLengthValidator',
106 },
107 {
108 'NAME': 'django.contrib.auth.password_validation'
109 '.CommonPasswordValidator',
110 },
111 {
112 'NAME': 'django.contrib.auth.password_validation'
113 '.NumericPasswordValidator',
114 },
115 ]
116
117 REST_FRAMEWORK = {
118 'DEFAULT_RENDERER_CLASSES': (
119 'djangorestframework_camel_case.render.CamelCaseJSONRenderer',
120 ),
121 'DEFAULT_PARSER_CLASSES': (
122 'djangorestframework_camel_case.parser.CamelCaseJSONParser',
123 ),
124 'DEFAULT_THROTTLE_RATES': {
125 'app_upload': '100/day'
126 }
127 }
128
129 SITE_ID = 1
130
131 # Allauth configuration
132 # http://django-allauth.readthedocs.io/en/latest/configuration.html
133 ACCOUNT_EMAIL_REQUIRED = True
134 ACCOUNT_EMAIL_VERIFICATION = "mandatory"
135 ACCOUNT_LOGOUT_ON_GET = True
136 ACCOUNT_LOGOUT_REDIRECT_URL = 'home'
137 ACCOUNT_SESSION_REMEMBER = True
138 ACCOUNT_SIGNUP_FORM_CLASS = \
139 'nextcloudappstore.core.user.forms.SignupFormRecaptcha'
140
141 # Internationalization
142 # https://docs.djangoproject.com/en/1.9/topics/i18n/
143 LANGUAGE_CODE = 'en-us'
144 TIME_ZONE = 'UTC'
145 USE_I18N = True
146 USE_L10N = True
147 USE_TZ = True
148
149 PARLER_LANGUAGES = {
150 1: (
151 {'code': 'en'},
152 {'code': 'de'},
153 {'code': 'fr'},
154 ),
155 'default': {
156 'fallbacks': ['en'],
157 'hide_untranslated': False,
158 }
159 }
160
161 # Static files (CSS, JavaScript, Images)
162 # https://docs.djangoproject.com/en/1.9/howto/static-files/
163 STATICFILES_DIRS = (
164 os.path.join(BASE_DIR, 'static'),
165 )
166 MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
167 RELEASE_DOWNLOAD_ROOT = os.path.join(MEDIA_ROOT, 'releasetmp')
168 STATIC_URL = '/static/'
169 MEDIA_URL = '/media/'
170
171 # Default security settings
172 SECURE_BROWSER_XSS_FILTER = True
173 SECURE_CONTENT_TYPE_NOSNIFF = True
174
175 # use modern no Captcha reCaptcha
176 NOCAPTCHA = True
177
178
179 LOGIN_REDIRECT_URL = 'home'
180
181 try:
182 from nextcloudappstore.local_settings import *
183 except:
184 pass
185
[end of nextcloudappstore/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nextcloudappstore/settings.py b/nextcloudappstore/settings.py
--- a/nextcloudappstore/settings.py
+++ b/nextcloudappstore/settings.py
@@ -21,28 +21,29 @@
# Application definition
INSTALLED_APPS = [
+ 'nextcloudappstore.core.apps.CoreConfig',
'parler',
+ 'captcha',
'rest_framework',
+ 'corsheaders',
+ 'allauth',
+ 'allauth.account',
+ 'allauth.socialaccount',
+ 'allauth.socialaccount.providers.github',
+ 'allauth.socialaccount.providers.bitbucket',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
- # The Django sites framework is required by allauth
'django.contrib.sites',
'django.contrib.staticfiles',
- 'captcha',
- 'nextcloudappstore.core.apps.CoreConfig',
- 'allauth',
- 'allauth.account',
- 'allauth.socialaccount',
- 'allauth.socialaccount.providers.github',
- 'allauth.socialaccount.providers.bitbucket',
]
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
+ 'corsheaders.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
@@ -171,6 +172,22 @@
# Default security settings
SECURE_BROWSER_XSS_FILTER = True
SECURE_CONTENT_TYPE_NOSNIFF = True
+CORS_ORIGIN_ALLOW_ALL = True
+CORS_URLS_REGEX = r'^/api/.*$'
+CORS_ALLOW_HEADERS = (
+ 'x-requested-with',
+ 'content-type',
+ 'accept',
+ 'origin',
+ 'authorization',
+ 'x-csrftoken',
+ 'if-none-match',
+)
+CORS_EXPOSE_HEADERS = (
+ 'etag',
+ 'x-content-type-options',
+ 'content-type',
+)
# use modern no Captcha reCaptcha
NOCAPTCHA = True
| {"golden_diff": "diff --git a/nextcloudappstore/settings.py b/nextcloudappstore/settings.py\n--- a/nextcloudappstore/settings.py\n+++ b/nextcloudappstore/settings.py\n@@ -21,28 +21,29 @@\n # Application definition\n \n INSTALLED_APPS = [\n+ 'nextcloudappstore.core.apps.CoreConfig',\n 'parler',\n+ 'captcha',\n 'rest_framework',\n+ 'corsheaders',\n+ 'allauth',\n+ 'allauth.account',\n+ 'allauth.socialaccount',\n+ 'allauth.socialaccount.providers.github',\n+ 'allauth.socialaccount.providers.bitbucket',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n- # The Django sites framework is required by allauth\n 'django.contrib.sites',\n 'django.contrib.staticfiles',\n- 'captcha',\n- 'nextcloudappstore.core.apps.CoreConfig',\n- 'allauth',\n- 'allauth.account',\n- 'allauth.socialaccount',\n- 'allauth.socialaccount.providers.github',\n- 'allauth.socialaccount.providers.bitbucket',\n ]\n \n MIDDLEWARE_CLASSES = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n+ 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n@@ -171,6 +172,22 @@\n # Default security settings\n SECURE_BROWSER_XSS_FILTER = True\n SECURE_CONTENT_TYPE_NOSNIFF = True\n+CORS_ORIGIN_ALLOW_ALL = True\n+CORS_URLS_REGEX = r'^/api/.*$'\n+CORS_ALLOW_HEADERS = (\n+ 'x-requested-with',\n+ 'content-type',\n+ 'accept',\n+ 'origin',\n+ 'authorization',\n+ 'x-csrftoken',\n+ 'if-none-match',\n+)\n+CORS_EXPOSE_HEADERS = (\n+ 'etag',\n+ 'x-content-type-options',\n+ 'content-type',\n+)\n \n # use modern no Captcha reCaptcha\n NOCAPTCHA = True\n", "issue": "API routes should be enabled for CORS\nOnly API routes should be whitelisted for CORS\n\nWe need to make sure that CORS and session auth are mutually exclusive\n\nThe solution is probably to integrate https://github.com/ottoyiu/django-cors-headers/\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for nextcloudappstore project.\n\nGenerated by 'django-admin startproject' using Django 1.9.6.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.9/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.9/ref/settings/\n\"\"\"\n\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/\n\n# Application definition\n\nINSTALLED_APPS = [\n 'parler',\n 'rest_framework',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n # The Django sites framework is required by allauth\n 'django.contrib.sites',\n 'django.contrib.staticfiles',\n 'captcha',\n 'nextcloudappstore.core.apps.CoreConfig',\n 'allauth',\n 'allauth.account',\n 'allauth.socialaccount',\n 'allauth.socialaccount.providers.github',\n 'allauth.socialaccount.providers.bitbucket',\n]\n\nMIDDLEWARE_CLASSES = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 
'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'nextcloudappstore.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'nextcloudappstore.wsgi.application'\n\n# Database\n# https://docs.djangoproject.com/en/1.9/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n 'TEST': {\n 'NAME': os.path.join(BASE_DIR, 'test.sqlite3'),\n }\n }\n}\n\nAUTHENTICATION_BACKENDS = (\n # Needed to login by username in Django admin, regardless of `allauth`\n 'django.contrib.auth.backends.ModelBackend',\n\n # `allauth` specific authentication methods, such as login by e-mail\n 'allauth.account.auth_backends.AuthenticationBackend',\n)\n\n# Password validation\n# https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation'\n '.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation'\n '.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation'\n '.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation'\n '.NumericPasswordValidator',\n },\n]\n\nREST_FRAMEWORK = {\n 'DEFAULT_RENDERER_CLASSES': (\n 'djangorestframework_camel_case.render.CamelCaseJSONRenderer',\n ),\n 'DEFAULT_PARSER_CLASSES': (\n 'djangorestframework_camel_case.parser.CamelCaseJSONParser',\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'app_upload': '100/day'\n }\n}\n\nSITE_ID = 1\n\n# Allauth configuration\n# http://django-allauth.readthedocs.io/en/latest/configuration.html\nACCOUNT_EMAIL_REQUIRED = True\nACCOUNT_EMAIL_VERIFICATION = \"mandatory\"\nACCOUNT_LOGOUT_ON_GET = True\nACCOUNT_LOGOUT_REDIRECT_URL = 'home'\nACCOUNT_SESSION_REMEMBER = True\nACCOUNT_SIGNUP_FORM_CLASS = \\\n 'nextcloudappstore.core.user.forms.SignupFormRecaptcha'\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.9/topics/i18n/\nLANGUAGE_CODE = 'en-us'\nTIME_ZONE = 'UTC'\nUSE_I18N = True\nUSE_L10N = True\nUSE_TZ = True\n\nPARLER_LANGUAGES = {\n 1: (\n {'code': 'en'},\n {'code': 'de'},\n {'code': 'fr'},\n ),\n 'default': {\n 'fallbacks': ['en'],\n 'hide_untranslated': False,\n }\n}\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.9/howto/static-files/\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'static'),\n)\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nRELEASE_DOWNLOAD_ROOT = os.path.join(MEDIA_ROOT, 'releasetmp')\nSTATIC_URL = '/static/'\nMEDIA_URL = '/media/'\n\n# Default security settings\nSECURE_BROWSER_XSS_FILTER = True\nSECURE_CONTENT_TYPE_NOSNIFF = True\n\n# use modern no Captcha reCaptcha\nNOCAPTCHA = True\n\n\nLOGIN_REDIRECT_URL = 'home'\n\ntry:\n from nextcloudappstore.local_settings import *\nexcept:\n pass\n", "path": "nextcloudappstore/settings.py"}]} | 2,218 | 478 |
gh_patches_debug_27482 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-2784 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
California is down due to negative nuclear production
[Kibana](https://kibana.electricitymap.org/app/kibana#/discover/10af54f0-0c4a-11e9-85c1-1d63df8c862c?_g=(refreshInterval:('$$hashKey':'object:232',display:'5%20minutes',pause:!f,section:2,value:300000),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(message,extra.key,level),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:level,negate:!f,params:(query:ERROR,type:phrase),type:phrase,value:ERROR),query:(match:(level:(query:ERROR,type:phrase)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:extra.key,negate:!f,params:(query:US-CAL-CISO,type:phrase),type:phrase,value:US-CAL-CISO),query:(match:(extra.key:(query:US-CAL-CISO,type:phrase))))),index:'96f67170-0c49-11e9-85c1-1d63df8c862c',interval:auto,query:(language:lucene,query:''),sort:!('@timestamp',desc))):
>invalid point: {'zoneKey': 'US-CAL-CISO', 'production': defaultdict(<class 'float'>, {'solar': 0.0, 'wind': 35.0, 'geothermal': 537.0, 'biomass': 385.0, 'hydro': 916.0, 'coal': 16.0, 'nuclear': -34.0, 'gas': 10688.0, 'unknown': 0.0}), 'storage': defaultdict(<class 'float'>, {'battery': 3.0}), 'source': 'caiso.com', 'datetime': datetime.datetime(2020, 10, 28, 4, 10, tzinfo=tzfile('/usr/share/zoneinfo/US/Pacific')), 'schemaVersion': 3}, reason:US-CAL-CISO: key nuclear has negative value -34.0
Given the small value and that it's zero on the [CAISO web site](http://www.caiso.com/TodaysOutlook/Pages/supply.aspx), this is probably self-consumption. Some googling shows that both units of Diablo Canyon, California's sole nuclear power plant, are [down for maintenance](https://keyt.com/news/san-luis-obispo-county/2020/10/15/pge-removes-diablo-canyons-unit-2-for-unexpected-maintenance/).
</issue>
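One way to tolerate readings like the -34 MW nuclear value above is to generalise the clamp the parser already applies to negative solar production. The helper below is only a sketch of that idea; its name, the -50 MW floor, and the warning text are assumptions rather than code from parsers/US_CA.py.

```python
import logging

logger = logging.getLogger(__name__)


def clamp_self_consumption(fuel: str, production_mw: float, floor_mw: float = -50.0) -> float:
    """Treat a small negative production reading (station self-consumption) as zero.

    Readings below ``floor_mw`` are returned unchanged so genuinely bad data
    still fails validation downstream.
    """
    if floor_mw <= production_mw < 0:
        logger.warning('%s production reported as %.1f MW; clamping to 0 (self-consumption)',
                       fuel, production_mw)
        return 0.0
    return production_mw


# The -34 MW nuclear reading from the error above becomes 0.0:
print(clamp_self_consumption('Nuclear', -34.0))
```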
<code>
[start of parsers/US_CA.py]
1 #!/usr/bin/env python3
2
3 import arrow
4 import pandas
5 import requests
6 from bs4 import BeautifulSoup
7 from collections import defaultdict
8
9 FUEL_SOURCE_CSV = 'http://www.caiso.com/outlook/SP/fuelsource.csv'
10
11 MX_EXCHANGE_URL = 'http://www.cenace.gob.mx/Paginas/Publicas/Info/DemandaRegional.aspx'
12
13 def fetch_production(zone_key='US-CA', session=None, target_datetime=None,
14 logger=None):
15 """Requests the last known production mix (in MW) of a given country
16
17 Arguments:
18 zone_key: used in case a parser is able to fetch multiple countries
19 session: request session passed in order to re-use an existing session
20
21 Return:
22 A dictionary in the form:
23 {
24 'zoneKey': 'FR',
25 'datetime': '2017-01-01T00:00:00Z',
26 'production': {
27 'biomass': 0.0,
28 'coal': 0.0,
29 'gas': 0.0,
30 'hydro': 0.0,
31 'nuclear': null,
32 'oil': 0.0,
33 'solar': 0.0,
34 'wind': 0.0,
35 'geothermal': 0.0,
36 'unknown': 0.0
37 },
38 'storage': {
39 'hydro': -10.0,
40 },
41 'source': 'mysource.com'
42 }
43 """
44 if target_datetime:
45 return fetch_historical_production(target_datetime, zone_key)
46
47 target_datetime = arrow.get(target_datetime)
48
49 # Get the production from the CSV
50 csv = pandas.read_csv(FUEL_SOURCE_CSV)
51 latest_index = len(csv) - 1
52 production_map = {
53 'Solar': 'solar',
54 'Wind': 'wind',
55 'Geothermal': 'geothermal',
56 'Biomass': 'biomass',
57 'Biogas': 'biomass',
58 'Small hydro': 'hydro',
59 'Coal': 'coal',
60 'Nuclear': 'nuclear',
61 'Natural gas': 'gas',
62 'Large hydro': 'hydro',
63 'Other': 'unknown'
64 }
65 storage_map = {
66 'Batteries': 'battery'
67 }
68 daily_data = []
69 for i in range(0, latest_index + 1):
70 h, m = map(int, csv['Time'][i].split(':'))
71 date = arrow.utcnow().to('US/Pacific').replace(hour=h, minute=m,
72 second=0, microsecond=0)
73 data = {
74 'zoneKey': zone_key,
75 'production': defaultdict(float),
76 'storage': defaultdict(float),
77 'source': 'caiso.com',
78 'datetime': date.datetime
79 }
80
81 # map items from names in CAISO CSV to names used in Electricity Map
82 for ca_gen_type, mapped_gen_type in production_map.items():
83 production = float(csv[ca_gen_type][i])
84
85 if mapped_gen_type == 'solar' and production < 0:
86 logger.warn('Solar production for US_CA was reported as less than 0 and was clamped')
87 production = 0.0
88
89 # if another mean of production created a value, sum them up
90 data['production'][mapped_gen_type] += production
91
92 for ca_storage_type, mapped_storage_type in storage_map.items():
93 storage = -float(csv[ca_storage_type][i])
94
95 # if another mean of storage created a value, sum them up
96 data['storage'][mapped_storage_type] += storage
97
98 daily_data.append(data)
99
100 return daily_data
101
102
103 def fetch_historical_production(target_datetime, zone_key):
104 return fetch_historical_data(target_datetime, zone_key)[0]
105
106
107 def fetch_historical_exchange(target_datetime):
108 return fetch_historical_data(target_datetime)[1]
109
110
111 def fetch_historical_data(target_datetime, zone_key='US-CA'):
112 # caiso.com provides daily data until the day before today
113 # get a clean date at the beginning of yesterday
114 target_date = arrow.get(target_datetime).to('US/Pacific').replace(
115 hour=0, minute=0, second=0, microsecond=0)
116
117 url = 'http://content.caiso.com/green/renewrpt/' + target_date.format(
118 'YYYYMMDD') + '_DailyRenewablesWatch.txt'
119
120 renewable_resources = pandas.read_table(
121 url, sep='\t\t', skiprows=2, header=None,
122 names=['Hour', 'GEOTHERMAL', 'BIOMASS', 'BIOGAS', 'SMALL HYDRO',
123 'WIND TOTAL', 'SOLAR PV', 'SOLAR THERMAL'],
124 skipfooter=27, skipinitialspace=True, engine='python')
125 other_resources = pandas.read_table(
126 url, sep='\t\t', skiprows=30, header=None,
127 names=['Hour', 'RENEWABLES', 'NUCLEAR', 'THERMAL', 'IMPORTS', 'HYDRO'],
128 skipinitialspace=True, engine='python')
129
130 daily_data, import_data = [], []
131
132 for i in range(0, 24):
133 daily_data.append({
134 'zoneKey': zone_key,
135 'storage': {},
136 'source': 'caiso.com',
137 'production': {
138 'biomass': float(renewable_resources['BIOMASS'][i]),
139 'gas': float(renewable_resources['BIOGAS'][i])
140 + float(other_resources['THERMAL'][i]),
141 'hydro': float(renewable_resources['SMALL HYDRO'][i])
142 + float(other_resources['HYDRO'][i]),
143 'nuclear': float(other_resources['NUCLEAR'][i]),
144 'solar': float(renewable_resources['SOLAR PV'][i])
145 + float(renewable_resources['SOLAR THERMAL'][i]),
146 'wind': float(renewable_resources['WIND TOTAL'][i]),
147 'geothermal': float(renewable_resources['GEOTHERMAL'][i]),
148 },
149 'datetime': target_date.shift(hours=i + 1).datetime,
150 })
151 import_data.append(
152 {
153 'sortedZoneKeys': 'US->US-CA',
154 'datetime': target_date.shift(hours=i + 1).datetime,
155 'netFlow': float(other_resources['IMPORTS'][i]),
156 'source': 'caiso.com'
157 }
158 )
159
160 return daily_data, import_data
161
162
163 def fetch_MX_exchange(s):
164 req = s.get(MX_EXCHANGE_URL)
165 soup = BeautifulSoup(req.text, 'html.parser')
166 exchange_div = soup.find("div", attrs={'id': 'IntercambioUSA-BCA'})
167 val = exchange_div.text
168
169 # cenace html uses unicode hyphens instead of minus signs
170 try:
171 val = val.replace(chr(8208), chr(45))
172 except ValueError:
173 pass
174
175 # negative value indicates flow from CA to MX
176
177 return float(val)
178
179
180 def fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None,
181 logger=None):
182 """Requests the last known power exchange (in MW) between two zones
183 Arguments:
184 zone_key1: the first country code
185 zone_key2: the second country code; order of the two codes in params
186 doesn't matter
187 session: request session passed in order to re-use an existing session
188 Return:
189 A dictionary in the form:
190 {
191 'sortedZoneKeys': 'DK->NO',
192 'datetime': '2017-01-01T00:00:00Z',
193 'netFlow': 0.0,
194 'source': 'mysource.com'
195 }
196 where net flow is from DK into NO
197 """
198 sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))
199
200 s = session or requests.Session()
201
202 if sorted_zone_keys == 'MX-BC->US-CA' or sorted_zone_keys == 'MX-BC->US-CAL-CISO':
203 netflow = fetch_MX_exchange(s)
204 exchange = {
205 'sortedZoneKeys': sorted_zone_keys,
206 'datetime': arrow.now('America/Tijuana').datetime,
207 'netFlow': netflow,
208 'source': 'cenace.gob.mx'
209 }
210 return exchange
211
212 if target_datetime:
213 return fetch_historical_exchange(target_datetime)
214
215 # CSV has imports to California as positive.
216 # Electricity Map expects A->B to indicate flow to B as positive.
217 # So values in CSV can be used as-is.
218
219 csv = pandas.read_csv(FUEL_SOURCE_CSV)
220 latest_index = len(csv) - 1
221 daily_data = []
222 for i in range(0, latest_index + 1):
223 h, m = map(int, csv['Time'][i].split(':'))
224 date = arrow.utcnow().to('US/Pacific').replace(hour=h, minute=m,
225 second=0, microsecond=0)
226 data = {
227 'sortedZoneKeys': sorted_zone_keys,
228 'datetime': date.datetime,
229 'netFlow': float(csv['Imports'][i]),
230 'source': 'caiso.com'
231 }
232
233 daily_data.append(data)
234
235 return daily_data
236
237
238 if __name__ == '__main__':
239 "Main method, not used by Electricity Map backend, but handy for testing"
240
241 from pprint import pprint
242
243 print('fetch_production() ->')
244 pprint(fetch_production())
245
246 print('fetch_exchange("US-CA", "US") ->')
247 #pprint(fetch_exchange("US-CA", "US"))
248
249 print('fetch_exchange("MX-BC", "US-CA")')
250 pprint(fetch_exchange("MX-BC", "US-CA"))
251
[end of parsers/US_CA.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsers/US_CA.py b/parsers/US_CA.py
--- a/parsers/US_CA.py
+++ b/parsers/US_CA.py
@@ -5,13 +5,14 @@
import requests
from bs4 import BeautifulSoup
from collections import defaultdict
+import logging
FUEL_SOURCE_CSV = 'http://www.caiso.com/outlook/SP/fuelsource.csv'
MX_EXCHANGE_URL = 'http://www.cenace.gob.mx/Paginas/Publicas/Info/DemandaRegional.aspx'
def fetch_production(zone_key='US-CA', session=None, target_datetime=None,
- logger=None):
+ logger: logging.Logger = logging.getLogger(__name__)):
"""Requests the last known production mix (in MW) of a given country
Arguments:
@@ -82,8 +83,8 @@
for ca_gen_type, mapped_gen_type in production_map.items():
production = float(csv[ca_gen_type][i])
- if mapped_gen_type == 'solar' and production < 0:
- logger.warn('Solar production for US_CA was reported as less than 0 and was clamped')
+ if production < 0 and (mapped_gen_type == 'solar' or mapped_gen_type == 'nuclear'):
+ logger.warn(ca_gen_type + ' production for US_CA was reported as less than 0 and was clamped')
production = 0.0
# if another mean of production created a value, sum them up
| {"golden_diff": "diff --git a/parsers/US_CA.py b/parsers/US_CA.py\n--- a/parsers/US_CA.py\n+++ b/parsers/US_CA.py\n@@ -5,13 +5,14 @@\n import requests\n from bs4 import BeautifulSoup\n from collections import defaultdict\n+import logging\n \n FUEL_SOURCE_CSV = 'http://www.caiso.com/outlook/SP/fuelsource.csv'\n \n MX_EXCHANGE_URL = 'http://www.cenace.gob.mx/Paginas/Publicas/Info/DemandaRegional.aspx'\n \n def fetch_production(zone_key='US-CA', session=None, target_datetime=None,\n- logger=None):\n+ logger: logging.Logger = logging.getLogger(__name__)):\n \"\"\"Requests the last known production mix (in MW) of a given country\n \n Arguments:\n@@ -82,8 +83,8 @@\n for ca_gen_type, mapped_gen_type in production_map.items():\n production = float(csv[ca_gen_type][i])\n \n- if mapped_gen_type == 'solar' and production < 0:\n- logger.warn('Solar production for US_CA was reported as less than 0 and was clamped')\n+ if production < 0 and (mapped_gen_type == 'solar' or mapped_gen_type == 'nuclear'):\n+ logger.warn(ca_gen_type + ' production for US_CA was reported as less than 0 and was clamped')\n production = 0.0\n \n # if another mean of production created a value, sum them up\n", "issue": "California is down due to negative nuclear production\n[Kibana](https://kibana.electricitymap.org/app/kibana#/discover/10af54f0-0c4a-11e9-85c1-1d63df8c862c?_g=(refreshInterval:('$$hashKey':'object:232',display:'5%20minutes',pause:!f,section:2,value:300000),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(message,extra.key,level),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:level,negate:!f,params:(query:ERROR,type:phrase),type:phrase,value:ERROR),query:(match:(level:(query:ERROR,type:phrase)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:extra.key,negate:!f,params:(query:US-CAL-CISO,type:phrase),type:phrase,value:US-CAL-CISO),query:(match:(extra.key:(query:US-CAL-CISO,type:phrase))))),index:'96f67170-0c49-11e9-85c1-1d63df8c862c',interval:auto,query:(language:lucene,query:''),sort:!('@timestamp',desc))):\r\n>invalid point: {'zoneKey': 'US-CAL-CISO', 'production': defaultdict(<class 'float'>, {'solar': 0.0, 'wind': 35.0, 'geothermal': 537.0, 'biomass': 385.0, 'hydro': 916.0, 'coal': 16.0, 'nuclear': -34.0, 'gas': 10688.0, 'unknown': 0.0}), 'storage': defaultdict(<class 'float'>, {'battery': 3.0}), 'source': 'caiso.com', 'datetime': datetime.datetime(2020, 10, 28, 4, 10, tzinfo=tzfile('/usr/share/zoneinfo/US/Pacific')), 'schemaVersion': 3}, reason:US-CAL-CISO: key nuclear has negative value -34.0\r\n\r\nGiven the small value and that it's zero on the [CAISO web site](http://www.caiso.com/TodaysOutlook/Pages/supply.aspx), this is probably self consumption. 
Some googling shows that both units of Diablo Canyon, California's sole nuclear power plant, are [down for maintainance](https://keyt.com/news/san-luis-obispo-county/2020/10/15/pge-removes-diablo-canyons-unit-2-for-unexpected-maintenance/).\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport arrow\nimport pandas\nimport requests\nfrom bs4 import BeautifulSoup\nfrom collections import defaultdict\n\nFUEL_SOURCE_CSV = 'http://www.caiso.com/outlook/SP/fuelsource.csv'\n\nMX_EXCHANGE_URL = 'http://www.cenace.gob.mx/Paginas/Publicas/Info/DemandaRegional.aspx'\n\ndef fetch_production(zone_key='US-CA', session=None, target_datetime=None,\n logger=None):\n \"\"\"Requests the last known production mix (in MW) of a given country\n\n Arguments:\n zone_key: used in case a parser is able to fetch multiple countries\n session: request session passed in order to re-use an existing session\n\n Return:\n A dictionary in the form:\n {\n 'zoneKey': 'FR',\n 'datetime': '2017-01-01T00:00:00Z',\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': null,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {\n 'hydro': -10.0,\n },\n 'source': 'mysource.com'\n }\n \"\"\"\n if target_datetime:\n return fetch_historical_production(target_datetime, zone_key)\n\n target_datetime = arrow.get(target_datetime)\n\n # Get the production from the CSV\n csv = pandas.read_csv(FUEL_SOURCE_CSV)\n latest_index = len(csv) - 1\n production_map = {\n 'Solar': 'solar',\n 'Wind': 'wind',\n 'Geothermal': 'geothermal',\n 'Biomass': 'biomass',\n 'Biogas': 'biomass',\n 'Small hydro': 'hydro',\n 'Coal': 'coal',\n 'Nuclear': 'nuclear',\n 'Natural gas': 'gas',\n 'Large hydro': 'hydro',\n 'Other': 'unknown'\n }\n storage_map = {\n 'Batteries': 'battery'\n }\n daily_data = []\n for i in range(0, latest_index + 1):\n h, m = map(int, csv['Time'][i].split(':'))\n date = arrow.utcnow().to('US/Pacific').replace(hour=h, minute=m,\n second=0, microsecond=0)\n data = {\n 'zoneKey': zone_key,\n 'production': defaultdict(float),\n 'storage': defaultdict(float),\n 'source': 'caiso.com',\n 'datetime': date.datetime\n }\n\n # map items from names in CAISO CSV to names used in Electricity Map\n for ca_gen_type, mapped_gen_type in production_map.items():\n production = float(csv[ca_gen_type][i])\n \n if mapped_gen_type == 'solar' and production < 0:\n logger.warn('Solar production for US_CA was reported as less than 0 and was clamped')\n production = 0.0\n \n # if another mean of production created a value, sum them up\n data['production'][mapped_gen_type] += production\n\n for ca_storage_type, mapped_storage_type in storage_map.items():\n storage = -float(csv[ca_storage_type][i])\n\n # if another mean of storage created a value, sum them up\n data['storage'][mapped_storage_type] += storage\n\n daily_data.append(data)\n\n return daily_data\n\n\ndef fetch_historical_production(target_datetime, zone_key):\n return fetch_historical_data(target_datetime, zone_key)[0]\n\n\ndef fetch_historical_exchange(target_datetime):\n return fetch_historical_data(target_datetime)[1]\n\n\ndef fetch_historical_data(target_datetime, zone_key='US-CA'):\n # caiso.com provides daily data until the day before today\n # get a clean date at the beginning of yesterday\n target_date = arrow.get(target_datetime).to('US/Pacific').replace(\n hour=0, minute=0, second=0, microsecond=0)\n\n url = 'http://content.caiso.com/green/renewrpt/' + target_date.format(\n 'YYYYMMDD') + 
'_DailyRenewablesWatch.txt'\n\n renewable_resources = pandas.read_table(\n url, sep='\\t\\t', skiprows=2, header=None,\n names=['Hour', 'GEOTHERMAL', 'BIOMASS', 'BIOGAS', 'SMALL HYDRO',\n 'WIND TOTAL', 'SOLAR PV', 'SOLAR THERMAL'],\n skipfooter=27, skipinitialspace=True, engine='python')\n other_resources = pandas.read_table(\n url, sep='\\t\\t', skiprows=30, header=None,\n names=['Hour', 'RENEWABLES', 'NUCLEAR', 'THERMAL', 'IMPORTS', 'HYDRO'],\n skipinitialspace=True, engine='python')\n\n daily_data, import_data = [], []\n\n for i in range(0, 24):\n daily_data.append({\n 'zoneKey': zone_key,\n 'storage': {},\n 'source': 'caiso.com',\n 'production': {\n 'biomass': float(renewable_resources['BIOMASS'][i]),\n 'gas': float(renewable_resources['BIOGAS'][i])\n + float(other_resources['THERMAL'][i]),\n 'hydro': float(renewable_resources['SMALL HYDRO'][i])\n + float(other_resources['HYDRO'][i]),\n 'nuclear': float(other_resources['NUCLEAR'][i]),\n 'solar': float(renewable_resources['SOLAR PV'][i])\n + float(renewable_resources['SOLAR THERMAL'][i]),\n 'wind': float(renewable_resources['WIND TOTAL'][i]),\n 'geothermal': float(renewable_resources['GEOTHERMAL'][i]),\n },\n 'datetime': target_date.shift(hours=i + 1).datetime,\n })\n import_data.append(\n {\n 'sortedZoneKeys': 'US->US-CA',\n 'datetime': target_date.shift(hours=i + 1).datetime,\n 'netFlow': float(other_resources['IMPORTS'][i]),\n 'source': 'caiso.com'\n }\n )\n\n return daily_data, import_data\n\n\ndef fetch_MX_exchange(s):\n req = s.get(MX_EXCHANGE_URL)\n soup = BeautifulSoup(req.text, 'html.parser')\n exchange_div = soup.find(\"div\", attrs={'id': 'IntercambioUSA-BCA'})\n val = exchange_div.text\n\n # cenace html uses unicode hyphens instead of minus signs\n try:\n val = val.replace(chr(8208), chr(45))\n except ValueError:\n pass\n\n # negative value indicates flow from CA to MX\n\n return float(val)\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None,\n logger=None):\n \"\"\"Requests the last known power exchange (in MW) between two zones\n Arguments:\n zone_key1: the first country code\n zone_key2: the second country code; order of the two codes in params\n doesn't matter\n session: request session passed in order to re-use an existing session\n Return:\n A dictionary in the form:\n {\n 'sortedZoneKeys': 'DK->NO',\n 'datetime': '2017-01-01T00:00:00Z',\n 'netFlow': 0.0,\n 'source': 'mysource.com'\n }\n where net flow is from DK into NO\n \"\"\"\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n s = session or requests.Session()\n\n if sorted_zone_keys == 'MX-BC->US-CA' or sorted_zone_keys == 'MX-BC->US-CAL-CISO':\n netflow = fetch_MX_exchange(s)\n exchange = {\n 'sortedZoneKeys': sorted_zone_keys,\n 'datetime': arrow.now('America/Tijuana').datetime,\n 'netFlow': netflow,\n 'source': 'cenace.gob.mx'\n }\n return exchange\n\n if target_datetime:\n return fetch_historical_exchange(target_datetime)\n\n # CSV has imports to California as positive.\n # Electricity Map expects A->B to indicate flow to B as positive.\n # So values in CSV can be used as-is.\n\n csv = pandas.read_csv(FUEL_SOURCE_CSV)\n latest_index = len(csv) - 1\n daily_data = []\n for i in range(0, latest_index + 1):\n h, m = map(int, csv['Time'][i].split(':'))\n date = arrow.utcnow().to('US/Pacific').replace(hour=h, minute=m,\n second=0, microsecond=0)\n data = {\n 'sortedZoneKeys': sorted_zone_keys,\n 'datetime': date.datetime,\n 'netFlow': float(csv['Imports'][i]),\n 'source': 'caiso.com'\n }\n\n daily_data.append(data)\n\n 
return daily_data\n\n\nif __name__ == '__main__':\n \"Main method, not used by Electricity Map backend, but handy for testing\"\n\n from pprint import pprint\n\n print('fetch_production() ->')\n pprint(fetch_production())\n\n print('fetch_exchange(\"US-CA\", \"US\") ->')\n #pprint(fetch_exchange(\"US-CA\", \"US\"))\n\n print('fetch_exchange(\"MX-BC\", \"US-CA\")')\n pprint(fetch_exchange(\"MX-BC\", \"US-CA\"))\n", "path": "parsers/US_CA.py"}]} | 4,048 | 326 |
gh_patches_debug_38106 | rasdani/github-patches | git_diff | TOMToolkit__tom_base-825 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ATLAS forced photometry data processor should correctly interpret limiting data points.
See the [ATLAS Forced Photometry Output Description](https://fallingstar-data.com/forcedphot/resultdesc/)
for how to interpret the data.
</issue>
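The linked description reports fluxes in microjanskys (uJy with uncertainty duJy) alongside the AB magnitude m, so a forced-photometry point whose flux is consistent with zero should be recorded as a non-detection limit rather than a magnitude. The function below sketches that logic; the function name, the 3-sigma cutoff, and the use of |m| as the limiting value are illustrative assumptions.

```python
def interpret_atlas_row(m: float, uJy: float, duJy: float, snr_cutoff: float = 3.0) -> dict:
    """Classify one ATLAS forced-photometry row as a detection or an upper limit.

    Uses the flux columns rather than the magnitude column alone: when the flux
    is consistent with zero, the magnitude is reported as a limit. Assumes duJy != 0.
    """
    signal_to_noise = abs(uJy) / abs(duJy)
    if signal_to_noise <= snr_cutoff:
        return {'limit': abs(m)}
    return {'magnitude': abs(m)}


print(interpret_atlas_row(m=-18.2, uJy=5.0, duJy=12.0))   # {'limit': 18.2}
print(interpret_atlas_row(m=17.4, uJy=180.0, duJy=11.0))  # {'magnitude': 17.4}
```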
<code>
[start of tom_dataproducts/processors/atlas_processor.py]
1 import mimetypes
2
3 from astropy import units
4 import astropy.io.ascii
5 from astropy.time import Time, TimezoneInfo
6
7 from tom_dataproducts.data_processor import DataProcessor
8 from tom_dataproducts.exceptions import InvalidFileFormatException
9
10
11 class AtlasProcessor(DataProcessor):
12
13 def data_type_override(self):
14 return 'photometry'
15
16 def process_data(self, data_product):
17 """
18         Routes an ATLAS processing call to a method specific to a file format.
19
20 :param data_product: Photometric DataProduct which will be processed into the specified format for database
21 ingestion
22 :type data_product: DataProduct
23
24 :returns: python list of 2-tuples, each with a timestamp and corresponding data
25 :rtype: list
26 """
27
28 mimetype = mimetypes.guess_type(data_product.data.path)[0]
29 if mimetype in self.PLAINTEXT_MIMETYPES:
30 photometry = self._process_photometry_from_plaintext(data_product)
31 return [(datum.pop('timestamp'), datum, datum.pop('source', 'ATLAS')) for datum in photometry]
32 else:
33 raise InvalidFileFormatException('Unsupported file type')
34
35 def _process_photometry_from_plaintext(self, data_product):
36 """
37 Processes the photometric data from a plaintext file into a list of dicts. File is read using astropy as
38 specified in the below documentation. The file is expected to be a multi-column delimited space delimited
39 text file, as produced by the ATLAS forced photometry service at https://fallingstar-data.com/forcedphot
40
41 The header looks like this:
42 ###MJD m dm uJy duJy F err chi/N RA Dec x y maj min phi apfit mag5sig Sky Obs
43
44 :param data_product: ATLAS Photometric DataProduct which will be processed into a list of dicts
45 :type data_product: DataProduct
46
47 :returns: python list containing the photometric data from the DataProduct
48 :rtype: list
49 """
50 photometry = []
51
52 data = astropy.io.ascii.read(data_product.data.path)
53 if len(data) < 1:
54 raise InvalidFileFormatException('Empty table or invalid file type')
55
56 try:
57 for datum in data:
58 time = Time(float(datum['##MJD']), format='mjd')
59 utc = TimezoneInfo(utc_offset=0*units.hour)
60 time.format = 'datetime'
61 value = {
62 'timestamp': time.to_datetime(timezone=utc),
63 'magnitude': float(datum['m']),
64 'magnitude_error': float(datum['dm']),
65 'filter': str(datum['F'])
66 }
67 photometry.append(value)
68 except Exception as e:
69 raise InvalidFileFormatException(e)
70
71 return photometry
72
[end of tom_dataproducts/processors/atlas_processor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tom_dataproducts/processors/atlas_processor.py b/tom_dataproducts/processors/atlas_processor.py
--- a/tom_dataproducts/processors/atlas_processor.py
+++ b/tom_dataproducts/processors/atlas_processor.py
@@ -21,7 +21,7 @@
ingestion
:type data_product: DataProduct
- :returns: python list of 2-tuples, each with a timestamp and corresponding data
+ :returns: python list of 3-tuples, each with a timestamp and corresponding data, and source
:rtype: list
"""
@@ -37,6 +37,7 @@
Processes the photometric data from a plaintext file into a list of dicts. File is read using astropy as
specified in the below documentation. The file is expected to be a multi-column delimited space delimited
text file, as produced by the ATLAS forced photometry service at https://fallingstar-data.com/forcedphot
+ See https://fallingstar-data.com/forcedphot/resultdesc/ for a description of the output format.
The header looks like this:
###MJD m dm uJy duJy F err chi/N RA Dec x y maj min phi apfit mag5sig Sky Obs
@@ -48,6 +49,7 @@
:rtype: list
"""
photometry = []
+ signal_to_noise_cutoff = 3.0 # cutoff to turn magnitudes into non-detection limits
data = astropy.io.ascii.read(data_product.data.path)
if len(data) < 1:
@@ -60,10 +62,19 @@
time.format = 'datetime'
value = {
'timestamp': time.to_datetime(timezone=utc),
- 'magnitude': float(datum['m']),
- 'magnitude_error': float(datum['dm']),
- 'filter': str(datum['F'])
+ 'filter': str(datum['F']),
+ 'error': float(datum['dm']),
+ 'telescope': 'ATLAS',
}
+ # If the signal is in the noise, set the non-detection limit to the
+ # absolute value of the reported magnitude.
+ # see https://fallingstar-data.com/forcedphot/resultdesc/
+ signal_to_noise = abs(float(datum['uJy']))/abs(float(datum['duJy']))
+ if signal_to_noise <= signal_to_noise_cutoff:
+ value['limit'] = abs(float(datum['m']))
+ else:
+ value['magnitude'] = abs(float(datum['m']))
+
photometry.append(value)
except Exception as e:
raise InvalidFileFormatException(e)
| {"golden_diff": "diff --git a/tom_dataproducts/processors/atlas_processor.py b/tom_dataproducts/processors/atlas_processor.py\n--- a/tom_dataproducts/processors/atlas_processor.py\n+++ b/tom_dataproducts/processors/atlas_processor.py\n@@ -21,7 +21,7 @@\n ingestion\n :type data_product: DataProduct\n \n- :returns: python list of 2-tuples, each with a timestamp and corresponding data\n+ :returns: python list of 3-tuples, each with a timestamp and corresponding data, and source\n :rtype: list\n \"\"\"\n \n@@ -37,6 +37,7 @@\n Processes the photometric data from a plaintext file into a list of dicts. File is read using astropy as\n specified in the below documentation. The file is expected to be a multi-column delimited space delimited\n text file, as produced by the ATLAS forced photometry service at https://fallingstar-data.com/forcedphot\n+ See https://fallingstar-data.com/forcedphot/resultdesc/ for a description of the output format.\n \n The header looks like this:\n ###MJD m dm uJy duJy F err chi/N RA Dec x y maj min phi apfit mag5sig Sky Obs\n@@ -48,6 +49,7 @@\n :rtype: list\n \"\"\"\n photometry = []\n+ signal_to_noise_cutoff = 3.0 # cutoff to turn magnitudes into non-detection limits\n \n data = astropy.io.ascii.read(data_product.data.path)\n if len(data) < 1:\n@@ -60,10 +62,19 @@\n time.format = 'datetime'\n value = {\n 'timestamp': time.to_datetime(timezone=utc),\n- 'magnitude': float(datum['m']),\n- 'magnitude_error': float(datum['dm']),\n- 'filter': str(datum['F'])\n+ 'filter': str(datum['F']),\n+ 'error': float(datum['dm']),\n+ 'telescope': 'ATLAS',\n }\n+ # If the signal is in the noise, set the non-detection limit to the\n+ # absolute value of the reported magnitude.\n+ # see https://fallingstar-data.com/forcedphot/resultdesc/\n+ signal_to_noise = abs(float(datum['uJy']))/abs(float(datum['duJy']))\n+ if signal_to_noise <= signal_to_noise_cutoff:\n+ value['limit'] = abs(float(datum['m']))\n+ else:\n+ value['magnitude'] = abs(float(datum['m']))\n+\n photometry.append(value)\n except Exception as e:\n raise InvalidFileFormatException(e)\n", "issue": "ATLAS forced photometry data processor should correctly interpret limiting data points.\nsee [ATLAS Forced Photemetry Output Description](https://fallingstar-data.com/forcedphot/resultdesc/)\nfor how to interpret data.\n", "before_files": [{"content": "import mimetypes\n\nfrom astropy import units\nimport astropy.io.ascii\nfrom astropy.time import Time, TimezoneInfo\n\nfrom tom_dataproducts.data_processor import DataProcessor\nfrom tom_dataproducts.exceptions import InvalidFileFormatException\n\n\nclass AtlasProcessor(DataProcessor):\n\n def data_type_override(self):\n return 'photometry'\n\n def process_data(self, data_product):\n \"\"\"\n Routes a atlas processing call to a method specific to a file-format.\n\n :param data_product: Photometric DataProduct which will be processed into the specified format for database\n ingestion\n :type data_product: DataProduct\n\n :returns: python list of 2-tuples, each with a timestamp and corresponding data\n :rtype: list\n \"\"\"\n\n mimetype = mimetypes.guess_type(data_product.data.path)[0]\n if mimetype in self.PLAINTEXT_MIMETYPES:\n photometry = self._process_photometry_from_plaintext(data_product)\n return [(datum.pop('timestamp'), datum, datum.pop('source', 'ATLAS')) for datum in photometry]\n else:\n raise InvalidFileFormatException('Unsupported file type')\n\n def _process_photometry_from_plaintext(self, data_product):\n \"\"\"\n Processes the photometric data from a plaintext file into 
a list of dicts. File is read using astropy as\n specified in the below documentation. The file is expected to be a multi-column delimited space delimited\n text file, as produced by the ATLAS forced photometry service at https://fallingstar-data.com/forcedphot\n\n The header looks like this:\n ###MJD m dm uJy duJy F err chi/N RA Dec x y maj min phi apfit mag5sig Sky Obs\n\n :param data_product: ATLAS Photometric DataProduct which will be processed into a list of dicts\n :type data_product: DataProduct\n\n :returns: python list containing the photometric data from the DataProduct\n :rtype: list\n \"\"\"\n photometry = []\n\n data = astropy.io.ascii.read(data_product.data.path)\n if len(data) < 1:\n raise InvalidFileFormatException('Empty table or invalid file type')\n\n try:\n for datum in data:\n time = Time(float(datum['##MJD']), format='mjd')\n utc = TimezoneInfo(utc_offset=0*units.hour)\n time.format = 'datetime'\n value = {\n 'timestamp': time.to_datetime(timezone=utc),\n 'magnitude': float(datum['m']),\n 'magnitude_error': float(datum['dm']),\n 'filter': str(datum['F'])\n }\n photometry.append(value)\n except Exception as e:\n raise InvalidFileFormatException(e)\n\n return photometry\n", "path": "tom_dataproducts/processors/atlas_processor.py"}]} | 1,336 | 608 |
gh_patches_debug_2210 | rasdani/github-patches | git_diff | ARM-DOE__ACT-673 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feedstock failing due to pandas datetime
### Description
CI is failing due to datetime units not being set for the CSV reader
### What I Did
See the PR here that was failing
https://github.com/conda-forge/act-atmos-feedstock/pull/63
</issue>
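The failing call is the unit-less cast `astype('datetime64')`, which recent pandas releases (2.x) reject; supplying an explicit unit restores the old behaviour. A minimal illustration, with invented sample timestamps:

```python
import pandas as pd

df = pd.DataFrame({'date_time': ['2019-11-01 00:00:00', '2019-11-01 00:01:00']})

# df['date_time'].astype('datetime64')  # TypeError on pandas 2.x: unit-less dtype not supported
df['date_time'] = df['date_time'].astype('datetime64[ns]')  # explicit nanosecond unit works
print(df.dtypes)
```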
<code>
[start of act/io/csvfiles.py]
1 """
2 This module contains I/O operations for loading csv files.
3
4 """
5
6 import pathlib
7
8 import pandas as pd
9
10 from .armfiles import check_arm_standards
11
12
13 def read_csv(filename, sep=',', engine='python', column_names=None, skipfooter=0, ignore_index=True, **kwargs):
14
15 """
16 Returns an `xarray.Dataset` with stored data and metadata from user-defined
17 query of CSV files.
18
19 Parameters
20 ----------
21 filenames : str or list
22 Name of file(s) to read.
23 sep : str
24 The separator between columns in the csv file.
25 column_names : list or None
26 The list of column names in the csv file.
27 verbose : bool
28 If true, will print if a file is not found.
29 ignore_index : bool
30 Keyword for pandas concat function. If True, do not use the index
31 values along the concatenation axis. The resulting axis will be labeled
32 0, …, n - 1. This is useful if you are concatenating datasets where the
33 concatenation axis does not have meaningful indexing information. Note
34 the index values on the other axes are still respected in the join.
35
36 Additional keyword arguments will be passed into pandas.read_csv.
37
38 Returns
39 -------
40 ds : xarray.Dataset
41 ACT Xarray dataset. Will be None if the file is not found.
42
43 Examples
44 --------
45 This example will load the example sounding data used for unit testing:
46
47 .. code-block:: python
48
49 import act
50
51         ds = act.io.csvfiles.read_csv(act.tests.sample_files.EXAMPLE_CSV_WILDCARD)
52
53 """
54
55 # Convert to string if filename is a pathlib or not a list
56 if isinstance(filename, (pathlib.PurePath, str)):
57 filename = [str(filename)]
58
59 if isinstance(filename, list) and isinstance(filename[0], pathlib.PurePath):
60 filename = [str(ii) for ii in filename]
61
62 # Read data using pandas read_csv one file at a time and append to
63 # list. Then concatinate the list into one pandas dataframe.
64 li = []
65 for fl in filename:
66 df = pd.read_csv(
67 fl, sep=sep, names=column_names, skipfooter=skipfooter, engine=engine, **kwargs
68 )
69 li.append(df)
70
71 if len(li) == 1:
72 df = li[0]
73 else:
74 df = pd.concat(li, axis=0, ignore_index=ignore_index)
75
76 # Set Coordinates if there's a variable date_time
77 if 'date_time' in df:
78 df.date_time = df.date_time.astype('datetime64')
79 df.time = df.date_time
80 df = df.set_index('time')
81
82 # Convert to xarray DataSet
83 ds = df.to_xarray()
84
85 # Set additional variables
86 # Since we cannot assume a standard naming convention setting
87 # file_date and file_time to the first time in the file
88 x_coord = ds.coords.to_index().values[0]
89 if isinstance(x_coord, str):
90 x_coord_dt = pd.to_datetime(x_coord)
91 ds.attrs['_file_dates'] = x_coord_dt.strftime('%Y%m%d')
92 ds.attrs['_file_times'] = x_coord_dt.strftime('%H%M%S')
93
94 # Check for standard ARM datastream name, if none, assume the file is ARM
95 # standard format.
96 is_arm_file_flag = check_arm_standards(ds)
97 if is_arm_file_flag == 0:
98
99 ds.attrs['_datastream'] = '.'.join(filename[0].split('/')[-1].split('.')[0:2])
100
101 # Add additional attributes, site, standards flag, etc...
102 ds.attrs['_site'] = str(ds.attrs['_datastream'])[0:3]
103 ds.attrs['_arm_standards_flag'] = is_arm_file_flag
104
105 return ds
106
[end of act/io/csvfiles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/act/io/csvfiles.py b/act/io/csvfiles.py
--- a/act/io/csvfiles.py
+++ b/act/io/csvfiles.py
@@ -75,7 +75,7 @@
# Set Coordinates if there's a variable date_time
if 'date_time' in df:
- df.date_time = df.date_time.astype('datetime64')
+ df.date_time = df.date_time.astype('datetime64[ns]')
df.time = df.date_time
df = df.set_index('time')
| {"golden_diff": "diff --git a/act/io/csvfiles.py b/act/io/csvfiles.py\n--- a/act/io/csvfiles.py\n+++ b/act/io/csvfiles.py\n@@ -75,7 +75,7 @@\n \n # Set Coordinates if there's a variable date_time\n if 'date_time' in df:\n- df.date_time = df.date_time.astype('datetime64')\n+ df.date_time = df.date_time.astype('datetime64[ns]')\n df.time = df.date_time\n df = df.set_index('time')\n", "issue": "Feedstock failing due to pandas datetime\n### Description\r\nCI is failing due to datetime units not being set for csv reader\r\n\r\n### What I Did\r\n\r\nSee the PR here that was failing\r\nhttps://github.com/conda-forge/act-atmos-feedstock/pull/63\r\n\n", "before_files": [{"content": "\"\"\"\nThis module contains I/O operations for loading csv files.\n\n\"\"\"\n\nimport pathlib\n\nimport pandas as pd\n\nfrom .armfiles import check_arm_standards\n\n\ndef read_csv(filename, sep=',', engine='python', column_names=None, skipfooter=0, ignore_index=True, **kwargs):\n\n \"\"\"\n Returns an `xarray.Dataset` with stored data and metadata from user-defined\n query of CSV files.\n\n Parameters\n ----------\n filenames : str or list\n Name of file(s) to read.\n sep : str\n The separator between columns in the csv file.\n column_names : list or None\n The list of column names in the csv file.\n verbose : bool\n If true, will print if a file is not found.\n ignore_index : bool\n Keyword for pandas concat function. If True, do not use the index\n values along the concatenation axis. The resulting axis will be labeled\n 0, \u2026, n - 1. This is useful if you are concatenating datasets where the\n concatenation axis does not have meaningful indexing information. Note\n the index values on the other axes are still respected in the join.\n\n Additional keyword arguments will be passed into pandas.read_csv.\n\n Returns\n -------\n ds : xarray.Dataset\n ACT Xarray dataset. Will be None if the file is not found.\n\n Examples\n --------\n This example will load the example sounding data used for unit testing:\n\n .. code-block:: python\n\n import act\n\n ds = act.io.csvfiles.read(act.tests.sample_files.EXAMPLE_CSV_WILDCARD)\n\n \"\"\"\n\n # Convert to string if filename is a pathlib or not a list\n if isinstance(filename, (pathlib.PurePath, str)):\n filename = [str(filename)]\n\n if isinstance(filename, list) and isinstance(filename[0], pathlib.PurePath):\n filename = [str(ii) for ii in filename]\n\n # Read data using pandas read_csv one file at a time and append to\n # list. 
Then concatinate the list into one pandas dataframe.\n li = []\n for fl in filename:\n df = pd.read_csv(\n fl, sep=sep, names=column_names, skipfooter=skipfooter, engine=engine, **kwargs\n )\n li.append(df)\n\n if len(li) == 1:\n df = li[0]\n else:\n df = pd.concat(li, axis=0, ignore_index=ignore_index)\n\n # Set Coordinates if there's a variable date_time\n if 'date_time' in df:\n df.date_time = df.date_time.astype('datetime64')\n df.time = df.date_time\n df = df.set_index('time')\n\n # Convert to xarray DataSet\n ds = df.to_xarray()\n\n # Set additional variables\n # Since we cannot assume a standard naming convention setting\n # file_date and file_time to the first time in the file\n x_coord = ds.coords.to_index().values[0]\n if isinstance(x_coord, str):\n x_coord_dt = pd.to_datetime(x_coord)\n ds.attrs['_file_dates'] = x_coord_dt.strftime('%Y%m%d')\n ds.attrs['_file_times'] = x_coord_dt.strftime('%H%M%S')\n\n # Check for standard ARM datastream name, if none, assume the file is ARM\n # standard format.\n is_arm_file_flag = check_arm_standards(ds)\n if is_arm_file_flag == 0:\n\n ds.attrs['_datastream'] = '.'.join(filename[0].split('/')[-1].split('.')[0:2])\n\n # Add additional attributes, site, standards flag, etc...\n ds.attrs['_site'] = str(ds.attrs['_datastream'])[0:3]\n ds.attrs['_arm_standards_flag'] = is_arm_file_flag\n\n return ds\n", "path": "act/io/csvfiles.py"}]} | 1,638 | 119 |
gh_patches_debug_14245 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-1067 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SciPy in requirements in README but not in install_requires
Hey!
I'm wondering why SciPy is listed as a requirement in the README but not in setup.py's install_requires argument.
Cheers,
Mike
</issue>
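As a side note on the example below: `numpy.random.normal` is a drop-in replacement for the `scipy.random` call, so the soft SciPy dependency can simply be removed rather than added to `install_requires`. A small sketch (array shape chosen to match the example):

```python
import numpy as np

# numpy provides the same call the example imported scipy for:
data = np.random.normal(size=(3, 1000)) * np.array([[0.1], [1e-5], [1]])
print(data.shape)  # (3, 1000)
```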
<code>
[start of examples/MultiPlotWidget.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3 ## Add path to library (just for examples; you do not need this)
4 import initExample
5
6
7 from scipy import random
8 from numpy import linspace
9 from pyqtgraph.Qt import QtGui, QtCore
10 import pyqtgraph as pg
11 from pyqtgraph import MultiPlotWidget
12 try:
13 from pyqtgraph.metaarray import *
14 except:
15 print("MultiPlot is only used with MetaArray for now (and you do not have the metaarray package)")
16 exit()
17
18 app = QtGui.QApplication([])
19 mw = QtGui.QMainWindow()
20 mw.resize(800,800)
21 pw = MultiPlotWidget()
22 mw.setCentralWidget(pw)
23 mw.show()
24
25 data = random.normal(size=(3, 1000)) * np.array([[0.1], [1e-5], [1]])
26 ma = MetaArray(data, info=[
27 {'name': 'Signal', 'cols': [
28 {'name': 'Col1', 'units': 'V'},
29 {'name': 'Col2', 'units': 'A'},
30 {'name': 'Col3'},
31 ]},
32 {'name': 'Time', 'values': linspace(0., 1., 1000), 'units': 's'}
33 ])
34 pw.plot(ma)
35
36 ## Start Qt event loop unless running in interactive mode.
37 if __name__ == '__main__':
38 import sys
39 if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
40 QtGui.QApplication.instance().exec_()
41
42
[end of examples/MultiPlotWidget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/MultiPlotWidget.py b/examples/MultiPlotWidget.py
--- a/examples/MultiPlotWidget.py
+++ b/examples/MultiPlotWidget.py
@@ -3,8 +3,7 @@
## Add path to library (just for examples; you do not need this)
import initExample
-
-from scipy import random
+import numpy as np
from numpy import linspace
from pyqtgraph.Qt import QtGui, QtCore
import pyqtgraph as pg
@@ -22,7 +21,7 @@
mw.setCentralWidget(pw)
mw.show()
-data = random.normal(size=(3, 1000)) * np.array([[0.1], [1e-5], [1]])
+data = np.random.normal(size=(3, 1000)) * np.array([[0.1], [1e-5], [1]])
ma = MetaArray(data, info=[
{'name': 'Signal', 'cols': [
{'name': 'Col1', 'units': 'V'},
| {"golden_diff": "diff --git a/examples/MultiPlotWidget.py b/examples/MultiPlotWidget.py\n--- a/examples/MultiPlotWidget.py\n+++ b/examples/MultiPlotWidget.py\n@@ -3,8 +3,7 @@\n ## Add path to library (just for examples; you do not need this)\n import initExample\n \n-\n-from scipy import random\n+import numpy as np\n from numpy import linspace\n from pyqtgraph.Qt import QtGui, QtCore\n import pyqtgraph as pg\n@@ -22,7 +21,7 @@\n mw.setCentralWidget(pw)\n mw.show()\n \n-data = random.normal(size=(3, 1000)) * np.array([[0.1], [1e-5], [1]])\n+data = np.random.normal(size=(3, 1000)) * np.array([[0.1], [1e-5], [1]])\n ma = MetaArray(data, info=[\n {'name': 'Signal', 'cols': [\n {'name': 'Col1', 'units': 'V'},\n", "issue": "SciPy in requirements in README but not in install_requires\nHey!\r\nI'm wondering why SciPy is listed as a requirement in README but not in setup.py install_require argument.\r\n\r\nCheers,\r\nMike\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n## Add path to library (just for examples; you do not need this)\nimport initExample\n\n\nfrom scipy import random\nfrom numpy import linspace\nfrom pyqtgraph.Qt import QtGui, QtCore\nimport pyqtgraph as pg\nfrom pyqtgraph import MultiPlotWidget\ntry:\n from pyqtgraph.metaarray import *\nexcept:\n print(\"MultiPlot is only used with MetaArray for now (and you do not have the metaarray package)\")\n exit()\n\napp = QtGui.QApplication([])\nmw = QtGui.QMainWindow()\nmw.resize(800,800)\npw = MultiPlotWidget()\nmw.setCentralWidget(pw)\nmw.show()\n\ndata = random.normal(size=(3, 1000)) * np.array([[0.1], [1e-5], [1]])\nma = MetaArray(data, info=[\n {'name': 'Signal', 'cols': [\n {'name': 'Col1', 'units': 'V'}, \n {'name': 'Col2', 'units': 'A'}, \n {'name': 'Col3'},\n ]}, \n {'name': 'Time', 'values': linspace(0., 1., 1000), 'units': 's'}\n ])\npw.plot(ma)\n\n## Start Qt event loop unless running in interactive mode.\nif __name__ == '__main__':\n import sys\n if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):\n QtGui.QApplication.instance().exec_()\n\n", "path": "examples/MultiPlotWidget.py"}]} | 987 | 220 |
gh_patches_debug_29127 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-2384 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The API does not always return the same info for a member
> Another thing: when updating a member you can specify two fields that are not provided by the regular GET: `hover_or_click` and `show_sign`. Is that normal?

Source: [Kje](http://zestedesavoir.com/forums/sujet/1365/zep-17-elaboration-de-lapi-des-membres/?page=18#p45095)
</issue>
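A framework-free sketch of the inconsistency being reported: the detail (GET) serializer and the update serializer each expose only their own field whitelist, so fields accepted on update never show up in the GET payload. The field lists and sample profile below are illustrative only, not the real model:

```python
# Illustrative field whitelists only -- not the real RSR/zds model.
DETAIL_FIELDS = ("pk", "username", "email", "is_active", "date_joined")
UPDATE_FIELDS = ("pk", "username", "email", "show_sign", "hover_or_click")

profile = {
    "pk": 1, "username": "kje", "email": "kje@example.org",
    "is_active": True, "date_joined": "2015-01-01",
    "show_sign": True, "hover_or_click": False,
}

def serialize(obj, fields):
    # Mimics a serializer: only whitelisted keys make it into the payload.
    return {name: obj[name] for name in fields if name in obj}

print(serialize(profile, DETAIL_FIELDS))  # GET payload: no show_sign / hover_or_click
print(serialize(profile, UPDATE_FIELDS))  # update payload: accepts and echoes them
```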
<code>
[start of zds/member/api/serializers.py]
1 # -*- coding: utf-8 -*-
2
3 from rest_framework import serializers
4
5 from zds.member.commons import ProfileUsernameValidator, ProfileEmailValidator, \
6 ProfileCreate
7 from zds.member.models import Profile
8
9
10 class ProfileListSerializer(serializers.ModelSerializer):
11 """
12 Serializers of a user object.
13 """
14
15 username = serializers.CharField(source='user.username')
16 is_active = serializers.BooleanField(source='user.is_active')
17 date_joined = serializers.DateTimeField(source='user.date_joined')
18
19 class Meta:
20 model = Profile
21 fields = ('pk', 'username', 'is_active', 'date_joined')
22
23
24 class ProfileCreateSerializer(serializers.ModelSerializer, ProfileCreate, ProfileUsernameValidator,
25 ProfileEmailValidator):
26 """
27 Serializers of a user object to create one.
28 """
29
30 username = serializers.CharField(source='user.username')
31 email = serializers.EmailField(source='user.email')
32 password = serializers.CharField(source='user.password')
33
34 class Meta:
35 model = Profile
36 fields = ('pk', 'username', 'email', 'password')
37 write_only_fields = ('password')
38
39 def create(self, validated_data):
40 profile = self.create_profile(validated_data.get('user'))
41 self.save_profile(profile)
42 return profile
43
44 def throw_error(self, key=None, message=None):
45 raise serializers.ValidationError(message)
46
47
48 class ProfileDetailSerializer(serializers.ModelSerializer):
49 """
50 Serializers of a profile object.
51 """
52
53 username = serializers.CharField(source='user.username')
54 email = serializers.EmailField(source='user.email')
55 is_active = serializers.BooleanField(source='user.is_active')
56 date_joined = serializers.DateTimeField(source='user.date_joined')
57
58 class Meta:
59 model = Profile
60 fields = ('pk', 'username', 'show_email', 'email', 'is_active',
61 'site', 'avatar_url', 'biography', 'sign', 'email_for_answer',
62 'last_visit', 'date_joined')
63
64 def __init__(self, *args, **kwargs):
65 """
66 Create the serializer with or without email field, depending on the show_email argument.
67 """
68 show_email = kwargs.pop('show_email', False)
69 is_authenticated = kwargs.pop('is_authenticated', False)
70
71 super(ProfileDetailSerializer, self).__init__(*args, **kwargs)
72
73 if not show_email or not is_authenticated:
74 # Drop email field.
75 self.fields.pop('email')
76
77
78 class ProfileValidatorSerializer(serializers.ModelSerializer, ProfileUsernameValidator, ProfileEmailValidator):
79 """
80 Serializers of a profile object used to update a member.
81 """
82
83 username = serializers.CharField(source='user.username', required=False, allow_blank=True)
84 email = serializers.EmailField(source='user.email', required=False, allow_blank=True)
85
86 class Meta:
87 model = Profile
88 fields = ('pk', 'username', 'email', 'site', 'avatar_url', 'biography',
89 'sign', 'show_email', 'show_sign', 'hover_or_click',
90 'email_for_answer')
91
92 def update(self, instance, validated_data):
93 """
94 Update and return an existing `Profile` instance, given the validated data.
95 """
96 instance.user.username = validated_data.get('user').get('username',
97 instance.user.username) or instance.user.username
98 instance.user.email = validated_data.get('user').get('email', instance.user.email) or instance.user.email
99 instance.site = validated_data.get('site', instance.site) or instance.site
100 instance.avatar_url = validated_data.get('avatar_url', instance.avatar_url) or instance.avatar_url
101 instance.biography = validated_data.get('biography', instance.biography) or instance.biography
102 instance.sign = validated_data.get('sign', instance.sign) or instance.sign
103 instance.show_email = validated_data.get('show_email', instance.show_email) or instance.show_email
104 instance.show_sign = validated_data.get('show_sign', instance.show_sign) or instance.show_sign
105 instance.hover_or_click = validated_data.get('hover_or_click',
106 instance.hover_or_click) or instance.hover_or_click
107 instance.email_for_answer = validated_data.get('email_for_answer',
108 instance.email_for_answer) or instance.email_for_answer
109 instance.user.save()
110 instance.save()
111 return instance
112
113 def throw_error(self, key=None, message=None):
114 raise serializers.ValidationError(message)
115
116
117 class ProfileSanctionSerializer(serializers.ModelSerializer):
118 """
119 Serializers of a profile object to set the user in reading only access.
120 """
121
122 username = serializers.ReadOnlyField(source='user.username')
123 email = serializers.ReadOnlyField(source='user.email')
124
125 class Meta:
126 model = Profile
127 fields = ('pk', 'username', 'email', 'can_write', 'end_ban_write', 'can_read', 'end_ban_read')
128 read_only_fields = ('can_write', 'end_ban_write', 'can_read', 'end_ban_read')
129
[end of zds/member/api/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zds/member/api/serializers.py b/zds/member/api/serializers.py
--- a/zds/member/api/serializers.py
+++ b/zds/member/api/serializers.py
@@ -57,9 +57,9 @@
class Meta:
model = Profile
- fields = ('pk', 'username', 'show_email', 'email', 'is_active',
- 'site', 'avatar_url', 'biography', 'sign', 'email_for_answer',
- 'last_visit', 'date_joined')
+ fields = ('pk', 'username', 'email', 'is_active', 'date_joined',
+ 'site', 'avatar_url', 'biography', 'sign', 'show_email',
+ 'show_sign', 'hover_or_click', 'email_for_answer', 'last_visit')
def __init__(self, *args, **kwargs):
"""
@@ -82,12 +82,15 @@
username = serializers.CharField(source='user.username', required=False, allow_blank=True)
email = serializers.EmailField(source='user.email', required=False, allow_blank=True)
+ is_active = serializers.BooleanField(source='user.is_active', required=False)
+ date_joined = serializers.DateTimeField(source='user.date_joined', required=False)
class Meta:
model = Profile
- fields = ('pk', 'username', 'email', 'site', 'avatar_url', 'biography',
- 'sign', 'show_email', 'show_sign', 'hover_or_click',
- 'email_for_answer')
+ fields = ('pk', 'username', 'email', 'is_active', 'date_joined',
+ 'site', 'avatar_url', 'biography', 'sign', 'show_email',
+ 'show_sign', 'hover_or_click', 'email_for_answer', 'last_visit')
+ read_only_fields = ('is_active', 'date_joined', 'last_visit',)
def update(self, instance, validated_data):
"""
| {"golden_diff": "diff --git a/zds/member/api/serializers.py b/zds/member/api/serializers.py\n--- a/zds/member/api/serializers.py\n+++ b/zds/member/api/serializers.py\n@@ -57,9 +57,9 @@\n \n class Meta:\n model = Profile\n- fields = ('pk', 'username', 'show_email', 'email', 'is_active',\n- 'site', 'avatar_url', 'biography', 'sign', 'email_for_answer',\n- 'last_visit', 'date_joined')\n+ fields = ('pk', 'username', 'email', 'is_active', 'date_joined',\n+ 'site', 'avatar_url', 'biography', 'sign', 'show_email',\n+ 'show_sign', 'hover_or_click', 'email_for_answer', 'last_visit')\n \n def __init__(self, *args, **kwargs):\n \"\"\"\n@@ -82,12 +82,15 @@\n \n username = serializers.CharField(source='user.username', required=False, allow_blank=True)\n email = serializers.EmailField(source='user.email', required=False, allow_blank=True)\n+ is_active = serializers.BooleanField(source='user.is_active', required=False)\n+ date_joined = serializers.DateTimeField(source='user.date_joined', required=False)\n \n class Meta:\n model = Profile\n- fields = ('pk', 'username', 'email', 'site', 'avatar_url', 'biography',\n- 'sign', 'show_email', 'show_sign', 'hover_or_click',\n- 'email_for_answer')\n+ fields = ('pk', 'username', 'email', 'is_active', 'date_joined',\n+ 'site', 'avatar_url', 'biography', 'sign', 'show_email',\n+ 'show_sign', 'hover_or_click', 'email_for_answer', 'last_visit')\n+ read_only_fields = ('is_active', 'date_joined', 'last_visit',)\n \n def update(self, instance, validated_data):\n \"\"\"\n", "issue": "L'API ne retourne pas toujours les m\u00eames infos pour un membre\n> Un autre truc, quand on met un jour un membre on peut sp\u00e9cifier deux champs qui ne sont pas fournit par le get classique : `hover_or_click` et `show_sign`. Est ce normal ?\n\nSource:[Kje](http://zestedesavoir.com/forums/sujet/1365/zep-17-elaboration-de-lapi-des-membres/?page=18#p45095)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom rest_framework import serializers\n\nfrom zds.member.commons import ProfileUsernameValidator, ProfileEmailValidator, \\\n ProfileCreate\nfrom zds.member.models import Profile\n\n\nclass ProfileListSerializer(serializers.ModelSerializer):\n \"\"\"\n Serializers of a user object.\n \"\"\"\n\n username = serializers.CharField(source='user.username')\n is_active = serializers.BooleanField(source='user.is_active')\n date_joined = serializers.DateTimeField(source='user.date_joined')\n\n class Meta:\n model = Profile\n fields = ('pk', 'username', 'is_active', 'date_joined')\n\n\nclass ProfileCreateSerializer(serializers.ModelSerializer, ProfileCreate, ProfileUsernameValidator,\n ProfileEmailValidator):\n \"\"\"\n Serializers of a user object to create one.\n \"\"\"\n\n username = serializers.CharField(source='user.username')\n email = serializers.EmailField(source='user.email')\n password = serializers.CharField(source='user.password')\n\n class Meta:\n model = Profile\n fields = ('pk', 'username', 'email', 'password')\n write_only_fields = ('password')\n\n def create(self, validated_data):\n profile = self.create_profile(validated_data.get('user'))\n self.save_profile(profile)\n return profile\n\n def throw_error(self, key=None, message=None):\n raise serializers.ValidationError(message)\n\n\nclass ProfileDetailSerializer(serializers.ModelSerializer):\n \"\"\"\n Serializers of a profile object.\n \"\"\"\n\n username = serializers.CharField(source='user.username')\n email = serializers.EmailField(source='user.email')\n is_active = 
serializers.BooleanField(source='user.is_active')\n date_joined = serializers.DateTimeField(source='user.date_joined')\n\n class Meta:\n model = Profile\n fields = ('pk', 'username', 'show_email', 'email', 'is_active',\n 'site', 'avatar_url', 'biography', 'sign', 'email_for_answer',\n 'last_visit', 'date_joined')\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Create the serializer with or without email field, depending on the show_email argument.\n \"\"\"\n show_email = kwargs.pop('show_email', False)\n is_authenticated = kwargs.pop('is_authenticated', False)\n\n super(ProfileDetailSerializer, self).__init__(*args, **kwargs)\n\n if not show_email or not is_authenticated:\n # Drop email field.\n self.fields.pop('email')\n\n\nclass ProfileValidatorSerializer(serializers.ModelSerializer, ProfileUsernameValidator, ProfileEmailValidator):\n \"\"\"\n Serializers of a profile object used to update a member.\n \"\"\"\n\n username = serializers.CharField(source='user.username', required=False, allow_blank=True)\n email = serializers.EmailField(source='user.email', required=False, allow_blank=True)\n\n class Meta:\n model = Profile\n fields = ('pk', 'username', 'email', 'site', 'avatar_url', 'biography',\n 'sign', 'show_email', 'show_sign', 'hover_or_click',\n 'email_for_answer')\n\n def update(self, instance, validated_data):\n \"\"\"\n Update and return an existing `Profile` instance, given the validated data.\n \"\"\"\n instance.user.username = validated_data.get('user').get('username',\n instance.user.username) or instance.user.username\n instance.user.email = validated_data.get('user').get('email', instance.user.email) or instance.user.email\n instance.site = validated_data.get('site', instance.site) or instance.site\n instance.avatar_url = validated_data.get('avatar_url', instance.avatar_url) or instance.avatar_url\n instance.biography = validated_data.get('biography', instance.biography) or instance.biography\n instance.sign = validated_data.get('sign', instance.sign) or instance.sign\n instance.show_email = validated_data.get('show_email', instance.show_email) or instance.show_email\n instance.show_sign = validated_data.get('show_sign', instance.show_sign) or instance.show_sign\n instance.hover_or_click = validated_data.get('hover_or_click',\n instance.hover_or_click) or instance.hover_or_click\n instance.email_for_answer = validated_data.get('email_for_answer',\n instance.email_for_answer) or instance.email_for_answer\n instance.user.save()\n instance.save()\n return instance\n\n def throw_error(self, key=None, message=None):\n raise serializers.ValidationError(message)\n\n\nclass ProfileSanctionSerializer(serializers.ModelSerializer):\n \"\"\"\n Serializers of a profile object to set the user in reading only access.\n \"\"\"\n\n username = serializers.ReadOnlyField(source='user.username')\n email = serializers.ReadOnlyField(source='user.email')\n\n class Meta:\n model = Profile\n fields = ('pk', 'username', 'email', 'can_write', 'end_ban_write', 'can_read', 'end_ban_read')\n read_only_fields = ('can_write', 'end_ban_write', 'can_read', 'end_ban_read')\n", "path": "zds/member/api/serializers.py"}]} | 1,970 | 434 |
gh_patches_debug_27067 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2287 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow 0 as value for funding amount in partnerships
It should be possible to fill in 0 as a funding amount in the project editor, and then publish a project. This is based on Plan Finland feedback:
"Are you able to give us an estimate on when the suggestions we made to Geert could be published (the changes to the results section and possibility for 0€ budget project)."
</issue>
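The underlying pitfall is Python truthiness: `if not funding_amount` treats a legitimate `0` the same as a missing value. A small sketch of the check before and after the fix applied below (function names are made up for illustration):

```python
def missing_funding_amount(funding_amount):
    # Original check: a plain truthiness test also rejects a legitimate 0.
    return not funding_amount

def missing_funding_amount_fixed(funding_amount):
    # Fixed check (mirrors the patch below): 0 is a valid amount, only a missing value fails.
    return not funding_amount and funding_amount != 0

for value in (None, 0, 2500):
    print(value, missing_funding_amount(value), missing_funding_amount_fixed(value))
# None -> True, True ; 0 -> True, False ; 2500 -> False, False
```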
<code>
[start of akvo/rsr/models/publishing_status.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from django.conf import settings
8 from django.core.exceptions import ValidationError
9 from django.core.mail import send_mail
10 from django.db import models
11 from django.db.models.signals import post_save
12 from django.dispatch import receiver
13 from django.utils.translation import ugettext_lazy as _
14 from .partnership import Partnership
15
16 from ..fields import ValidXMLCharField
17
18
19 class PublishingStatus(models.Model):
20 """Keep track of publishing status."""
21 STATUS_PUBLISHED = 'published'
22 STATUS_UNPUBLISHED = 'unpublished'
23 PUBLISHING_STATUS = (
24 (STATUS_UNPUBLISHED, _(u'Unpublished')),
25 (STATUS_PUBLISHED, _(u'Published')),
26 )
27
28 project = models.OneToOneField('Project',)
29 status = ValidXMLCharField(max_length=30,
30 choices=PUBLISHING_STATUS,
31 db_index=True, default=STATUS_UNPUBLISHED)
32
33 def clean(self):
34 """Projects can only be published, when several checks have been performed."""
35 if self.status == 'published':
36 validation_errors = []
37
38 if not self.project.title:
39 validation_errors.append(
40 ValidationError(_('Project needs to have a title.'),
41 code='title')
42 )
43
44 if not self.project.subtitle:
45 validation_errors.append(
46 ValidationError(_('Project needs to have a subtitle.'),
47 code='subtitle')
48 )
49
50 if self.project.iati_status == '6':
51 validation_errors.append(
52 ValidationError(_('Project needs to have non-suspended status.'),
53 code='status')
54 )
55
56 if not (self.project.date_start_planned or self.project.date_start_actual):
57 validation_errors.append(
58 ValidationError(
59 _('Project needs to have the planned or actual start date field filled '
60 'in.'), code='start_date')
61 )
62
63 if not self.project.current_image:
64 validation_errors.append(
65 ValidationError(_('Project needs to have a photo.'),
66 code='current_image')
67 )
68
69 if not self.project.partnerships.filter(
70 organisation__can_create_projects__exact=True).exists():
71 validation_errors.append(
72 ValidationError(
73 _('Project has no partner that is allowed to publish it.'),
74 code='partners'
75 )
76 )
77
78 if not self.project.partnerships.filter(
79 iati_organisation_role__in=[Partnership.IATI_FUNDING_PARTNER,
80 Partnership.IATI_IMPLEMENTING_PARTNER,
81 Partnership.IATI_ACCOUNTABLE_PARTNER]
82 ).exists():
83 validation_errors.append(
84 ValidationError(
85 _('Project needs to have at least one funding, implementing or accountable '
86 'partner.'),
87 code='partners'
88 )
89 )
90 else:
91 for funding_partner in self.project.partnerships.filter(
92 iati_organisation_role=Partnership.IATI_FUNDING_PARTNER):
93 if not funding_partner.funding_amount:
94 validation_errors.append(
95 ValidationError(_('All funding partners should have a funding amount.'),
96 code='partners'
97 )
98 )
99 break
100
101 if not self.project.project_plan_summary:
102 validation_errors.append(
103 ValidationError(_('Project needs to have the project plan summary filled in.'),
104 code='summary')
105 )
106
107 if not self.project.goals_overview:
108 validation_errors.append(
109 ValidationError(_('Project needs to have the goals overview field filled in.'),
110 code='goals_overview')
111 )
112
113 if not self.project.locations.all():
114 validation_errors.append(
115 ValidationError(_('Project needs to have at least one location.'),
116 code='location')
117 )
118 else:
119 for location in self.project.locations.all():
120 if not (location.latitude and location.longitude):
121 validation_errors.append(
122 ValidationError(
123 _('All locations need to have a latitude and longitude specified.'),
124 code='location')
125 )
126 break
127
128 if not self.project.budget_items.all():
129 validation_errors.append(
130 ValidationError(_('Project needs to have at least one budget item.'),
131 code='budget_item')
132 )
133 elif not self.project.budget_items.filter(amount__gt=0).exists():
134 validation_errors.append(
135 ValidationError(
136 _('Project needs to have at least one budget item with an amount.'),
137 code='budget_item'
138 )
139 )
140
141 if validation_errors:
142 raise ValidationError(validation_errors)
143
144 class Meta:
145 app_label = 'rsr'
146 verbose_name = _(u'publishing status')
147 verbose_name_plural = _(u'publishing statuses')
148 ordering = ('-status', 'project')
149
[end of akvo/rsr/models/publishing_status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rsr/models/publishing_status.py b/akvo/rsr/models/publishing_status.py
--- a/akvo/rsr/models/publishing_status.py
+++ b/akvo/rsr/models/publishing_status.py
@@ -90,7 +90,7 @@
else:
for funding_partner in self.project.partnerships.filter(
iati_organisation_role=Partnership.IATI_FUNDING_PARTNER):
- if not funding_partner.funding_amount:
+ if not funding_partner.funding_amount and not funding_partner.funding_amount == 0:
validation_errors.append(
ValidationError(_('All funding partners should have a funding amount.'),
code='partners'
@@ -130,7 +130,7 @@
ValidationError(_('Project needs to have at least one budget item.'),
code='budget_item')
)
- elif not self.project.budget_items.filter(amount__gt=0).exists():
+ elif not self.project.budget_items.filter(amount__gte=0).exists():
validation_errors.append(
ValidationError(
_('Project needs to have at least one budget item with an amount.'),
| {"golden_diff": "diff --git a/akvo/rsr/models/publishing_status.py b/akvo/rsr/models/publishing_status.py\n--- a/akvo/rsr/models/publishing_status.py\n+++ b/akvo/rsr/models/publishing_status.py\n@@ -90,7 +90,7 @@\n else:\n for funding_partner in self.project.partnerships.filter(\n iati_organisation_role=Partnership.IATI_FUNDING_PARTNER):\n- if not funding_partner.funding_amount:\n+ if not funding_partner.funding_amount and not funding_partner.funding_amount == 0:\n validation_errors.append(\n ValidationError(_('All funding partners should have a funding amount.'),\n code='partners'\n@@ -130,7 +130,7 @@\n ValidationError(_('Project needs to have at least one budget item.'),\n code='budget_item')\n )\n- elif not self.project.budget_items.filter(amount__gt=0).exists():\n+ elif not self.project.budget_items.filter(amount__gte=0).exists():\n validation_errors.append(\n ValidationError(\n _('Project needs to have at least one budget item with an amount.'),\n", "issue": "Allow 0 as value for funding amount in partnerships\nIt should be possible to fill in 0 as a funding amount in the project editor, and then publish a project. This is based on Plan Finland feedback:\n\n\"Are you able to give us an estimate on when the suggestions we made to Geert could be published (the changes to the results section and possibility for 0\u20ac budget project).\"\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.mail import send_mail\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\nfrom django.utils.translation import ugettext_lazy as _\nfrom .partnership import Partnership\n\nfrom ..fields import ValidXMLCharField\n\n\nclass PublishingStatus(models.Model):\n \"\"\"Keep track of publishing status.\"\"\"\n STATUS_PUBLISHED = 'published'\n STATUS_UNPUBLISHED = 'unpublished'\n PUBLISHING_STATUS = (\n (STATUS_UNPUBLISHED, _(u'Unpublished')),\n (STATUS_PUBLISHED, _(u'Published')),\n )\n\n project = models.OneToOneField('Project',)\n status = ValidXMLCharField(max_length=30,\n choices=PUBLISHING_STATUS,\n db_index=True, default=STATUS_UNPUBLISHED)\n\n def clean(self):\n \"\"\"Projects can only be published, when several checks have been performed.\"\"\"\n if self.status == 'published':\n validation_errors = []\n\n if not self.project.title:\n validation_errors.append(\n ValidationError(_('Project needs to have a title.'),\n code='title')\n )\n\n if not self.project.subtitle:\n validation_errors.append(\n ValidationError(_('Project needs to have a subtitle.'),\n code='subtitle')\n )\n\n if self.project.iati_status == '6':\n validation_errors.append(\n ValidationError(_('Project needs to have non-suspended status.'),\n code='status')\n )\n\n if not (self.project.date_start_planned or self.project.date_start_actual):\n validation_errors.append(\n ValidationError(\n _('Project needs to have the planned or actual start date field filled '\n 'in.'), code='start_date')\n )\n\n if not self.project.current_image:\n validation_errors.append(\n ValidationError(_('Project needs to have a photo.'),\n code='current_image')\n )\n\n if not self.project.partnerships.filter(\n 
organisation__can_create_projects__exact=True).exists():\n validation_errors.append(\n ValidationError(\n _('Project has no partner that is allowed to publish it.'),\n code='partners'\n )\n )\n\n if not self.project.partnerships.filter(\n iati_organisation_role__in=[Partnership.IATI_FUNDING_PARTNER,\n Partnership.IATI_IMPLEMENTING_PARTNER,\n Partnership.IATI_ACCOUNTABLE_PARTNER]\n ).exists():\n validation_errors.append(\n ValidationError(\n _('Project needs to have at least one funding, implementing or accountable '\n 'partner.'),\n code='partners'\n )\n )\n else:\n for funding_partner in self.project.partnerships.filter(\n iati_organisation_role=Partnership.IATI_FUNDING_PARTNER):\n if not funding_partner.funding_amount:\n validation_errors.append(\n ValidationError(_('All funding partners should have a funding amount.'),\n code='partners'\n )\n )\n break\n\n if not self.project.project_plan_summary:\n validation_errors.append(\n ValidationError(_('Project needs to have the project plan summary filled in.'),\n code='summary')\n )\n\n if not self.project.goals_overview:\n validation_errors.append(\n ValidationError(_('Project needs to have the goals overview field filled in.'),\n code='goals_overview')\n )\n\n if not self.project.locations.all():\n validation_errors.append(\n ValidationError(_('Project needs to have at least one location.'),\n code='location')\n )\n else:\n for location in self.project.locations.all():\n if not (location.latitude and location.longitude):\n validation_errors.append(\n ValidationError(\n _('All locations need to have a latitude and longitude specified.'),\n code='location')\n )\n break\n\n if not self.project.budget_items.all():\n validation_errors.append(\n ValidationError(_('Project needs to have at least one budget item.'),\n code='budget_item')\n )\n elif not self.project.budget_items.filter(amount__gt=0).exists():\n validation_errors.append(\n ValidationError(\n _('Project needs to have at least one budget item with an amount.'),\n code='budget_item'\n )\n )\n\n if validation_errors:\n raise ValidationError(validation_errors)\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'publishing status')\n verbose_name_plural = _(u'publishing statuses')\n ordering = ('-status', 'project')\n", "path": "akvo/rsr/models/publishing_status.py"}]} | 1,955 | 248 |
gh_patches_debug_23462 | rasdani/github-patches | git_diff | cocotb__cocotb-1732 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document how to get a rotating logger
In #1224 we had the request to support a "rotating" logger in cocotb.
Now that #1266 has gone in there is a reasonably comfortable way to do that without modifying cocotb itself.
```python
root_logger = logging.getLogger()
# undo the setup cocotb did
for handler in root_logger.handlers:
root_logger.remove_handler(handler)
handler.close()
# do whatever configuration you want instead
file_handler = RotatingFileHandler(logfile, maxBytes=(1048576*5), backupCount=4)
file_handler.setFormatter(cocotb.log.SimLogFormatter())
root_logger.addHandler(file_handler)
```
at which point this seems like something that doesn't need to be in cocotb itself.
_Originally posted by @eric-wieser in https://github.com/cocotb/cocotb/pull/1224#issuecomment-581368001_
This issue is to document this as a useful snippet somewhere in the docs, as the general problem seems to be something other users might stumble across.
</issue>
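A note on the quoted snippet: `logging.Logger` has no `remove_handler` method (the standard-library name is `removeHandler`), the handler list should be copied before removing entries from it, and `logfile` is left undefined. A corrected sketch, assuming the cocotb version whose `cocotb/log.py` is listed below (so `SimLogFormatter` and `SimTimeContextFilter` are available) and a made-up log file name:

```python
import logging
from logging.handlers import RotatingFileHandler

import cocotb.log

root_logger = logging.getLogger()

# Undo the stdout handler cocotb installed (copy the list so we are not
# mutating it while iterating over it).
for handler in list(root_logger.handlers):
    root_logger.removeHandler(handler)
    handler.close()

# Route everything to a rotating file instead; "sim.log" is just a placeholder.
file_handler = RotatingFileHandler("sim.log", maxBytes=1024 * 1024 * 5, backupCount=4)
file_handler.addFilter(cocotb.log.SimTimeContextFilter())  # keep simulator timestamps
file_handler.setFormatter(cocotb.log.SimLogFormatter())
root_logger.addHandler(file_handler)
```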
<code>
[start of cocotb/log.py]
1 # Copyright (c) 2013, 2018 Potential Ventures Ltd
2 # Copyright (c) 2013 SolarFlare Communications Inc
3 # All rights reserved.
4 #
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions are met:
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of Potential Ventures Ltd,
13 # SolarFlare Communications Inc nor the
14 # names of its contributors may be used to endorse or promote products
15 # derived from this software without specific prior written permission.
16 #
17 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Everything related to logging
30 """
31
32 import os
33 import sys
34 import logging
35 import warnings
36
37 from cocotb.utils import (
38 get_sim_time, get_time_from_sim_steps, want_color_output
39 )
40
41 import cocotb.ANSI as ANSI
42
43 if "COCOTB_REDUCED_LOG_FMT" in os.environ:
44 _suppress = True
45 else:
46 _suppress = False
47
48 # Column alignment
49 _LEVEL_CHARS = len("CRITICAL") # noqa
50 _RECORD_CHARS = 35 # noqa
51 _FILENAME_CHARS = 20 # noqa
52 _LINENO_CHARS = 4 # noqa
53 _FUNCNAME_CHARS = 31 # noqa
54
55
56 def default_config():
57 """ Apply the default cocotb log formatting to the root logger.
58
59 This hooks up the logger to write to stdout, using either
60 :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending
61 on whether colored output is requested. It also adds a
62 :class:`SimTimeContextFilter` filter so that
63 :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.
64
65 The logging level for cocotb logs is set based on the
66 :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.
67
68 If desired, this logging configuration can be overwritten by calling
69 ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by
70 manually resetting the root logger instance, for which examples can be
71 found online.
72 """
73 # construct an appropriate handler
74 hdlr = logging.StreamHandler(sys.stdout)
75 hdlr.addFilter(SimTimeContextFilter())
76 if want_color_output():
77 hdlr.setFormatter(SimColourLogFormatter())
78 else:
79 hdlr.setFormatter(SimLogFormatter())
80
81 logging.setLoggerClass(SimBaseLog) # For backwards compatibility
82 logging.basicConfig()
83 logging.getLogger().handlers = [hdlr] # overwrite default handlers
84
85 # apply level settings for cocotb
86 log = logging.getLogger('cocotb')
87 level = os.getenv("COCOTB_LOG_LEVEL", "INFO")
88 try:
89 _default_log = getattr(logging, level)
90 except AttributeError:
91 log.error("Unable to set logging level to %r" % level)
92 _default_log = logging.INFO
93 log.setLevel(_default_log)
94
95 # Notify GPI of log level, which it uses as an optimization to avoid
96 # calling into Python.
97 if "COCOTB_SIM" in os.environ:
98 from cocotb import simulator
99 simulator.log_level(_default_log)
100
101
102 class SimBaseLog(logging.getLoggerClass()):
103 """ This class only exists for backwards compatibility """
104
105 @property
106 def logger(self):
107 warnings.warn(
108 "the .logger attribute should not be used now that `SimLog` "
109 "returns a native logger instance directly.",
110 DeprecationWarning, stacklevel=2)
111 return self
112
113 @property
114 def colour(self):
115 warnings.warn(
116 "the .colour attribute may be removed in future, use the "
117 "equivalent `cocotb.utils.want_color_output()` instead",
118 DeprecationWarning, stacklevel=2)
119 return want_color_output()
120
121
122 # this used to be a class, hence the unusual capitalization
123 def SimLog(name, ident=None):
124 """ Like logging.getLogger, but append a numeric identifier to the name """
125 if ident is not None:
126 name = "%s.0x%x" % (name, ident)
127 return logging.getLogger(name)
128
129
130 class SimTimeContextFilter(logging.Filter):
131 """
132 A filter to inject simulator times into the log records.
133
134 This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.
135
136 This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.
137 """
138
139 # needed to make our docs render well
140 def __init__(self):
141 """ Takes no arguments """
142 super().__init__()
143
144 def filter(self, record):
145 try:
146 record.created_sim_time = get_sim_time()
147 except RecursionError:
148 # get_sim_time may try to log - if that happens, we can't
149 # attach a simulator time to this message.
150 record.created_sim_time = None
151 return True
152
153
154 class SimLogFormatter(logging.Formatter):
155 """Log formatter to provide consistent log message handling.
156
157 This will only add simulator timestamps if the handler object this
158 formatter is attached to has a :class:`SimTimeContextFilter` filter
159 attached, which cocotb ensures by default.
160 """
161
162 # Removes the arguments from the base class. Docstring needed to make
163 # sphinx happy.
164 def __init__(self):
165 """ Takes no arguments. """
166 super().__init__()
167
168 # Justify and truncate
169 @staticmethod
170 def ljust(string, chars):
171 if len(string) > chars:
172 return ".." + string[(chars - 2) * -1:]
173 return string.ljust(chars)
174
175 @staticmethod
176 def rjust(string, chars):
177 if len(string) > chars:
178 return ".." + string[(chars - 2) * -1:]
179 return string.rjust(chars)
180
181 def _format(self, level, record, msg, coloured=False):
182 sim_time = getattr(record, 'created_sim_time', None)
183 if sim_time is None:
184 sim_time_str = " -.--ns"
185 else:
186 time_ns = get_time_from_sim_steps(sim_time, 'ns')
187 sim_time_str = "{:6.2f}ns".format(time_ns)
188 prefix = sim_time_str.rjust(11) + ' ' + level + ' '
189 if not _suppress:
190 prefix += self.ljust(record.name, _RECORD_CHARS) + \
191 self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS) + \
192 ':' + self.ljust(str(record.lineno), _LINENO_CHARS) + \
193 ' in ' + self.ljust(str(record.funcName), _FUNCNAME_CHARS) + ' '
194
195 # these lines are copied from the builtin logger
196 if record.exc_info:
197 # Cache the traceback text to avoid converting it multiple times
198 # (it's constant anyway)
199 if not record.exc_text:
200 record.exc_text = self.formatException(record.exc_info)
201 if record.exc_text:
202 if msg[-1:] != "\n":
203 msg = msg + "\n"
204 msg = msg + record.exc_text
205
206 prefix_len = len(prefix)
207 if coloured:
208 prefix_len -= (len(level) - _LEVEL_CHARS)
209 pad = "\n" + " " * (prefix_len)
210 return prefix + pad.join(msg.split('\n'))
211
212 def format(self, record):
213 """Prettify the log output, annotate with simulation time"""
214
215 msg = record.getMessage()
216 level = record.levelname.ljust(_LEVEL_CHARS)
217
218 return self._format(level, record, msg)
219
220
221 class SimColourLogFormatter(SimLogFormatter):
222 """Log formatter to provide consistent log message handling."""
223
224 loglevel2colour = {
225 logging.DEBUG : "%s",
226 logging.INFO : ANSI.COLOR_INFO + "%s" + ANSI.COLOR_DEFAULT,
227 logging.WARNING : ANSI.COLOR_WARNING + "%s" + ANSI.COLOR_DEFAULT,
228 logging.ERROR : ANSI.COLOR_ERROR + "%s" + ANSI.COLOR_DEFAULT,
229 logging.CRITICAL: ANSI.COLOR_CRITICAL + "%s" + ANSI.COLOR_DEFAULT,
230 }
231
232 def format(self, record):
233 """Prettify the log output, annotate with simulation time"""
234
235 msg = record.getMessage()
236
237 # Need to colour each line in case coloring is applied in the message
238 msg = '\n'.join([SimColourLogFormatter.loglevel2colour[record.levelno] % line for line in msg.split('\n')])
239 level = (SimColourLogFormatter.loglevel2colour[record.levelno] %
240 record.levelname.ljust(_LEVEL_CHARS))
241
242 return self._format(level, record, msg, coloured=True)
243
244
245 def _filter_from_c(logger_name, level):
246 return logging.getLogger(logger_name).isEnabledFor(level)
247
248
249 def _log_from_c(logger_name, level, filename, lineno, msg, function_name):
250 """
251 This is for use from the C world, and allows us to insert C stack
252 information.
253 """
254 logger = logging.getLogger(logger_name)
255 if logger.isEnabledFor(level):
256 record = logger.makeRecord(
257 logger.name,
258 level,
259 filename,
260 lineno,
261 msg,
262 None,
263 None,
264 function_name
265 )
266 logger.handle(record)
267
[end of cocotb/log.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cocotb/log.py b/cocotb/log.py
--- a/cocotb/log.py
+++ b/cocotb/log.py
@@ -67,8 +67,10 @@
If desired, this logging configuration can be overwritten by calling
``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by
- manually resetting the root logger instance, for which examples can be
- found online.
+ manually resetting the root logger instance.
+ An example of this can be found in the section on :ref:`rotating-logger`.
+
+ .. versionadded:: 1.4
"""
# construct an appropriate handler
hdlr = logging.StreamHandler(sys.stdout)
@@ -135,11 +137,13 @@
This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.
This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.
+
+ .. versionadded:: 1.4
"""
# needed to make our docs render well
def __init__(self):
- """ Takes no arguments """
+ """"""
super().__init__()
def filter(self, record):
| {"golden_diff": "diff --git a/cocotb/log.py b/cocotb/log.py\n--- a/cocotb/log.py\n+++ b/cocotb/log.py\n@@ -67,8 +67,10 @@\n \n If desired, this logging configuration can be overwritten by calling\n ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by\n- manually resetting the root logger instance, for which examples can be\n- found online.\n+ manually resetting the root logger instance.\n+ An example of this can be found in the section on :ref:`rotating-logger`.\n+\n+ .. versionadded:: 1.4\n \"\"\"\n # construct an appropriate handler\n hdlr = logging.StreamHandler(sys.stdout)\n@@ -135,11 +137,13 @@\n This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.\n \n This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.\n+\n+ .. versionadded:: 1.4\n \"\"\"\n \n # needed to make our docs render well\n def __init__(self):\n- \"\"\" Takes no arguments \"\"\"\n+ \"\"\"\"\"\"\n super().__init__()\n \n def filter(self, record):\n", "issue": "Document how to get a rotating logger\nIn #1224 we had the request to support a \"rotating\" logger in cocotb. \r\nNow that #1266 has gone in there is a reasonably comfortable way to do that without modifying cocotb itself.\r\n\r\n```python\r\nroot_logger = logging.getLogger()\r\n\r\n# undo the setup cocotb did\r\nfor handler in root_logger.handlers:\r\n root_logger.remove_handler(handler)\r\n handler.close()\r\n\r\n# do whatever configuration you want instead\r\nfile_handler = RotatingFileHandler(logfile, maxBytes=(1048576*5), backupCount=4)\r\nfile_handler.setFormatter(cocotb.log.SimLogFormatter())\r\nroot_logger.addHandler(file_handler)\r\n```\r\nat which point this seems like something that doesn't need to be in cocotb itself.\r\n\r\n_Originally posted by @eric-wieser in https://github.com/cocotb/cocotb/pull/1224#issuecomment-581368001_\r\n\r\n\r\nThis issue is to document this as a useful snippet somewhere in the docs, as the general problem seems to be something other users might stumble across.\n", "before_files": [{"content": "# Copyright (c) 2013, 2018 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nEverything related to logging\n\"\"\"\n\nimport os\nimport sys\nimport logging\nimport warnings\n\nfrom cocotb.utils import (\n get_sim_time, get_time_from_sim_steps, want_color_output\n)\n\nimport cocotb.ANSI as ANSI\n\nif \"COCOTB_REDUCED_LOG_FMT\" in os.environ:\n _suppress = True\nelse:\n _suppress = False\n\n# Column alignment\n_LEVEL_CHARS = len(\"CRITICAL\") # noqa\n_RECORD_CHARS = 35 # noqa\n_FILENAME_CHARS = 20 # noqa\n_LINENO_CHARS = 4 # noqa\n_FUNCNAME_CHARS = 31 # noqa\n\n\ndef default_config():\n \"\"\" Apply the default cocotb log formatting to the root logger.\n\n This hooks up the logger to write to stdout, using either\n :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending\n on whether colored output is requested. It also adds a\n :class:`SimTimeContextFilter` filter so that\n :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.\n\n The logging level for cocotb logs is set based on the\n :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.\n\n If desired, this logging configuration can be overwritten by calling\n ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by\n manually resetting the root logger instance, for which examples can be\n found online.\n \"\"\"\n # construct an appropriate handler\n hdlr = logging.StreamHandler(sys.stdout)\n hdlr.addFilter(SimTimeContextFilter())\n if want_color_output():\n hdlr.setFormatter(SimColourLogFormatter())\n else:\n hdlr.setFormatter(SimLogFormatter())\n\n logging.setLoggerClass(SimBaseLog) # For backwards compatibility\n logging.basicConfig()\n logging.getLogger().handlers = [hdlr] # overwrite default handlers\n\n # apply level settings for cocotb\n log = logging.getLogger('cocotb')\n level = os.getenv(\"COCOTB_LOG_LEVEL\", \"INFO\")\n try:\n _default_log = getattr(logging, level)\n except AttributeError:\n log.error(\"Unable to set logging level to %r\" % level)\n _default_log = logging.INFO\n log.setLevel(_default_log)\n\n # Notify GPI of log level, which it uses as an optimization to avoid\n # calling into Python.\n if \"COCOTB_SIM\" in os.environ:\n from cocotb import simulator\n simulator.log_level(_default_log)\n\n\nclass SimBaseLog(logging.getLoggerClass()):\n \"\"\" This class only exists for backwards compatibility \"\"\"\n\n @property\n def logger(self):\n warnings.warn(\n \"the .logger attribute should not be used now that `SimLog` \"\n \"returns a native logger instance directly.\",\n DeprecationWarning, stacklevel=2)\n return self\n\n @property\n def colour(self):\n warnings.warn(\n \"the .colour attribute may be removed in future, use the \"\n \"equivalent `cocotb.utils.want_color_output()` instead\",\n DeprecationWarning, stacklevel=2)\n return want_color_output()\n\n\n# this used to be a class, hence the unusual capitalization\ndef SimLog(name, ident=None):\n \"\"\" Like logging.getLogger, but append a numeric identifier to the name \"\"\"\n if ident is not None:\n name = \"%s.0x%x\" % (name, ident)\n return 
logging.getLogger(name)\n\n\nclass SimTimeContextFilter(logging.Filter):\n \"\"\"\n A filter to inject simulator times into the log records.\n\n This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.\n\n This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.\n \"\"\"\n\n # needed to make our docs render well\n def __init__(self):\n \"\"\" Takes no arguments \"\"\"\n super().__init__()\n\n def filter(self, record):\n try:\n record.created_sim_time = get_sim_time()\n except RecursionError:\n # get_sim_time may try to log - if that happens, we can't\n # attach a simulator time to this message.\n record.created_sim_time = None\n return True\n\n\nclass SimLogFormatter(logging.Formatter):\n \"\"\"Log formatter to provide consistent log message handling.\n\n This will only add simulator timestamps if the handler object this\n formatter is attached to has a :class:`SimTimeContextFilter` filter\n attached, which cocotb ensures by default.\n \"\"\"\n\n # Removes the arguments from the base class. Docstring needed to make\n # sphinx happy.\n def __init__(self):\n \"\"\" Takes no arguments. \"\"\"\n super().__init__()\n\n # Justify and truncate\n @staticmethod\n def ljust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1:]\n return string.ljust(chars)\n\n @staticmethod\n def rjust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1:]\n return string.rjust(chars)\n\n def _format(self, level, record, msg, coloured=False):\n sim_time = getattr(record, 'created_sim_time', None)\n if sim_time is None:\n sim_time_str = \" -.--ns\"\n else:\n time_ns = get_time_from_sim_steps(sim_time, 'ns')\n sim_time_str = \"{:6.2f}ns\".format(time_ns)\n prefix = sim_time_str.rjust(11) + ' ' + level + ' '\n if not _suppress:\n prefix += self.ljust(record.name, _RECORD_CHARS) + \\\n self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS) + \\\n ':' + self.ljust(str(record.lineno), _LINENO_CHARS) + \\\n ' in ' + self.ljust(str(record.funcName), _FUNCNAME_CHARS) + ' '\n\n # these lines are copied from the builtin logger\n if record.exc_info:\n # Cache the traceback text to avoid converting it multiple times\n # (it's constant anyway)\n if not record.exc_text:\n record.exc_text = self.formatException(record.exc_info)\n if record.exc_text:\n if msg[-1:] != \"\\n\":\n msg = msg + \"\\n\"\n msg = msg + record.exc_text\n\n prefix_len = len(prefix)\n if coloured:\n prefix_len -= (len(level) - _LEVEL_CHARS)\n pad = \"\\n\" + \" \" * (prefix_len)\n return prefix + pad.join(msg.split('\\n'))\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n level = record.levelname.ljust(_LEVEL_CHARS)\n\n return self._format(level, record, msg)\n\n\nclass SimColourLogFormatter(SimLogFormatter):\n \"\"\"Log formatter to provide consistent log message handling.\"\"\"\n\n loglevel2colour = {\n logging.DEBUG : \"%s\",\n logging.INFO : ANSI.COLOR_INFO + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.WARNING : ANSI.COLOR_WARNING + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.ERROR : ANSI.COLOR_ERROR + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.CRITICAL: ANSI.COLOR_CRITICAL + \"%s\" + ANSI.COLOR_DEFAULT,\n }\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n\n # Need to colour each line in case coloring is applied in the message\n msg = 
'\\n'.join([SimColourLogFormatter.loglevel2colour[record.levelno] % line for line in msg.split('\\n')])\n level = (SimColourLogFormatter.loglevel2colour[record.levelno] %\n record.levelname.ljust(_LEVEL_CHARS))\n\n return self._format(level, record, msg, coloured=True)\n\n\ndef _filter_from_c(logger_name, level):\n return logging.getLogger(logger_name).isEnabledFor(level)\n\n\ndef _log_from_c(logger_name, level, filename, lineno, msg, function_name):\n \"\"\"\n This is for use from the C world, and allows us to insert C stack\n information.\n \"\"\"\n logger = logging.getLogger(logger_name)\n if logger.isEnabledFor(level):\n record = logger.makeRecord(\n logger.name,\n level,\n filename,\n lineno,\n msg,\n None,\n None,\n function_name\n )\n logger.handle(record)\n", "path": "cocotb/log.py"}]} | 3,701 | 279 |
gh_patches_debug_6638 | rasdani/github-patches | git_diff | zulip__zulip-28016 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Onboarding hotspots are misplaced
I think our grid rewrites of the sidebars have resulted in the onboarding hotspots being somewhat misplaced:

(The `offset_x` and `offset_y` values may need updating).
I'm not entirely sure where the best place for these is. The main one that seems very wrong is the compose box one.
That said, we should aim to spend pretty minimal time on this system because we plan to rip it out in favor of a totally different onboarding system.
See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html for notes on how to test using the `ALWAYS_SEND_ALL_HOTSPOTS` setting as shown in this screenshot. (Usually, they're shown only one at a time in sequence).
@sayamsamal can you pick this one up?
</issue>
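As a quick aside on the testing workflow mentioned in the issue above: a minimal sketch of the development-only toggle, assuming the `zproject/dev_settings.py` location named in the comments of `get_next_hotspots()` in the code below.

```python
# zproject/dev_settings.py -- development only, do not ship.
# With this set, get_next_hotspots() returns every hotspot at once,
# which makes it easy to eyeball their on-screen placement.
ALWAYS_SEND_ALL_HOTSPOTS = True
```

Because the hotspots are otherwise shown one at a time in sequence, this flag is the practical way to check all of the `offset_x`/`offset_y` placements in a single page load.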
<code>
[start of zerver/lib/hotspots.py]
1 # See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html
2 # for documentation on this subsystem.
3 from dataclasses import dataclass
4 from typing import Dict, List, Optional, Union
5
6 from django.conf import settings
7 from django.utils.translation import gettext_lazy
8 from django_stubs_ext import StrPromise
9
10 from zerver.models import UserHotspot, UserProfile
11
12
13 @dataclass
14 class Hotspot:
15 name: str
16 title: Optional[StrPromise]
17 description: Optional[StrPromise]
18 has_trigger: bool = False
19
20 def to_dict(self, delay: float = 0) -> Dict[str, Union[str, float, bool]]:
21 return {
22 "name": self.name,
23 "title": str(self.title),
24 "description": str(self.description),
25 "delay": delay,
26 "has_trigger": self.has_trigger,
27 }
28
29
30 INTRO_HOTSPOTS: List[Hotspot] = [
31 Hotspot(
32 name="intro_streams",
33 title=gettext_lazy("Catch up on a stream"),
34 description=gettext_lazy(
35 "Messages sent to a stream are seen by everyone subscribed "
36 "to that stream. Try clicking on one of the stream links below."
37 ),
38 ),
39 Hotspot(
40 name="intro_topics",
41 title=gettext_lazy("Topics"),
42 description=gettext_lazy(
43 "Every message has a topic. Topics keep conversations "
44 "easy to follow, and make it easy to reply to conversations that start "
45 "while you are offline."
46 ),
47 ),
48 Hotspot(
49 name="intro_gear",
50 title=gettext_lazy("Settings"),
51 description=gettext_lazy("Go to Settings to configure your notifications and preferences."),
52 ),
53 Hotspot(
54 name="intro_compose",
55 title=gettext_lazy("Compose"),
56 description=gettext_lazy(
57 "Click here to start a new conversation. Pick a topic "
58 "(2-3 words is best), and give it a go!"
59 ),
60 ),
61 ]
62
63
64 NON_INTRO_HOTSPOTS: List[Hotspot] = []
65
66 # We would most likely implement new hotspots in the future that aren't
67 # a part of the initial tutorial. To that end, classifying them into
68 # categories which are aggregated in ALL_HOTSPOTS, seems like a good start.
69 ALL_HOTSPOTS = [*INTRO_HOTSPOTS, *NON_INTRO_HOTSPOTS]
70
71
72 def get_next_hotspots(user: UserProfile) -> List[Dict[str, Union[str, float, bool]]]:
73 # For manual testing, it can be convenient to set
74 # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to
75 # make it easy to click on all of the hotspots.
76 #
77 # Since this is just for development purposes, it's convenient for us to send
78 # all the hotspots rather than any specific category.
79 if settings.ALWAYS_SEND_ALL_HOTSPOTS:
80 return [hotspot.to_dict() for hotspot in ALL_HOTSPOTS]
81
82 # If a Zulip server has disabled the tutorial, never send hotspots.
83 if not settings.TUTORIAL_ENABLED:
84 return []
85
86 seen_hotspots = frozenset(
87 UserHotspot.objects.filter(user=user).values_list("hotspot", flat=True)
88 )
89
90 hotspots = [hotspot.to_dict() for hotspot in NON_INTRO_HOTSPOTS]
91
92 if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:
93 return hotspots
94
95 for hotspot in INTRO_HOTSPOTS:
96 if hotspot.name in seen_hotspots:
97 continue
98
99 hotspots.append(hotspot.to_dict(delay=0.5))
100 return hotspots
101
102 user.tutorial_status = UserProfile.TUTORIAL_FINISHED
103 user.save(update_fields=["tutorial_status"])
104 return hotspots
105
106
107 def copy_hotspots(source_profile: UserProfile, target_profile: UserProfile) -> None:
108 for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):
109 UserHotspot.objects.create(
110 user=target_profile, hotspot=userhotspot.hotspot, timestamp=userhotspot.timestamp
111 )
112
113 target_profile.tutorial_status = source_profile.tutorial_status
114 target_profile.onboarding_steps = source_profile.onboarding_steps
115 target_profile.save(update_fields=["tutorial_status", "onboarding_steps"])
116
[end of zerver/lib/hotspots.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py
--- a/zerver/lib/hotspots.py
+++ b/zerver/lib/hotspots.py
@@ -46,6 +46,9 @@
),
),
Hotspot(
+ # In theory, this should be renamed to intro_personal, since
+ # it's no longer attached to the gear menu, but renaming these
+ # requires a migration that is not worth doing at this time.
name="intro_gear",
title=gettext_lazy("Settings"),
description=gettext_lazy("Go to Settings to configure your notifications and preferences."),
| {"golden_diff": "diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py\n--- a/zerver/lib/hotspots.py\n+++ b/zerver/lib/hotspots.py\n@@ -46,6 +46,9 @@\n ),\n ),\n Hotspot(\n+ # In theory, this should be renamed to intro_personal, since\n+ # it's no longer attached to the gear menu, but renaming these\n+ # requires a migration that is not worth doing at this time.\n name=\"intro_gear\",\n title=gettext_lazy(\"Settings\"),\n description=gettext_lazy(\"Go to Settings to configure your notifications and preferences.\"),\n", "issue": "Onboarding hotspots are misplaced\nI think our grid rewrites of the sidebars have resulted in the onboarding hotspots being somewhat misplaced:\r\n\r\n\r\n\r\n(The `offset_x` and `offset_y` values may need updating).\r\n\r\nI'm not entirely sure where the best place for these are. The main one that seems very wrong is the compose box one.\r\n\r\nThat said, we should aim to spend pretty minimal time on this system because we plan to rip it out in favor of a totally different onboarding system.\r\n\r\nSee https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html for notes on how to test using the `ALWAYS_SEND_ALL_HOTSPOTS` setting as shown in this screenshot. (Usually, they're shown only one at a time in sequence).\r\n\r\n@sayamsamal can you pick this one up?\r\n\n", "before_files": [{"content": "# See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html\n# for documentation on this subsystem.\nfrom dataclasses import dataclass\nfrom typing import Dict, List, Optional, Union\n\nfrom django.conf import settings\nfrom django.utils.translation import gettext_lazy\nfrom django_stubs_ext import StrPromise\n\nfrom zerver.models import UserHotspot, UserProfile\n\n\n@dataclass\nclass Hotspot:\n name: str\n title: Optional[StrPromise]\n description: Optional[StrPromise]\n has_trigger: bool = False\n\n def to_dict(self, delay: float = 0) -> Dict[str, Union[str, float, bool]]:\n return {\n \"name\": self.name,\n \"title\": str(self.title),\n \"description\": str(self.description),\n \"delay\": delay,\n \"has_trigger\": self.has_trigger,\n }\n\n\nINTRO_HOTSPOTS: List[Hotspot] = [\n Hotspot(\n name=\"intro_streams\",\n title=gettext_lazy(\"Catch up on a stream\"),\n description=gettext_lazy(\n \"Messages sent to a stream are seen by everyone subscribed \"\n \"to that stream. Try clicking on one of the stream links below.\"\n ),\n ),\n Hotspot(\n name=\"intro_topics\",\n title=gettext_lazy(\"Topics\"),\n description=gettext_lazy(\n \"Every message has a topic. Topics keep conversations \"\n \"easy to follow, and make it easy to reply to conversations that start \"\n \"while you are offline.\"\n ),\n ),\n Hotspot(\n name=\"intro_gear\",\n title=gettext_lazy(\"Settings\"),\n description=gettext_lazy(\"Go to Settings to configure your notifications and preferences.\"),\n ),\n Hotspot(\n name=\"intro_compose\",\n title=gettext_lazy(\"Compose\"),\n description=gettext_lazy(\n \"Click here to start a new conversation. Pick a topic \"\n \"(2-3 words is best), and give it a go!\"\n ),\n ),\n]\n\n\nNON_INTRO_HOTSPOTS: List[Hotspot] = []\n\n# We would most likely implement new hotspots in the future that aren't\n# a part of the initial tutorial. 
To that end, classifying them into\n# categories which are aggregated in ALL_HOTSPOTS, seems like a good start.\nALL_HOTSPOTS = [*INTRO_HOTSPOTS, *NON_INTRO_HOTSPOTS]\n\n\ndef get_next_hotspots(user: UserProfile) -> List[Dict[str, Union[str, float, bool]]]:\n # For manual testing, it can be convenient to set\n # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to\n # make it easy to click on all of the hotspots.\n #\n # Since this is just for development purposes, it's convenient for us to send\n # all the hotspots rather than any specific category.\n if settings.ALWAYS_SEND_ALL_HOTSPOTS:\n return [hotspot.to_dict() for hotspot in ALL_HOTSPOTS]\n\n # If a Zulip server has disabled the tutorial, never send hotspots.\n if not settings.TUTORIAL_ENABLED:\n return []\n\n seen_hotspots = frozenset(\n UserHotspot.objects.filter(user=user).values_list(\"hotspot\", flat=True)\n )\n\n hotspots = [hotspot.to_dict() for hotspot in NON_INTRO_HOTSPOTS]\n\n if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:\n return hotspots\n\n for hotspot in INTRO_HOTSPOTS:\n if hotspot.name in seen_hotspots:\n continue\n\n hotspots.append(hotspot.to_dict(delay=0.5))\n return hotspots\n\n user.tutorial_status = UserProfile.TUTORIAL_FINISHED\n user.save(update_fields=[\"tutorial_status\"])\n return hotspots\n\n\ndef copy_hotspots(source_profile: UserProfile, target_profile: UserProfile) -> None:\n for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):\n UserHotspot.objects.create(\n user=target_profile, hotspot=userhotspot.hotspot, timestamp=userhotspot.timestamp\n )\n\n target_profile.tutorial_status = source_profile.tutorial_status\n target_profile.onboarding_steps = source_profile.onboarding_steps\n target_profile.save(update_fields=[\"tutorial_status\", \"onboarding_steps\"])\n", "path": "zerver/lib/hotspots.py"}]} | 1,931 | 139 |
gh_patches_debug_39124 | rasdani/github-patches | git_diff | ietf-tools__datatracker-4070 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Group acronym validation crashes in the admin if the group type has not been set.
It should raise a validation error instead when 'type' is missing from the cleaned form data.
See https://github.com/ietf-tools/datatracker/blob/main/ietf/group/admin.py#L47
</issue>
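Before the code, a minimal sketch of the kind of guard the issue asks for, assuming the names already imported in `ietf/group/admin.py` below (`forms`, `GroupFeatures`, `re`); the error message wording is invented for the sketch. It is only an illustration — the merged patch further down instead moves the acronym check into the form-wide `clean()` method.

```python
# Hypothetical drop-in for GroupForm: report a normal form error when
# 'type' is missing from cleaned_data (e.g. it was left blank or failed
# its own validation) instead of crashing with a KeyError.
def clean_acronym(self):
    acronym = self.cleaned_data['acronym'].strip().lower()
    if not self.instance.pk:
        if 'type' not in self.cleaned_data:
            raise forms.ValidationError(
                "Select a group type before the acronym can be validated."
            )
        # ... the existing has_documents / regex checks run here unchanged ...
    return acronym
```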
<code>
[start of ietf/group/admin.py]
1 # Copyright The IETF Trust 2010-2020, All Rights Reserved
2 # -*- coding: utf-8 -*-
3
4 import re
5
6 from functools import update_wrapper
7
8 import debug # pyflakes:ignore
9
10 from django import forms
11
12 from django.contrib import admin
13 from django.contrib.admin.utils import unquote
14 from django.core.management import load_command_class
15 from django.http import Http404
16 from django.shortcuts import render
17 from django.utils.encoding import force_text
18 from django.utils.html import escape
19 from django.utils.translation import ugettext as _
20
21 from ietf.group.models import (Group, GroupFeatures, GroupHistory, GroupEvent, GroupURL, GroupMilestone,
22 GroupMilestoneHistory, GroupStateTransitions, Role, RoleHistory, ChangeStateGroupEvent,
23 MilestoneGroupEvent, GroupExtResource, )
24 from ietf.name.models import GroupTypeName
25
26 from ietf.utils.validators import validate_external_resource_value
27 from ietf.utils.response import permission_denied
28
29 class RoleInline(admin.TabularInline):
30 model = Role
31 raw_id_fields = ["person", "email"]
32
33 class GroupURLInline(admin.TabularInline):
34 model = GroupURL
35
36 class GroupForm(forms.ModelForm):
37 class Meta:
38 model = Group
39 fields = '__all__'
40
41 def clean_acronym(self):
42 ''' Constrain the acronym form. Note that this doesn't look for collisions.
43 See ietf.group.forms.GroupForm.clean_acronym()
44 '''
45 acronym = self.cleaned_data['acronym'].strip().lower()
46 if not self.instance.pk:
47 type = self.cleaned_data['type']
48 if GroupFeatures.objects.get(type=type).has_documents:
49 if not re.match(r'^[a-z][a-z0-9]+$', acronym):
50 raise forms.ValidationError("Acronym is invalid, for groups that create documents, the acronym must be at least two characters and only contain lowercase letters and numbers starting with a letter.")
51 else:
52 if not re.match(r'^[a-z][a-z0-9-]*[a-z0-9]$', acronym):
53 raise forms.ValidationError("Acronym is invalid, must be at least two characters and only contain lowercase letters and numbers starting with a letter. It may contain hyphens, but that is discouraged.")
54 return acronym
55
56 def clean_used_roles(self):
57 data = self.cleaned_data['used_roles']
58 if data is None or data == '':
59 raise forms.ValidationError("Must contain a valid json expression. To use the defaults prove an empty list: []")
60 return data
61
62
63 class GroupAdmin(admin.ModelAdmin):
64 form = GroupForm
65 list_display = ["acronym", "name", "type", "state", "time", "role_list"]
66 list_display_links = ["acronym", "name"]
67 list_filter = ["type", "state", "time"]
68 search_fields = ["acronym", "name"]
69 ordering = ["name"]
70 raw_id_fields = ["charter", "parent"]
71 inlines = [RoleInline, GroupURLInline]
72 prepopulated_fields = {"acronym": ("name", )}
73
74 def role_list(self, obj):
75 roles = Role.objects.filter(group=obj).order_by("name", "person__name").select_related('person')
76 res = []
77 for r in roles:
78 res.append('<a href="../../person/person/%s/">%s</a> (<a href="../../group/role/%s/">%s)' % (r.person.pk, escape(r.person.plain_name()), r.pk, r.name.name))
79 return ", ".join(res)
80 role_list.short_description = "Persons" # type: ignore # https://github.com/python/mypy/issues/2087
81 role_list.allow_tags = True # type: ignore # https://github.com/python/mypy/issues/2087
82
83
84 # SDO reminder
85 def get_urls(self):
86 from ietf.utils.urls import url
87
88 def wrap(view):
89 def wrapper(*args, **kwargs):
90 return self.admin_site.admin_view(view)(*args, **kwargs)
91 return update_wrapper(wrapper, view)
92
93 info = self.model._meta.app_label, self.model._meta.model_name
94
95 urls = [
96 url(r'^reminder/$', wrap(self.send_reminder), name='%s_%s_reminder' % info),
97 url(r'^(.+)/reminder/$', wrap(self.send_one_reminder), name='%s_%s_one_reminder' % info),
98 ]
99 urls += super(GroupAdmin, self).get_urls()
100 return urls
101
102 def send_reminder(self, request, sdo=None):
103 opts = self.model._meta
104 app_label = opts.app_label
105
106 output = None
107 sdo_pk = sdo and sdo.pk or None
108 if request.method == 'POST' and request.POST.get('send', False):
109 command = load_command_class('ietf.liaisons', 'remind_update_sdo_list')
110 output=command.handle(return_output=True, sdo_pk=sdo_pk)
111 output='\n'.join(output)
112
113 context = {
114 'opts': opts,
115 'has_change_permission': self.has_change_permission(request),
116 'app_label': app_label,
117 'output': output,
118 'sdo': sdo,
119 }
120 return render(request, 'admin/group/group/send_sdo_reminder.html', context )
121
122
123 def send_one_reminder(self, request, object_id):
124 model = self.model
125 opts = model._meta
126
127 try:
128 obj = self.queryset(request).get(pk=unquote(object_id))
129 except model.DoesNotExist:
130 obj = None
131
132 if not self.has_change_permission(request, obj):
133 permission_denied(request, "You don't have edit permissions for this change.")
134
135 if obj is None:
136 raise Http404(_('%(name)s object with primary key %(key)r does not exist.') % {'name': force_text(opts.verbose_name), 'key': escape(object_id)})
137
138 return self.send_reminder(request, sdo=obj)
139
140
141 admin.site.register(Group, GroupAdmin)
142
143
144 class GroupFeaturesAdminForm(forms.ModelForm):
145 def clean_default_parent(self):
146 # called before form clean() method -- cannot access other fields
147 parent_acro = self.cleaned_data['default_parent'].strip().lower()
148 if len(parent_acro) > 0:
149 if Group.objects.filter(acronym=parent_acro).count() == 0:
150 raise forms.ValidationError(
151 'No group exists with acronym "%(acro)s"',
152 params=dict(acro=parent_acro),
153 )
154 return parent_acro
155
156 def clean(self):
157 # cleaning/validation that requires multiple fields
158 parent_acro = self.cleaned_data['default_parent']
159 if len(parent_acro) > 0:
160 parent_type = GroupTypeName.objects.filter(group__acronym=parent_acro).first()
161 if parent_type not in self.cleaned_data['parent_types']:
162 self.add_error(
163 'default_parent',
164 forms.ValidationError(
165 'Default parent group "%(acro)s" is type "%(gtype)s", which is not an allowed parent type.',
166 params=dict(acro=parent_acro, gtype=parent_type),
167 )
168 )
169
170 class GroupFeaturesAdmin(admin.ModelAdmin):
171 form = GroupFeaturesAdminForm
172 list_display = [
173 'type',
174 'need_parent',
175 'default_parent',
176 'gf_parent_types',
177 'has_milestones',
178 'has_chartering_process',
179 'has_documents',
180 'has_session_materials',
181 'has_nonsession_materials',
182 'has_meetings',
183 'has_reviews',
184 'has_default_jabber',
185 'acts_like_wg',
186 'create_wiki',
187 'custom_group_roles',
188 'customize_workflow',
189 'is_schedulable',
190 'show_on_agenda',
191 'agenda_filter_type',
192 'req_subm_approval',
193 'agenda_type',
194 'material_types',
195 'admin_roles',
196 'docman_roles',
197 'groupman_roles',
198 'groupman_authroles',
199 'matman_roles',
200 'role_order',
201 ]
202
203 def gf_parent_types(self, groupfeatures):
204 """Generate list of parent types; needed because many-to-many is not handled automatically"""
205 return ', '.join([gtn.slug for gtn in groupfeatures.parent_types.all()])
206 gf_parent_types.short_description = 'Parent Types' # type: ignore # https://github.com/python/mypy/issues/2087
207
208 admin.site.register(GroupFeatures, GroupFeaturesAdmin)
209
210 class GroupHistoryAdmin(admin.ModelAdmin):
211 list_display = ["time", "acronym", "name", "type"]
212 list_display_links = ["acronym", "name"]
213 list_filter = ["type"]
214 search_fields = ["acronym", "name"]
215 ordering = ["name"]
216 raw_id_fields = ["group", "parent"]
217
218 admin.site.register(GroupHistory, GroupHistoryAdmin)
219
220 class GroupURLAdmin(admin.ModelAdmin):
221 list_display = ['id', 'group', 'name', 'url']
222 raw_id_fields = ['group']
223 search_fields = ['name']
224 admin.site.register(GroupURL, GroupURLAdmin)
225
226 class GroupMilestoneAdmin(admin.ModelAdmin):
227 list_display = ["group", "desc", "due", "resolved", "time"]
228 search_fields = ["group__name", "group__acronym", "desc", "resolved"]
229 raw_id_fields = ["group", "docs"]
230 admin.site.register(GroupMilestone, GroupMilestoneAdmin)
231 admin.site.register(GroupMilestoneHistory, GroupMilestoneAdmin)
232
233 class GroupStateTransitionsAdmin(admin.ModelAdmin):
234 list_display = ['id', 'group', 'state']
235 raw_id_fields = ['group', 'state']
236 admin.site.register(GroupStateTransitions, GroupStateTransitionsAdmin)
237
238 class RoleAdmin(admin.ModelAdmin):
239 list_display = ["name", "person", "email", "group"]
240 list_display_links = ["name"]
241 search_fields = ["name__name", "person__name", "email__address"]
242 list_filter = ["name", "group"]
243 ordering = ["id"]
244 raw_id_fields = ["email", "person", "group"]
245 admin.site.register(Role, RoleAdmin)
246 admin.site.register(RoleHistory, RoleAdmin)
247
248 class GroupEventAdmin(admin.ModelAdmin):
249 list_display = ["id", "group", "time", "type", "by", ]
250 search_fields = ["group__name", "group__acronym"]
251 admin.site.register(GroupEvent, GroupEventAdmin)
252
253 class ChangeStateGroupEventAdmin(admin.ModelAdmin):
254 list_display = ["id", "group", "state", "time", "type", "by", ]
255 list_filter = ["state", "time", ]
256 search_fields = ["group__name", "group__acronym"]
257 admin.site.register(ChangeStateGroupEvent, ChangeStateGroupEventAdmin)
258
259 class MilestoneGroupEventAdmin(admin.ModelAdmin):
260 list_display = ['id', 'group', 'time', 'type', 'by', 'desc', 'milestone']
261 list_filter = ['time']
262 raw_id_fields = ['group', 'by', 'milestone']
263 admin.site.register(MilestoneGroupEvent, MilestoneGroupEventAdmin)
264
265 class GroupExtResourceAdminForm(forms.ModelForm):
266 def clean(self):
267 validate_external_resource_value(self.cleaned_data['name'],self.cleaned_data['value'])
268
269 class GroupExtResourceAdmin(admin.ModelAdmin):
270 form = GroupExtResourceAdminForm
271 list_display = ['id', 'group', 'name', 'display_name', 'value',]
272 search_fields = ['group__acronym', 'value', 'display_name', 'name__slug',]
273 raw_id_fields = ['group', ]
274 admin.site.register(GroupExtResource, GroupExtResourceAdmin)
275
[end of ietf/group/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ietf/group/admin.py b/ietf/group/admin.py
--- a/ietf/group/admin.py
+++ b/ietf/group/admin.py
@@ -34,31 +34,50 @@
model = GroupURL
class GroupForm(forms.ModelForm):
+ # Use CharField with our own validation instead of default SlugField. The real check is in the clean() method.
+ acronym = forms.CharField(min_length=2, max_length=40)
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.fields['type'].required = True # require this even though the model field can nominally be null
+
class Meta:
model = Group
fields = '__all__'
- def clean_acronym(self):
- ''' Constrain the acronym form. Note that this doesn't look for collisions.
- See ietf.group.forms.GroupForm.clean_acronym()
- '''
- acronym = self.cleaned_data['acronym'].strip().lower()
- if not self.instance.pk:
- type = self.cleaned_data['type']
- if GroupFeatures.objects.get(type=type).has_documents:
- if not re.match(r'^[a-z][a-z0-9]+$', acronym):
- raise forms.ValidationError("Acronym is invalid, for groups that create documents, the acronym must be at least two characters and only contain lowercase letters and numbers starting with a letter.")
- else:
- if not re.match(r'^[a-z][a-z0-9-]*[a-z0-9]$', acronym):
- raise forms.ValidationError("Acronym is invalid, must be at least two characters and only contain lowercase letters and numbers starting with a letter. It may contain hyphens, but that is discouraged.")
- return acronym
-
def clean_used_roles(self):
data = self.cleaned_data['used_roles']
if data is None or data == '':
raise forms.ValidationError("Must contain a valid json expression. To use the defaults prove an empty list: []")
return data
-
+
+ def clean(self):
+ """Clean parts of the form that involve multiple fields"""
+ # Constrain the acronym form. Note that this doesn't look for collisions.
+ # See ietf.group.forms.GroupForm.clean_acronym()
+ if 'acronym' in self.cleaned_data:
+ acronym = self.cleaned_data['acronym'].strip().lower()
+ self.cleaned_data['acronym'] = acronym
+ if 'type' in self.cleaned_data and not self.instance.pk:
+ features = GroupFeatures.objects.get(type=self.cleaned_data['type'])
+ new_and_has_documents = features.has_documents if features else False
+ else:
+ new_and_has_documents = False
+ if new_and_has_documents:
+ valid_re = r'^[a-z][a-z0-9]+$'
+ error_msg = (
+ 'Acronym is invalid. For groups that create documents, the acronym must be at least '
+ 'two characters and only contain lowercase letters and numbers starting with a letter.'
+ )
+ else:
+ valid_re = r'^[a-z][a-z0-9-]*[a-z0-9]$'
+ error_msg = (
+ 'Acronym is invalid. It must be at least two characters and only contain lowercase '
+ 'letters and numbers starting with a letter. It may contain hyphens, but that is discouraged.'
+ )
+ if not re.match(valid_re, acronym):
+ self.add_error('acronym', error_msg)
+
class GroupAdmin(admin.ModelAdmin):
form = GroupForm
| {"golden_diff": "diff --git a/ietf/group/admin.py b/ietf/group/admin.py\n--- a/ietf/group/admin.py\n+++ b/ietf/group/admin.py\n@@ -34,31 +34,50 @@\n model = GroupURL\n \n class GroupForm(forms.ModelForm):\n+ # Use CharField with our own validation instead of default SlugField. The real check is in the clean() method.\n+ acronym = forms.CharField(min_length=2, max_length=40)\n+\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.fields['type'].required = True # require this even though the model field can nominally be null\n+\n class Meta:\n model = Group\n fields = '__all__'\n \n- def clean_acronym(self):\n- ''' Constrain the acronym form. Note that this doesn't look for collisions.\n- See ietf.group.forms.GroupForm.clean_acronym()\n- '''\n- acronym = self.cleaned_data['acronym'].strip().lower()\n- if not self.instance.pk:\n- type = self.cleaned_data['type']\n- if GroupFeatures.objects.get(type=type).has_documents:\n- if not re.match(r'^[a-z][a-z0-9]+$', acronym):\n- raise forms.ValidationError(\"Acronym is invalid, for groups that create documents, the acronym must be at least two characters and only contain lowercase letters and numbers starting with a letter.\")\n- else:\n- if not re.match(r'^[a-z][a-z0-9-]*[a-z0-9]$', acronym):\n- raise forms.ValidationError(\"Acronym is invalid, must be at least two characters and only contain lowercase letters and numbers starting with a letter. It may contain hyphens, but that is discouraged.\")\n- return acronym\n-\n def clean_used_roles(self):\n data = self.cleaned_data['used_roles']\n if data is None or data == '':\n raise forms.ValidationError(\"Must contain a valid json expression. To use the defaults prove an empty list: []\")\n return data\n- \n+\n+ def clean(self):\n+ \"\"\"Clean parts of the form that involve multiple fields\"\"\"\n+ # Constrain the acronym form. Note that this doesn't look for collisions.\n+ # See ietf.group.forms.GroupForm.clean_acronym()\n+ if 'acronym' in self.cleaned_data:\n+ acronym = self.cleaned_data['acronym'].strip().lower()\n+ self.cleaned_data['acronym'] = acronym\n+ if 'type' in self.cleaned_data and not self.instance.pk:\n+ features = GroupFeatures.objects.get(type=self.cleaned_data['type'])\n+ new_and_has_documents = features.has_documents if features else False\n+ else:\n+ new_and_has_documents = False\n+ if new_and_has_documents:\n+ valid_re = r'^[a-z][a-z0-9]+$'\n+ error_msg = (\n+ 'Acronym is invalid. For groups that create documents, the acronym must be at least '\n+ 'two characters and only contain lowercase letters and numbers starting with a letter.'\n+ )\n+ else:\n+ valid_re = r'^[a-z][a-z0-9-]*[a-z0-9]$'\n+ error_msg = (\n+ 'Acronym is invalid. It must be at least two characters and only contain lowercase '\n+ 'letters and numbers starting with a letter. 
It may contain hyphens, but that is discouraged.'\n+ )\n+ if not re.match(valid_re, acronym):\n+ self.add_error('acronym', error_msg)\n+\n \n class GroupAdmin(admin.ModelAdmin):\n form = GroupForm\n", "issue": "group acronym validation crashes in the admin if the group type has not been set.\nIt should raise a validation error instead if 'type' isn't in the form yet.\r\nSee https://github.com/ietf-tools/datatracker/blob/main/ietf/group/admin.py#L47\n", "before_files": [{"content": "# Copyright The IETF Trust 2010-2020, All Rights Reserved\n# -*- coding: utf-8 -*-\n\nimport re\n\nfrom functools import update_wrapper\n\nimport debug # pyflakes:ignore\n\nfrom django import forms\n\nfrom django.contrib import admin\nfrom django.contrib.admin.utils import unquote\nfrom django.core.management import load_command_class\nfrom django.http import Http404\nfrom django.shortcuts import render\nfrom django.utils.encoding import force_text\nfrom django.utils.html import escape\nfrom django.utils.translation import ugettext as _\n\nfrom ietf.group.models import (Group, GroupFeatures, GroupHistory, GroupEvent, GroupURL, GroupMilestone,\n GroupMilestoneHistory, GroupStateTransitions, Role, RoleHistory, ChangeStateGroupEvent,\n MilestoneGroupEvent, GroupExtResource, )\nfrom ietf.name.models import GroupTypeName\n\nfrom ietf.utils.validators import validate_external_resource_value\nfrom ietf.utils.response import permission_denied\n\nclass RoleInline(admin.TabularInline):\n model = Role\n raw_id_fields = [\"person\", \"email\"]\n\nclass GroupURLInline(admin.TabularInline):\n model = GroupURL\n\nclass GroupForm(forms.ModelForm):\n class Meta:\n model = Group\n fields = '__all__'\n\n def clean_acronym(self):\n ''' Constrain the acronym form. Note that this doesn't look for collisions.\n See ietf.group.forms.GroupForm.clean_acronym()\n '''\n acronym = self.cleaned_data['acronym'].strip().lower()\n if not self.instance.pk:\n type = self.cleaned_data['type']\n if GroupFeatures.objects.get(type=type).has_documents:\n if not re.match(r'^[a-z][a-z0-9]+$', acronym):\n raise forms.ValidationError(\"Acronym is invalid, for groups that create documents, the acronym must be at least two characters and only contain lowercase letters and numbers starting with a letter.\")\n else:\n if not re.match(r'^[a-z][a-z0-9-]*[a-z0-9]$', acronym):\n raise forms.ValidationError(\"Acronym is invalid, must be at least two characters and only contain lowercase letters and numbers starting with a letter. It may contain hyphens, but that is discouraged.\")\n return acronym\n\n def clean_used_roles(self):\n data = self.cleaned_data['used_roles']\n if data is None or data == '':\n raise forms.ValidationError(\"Must contain a valid json expression. 
To use the defaults prove an empty list: []\")\n return data\n \n\nclass GroupAdmin(admin.ModelAdmin):\n form = GroupForm\n list_display = [\"acronym\", \"name\", \"type\", \"state\", \"time\", \"role_list\"]\n list_display_links = [\"acronym\", \"name\"]\n list_filter = [\"type\", \"state\", \"time\"]\n search_fields = [\"acronym\", \"name\"]\n ordering = [\"name\"]\n raw_id_fields = [\"charter\", \"parent\"]\n inlines = [RoleInline, GroupURLInline]\n prepopulated_fields = {\"acronym\": (\"name\", )}\n\n def role_list(self, obj):\n roles = Role.objects.filter(group=obj).order_by(\"name\", \"person__name\").select_related('person')\n res = []\n for r in roles:\n res.append('<a href=\"../../person/person/%s/\">%s</a> (<a href=\"../../group/role/%s/\">%s)' % (r.person.pk, escape(r.person.plain_name()), r.pk, r.name.name))\n return \", \".join(res)\n role_list.short_description = \"Persons\" # type: ignore # https://github.com/python/mypy/issues/2087\n role_list.allow_tags = True # type: ignore # https://github.com/python/mypy/issues/2087\n \n\n # SDO reminder\n def get_urls(self):\n from ietf.utils.urls import url\n\n def wrap(view):\n def wrapper(*args, **kwargs):\n return self.admin_site.admin_view(view)(*args, **kwargs)\n return update_wrapper(wrapper, view)\n\n info = self.model._meta.app_label, self.model._meta.model_name\n\n urls = [\n url(r'^reminder/$', wrap(self.send_reminder), name='%s_%s_reminder' % info),\n url(r'^(.+)/reminder/$', wrap(self.send_one_reminder), name='%s_%s_one_reminder' % info),\n ]\n urls += super(GroupAdmin, self).get_urls()\n return urls\n\n def send_reminder(self, request, sdo=None):\n opts = self.model._meta\n app_label = opts.app_label\n\n output = None\n sdo_pk = sdo and sdo.pk or None\n if request.method == 'POST' and request.POST.get('send', False):\n command = load_command_class('ietf.liaisons', 'remind_update_sdo_list')\n output=command.handle(return_output=True, sdo_pk=sdo_pk)\n output='\\n'.join(output)\n\n context = {\n 'opts': opts,\n 'has_change_permission': self.has_change_permission(request),\n 'app_label': app_label,\n 'output': output,\n 'sdo': sdo,\n }\n return render(request, 'admin/group/group/send_sdo_reminder.html', context )\n\n\n def send_one_reminder(self, request, object_id):\n model = self.model\n opts = model._meta\n\n try:\n obj = self.queryset(request).get(pk=unquote(object_id))\n except model.DoesNotExist:\n obj = None\n\n if not self.has_change_permission(request, obj):\n permission_denied(request, \"You don't have edit permissions for this change.\")\n\n if obj is None:\n raise Http404(_('%(name)s object with primary key %(key)r does not exist.') % {'name': force_text(opts.verbose_name), 'key': escape(object_id)})\n\n return self.send_reminder(request, sdo=obj)\n \n\nadmin.site.register(Group, GroupAdmin)\n\n\nclass GroupFeaturesAdminForm(forms.ModelForm):\n def clean_default_parent(self):\n # called before form clean() method -- cannot access other fields\n parent_acro = self.cleaned_data['default_parent'].strip().lower()\n if len(parent_acro) > 0:\n if Group.objects.filter(acronym=parent_acro).count() == 0:\n raise forms.ValidationError(\n 'No group exists with acronym \"%(acro)s\"',\n params=dict(acro=parent_acro),\n )\n return parent_acro\n\n def clean(self):\n # cleaning/validation that requires multiple fields\n parent_acro = self.cleaned_data['default_parent']\n if len(parent_acro) > 0:\n parent_type = GroupTypeName.objects.filter(group__acronym=parent_acro).first()\n if parent_type not in 
self.cleaned_data['parent_types']:\n self.add_error(\n 'default_parent',\n forms.ValidationError(\n 'Default parent group \"%(acro)s\" is type \"%(gtype)s\", which is not an allowed parent type.',\n params=dict(acro=parent_acro, gtype=parent_type),\n )\n )\n\nclass GroupFeaturesAdmin(admin.ModelAdmin):\n form = GroupFeaturesAdminForm\n list_display = [\n 'type',\n 'need_parent',\n 'default_parent',\n 'gf_parent_types',\n 'has_milestones',\n 'has_chartering_process',\n 'has_documents',\n 'has_session_materials',\n 'has_nonsession_materials',\n 'has_meetings',\n 'has_reviews',\n 'has_default_jabber',\n 'acts_like_wg',\n 'create_wiki',\n 'custom_group_roles',\n 'customize_workflow',\n 'is_schedulable',\n 'show_on_agenda',\n 'agenda_filter_type',\n 'req_subm_approval',\n 'agenda_type',\n 'material_types',\n 'admin_roles',\n 'docman_roles',\n 'groupman_roles',\n 'groupman_authroles',\n 'matman_roles',\n 'role_order',\n ]\n\n def gf_parent_types(self, groupfeatures):\n \"\"\"Generate list of parent types; needed because many-to-many is not handled automatically\"\"\"\n return ', '.join([gtn.slug for gtn in groupfeatures.parent_types.all()])\n gf_parent_types.short_description = 'Parent Types' # type: ignore # https://github.com/python/mypy/issues/2087\n\nadmin.site.register(GroupFeatures, GroupFeaturesAdmin)\n\nclass GroupHistoryAdmin(admin.ModelAdmin):\n list_display = [\"time\", \"acronym\", \"name\", \"type\"]\n list_display_links = [\"acronym\", \"name\"]\n list_filter = [\"type\"]\n search_fields = [\"acronym\", \"name\"]\n ordering = [\"name\"]\n raw_id_fields = [\"group\", \"parent\"]\n\nadmin.site.register(GroupHistory, GroupHistoryAdmin)\n\nclass GroupURLAdmin(admin.ModelAdmin):\n list_display = ['id', 'group', 'name', 'url']\n raw_id_fields = ['group']\n search_fields = ['name']\nadmin.site.register(GroupURL, GroupURLAdmin)\n\nclass GroupMilestoneAdmin(admin.ModelAdmin):\n list_display = [\"group\", \"desc\", \"due\", \"resolved\", \"time\"]\n search_fields = [\"group__name\", \"group__acronym\", \"desc\", \"resolved\"]\n raw_id_fields = [\"group\", \"docs\"]\nadmin.site.register(GroupMilestone, GroupMilestoneAdmin)\nadmin.site.register(GroupMilestoneHistory, GroupMilestoneAdmin)\n\nclass GroupStateTransitionsAdmin(admin.ModelAdmin):\n list_display = ['id', 'group', 'state']\n raw_id_fields = ['group', 'state']\nadmin.site.register(GroupStateTransitions, GroupStateTransitionsAdmin)\n\nclass RoleAdmin(admin.ModelAdmin):\n list_display = [\"name\", \"person\", \"email\", \"group\"]\n list_display_links = [\"name\"]\n search_fields = [\"name__name\", \"person__name\", \"email__address\"]\n list_filter = [\"name\", \"group\"]\n ordering = [\"id\"]\n raw_id_fields = [\"email\", \"person\", \"group\"]\nadmin.site.register(Role, RoleAdmin)\nadmin.site.register(RoleHistory, RoleAdmin)\n\nclass GroupEventAdmin(admin.ModelAdmin):\n list_display = [\"id\", \"group\", \"time\", \"type\", \"by\", ]\n search_fields = [\"group__name\", \"group__acronym\"]\nadmin.site.register(GroupEvent, GroupEventAdmin)\n\nclass ChangeStateGroupEventAdmin(admin.ModelAdmin):\n list_display = [\"id\", \"group\", \"state\", \"time\", \"type\", \"by\", ]\n list_filter = [\"state\", \"time\", ]\n search_fields = [\"group__name\", \"group__acronym\"]\nadmin.site.register(ChangeStateGroupEvent, ChangeStateGroupEventAdmin)\n\nclass MilestoneGroupEventAdmin(admin.ModelAdmin):\n list_display = ['id', 'group', 'time', 'type', 'by', 'desc', 'milestone']\n list_filter = ['time']\n raw_id_fields = ['group', 'by', 
'milestone']\nadmin.site.register(MilestoneGroupEvent, MilestoneGroupEventAdmin)\n\nclass GroupExtResourceAdminForm(forms.ModelForm):\n def clean(self):\n validate_external_resource_value(self.cleaned_data['name'],self.cleaned_data['value'])\n\nclass GroupExtResourceAdmin(admin.ModelAdmin):\n form = GroupExtResourceAdminForm\n list_display = ['id', 'group', 'name', 'display_name', 'value',]\n search_fields = ['group__acronym', 'value', 'display_name', 'name__slug',]\n raw_id_fields = ['group', ]\nadmin.site.register(GroupExtResource, GroupExtResourceAdmin)\n", "path": "ietf/group/admin.py"}]} | 3,816 | 796 |
gh_patches_debug_29299 | rasdani/github-patches | git_diff | pypa__setuptools-2961 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] DistutilsMetaFinder appears multiple times on sys.meta_path
### setuptools version
setuptools==60.0.4
### Python version
python3.8
### OS
Any
### Additional environment information
N/A
### Description
The _DistutilsMetaFinder meta_path finder shim is installed multiple times, at least twice by `site.py` via the `.pth` shim, and once by `ensure_local_distutils` when `setuptools` is actually imported. It's unlikely to be a problem, especially since it's literally the same instance, but `add_shim` should be idempotent instead of blindly inserting the shim each time it's called. It also appears that the intent was to remove the shim entirely once distutils has been loaded, but since it's added more than once, the removal in the import leaves two references to it in `sys.meta_path` after setuptools has been imported.
### Expected behavior
After setuptools has been imported, the distutils shim finder is no longer on the meta path.
### How to Reproduce
(with setuptools>60 installed):
python -c "import sys; print(f'Before: {sys.meta_path}'); import setuptools; print(f'After: {sys.meta_path}')"
### Output
```console
Before: [<_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_virtualenv._Finder object at 0x7fe4d2806f70>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>]
After: [<_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_virtualenv._Finder object at 0x7fe4d2806f70>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>, <pkg_resources.extern.VendorImporter object at 0x7fe4d21c5460>, <setuptools.extern.VendorImporter object at 0x7fe4d1f2a100>]
```
### Code of Conduct
- [X] I agree to follow the PSF Code of Conduct
[BUG] DistutilsMetaFinder appears multiple times on sys.meta_path
### setuptools version
setuptools==60.0.4
### Python version
python3.8
### OS
Any
### Additional environment information
N/A
### Description
The _DistutilsMetaFinder meta_path finder shim is installed multiple times, at least twice by `site.py` via the `.pth` shim, and once by `ensure_local_distutils` when `setuptools` is actually imported. It's unlikely to be a problem, especially since it's literally the same instance, but `add_shim` should be idempotent instead of blindly inserting the shim each time it's called. It also appears that the intent was to remove the shim entirely once distutils has been loaded, but since it's added more than once, the removal in the import leaves two references to it in `sys.meta_path` after setuptools has been imported.
### Expected behavior
After setuptools has been imported, the distutils shim finder is no longer on the meta path.
### How to Reproduce
(with setuptools>60 installed):
python -c "import sys; print(f'Before: {sys.meta_path}'); import setuptools; print(f'After: {sys.meta_path}')"
### Output
```console
Before: [<_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_virtualenv._Finder object at 0x7fe4d2806f70>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>]
After: [<_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_virtualenv._Finder object at 0x7fe4d2806f70>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>, <pkg_resources.extern.VendorImporter object at 0x7fe4d21c5460>, <setuptools.extern.VendorImporter object at 0x7fe4d1f2a100>]
```
### Code of Conduct
- [X] I agree to follow the PSF Code of Conduct
</issue>
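A minimal sketch of the idempotent insertion the report asks for, assuming the `DISTUTILS_FINDER` singleton defined in `_distutils_hack/__init__.py` below. The merged patch takes a slightly different route, adding an `ensure_shim()` helper plus a `shim()` context manager, but the guard is the same idea.

```python
import sys

# Hypothetical sketch: repeated calls (e.g. from several .pth activations)
# become harmless because the finder is only inserted when absent.
def add_shim():
    if DISTUTILS_FINDER not in sys.meta_path:
        sys.meta_path.insert(0, DISTUTILS_FINDER)
```

Combined with the existing `remove_shim()`, this keeps at most one finder instance on `sys.meta_path` no matter how many times the `.pth` hook runs.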
<code>
[start of _distutils_hack/__init__.py]
1 import sys
2 import os
3 import re
4 import importlib
5 import warnings
6
7
8 is_pypy = '__pypy__' in sys.builtin_module_names
9
10
11 warnings.filterwarnings('ignore',
12 r'.+ distutils\b.+ deprecated',
13 DeprecationWarning)
14
15
16 def warn_distutils_present():
17 if 'distutils' not in sys.modules:
18 return
19 if is_pypy and sys.version_info < (3, 7):
20 # PyPy for 3.6 unconditionally imports distutils, so bypass the warning
21 # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250
22 return
23 warnings.warn(
24 "Distutils was imported before Setuptools, but importing Setuptools "
25 "also replaces the `distutils` module in `sys.modules`. This may lead "
26 "to undesirable behaviors or errors. To avoid these issues, avoid "
27 "using distutils directly, ensure that setuptools is installed in the "
28 "traditional way (e.g. not an editable install), and/or make sure "
29 "that setuptools is always imported before distutils.")
30
31
32 def clear_distutils():
33 if 'distutils' not in sys.modules:
34 return
35 warnings.warn("Setuptools is replacing distutils.")
36 mods = [name for name in sys.modules if re.match(r'distutils\b', name)]
37 for name in mods:
38 del sys.modules[name]
39
40
41 def enabled():
42 """
43 Allow selection of distutils by environment variable.
44 """
45 which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'local')
46 return which == 'local'
47
48
49 def ensure_local_distutils():
50 clear_distutils()
51
52 # With the DistutilsMetaFinder in place,
53 # perform an import to cause distutils to be
54 # loaded from setuptools._distutils. Ref #2906.
55 add_shim()
56 importlib.import_module('distutils')
57 remove_shim()
58
59 # check that submodules load as expected
60 core = importlib.import_module('distutils.core')
61 assert '_distutils' in core.__file__, core.__file__
62
63
64 def do_override():
65 """
66 Ensure that the local copy of distutils is preferred over stdlib.
67
68 See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401
69 for more motivation.
70 """
71 if enabled():
72 warn_distutils_present()
73 ensure_local_distutils()
74
75
76 class DistutilsMetaFinder:
77 def find_spec(self, fullname, path, target=None):
78 if path is not None:
79 return
80
81 method_name = 'spec_for_{fullname}'.format(**locals())
82 method = getattr(self, method_name, lambda: None)
83 return method()
84
85 def spec_for_distutils(self):
86 import importlib.abc
87 import importlib.util
88
89 class DistutilsLoader(importlib.abc.Loader):
90
91 def create_module(self, spec):
92 return importlib.import_module('setuptools._distutils')
93
94 def exec_module(self, module):
95 pass
96
97 return importlib.util.spec_from_loader('distutils', DistutilsLoader())
98
99 def spec_for_pip(self):
100 """
101 Ensure stdlib distutils when running under pip.
102 See pypa/pip#8761 for rationale.
103 """
104 if self.pip_imported_during_build():
105 return
106 clear_distutils()
107 self.spec_for_distutils = lambda: None
108
109 @classmethod
110 def pip_imported_during_build(cls):
111 """
112 Detect if pip is being imported in a build script. Ref #2355.
113 """
114 import traceback
115 return any(
116 cls.frame_file_is_setup(frame)
117 for frame, line in traceback.walk_stack(None)
118 )
119
120 @staticmethod
121 def frame_file_is_setup(frame):
122 """
123 Return True if the indicated frame suggests a setup.py file.
124 """
125 # some frames may not have __file__ (#2940)
126 return frame.f_globals.get('__file__', '').endswith('setup.py')
127
128
129 DISTUTILS_FINDER = DistutilsMetaFinder()
130
131
132 def add_shim():
133 sys.meta_path.insert(0, DISTUTILS_FINDER)
134
135
136 def remove_shim():
137 try:
138 sys.meta_path.remove(DISTUTILS_FINDER)
139 except ValueError:
140 pass
141
[end of _distutils_hack/__init__.py]
[start of setup.py]
1 #!/usr/bin/env python
2
3 import os
4 import sys
5 import textwrap
6
7 import setuptools
8 from setuptools.command.install import install
9
10 here = os.path.dirname(__file__)
11
12
13 package_data = dict(
14 setuptools=['script (dev).tmpl', 'script.tmpl', 'site-patch.py'],
15 )
16
17 force_windows_specific_files = (
18 os.environ.get("SETUPTOOLS_INSTALL_WINDOWS_SPECIFIC_FILES", "1").lower()
19 not in ("", "0", "false", "no")
20 )
21
22 include_windows_files = sys.platform == 'win32' or force_windows_specific_files
23
24 if include_windows_files:
25 package_data.setdefault('setuptools', []).extend(['*.exe'])
26 package_data.setdefault('setuptools.command', []).extend(['*.xml'])
27
28
29 def pypi_link(pkg_filename):
30 """
31 Given the filename, including md5 fragment, construct the
32 dependency link for PyPI.
33 """
34 root = 'https://files.pythonhosted.org/packages/source'
35 name, sep, rest = pkg_filename.partition('-')
36 parts = root, name[0], name, pkg_filename
37 return '/'.join(parts)
38
39
40 class install_with_pth(install):
41 """
42 Custom install command to install a .pth file for distutils patching.
43
44 This hack is necessary because there's no standard way to install behavior
45 on startup (and it's debatable if there should be one). This hack (ab)uses
46 the `extra_path` behavior in Setuptools to install a `.pth` file with
47 implicit behavior on startup to give higher precedence to the local version
48 of `distutils` over the version from the standard library.
49
50 Please do not replicate this behavior.
51 """
52
53 _pth_name = 'distutils-precedence'
54 _pth_contents = textwrap.dedent("""
55 import os
56 var = 'SETUPTOOLS_USE_DISTUTILS'
57 enabled = os.environ.get(var, 'local') == 'local'
58 enabled and __import__('_distutils_hack').add_shim()
59 """).lstrip().replace('\n', '; ')
60
61 def initialize_options(self):
62 install.initialize_options(self)
63 self.extra_path = self._pth_name, self._pth_contents
64
65 def finalize_options(self):
66 install.finalize_options(self)
67 self._restore_install_lib()
68
69 def _restore_install_lib(self):
70 """
71 Undo secondary effect of `extra_path` adding to `install_lib`
72 """
73 suffix = os.path.relpath(self.install_lib, self.install_libbase)
74
75 if suffix.strip() == self._pth_contents.strip():
76 self.install_lib = self.install_libbase
77
78
79 setup_params = dict(
80 cmdclass={'install': install_with_pth},
81 package_data=package_data,
82 )
83
84 if __name__ == '__main__':
85 # allow setup.py to run from another directory
86 here and os.chdir(here)
87 dist = setuptools.setup(**setup_params)
88
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/_distutils_hack/__init__.py b/_distutils_hack/__init__.py
--- a/_distutils_hack/__init__.py
+++ b/_distutils_hack/__init__.py
@@ -3,6 +3,7 @@
import re
import importlib
import warnings
+import contextlib
is_pypy = '__pypy__' in sys.builtin_module_names
@@ -52,9 +53,8 @@
# With the DistutilsMetaFinder in place,
# perform an import to cause distutils to be
# loaded from setuptools._distutils. Ref #2906.
- add_shim()
- importlib.import_module('distutils')
- remove_shim()
+ with shim():
+ importlib.import_module('distutils')
# check that submodules load as expected
core = importlib.import_module('distutils.core')
@@ -129,6 +129,19 @@
DISTUTILS_FINDER = DistutilsMetaFinder()
+def ensure_shim():
+ DISTUTILS_FINDER in sys.meta_path or add_shim()
+
+
[email protected]
+def shim():
+ add_shim()
+ try:
+ yield
+ finally:
+ remove_shim()
+
+
def add_shim():
sys.meta_path.insert(0, DISTUTILS_FINDER)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -55,7 +55,7 @@
import os
var = 'SETUPTOOLS_USE_DISTUTILS'
enabled = os.environ.get(var, 'local') == 'local'
- enabled and __import__('_distutils_hack').add_shim()
+ enabled and __import__('_distutils_hack').ensure_shim()
""").lstrip().replace('\n', '; ')
def initialize_options(self):
| {"golden_diff": "diff --git a/_distutils_hack/__init__.py b/_distutils_hack/__init__.py\n--- a/_distutils_hack/__init__.py\n+++ b/_distutils_hack/__init__.py\n@@ -3,6 +3,7 @@\n import re\n import importlib\n import warnings\n+import contextlib\n \n \n is_pypy = '__pypy__' in sys.builtin_module_names\n@@ -52,9 +53,8 @@\n # With the DistutilsMetaFinder in place,\n # perform an import to cause distutils to be\n # loaded from setuptools._distutils. Ref #2906.\n- add_shim()\n- importlib.import_module('distutils')\n- remove_shim()\n+ with shim():\n+ importlib.import_module('distutils')\n \n # check that submodules load as expected\n core = importlib.import_module('distutils.core')\n@@ -129,6 +129,19 @@\n DISTUTILS_FINDER = DistutilsMetaFinder()\n \n \n+def ensure_shim():\n+ DISTUTILS_FINDER in sys.meta_path or add_shim()\n+\n+\[email protected]\n+def shim():\n+ add_shim()\n+ try:\n+ yield\n+ finally:\n+ remove_shim()\n+\n+\n def add_shim():\n sys.meta_path.insert(0, DISTUTILS_FINDER)\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -55,7 +55,7 @@\n import os\n var = 'SETUPTOOLS_USE_DISTUTILS'\n enabled = os.environ.get(var, 'local') == 'local'\n- enabled and __import__('_distutils_hack').add_shim()\n+ enabled and __import__('_distutils_hack').ensure_shim()\n \"\"\").lstrip().replace('\\n', '; ')\n \n def initialize_options(self):\n", "issue": "[BUG] DistutilsMetaFinder appears multiple times on sys.meta_path\n### setuptools version\n\nsetuptools==60.0.4\n\n### Python version\n\npython3.8\n\n### OS\n\nAny\n\n### Additional environment information\n\nN/A\n\n### Description\n\nThe _DistutilsMetaFinder meta_path finder shim is installed multiple times, at least twice by `site.py` via the `.pth` shim, and once by `ensure_local_distutils` when `setuptools` is actually imported. It's unlikely to be a problem, especially since it's literally the same instance, but `add_shim` should be idempotent instead of blindly inserting the shim each time it's called. 
It also appears that the intent was to remove the shim entirely once distutils has been loaded, but since it's added more than once, the removal in the import leaves two references to it in `sys.meta_path` after setuptools has been imported.\n\n### Expected behavior\n\nAfter setuptools has been imported, the distutils shim finder is no longer on the meta path.\n\n### How to Reproduce\n\n(with setuptools>60 installed):\r\npython -c \"import sys; print(f'Before: {sys.meta_path}'); import setuptools; print(f'After: {sys.meta_path}')\"\r\n\n\n### Output\n\n```console\r\nBefore: [<_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_virtualenv._Finder object at 0x7fe4d2806f70>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>]\r\nAfter: [<_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_virtualenv._Finder object at 0x7fe4d2806f70>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>, <pkg_resources.extern.VendorImporter object at 0x7fe4d21c5460>, <setuptools.extern.VendorImporter object at 0x7fe4d1f2a100>]\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the PSF Code of Conduct\n[BUG] DistutilsMetaFinder appears multiple times on sys.meta_path\n### setuptools version\n\nsetuptools==60.0.4\n\n### Python version\n\npython3.8\n\n### OS\n\nAny\n\n### Additional environment information\n\nN/A\n\n### Description\n\nThe _DistutilsMetaFinder meta_path finder shim is installed multiple times, at least twice by `site.py` via the `.pth` shim, and once by `ensure_local_distutils` when `setuptools` is actually imported. It's unlikely to be a problem, especially since it's literally the same instance, but `add_shim` should be idempotent instead of blindly inserting the shim each time it's called. 
It also appears that the intent was to remove the shim entirely once distutils has been loaded, but since it's added more than once, the removal in the import leaves two references to it in `sys.meta_path` after setuptools has been imported.\n\n### Expected behavior\n\nAfter setuptools has been imported, the distutils shim finder is no longer on the meta path.\n\n### How to Reproduce\n\n(with setuptools>60 installed):\r\npython -c \"import sys; print(f'Before: {sys.meta_path}'); import setuptools; print(f'After: {sys.meta_path}')\"\r\n\n\n### Output\n\n```console\r\nBefore: [<_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_virtualenv._Finder object at 0x7fe4d2806f70>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>]\r\nAfter: [<_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_distutils_hack.DistutilsMetaFinder object at 0x7fe4d2688700>, <_virtualenv._Finder object at 0x7fe4d2806f70>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>, <pkg_resources.extern.VendorImporter object at 0x7fe4d21c5460>, <setuptools.extern.VendorImporter object at 0x7fe4d1f2a100>]\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the PSF Code of Conduct\n", "before_files": [{"content": "import sys\nimport os\nimport re\nimport importlib\nimport warnings\n\n\nis_pypy = '__pypy__' in sys.builtin_module_names\n\n\nwarnings.filterwarnings('ignore',\n r'.+ distutils\\b.+ deprecated',\n DeprecationWarning)\n\n\ndef warn_distutils_present():\n if 'distutils' not in sys.modules:\n return\n if is_pypy and sys.version_info < (3, 7):\n # PyPy for 3.6 unconditionally imports distutils, so bypass the warning\n # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250\n return\n warnings.warn(\n \"Distutils was imported before Setuptools, but importing Setuptools \"\n \"also replaces the `distutils` module in `sys.modules`. This may lead \"\n \"to undesirable behaviors or errors. To avoid these issues, avoid \"\n \"using distutils directly, ensure that setuptools is installed in the \"\n \"traditional way (e.g. not an editable install), and/or make sure \"\n \"that setuptools is always imported before distutils.\")\n\n\ndef clear_distutils():\n if 'distutils' not in sys.modules:\n return\n warnings.warn(\"Setuptools is replacing distutils.\")\n mods = [name for name in sys.modules if re.match(r'distutils\\b', name)]\n for name in mods:\n del sys.modules[name]\n\n\ndef enabled():\n \"\"\"\n Allow selection of distutils by environment variable.\n \"\"\"\n which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'local')\n return which == 'local'\n\n\ndef ensure_local_distutils():\n clear_distutils()\n\n # With the DistutilsMetaFinder in place,\n # perform an import to cause distutils to be\n # loaded from setuptools._distutils. 
Ref #2906.\n add_shim()\n importlib.import_module('distutils')\n remove_shim()\n\n # check that submodules load as expected\n core = importlib.import_module('distutils.core')\n assert '_distutils' in core.__file__, core.__file__\n\n\ndef do_override():\n \"\"\"\n Ensure that the local copy of distutils is preferred over stdlib.\n\n See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401\n for more motivation.\n \"\"\"\n if enabled():\n warn_distutils_present()\n ensure_local_distutils()\n\n\nclass DistutilsMetaFinder:\n def find_spec(self, fullname, path, target=None):\n if path is not None:\n return\n\n method_name = 'spec_for_{fullname}'.format(**locals())\n method = getattr(self, method_name, lambda: None)\n return method()\n\n def spec_for_distutils(self):\n import importlib.abc\n import importlib.util\n\n class DistutilsLoader(importlib.abc.Loader):\n\n def create_module(self, spec):\n return importlib.import_module('setuptools._distutils')\n\n def exec_module(self, module):\n pass\n\n return importlib.util.spec_from_loader('distutils', DistutilsLoader())\n\n def spec_for_pip(self):\n \"\"\"\n Ensure stdlib distutils when running under pip.\n See pypa/pip#8761 for rationale.\n \"\"\"\n if self.pip_imported_during_build():\n return\n clear_distutils()\n self.spec_for_distutils = lambda: None\n\n @classmethod\n def pip_imported_during_build(cls):\n \"\"\"\n Detect if pip is being imported in a build script. Ref #2355.\n \"\"\"\n import traceback\n return any(\n cls.frame_file_is_setup(frame)\n for frame, line in traceback.walk_stack(None)\n )\n\n @staticmethod\n def frame_file_is_setup(frame):\n \"\"\"\n Return True if the indicated frame suggests a setup.py file.\n \"\"\"\n # some frames may not have __file__ (#2940)\n return frame.f_globals.get('__file__', '').endswith('setup.py')\n\n\nDISTUTILS_FINDER = DistutilsMetaFinder()\n\n\ndef add_shim():\n sys.meta_path.insert(0, DISTUTILS_FINDER)\n\n\ndef remove_shim():\n try:\n sys.meta_path.remove(DISTUTILS_FINDER)\n except ValueError:\n pass\n", "path": "_distutils_hack/__init__.py"}, {"content": "#!/usr/bin/env python\n\nimport os\nimport sys\nimport textwrap\n\nimport setuptools\nfrom setuptools.command.install import install\n\nhere = os.path.dirname(__file__)\n\n\npackage_data = dict(\n setuptools=['script (dev).tmpl', 'script.tmpl', 'site-patch.py'],\n)\n\nforce_windows_specific_files = (\n os.environ.get(\"SETUPTOOLS_INSTALL_WINDOWS_SPECIFIC_FILES\", \"1\").lower()\n not in (\"\", \"0\", \"false\", \"no\")\n)\n\ninclude_windows_files = sys.platform == 'win32' or force_windows_specific_files\n\nif include_windows_files:\n package_data.setdefault('setuptools', []).extend(['*.exe'])\n package_data.setdefault('setuptools.command', []).extend(['*.xml'])\n\n\ndef pypi_link(pkg_filename):\n \"\"\"\n Given the filename, including md5 fragment, construct the\n dependency link for PyPI.\n \"\"\"\n root = 'https://files.pythonhosted.org/packages/source'\n name, sep, rest = pkg_filename.partition('-')\n parts = root, name[0], name, pkg_filename\n return '/'.join(parts)\n\n\nclass install_with_pth(install):\n \"\"\"\n Custom install command to install a .pth file for distutils patching.\n\n This hack is necessary because there's no standard way to install behavior\n on startup (and it's debatable if there should be one). 
This hack (ab)uses\n the `extra_path` behavior in Setuptools to install a `.pth` file with\n implicit behavior on startup to give higher precedence to the local version\n of `distutils` over the version from the standard library.\n\n Please do not replicate this behavior.\n \"\"\"\n\n _pth_name = 'distutils-precedence'\n _pth_contents = textwrap.dedent(\"\"\"\n import os\n var = 'SETUPTOOLS_USE_DISTUTILS'\n enabled = os.environ.get(var, 'local') == 'local'\n enabled and __import__('_distutils_hack').add_shim()\n \"\"\").lstrip().replace('\\n', '; ')\n\n def initialize_options(self):\n install.initialize_options(self)\n self.extra_path = self._pth_name, self._pth_contents\n\n def finalize_options(self):\n install.finalize_options(self)\n self._restore_install_lib()\n\n def _restore_install_lib(self):\n \"\"\"\n Undo secondary effect of `extra_path` adding to `install_lib`\n \"\"\"\n suffix = os.path.relpath(self.install_lib, self.install_libbase)\n\n if suffix.strip() == self._pth_contents.strip():\n self.install_lib = self.install_libbase\n\n\nsetup_params = dict(\n cmdclass={'install': install_with_pth},\n package_data=package_data,\n)\n\nif __name__ == '__main__':\n # allow setup.py to run from another directory\n here and os.chdir(here)\n dist = setuptools.setup(**setup_params)\n", "path": "setup.py"}]} | 3,805 | 427 |
gh_patches_debug_38808 | rasdani/github-patches | git_diff | pulp__pulpcore-2233 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PulpImport/Export of kickstart repos with subrepos broken
See https://bugzilla.redhat.com/show_bug.cgi?id=2040870 for details.
</issue>
<code>
[start of pulpcore/app/importexport.py]
1 import os
2 import io
3 import json
4 import tarfile
5 import tempfile
6 import logging
7
8 from django.conf import settings
9 from django.db.models.query import QuerySet
10
11 from pulpcore.app.apps import get_plugin_config
12 from pulpcore.app.models.progress import ProgressReport
13 from pulpcore.app.models.repository import Repository
14 from pulpcore.app.modelresource import (
15 ArtifactResource,
16 ContentArtifactResource,
17 RepositoryResource,
18 )
19 from pulpcore.constants import TASK_STATES, EXPORT_BATCH_SIZE
20
21 log = logging.getLogger(__name__)
22
23
24 def _write_export(the_tarfile, resource, dest_dir=None):
25 """
26 Write the JSON export for the specified resource to the specified tarfile.
27
28 The resulting file will be found at <dest_dir>/<resource.__class__.__name__>.json. If dest_dir
29 is None, the file will be added at the 'top level' of the_tarfile.
30
31 Export-files are UTF-8 encoded.
32
33 Args:
34 the_tarfile (tarfile.Tarfile): tarfile we are writing into
35 resource (import_export.resources.ModelResource): ModelResource to be exported
36 dest_dir str(directory-path): directory 'inside' the tarfile to write to
37 """
38 filename = "{}.{}.json".format(resource.__module__, type(resource).__name__)
39 if dest_dir:
40 dest_filename = os.path.join(dest_dir, filename)
41 else:
42 dest_filename = filename
43
44 # If the resource is the type of QuerySet, then export the data in batch to save memory.
45 # Otherwise, export all data in oneshot. This is because the underlying libraries
46 # (json; django-import-export) do not support to stream the output to file, we export
47 # the data in batches to memory and concatenate the json lists via string manipulation.
48 with tempfile.NamedTemporaryFile(dir=os.getcwd(), mode="w", encoding="utf8") as temp_file:
49 if isinstance(resource.queryset, QuerySet):
50 temp_file.write("[")
51 total = resource.queryset.count()
52 for i in range(0, total, EXPORT_BATCH_SIZE):
53 current_batch = i + EXPORT_BATCH_SIZE
54 dataset = resource.export(resource.queryset[i:current_batch])
55 # Strip "[" and "]" as we are writing the dataset in batch
56 temp_file.write(dataset.json.lstrip("[").rstrip("]"))
57 if current_batch < total:
58 # Write "," if not last loop
59 temp_file.write(", ")
60 temp_file.write("]")
61 else:
62 dataset = resource.export(resource.queryset)
63 temp_file.write(dataset.json)
64
65 temp_file.flush()
66 info = tarfile.TarInfo(name=dest_filename)
67 info.size = os.path.getsize(temp_file.name)
68 with open(temp_file.name, "rb") as fd:
69 the_tarfile.addfile(info, fd)
70
71
72 def export_versions(export, version_info):
73 """
74 Write a JSON list of plugins and their versions as 'versions.json' to export.tarfile
75
76 Output format is [{"component": "<pluginname>", "version": "<pluginversion>"},...]
77
78 Args:
79 export (django.db.models.PulpExport): export instance that's doing the export
80 version_info (set): set of (distribution-label,version) tuples for repos in this export
81 """
82 # build the version-list from the distributions for each component
83 versions = [{"component": label, "version": version} for (label, version) in version_info]
84
85 version_json = json.dumps(versions).encode("utf8")
86 info = tarfile.TarInfo(name="versions.json")
87 info.size = len(version_json)
88 export.tarfile.addfile(info, io.BytesIO(version_json))
89
90
91 def export_artifacts(export, artifacts):
92 """
93 Export a set of Artifacts, ArtifactResources, and RepositoryResources
94
95 Args:
96 export (django.db.models.PulpExport): export instance that's doing the export
97 artifacts (django.db.models.Artifacts): list of artifacts in all repos being exported
98
99 Raises:
100 ValidationError: When path is not in the ALLOWED_EXPORT_PATHS setting
101 """
102 data = dict(message="Exporting Artifacts", code="export.artifacts", total=len(artifacts))
103 with ProgressReport(**data) as pb:
104 for artifact in pb.iter(artifacts):
105 dest = artifact.file.name
106 if settings.DEFAULT_FILE_STORAGE != "pulpcore.app.models.storage.FileSystem":
107 with tempfile.TemporaryDirectory() as temp_dir:
108 with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
109 temp_file.write(artifact.file.read())
110 temp_file.flush()
111 artifact.file.close()
112 export.tarfile.add(temp_file.name, dest)
113 else:
114 export.tarfile.add(artifact.file.path, dest)
115
116 resource = ArtifactResource()
117 resource.queryset = artifacts
118 _write_export(export.tarfile, resource)
119
120 resource = RepositoryResource()
121 resource.queryset = Repository.objects.filter(pk__in=export.exporter.repositories.all())
122 _write_export(export.tarfile, resource)
123
124
125 def export_content(export, repository_version):
126 """
127 Export db-content, and the db-content of the owning repositories
128
129 Args:
130 export (django.db.models.PulpExport): export instance that's doing the export
131 repository_version (django.db.models.RepositoryVersion): RepositoryVersion being exported
132 """
133
134 def _combine_content_mappings(map1, map2):
135 """Combine two content mapping dicts into one by combining ids for for each key."""
136 result = {}
137 for key in map1.keys() | map2.keys():
138 result[key] = list(set(map1.get(key, []) + map2.get(key, [])))
139 return result
140
141 dest_dir = os.path.join(
142 "repository-{}_{}".format(
143 str(repository_version.repository.name), repository_version.number
144 )
145 )
146
147 # Export the connection between content and artifacts
148 resource = ContentArtifactResource(repository_version)
149 _write_export(export.tarfile, resource, dest_dir)
150
151 # content mapping is used by repo versions with subrepos (eg distribution tree repos)
152 content_mapping = {}
153
154 # find and export any ModelResource found in pulp_<repo-type>.app.modelresource
155 plugin_name = repository_version.repository.pulp_type.split(".")[0]
156 cfg = get_plugin_config(plugin_name)
157 if cfg.exportable_classes:
158 for cls in cfg.exportable_classes:
159 resource = cls(repository_version)
160 _write_export(export.tarfile, resource, dest_dir)
161
162 if hasattr(resource, "content_mapping") and resource.content_mapping:
163 content_mapping = _combine_content_mappings(
164 content_mapping, resource.content_mapping
165 )
166
167 msg = (
168 f"Exporting content for {plugin_name} "
169 f"repository-version {repository_version.repository.name}/{repository_version.number}"
170 )
171 content_count = repository_version.content.count()
172 data = dict(
173 message=msg,
174 code="export.repo.version.content",
175 total=content_count,
176 done=content_count,
177 state=TASK_STATES.COMPLETED,
178 )
179 pb = ProgressReport(**data)
180 pb.save()
181
182 if content_mapping:
183 # write the content mapping to tarfile
184 cm_json = json.dumps(content_mapping).encode("utf8")
185 info = tarfile.TarInfo(name=f"{dest_dir}/content_mapping.json")
186 info.size = len(cm_json)
187 export.tarfile.addfile(info, io.BytesIO(cm_json))
188
[end of pulpcore/app/importexport.py]
[start of pulpcore/app/modelresource.py]
1 from import_export import fields
2 from import_export.widgets import ForeignKeyWidget
3 from logging import getLogger
4
5 from pulpcore.app.models.content import (
6 Artifact,
7 Content,
8 ContentArtifact,
9 )
10 from pulpcore.app.models.repository import Repository
11 from pulpcore.constants import ALL_KNOWN_CONTENT_CHECKSUMS
12 from pulpcore.plugin.importexport import QueryModelResource
13
14
15 log = getLogger(__name__)
16
17
18 #
19 # Artifact and Repository are different from other import-export entities, in that they are not
20 # repo-version-specific.
21 #
22 class ArtifactResource(QueryModelResource):
23 """Resource for import/export of artifacts."""
24
25 def before_import_row(self, row, **kwargs):
26 """
27 Sets digests to None if they are blank strings.
28
29 Args:
30 row (tablib.Dataset row): incoming import-row representing a single Variant.
31 kwargs: args passed along from the import() call.
32
33 """
34 # the export converts None to blank strings but sha384 and sha512 have unique constraints
35 # that get triggered if they are blank. convert checksums back into None if they are blank.
36 for checksum in ALL_KNOWN_CONTENT_CHECKSUMS:
37 if row[checksum] == "":
38 row[checksum] = None
39
40 class Meta:
41 model = Artifact
42 exclude = (
43 "pulp_id",
44 "pulp_created",
45 "pulp_last_updated",
46 )
47 import_id_fields = ("sha256",)
48
49
50 class RepositoryResource(QueryModelResource):
51 class Meta:
52 model = Repository
53 import_id_fields = ("name",)
54 exclude = (
55 "pulp_id",
56 "pulp_created",
57 "pulp_last_updated",
58 "content",
59 )
60
61
62 class ContentArtifactResource(QueryModelResource):
63 """
64 Handles import/export of the ContentArtifact model.
65
66 ContentArtifact is different from other import-export entities because it has no 'natural key'
67 other than a pulp_id, which aren't shared across instances. We do some magic to link up
68 ContentArtifacts to their matching (already-imported) Content.
69 """
70
71 artifact = fields.Field(
72 column_name="artifact", attribute="artifact", widget=ForeignKeyWidget(Artifact, "sha256")
73 )
74
75 def before_import_row(self, row, **kwargs):
76 """
77 Fixes the content-ptr of an incoming content-artifact row at import time.
78
79 Finds the 'original uuid' of the Content for this row, looks it up as the
80 'upstream_id' of imported Content, and then replaces the Content-pk with its
81 (new) uuid.
82
83 Args:
84 row (tablib.Dataset row): incoming import-row representing a single ContentArtifact.
85 kwargs: args passed along from the import() call.
86
87 Returns:
88 (tablib.Dataset row): row that now points to the new downstream uuid for its content.
89 """
90
91 linked_content = Content.objects.get(upstream_id=row["content"])
92 row["content"] = str(linked_content.pulp_id)
93
94 def set_up_queryset(self):
95 return ContentArtifact.objects.filter(content__in=self.repo_version.content).order_by(
96 "content", "relative_path"
97 )
98
99 class Meta:
100 model = ContentArtifact
101 import_id_fields = (
102 "content",
103 "relative_path",
104 )
105 exclude = (
106 "pulp_created",
107 "pulp_last_updated",
108 "_artifacts",
109 "pulp_id",
110 )
111
[end of pulpcore/app/modelresource.py]
</code>
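The `_write_export()` helper above streams a QuerySet out in batches and stitches the pieces into a single JSON list by hand. Here is a rough, self-contained illustration of that concatenation trick, using plain `json` lists in place of django-import-export datasets (names and batch size are illustrative only):

```python
import io
import json


def write_batched_json(rows, batch_size=2):
    buf = io.StringIO()
    buf.write("[")
    total = len(rows)
    for start in range(0, total, batch_size):
        chunk = json.dumps(rows[start:start + batch_size])
        # Same strip trick as _write_export: drop the surrounding brackets of each batch.
        buf.write(chunk.lstrip("[").rstrip("]"))
        if start + batch_size < total:
            buf.write(", ")
    buf.write("]")
    return buf.getvalue()


print(json.loads(write_batched_json([{"n": i} for i in range(5)])))
```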
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/importexport.py b/pulpcore/app/importexport.py
--- a/pulpcore/app/importexport.py
+++ b/pulpcore/app/importexport.py
@@ -144,10 +144,6 @@
)
)
- # Export the connection between content and artifacts
- resource = ContentArtifactResource(repository_version)
- _write_export(export.tarfile, resource, dest_dir)
-
# content mapping is used by repo versions with subrepos (eg distribution tree repos)
content_mapping = {}
@@ -164,6 +160,10 @@
content_mapping, resource.content_mapping
)
+ # Export the connection between content and artifacts
+ resource = ContentArtifactResource(repository_version, content_mapping)
+ _write_export(export.tarfile, resource, dest_dir)
+
msg = (
f"Exporting content for {plugin_name} "
f"repository-version {repository_version.repository.name}/{repository_version.number}"
diff --git a/pulpcore/app/modelresource.py b/pulpcore/app/modelresource.py
--- a/pulpcore/app/modelresource.py
+++ b/pulpcore/app/modelresource.py
@@ -66,12 +66,19 @@
ContentArtifact is different from other import-export entities because it has no 'natural key'
other than a pulp_id, which aren't shared across instances. We do some magic to link up
ContentArtifacts to their matching (already-imported) Content.
+
+ Some plugin-models have sub-repositories. We take advantage of the content-mapping
+ machinery to account for those contentartifacts as well.
"""
artifact = fields.Field(
column_name="artifact", attribute="artifact", widget=ForeignKeyWidget(Artifact, "sha256")
)
+ def __init__(self, repo_version=None, content_mapping=None):
+ self.content_mapping = content_mapping
+ super().__init__(repo_version)
+
def before_import_row(self, row, **kwargs):
"""
Fixes the content-ptr of an incoming content-artifact row at import time.
@@ -92,9 +99,15 @@
row["content"] = str(linked_content.pulp_id)
def set_up_queryset(self):
- return ContentArtifact.objects.filter(content__in=self.repo_version.content).order_by(
- "content", "relative_path"
- )
+ vers_content = ContentArtifact.objects.filter(content__in=self.repo_version.content)
+ if self.content_mapping:
+ all_content = []
+ for content_ids in self.content_mapping.values():
+ all_content.extend(content_ids)
+ vers_content = vers_content.union(
+ ContentArtifact.objects.filter(content__in=all_content)
+ )
+ return vers_content.order_by("content", "relative_path")
class Meta:
model = ContentArtifact
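The queryset change just above unions the version's own ContentArtifacts with those referenced by the sub-repository content mapping. A tiny, self-contained sketch of that idea with plain Python containers (illustrative names, not Pulp's ORM API):

```python
def collect_content_ids(version_content_ids, content_mapping=None):
    """Union a version's own content ids with any sub-repository content ids."""
    all_ids = set(version_content_ids)
    for ids in (content_mapping or {}).values():
        all_ids.update(ids)
    return sorted(all_ids)


print(collect_content_ids(["c1", "c2"], {"subrepo": ["c2", "c3"]}))  # ['c1', 'c2', 'c3']
```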
| {"golden_diff": "diff --git a/pulpcore/app/importexport.py b/pulpcore/app/importexport.py\n--- a/pulpcore/app/importexport.py\n+++ b/pulpcore/app/importexport.py\n@@ -144,10 +144,6 @@\n )\n )\n \n- # Export the connection between content and artifacts\n- resource = ContentArtifactResource(repository_version)\n- _write_export(export.tarfile, resource, dest_dir)\n-\n # content mapping is used by repo versions with subrepos (eg distribution tree repos)\n content_mapping = {}\n \n@@ -164,6 +160,10 @@\n content_mapping, resource.content_mapping\n )\n \n+ # Export the connection between content and artifacts\n+ resource = ContentArtifactResource(repository_version, content_mapping)\n+ _write_export(export.tarfile, resource, dest_dir)\n+\n msg = (\n f\"Exporting content for {plugin_name} \"\n f\"repository-version {repository_version.repository.name}/{repository_version.number}\"\ndiff --git a/pulpcore/app/modelresource.py b/pulpcore/app/modelresource.py\n--- a/pulpcore/app/modelresource.py\n+++ b/pulpcore/app/modelresource.py\n@@ -66,12 +66,19 @@\n ContentArtifact is different from other import-export entities because it has no 'natural key'\n other than a pulp_id, which aren't shared across instances. We do some magic to link up\n ContentArtifacts to their matching (already-imported) Content.\n+\n+ Some plugin-models have sub-repositories. We take advantage of the content-mapping\n+ machinery to account for those contentartifacts as well.\n \"\"\"\n \n artifact = fields.Field(\n column_name=\"artifact\", attribute=\"artifact\", widget=ForeignKeyWidget(Artifact, \"sha256\")\n )\n \n+ def __init__(self, repo_version=None, content_mapping=None):\n+ self.content_mapping = content_mapping\n+ super().__init__(repo_version)\n+\n def before_import_row(self, row, **kwargs):\n \"\"\"\n Fixes the content-ptr of an incoming content-artifact row at import time.\n@@ -92,9 +99,15 @@\n row[\"content\"] = str(linked_content.pulp_id)\n \n def set_up_queryset(self):\n- return ContentArtifact.objects.filter(content__in=self.repo_version.content).order_by(\n- \"content\", \"relative_path\"\n- )\n+ vers_content = ContentArtifact.objects.filter(content__in=self.repo_version.content)\n+ if self.content_mapping:\n+ all_content = []\n+ for content_ids in self.content_mapping.values():\n+ all_content.extend(content_ids)\n+ vers_content = vers_content.union(\n+ ContentArtifact.objects.filter(content__in=all_content)\n+ )\n+ return vers_content.order_by(\"content\", \"relative_path\")\n \n class Meta:\n model = ContentArtifact\n", "issue": "PulpImport/Export of kickstart repos with subrepos broken\nSee https://bugzilla.redhat.com/show_bug.cgi?id=2040870 for details.\n", "before_files": [{"content": "import os\nimport io\nimport json\nimport tarfile\nimport tempfile\nimport logging\n\nfrom django.conf import settings\nfrom django.db.models.query import QuerySet\n\nfrom pulpcore.app.apps import get_plugin_config\nfrom pulpcore.app.models.progress import ProgressReport\nfrom pulpcore.app.models.repository import Repository\nfrom pulpcore.app.modelresource import (\n ArtifactResource,\n ContentArtifactResource,\n RepositoryResource,\n)\nfrom pulpcore.constants import TASK_STATES, EXPORT_BATCH_SIZE\n\nlog = logging.getLogger(__name__)\n\n\ndef _write_export(the_tarfile, resource, dest_dir=None):\n \"\"\"\n Write the JSON export for the specified resource to the specified tarfile.\n\n The resulting file will be found at <dest_dir>/<resource.__class__.__name__>.json. 
If dest_dir\n is None, the file will be added at the 'top level' of the_tarfile.\n\n Export-files are UTF-8 encoded.\n\n Args:\n the_tarfile (tarfile.Tarfile): tarfile we are writing into\n resource (import_export.resources.ModelResource): ModelResource to be exported\n dest_dir str(directory-path): directory 'inside' the tarfile to write to\n \"\"\"\n filename = \"{}.{}.json\".format(resource.__module__, type(resource).__name__)\n if dest_dir:\n dest_filename = os.path.join(dest_dir, filename)\n else:\n dest_filename = filename\n\n # If the resource is the type of QuerySet, then export the data in batch to save memory.\n # Otherwise, export all data in oneshot. This is because the underlying libraries\n # (json; django-import-export) do not support to stream the output to file, we export\n # the data in batches to memory and concatenate the json lists via string manipulation.\n with tempfile.NamedTemporaryFile(dir=os.getcwd(), mode=\"w\", encoding=\"utf8\") as temp_file:\n if isinstance(resource.queryset, QuerySet):\n temp_file.write(\"[\")\n total = resource.queryset.count()\n for i in range(0, total, EXPORT_BATCH_SIZE):\n current_batch = i + EXPORT_BATCH_SIZE\n dataset = resource.export(resource.queryset[i:current_batch])\n # Strip \"[\" and \"]\" as we are writing the dataset in batch\n temp_file.write(dataset.json.lstrip(\"[\").rstrip(\"]\"))\n if current_batch < total:\n # Write \",\" if not last loop\n temp_file.write(\", \")\n temp_file.write(\"]\")\n else:\n dataset = resource.export(resource.queryset)\n temp_file.write(dataset.json)\n\n temp_file.flush()\n info = tarfile.TarInfo(name=dest_filename)\n info.size = os.path.getsize(temp_file.name)\n with open(temp_file.name, \"rb\") as fd:\n the_tarfile.addfile(info, fd)\n\n\ndef export_versions(export, version_info):\n \"\"\"\n Write a JSON list of plugins and their versions as 'versions.json' to export.tarfile\n\n Output format is [{\"component\": \"<pluginname>\", \"version\": \"<pluginversion>\"},...]\n\n Args:\n export (django.db.models.PulpExport): export instance that's doing the export\n version_info (set): set of (distribution-label,version) tuples for repos in this export\n \"\"\"\n # build the version-list from the distributions for each component\n versions = [{\"component\": label, \"version\": version} for (label, version) in version_info]\n\n version_json = json.dumps(versions).encode(\"utf8\")\n info = tarfile.TarInfo(name=\"versions.json\")\n info.size = len(version_json)\n export.tarfile.addfile(info, io.BytesIO(version_json))\n\n\ndef export_artifacts(export, artifacts):\n \"\"\"\n Export a set of Artifacts, ArtifactResources, and RepositoryResources\n\n Args:\n export (django.db.models.PulpExport): export instance that's doing the export\n artifacts (django.db.models.Artifacts): list of artifacts in all repos being exported\n\n Raises:\n ValidationError: When path is not in the ALLOWED_EXPORT_PATHS setting\n \"\"\"\n data = dict(message=\"Exporting Artifacts\", code=\"export.artifacts\", total=len(artifacts))\n with ProgressReport(**data) as pb:\n for artifact in pb.iter(artifacts):\n dest = artifact.file.name\n if settings.DEFAULT_FILE_STORAGE != \"pulpcore.app.models.storage.FileSystem\":\n with tempfile.TemporaryDirectory() as temp_dir:\n with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:\n temp_file.write(artifact.file.read())\n temp_file.flush()\n artifact.file.close()\n export.tarfile.add(temp_file.name, dest)\n else:\n export.tarfile.add(artifact.file.path, dest)\n\n resource = 
ArtifactResource()\n resource.queryset = artifacts\n _write_export(export.tarfile, resource)\n\n resource = RepositoryResource()\n resource.queryset = Repository.objects.filter(pk__in=export.exporter.repositories.all())\n _write_export(export.tarfile, resource)\n\n\ndef export_content(export, repository_version):\n \"\"\"\n Export db-content, and the db-content of the owning repositories\n\n Args:\n export (django.db.models.PulpExport): export instance that's doing the export\n repository_version (django.db.models.RepositoryVersion): RepositoryVersion being exported\n \"\"\"\n\n def _combine_content_mappings(map1, map2):\n \"\"\"Combine two content mapping dicts into one by combining ids for for each key.\"\"\"\n result = {}\n for key in map1.keys() | map2.keys():\n result[key] = list(set(map1.get(key, []) + map2.get(key, [])))\n return result\n\n dest_dir = os.path.join(\n \"repository-{}_{}\".format(\n str(repository_version.repository.name), repository_version.number\n )\n )\n\n # Export the connection between content and artifacts\n resource = ContentArtifactResource(repository_version)\n _write_export(export.tarfile, resource, dest_dir)\n\n # content mapping is used by repo versions with subrepos (eg distribution tree repos)\n content_mapping = {}\n\n # find and export any ModelResource found in pulp_<repo-type>.app.modelresource\n plugin_name = repository_version.repository.pulp_type.split(\".\")[0]\n cfg = get_plugin_config(plugin_name)\n if cfg.exportable_classes:\n for cls in cfg.exportable_classes:\n resource = cls(repository_version)\n _write_export(export.tarfile, resource, dest_dir)\n\n if hasattr(resource, \"content_mapping\") and resource.content_mapping:\n content_mapping = _combine_content_mappings(\n content_mapping, resource.content_mapping\n )\n\n msg = (\n f\"Exporting content for {plugin_name} \"\n f\"repository-version {repository_version.repository.name}/{repository_version.number}\"\n )\n content_count = repository_version.content.count()\n data = dict(\n message=msg,\n code=\"export.repo.version.content\",\n total=content_count,\n done=content_count,\n state=TASK_STATES.COMPLETED,\n )\n pb = ProgressReport(**data)\n pb.save()\n\n if content_mapping:\n # write the content mapping to tarfile\n cm_json = json.dumps(content_mapping).encode(\"utf8\")\n info = tarfile.TarInfo(name=f\"{dest_dir}/content_mapping.json\")\n info.size = len(cm_json)\n export.tarfile.addfile(info, io.BytesIO(cm_json))\n", "path": "pulpcore/app/importexport.py"}, {"content": "from import_export import fields\nfrom import_export.widgets import ForeignKeyWidget\nfrom logging import getLogger\n\nfrom pulpcore.app.models.content import (\n Artifact,\n Content,\n ContentArtifact,\n)\nfrom pulpcore.app.models.repository import Repository\nfrom pulpcore.constants import ALL_KNOWN_CONTENT_CHECKSUMS\nfrom pulpcore.plugin.importexport import QueryModelResource\n\n\nlog = getLogger(__name__)\n\n\n#\n# Artifact and Repository are different from other import-export entities, in that they are not\n# repo-version-specific.\n#\nclass ArtifactResource(QueryModelResource):\n \"\"\"Resource for import/export of artifacts.\"\"\"\n\n def before_import_row(self, row, **kwargs):\n \"\"\"\n Sets digests to None if they are blank strings.\n\n Args:\n row (tablib.Dataset row): incoming import-row representing a single Variant.\n kwargs: args passed along from the import() call.\n\n \"\"\"\n # the export converts None to blank strings but sha384 and sha512 have unique constraints\n # that get triggered if they are 
blank. convert checksums back into None if they are blank.\n for checksum in ALL_KNOWN_CONTENT_CHECKSUMS:\n if row[checksum] == \"\":\n row[checksum] = None\n\n class Meta:\n model = Artifact\n exclude = (\n \"pulp_id\",\n \"pulp_created\",\n \"pulp_last_updated\",\n )\n import_id_fields = (\"sha256\",)\n\n\nclass RepositoryResource(QueryModelResource):\n class Meta:\n model = Repository\n import_id_fields = (\"name\",)\n exclude = (\n \"pulp_id\",\n \"pulp_created\",\n \"pulp_last_updated\",\n \"content\",\n )\n\n\nclass ContentArtifactResource(QueryModelResource):\n \"\"\"\n Handles import/export of the ContentArtifact model.\n\n ContentArtifact is different from other import-export entities because it has no 'natural key'\n other than a pulp_id, which aren't shared across instances. We do some magic to link up\n ContentArtifacts to their matching (already-imported) Content.\n \"\"\"\n\n artifact = fields.Field(\n column_name=\"artifact\", attribute=\"artifact\", widget=ForeignKeyWidget(Artifact, \"sha256\")\n )\n\n def before_import_row(self, row, **kwargs):\n \"\"\"\n Fixes the content-ptr of an incoming content-artifact row at import time.\n\n Finds the 'original uuid' of the Content for this row, looks it up as the\n 'upstream_id' of imported Content, and then replaces the Content-pk with its\n (new) uuid.\n\n Args:\n row (tablib.Dataset row): incoming import-row representing a single ContentArtifact.\n kwargs: args passed along from the import() call.\n\n Returns:\n (tablib.Dataset row): row that now points to the new downstream uuid for its content.\n \"\"\"\n\n linked_content = Content.objects.get(upstream_id=row[\"content\"])\n row[\"content\"] = str(linked_content.pulp_id)\n\n def set_up_queryset(self):\n return ContentArtifact.objects.filter(content__in=self.repo_version.content).order_by(\n \"content\", \"relative_path\"\n )\n\n class Meta:\n model = ContentArtifact\n import_id_fields = (\n \"content\",\n \"relative_path\",\n )\n exclude = (\n \"pulp_created\",\n \"pulp_last_updated\",\n \"_artifacts\",\n \"pulp_id\",\n )\n", "path": "pulpcore/app/modelresource.py"}]} | 3,580 | 624 |
gh_patches_debug_52787 | rasdani/github-patches | git_diff | conan-io__conan-center-index-5412 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] all: "Access is denied" in os.rename() on Windows
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **almost all packages affected**
* Operating System+version: **Windows 10**
* Compiler+version: **MSVC 16**
* Conan version: **conan 1.35.2**
* Python version: **Python 3.8.7**
### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)
```
[settings]
os_build=Windows
os=Windows
arch=x86_64
arch_build=x86_64
compiler=Visual Studio
compiler.version=16
compiler.runtime=MD
build_type=Release
```
### Steps to reproduce (Include if Applicable)
This is a known issue. Solution provided by https://github.com/conan-io/conan/pull/6774
However, most recipes still use `os.rename()` rather than `tools.rename()`.
### Log
```
b2/4.2.0: Configuring sources in C:\Users\xxx\.conan\data\b2\4.2.0\_\_\source
ERROR: b2/4.2.0: Error in source() method, line 58
os.rename(extracted_dir, "source")
PermissionError: [WinError 5] Access is denied: 'build-4.2.0' -> 'source'
```
</issue>
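To make the suggested change concrete, here is a minimal, hypothetical recipe sketch (not one of the affected recipes; the package name and folder names are assumptions) that uses `tools.rename()` from the linked PR instead of `os.rename()`:

```python
from conans import ConanFile, tools


class ExampleConan(ConanFile):
    name = "example"
    version = "1.0.0"

    def source(self):
        tools.get(**self.conan_data["sources"][self.version])
        extracted_dir = "example-{}".format(self.version)
        # tools.rename() is the helper from the PR linked above; unlike a bare
        # os.rename() it is written to cope with the transient "Access is denied"
        # failures seen on Windows.
        tools.rename(extracted_dir, "source_subfolder")
```

Recipes that already require conan >= 1.33 can also avoid the rename entirely by extracting with `tools.get(..., destination=..., strip_root=True)`.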
<code>
[start of recipes/bzip2/all/conanfile.py]
1 import os
2 import textwrap
3 from conans import ConanFile, CMake, tools
4
5 required_conan_version = ">=1.33.0"
6
7
8 class Bzip2Conan(ConanFile):
9 name = "bzip2"
10 url = "https://github.com/conan-io/conan-center-index"
11 homepage = "http://www.bzip.org"
12 license = "bzip2-1.0.8"
13 description = "bzip2 is a free and open-source file compression program that uses the Burrows Wheeler algorithm."
14 topics = ("conan", "bzip2", "data-compressor", "file-compression")
15
16 settings = "os", "compiler", "arch", "build_type"
17 options = {
18 "shared": [True, False],
19 "fPIC": [True, False],
20 "build_executable": [True, False]
21 }
22 default_options = {
23 "shared": False,
24 "fPIC": True,
25 "build_executable": True
26 }
27
28 exports_sources = ["CMakeLists.txt", "patches/**"]
29 generators = "cmake"
30 _cmake = None
31
32 @property
33 def _source_subfolder(self):
34 return "source_subfolder"
35
36 def config_options(self):
37 if self.settings.os == "Windows":
38 del self.options.fPIC
39 self.license = "bzip2-{}".format(self.version)
40
41 def configure(self):
42 if self.options.shared:
43 del self.options.fPIC
44 del self.settings.compiler.libcxx
45 del self.settings.compiler.cppstd
46
47 def source(self):
48 tools.get(**self.conan_data["sources"][self.version])
49 folder_name = "%s-%s" % (self.name, self.version)
50 os.rename(folder_name, self._source_subfolder)
51
52 def _configure_cmake(self):
53 if self._cmake:
54 return self._cmake
55 self._cmake = CMake(self)
56 self._cmake.definitions["BZ2_VERSION_STRING"] = self.version
57 self._cmake.definitions["BZ2_VERSION_MAJOR"] = tools.Version(self.version).major
58 self._cmake.definitions["BZ2_BUILD_EXE"] = self.options.build_executable
59 self._cmake.configure()
60 return self._cmake
61
62 def build(self):
63 for patch in self.conan_data.get("patches", {}).get(self.version, []):
64 tools.patch(**patch)
65 cmake = self._configure_cmake()
66 cmake.build()
67
68 def package(self):
69 self.copy("LICENSE", dst="licenses", src=self._source_subfolder)
70 cmake = self._configure_cmake()
71 cmake.install()
72 self._create_cmake_module_variables(
73 os.path.join(self.package_folder, self._module_subfolder, self._module_file)
74 )
75
76 @staticmethod
77 def _create_cmake_module_variables(module_file):
78 content = textwrap.dedent("""\
79 if(DEFINED BZip2_FOUND)
80 set(BZIP2_FOUND ${BZip2_FOUND})
81 set(BZIP2_NEED_PREFIX TRUE)
82 endif()
83 if(DEFINED BZip2_INCLUDE_DIR)
84 set(BZIP2_INCLUDE_DIRS ${BZip2_INCLUDE_DIR})
85 set(BZIP2_INCLUDE_DIR ${BZip2_INCLUDE_DIR})
86 endif()
87 if(DEFINED BZip2_LIBRARIES)
88 set(BZIP2_LIBRARIES ${BZip2_LIBRARIES})
89 endif()
90 if(DEFINED BZip2_VERSION)
91 set(BZIP2_VERSION_STRING ${BZip2_VERSION})
92 endif()
93 """)
94 tools.save(module_file, content)
95
96 @property
97 def _module_subfolder(self):
98 return os.path.join("lib", "cmake")
99
100 @property
101 def _module_file(self):
102 return "conan-official-{}-variables.cmake".format(self.name)
103
104 def package_info(self):
105 self.cpp_info.names["cmake_find_package"] = "BZip2"
106 self.cpp_info.names["cmake_find_package_multi"] = "BZip2"
107 self.cpp_info.builddirs.append(self._module_subfolder)
108 self.cpp_info.build_modules["cmake_find_package"] = [os.path.join(self._module_subfolder, self._module_file)]
109 self.cpp_info.libs = ["bz2"]
110
111 if self.options.build_executable:
112 bin_path = os.path.join(self.package_folder, "bin")
113 self.output.info("Appending PATH environment variable: {}".format(bin_path))
114 self.env_info.PATH.append(bin_path)
115
[end of recipes/bzip2/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/bzip2/all/conanfile.py b/recipes/bzip2/all/conanfile.py
--- a/recipes/bzip2/all/conanfile.py
+++ b/recipes/bzip2/all/conanfile.py
@@ -45,9 +45,7 @@
del self.settings.compiler.cppstd
def source(self):
- tools.get(**self.conan_data["sources"][self.version])
- folder_name = "%s-%s" % (self.name, self.version)
- os.rename(folder_name, self._source_subfolder)
+ tools.get(**self.conan_data["sources"][self.version], destination=self._source_subfolder, strip_root=True)
def _configure_cmake(self):
if self._cmake:
| {"golden_diff": "diff --git a/recipes/bzip2/all/conanfile.py b/recipes/bzip2/all/conanfile.py\n--- a/recipes/bzip2/all/conanfile.py\n+++ b/recipes/bzip2/all/conanfile.py\n@@ -45,9 +45,7 @@\n del self.settings.compiler.cppstd\n \n def source(self):\n- tools.get(**self.conan_data[\"sources\"][self.version])\n- folder_name = \"%s-%s\" % (self.name, self.version)\n- os.rename(folder_name, self._source_subfolder)\n+ tools.get(**self.conan_data[\"sources\"][self.version], destination=self._source_subfolder, strip_root=True)\n \n def _configure_cmake(self):\n if self._cmake:\n", "issue": "[package] all: \"Access is denied\" in os.rename() on Windows\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **almost all packages affected**\r\n * Operating System+version: **Windows 10**\r\n * Compiler+version: **MSVC 16**\r\n * Conan version: **conan 1.35.2**\r\n * Python version: **Python 3.8.7**\r\n\r\n\r\n### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)\r\n```\r\n[settings]\r\nos_build=Windows\r\nos=Windows\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=Visual Studio\r\ncompiler.version=16\r\ncompiler.runtime=MD\r\nbuild_type=Release\r\n```\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\nThis is a known issue. Solution provided by https://github.com/conan-io/conan/pull/6774\r\nHowever most recipes still use `os.rename()` and not `tools.rename()`. \r\n\r\n### Log\r\n```\r\nb2/4.2.0: Configuring sources in C:\\Users\\xxx\\.conan\\data\\b2\\4.2.0\\_\\_\\source\r\nERROR: b2/4.2.0: Error in source() method, line 58\r\nos.rename(extracted_dir, \"source\")\r\nPermissionError: [WinError 5] Access is denied: 'build-4.2.0' -> 'source'\r\n```\r\n\n", "before_files": [{"content": "import os\nimport textwrap\nfrom conans import ConanFile, CMake, tools\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass Bzip2Conan(ConanFile):\n name = \"bzip2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"http://www.bzip.org\"\n license = \"bzip2-1.0.8\"\n description = \"bzip2 is a free and open-source file compression program that uses the Burrows Wheeler algorithm.\"\n topics = (\"conan\", \"bzip2\", \"data-compressor\", \"file-compression\")\n\n settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"build_executable\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"build_executable\": True\n }\n\n exports_sources = [\"CMakeLists.txt\", \"patches/**\"]\n generators = \"cmake\"\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n self.license = \"bzip2-{}\".format(self.version)\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n folder_name = \"%s-%s\" % (self.name, self.version)\n os.rename(folder_name, self._source_subfolder)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"BZ2_VERSION_STRING\"] = self.version\n self._cmake.definitions[\"BZ2_VERSION_MAJOR\"] = tools.Version(self.version).major\n self._cmake.definitions[\"BZ2_BUILD_EXE\"] = self.options.build_executable\n 
self._cmake.configure()\n return self._cmake\n\n def build(self):\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n tools.patch(**patch)\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n self._create_cmake_module_variables(\n os.path.join(self.package_folder, self._module_subfolder, self._module_file)\n )\n\n @staticmethod\n def _create_cmake_module_variables(module_file):\n content = textwrap.dedent(\"\"\"\\\n if(DEFINED BZip2_FOUND)\n set(BZIP2_FOUND ${BZip2_FOUND})\n set(BZIP2_NEED_PREFIX TRUE)\n endif()\n if(DEFINED BZip2_INCLUDE_DIR)\n set(BZIP2_INCLUDE_DIRS ${BZip2_INCLUDE_DIR})\n set(BZIP2_INCLUDE_DIR ${BZip2_INCLUDE_DIR})\n endif()\n if(DEFINED BZip2_LIBRARIES)\n set(BZIP2_LIBRARIES ${BZip2_LIBRARIES})\n endif()\n if(DEFINED BZip2_VERSION)\n set(BZIP2_VERSION_STRING ${BZip2_VERSION})\n endif()\n \"\"\")\n tools.save(module_file, content)\n\n @property\n def _module_subfolder(self):\n return os.path.join(\"lib\", \"cmake\")\n\n @property\n def _module_file(self):\n return \"conan-official-{}-variables.cmake\".format(self.name)\n\n def package_info(self):\n self.cpp_info.names[\"cmake_find_package\"] = \"BZip2\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"BZip2\"\n self.cpp_info.builddirs.append(self._module_subfolder)\n self.cpp_info.build_modules[\"cmake_find_package\"] = [os.path.join(self._module_subfolder, self._module_file)]\n self.cpp_info.libs = [\"bz2\"]\n\n if self.options.build_executable:\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/bzip2/all/conanfile.py"}]} | 2,091 | 165 |
gh_patches_debug_55397 | rasdani/github-patches | git_diff | googleapis__python-bigquery-1413 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support Pythons <4
I'd like to be able to allow Python <4 in ibis, but as of this PR (https://github.com/ibis-project/ibis/pull/4797) I cannot, due to this library's `<3.11` pin.
</issue>
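To make the conflict concrete, here is a small illustration using the `packaging` library. The `>=3.7, <3.11` range mirrors the pin in the setup.py below, while the downstream `>=3.8, <4` range is a hypothetical stand-in for what a dependent project might declare:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

library_pin = SpecifierSet(">=3.7, <3.11")  # python_requires pin in the setup.py below
downstream = SpecifierSet(">=3.8, <4")      # hypothetical range a dependent project wants

interpreter = Version("3.11")
print(interpreter in downstream)    # True: the dependent project would accept it
print(interpreter in library_pin)   # False: the <3.11 cap blocks that interpreter
```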
<code>
[start of setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 "grpcio >= 1.47.0, < 2.0dev", # https://github.com/googleapis/python-bigquery/issues/1262
33 # NOTE: Maintainers, please do not require google-api-core>=2.x.x
34 # Until this issue is closed
35 # https://github.com/googleapis/google-cloud-python/issues/10566
36 "google-api-core[grpc] >= 1.31.5, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0",
37 "google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev",
38 "proto-plus >= 1.22.0, <2.0.0dev",
39 # NOTE: Maintainers, please do not require google-cloud-core>=2.x.x
40 # Until this issue is closed
41 # https://github.com/googleapis/google-cloud-python/issues/10566
42 "google-cloud-core >= 1.4.1, <3.0.0dev",
43 "google-resumable-media >= 0.6.0, < 3.0dev",
44 "packaging >= 14.3, <22.0.0dev",
45 "protobuf>=3.19.5,<5.0.0dev,!=3.20.0,!=3.20.1,!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5", # For the legacy proto-based types.
46 "python-dateutil >= 2.7.2, <3.0dev",
47 "pyarrow >= 3.0.0, < 11.0dev",
48 "requests >= 2.21.0, < 3.0.0dev",
49 ]
50 extras = {
51 # Keep the no-op bqstorage extra for backward compatibility.
52 # See: https://github.com/googleapis/python-bigquery/issues/757
53 "bqstorage": [],
54 "pandas": ["pandas>=1.0.0", "db-dtypes>=0.3.0,<2.0.0dev"],
55 "ipywidgets": ["ipywidgets==7.7.1"],
56 "geopandas": ["geopandas>=0.9.0, <1.0dev", "Shapely>=1.6.0, <2.0dev"],
57 "ipython": ["ipython>=7.0.1,!=8.1.0"],
58 "tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
59 "opentelemetry": [
60 "opentelemetry-api >= 1.1.0",
61 "opentelemetry-sdk >= 1.1.0",
62 "opentelemetry-instrumentation >= 0.20b0",
63 ],
64 }
65
66 all_extras = []
67
68 for extra in extras:
69 all_extras.extend(extras[extra])
70
71 extras["all"] = all_extras
72
73 # Setup boilerplate below this line.
74
75 package_root = os.path.abspath(os.path.dirname(__file__))
76
77 readme_filename = os.path.join(package_root, "README.rst")
78 with io.open(readme_filename, encoding="utf-8") as readme_file:
79 readme = readme_file.read()
80
81 version = {}
82 with open(os.path.join(package_root, "google/cloud/bigquery/version.py")) as fp:
83 exec(fp.read(), version)
84 version = version["__version__"]
85
86 # Only include packages under the 'google' namespace. Do not include tests,
87 # benchmarks, etc.
88 packages = [
89 package
90 for package in setuptools.PEP420PackageFinder.find()
91 if package.startswith("google")
92 ]
93
94 # Determine which namespaces are needed.
95 namespaces = ["google"]
96 if "google.cloud" in packages:
97 namespaces.append("google.cloud")
98
99
100 setuptools.setup(
101 name=name,
102 version=version,
103 description=description,
104 long_description=readme,
105 author="Google LLC",
106 author_email="[email protected]",
107 license="Apache 2.0",
108 url="https://github.com/googleapis/python-bigquery",
109 classifiers=[
110 release_status,
111 "Intended Audience :: Developers",
112 "License :: OSI Approved :: Apache Software License",
113 "Programming Language :: Python",
114 "Programming Language :: Python :: 3",
115 "Programming Language :: Python :: 3.7",
116 "Programming Language :: Python :: 3.8",
117 "Programming Language :: Python :: 3.9",
118 "Programming Language :: Python :: 3.10",
119 "Operating System :: OS Independent",
120 "Topic :: Internet",
121 ],
122 platforms="Posix; MacOS X; Windows",
123 packages=packages,
124 namespace_packages=namespaces,
125 install_requires=dependencies,
126 extras_require=extras,
127 python_requires=">=3.7, <3.11",
128 include_package_data=True,
129 zip_safe=False,
130 )
131
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -124,7 +124,7 @@
namespace_packages=namespaces,
install_requires=dependencies,
extras_require=extras,
- python_requires=">=3.7, <3.11",
+ python_requires=">=3.7",
include_package_data=True,
zip_safe=False,
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -124,7 +124,7 @@\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n- python_requires=\">=3.7, <3.11\",\n+ python_requires=\">=3.7\",\n include_package_data=True,\n zip_safe=False,\n )\n", "issue": "Support Pythons <4\nI'd like to be able to allow python <4 in ibis, but as of this PR (https://github.com/ibis-project/ibis/pull/4797) I cannot due to this library's `<3.11` pin.\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"grpcio >= 1.47.0, < 2.0dev\", # https://github.com/googleapis/python-bigquery/issues/1262\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-api-core[grpc] >= 1.31.5, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0\",\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n \"proto-plus >= 1.22.0, <2.0.0dev\",\n # NOTE: Maintainers, please do not require google-cloud-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-cloud-core >= 1.4.1, <3.0.0dev\",\n \"google-resumable-media >= 0.6.0, < 3.0dev\",\n \"packaging >= 14.3, <22.0.0dev\",\n \"protobuf>=3.19.5,<5.0.0dev,!=3.20.0,!=3.20.1,!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5\", # For the legacy proto-based types.\n \"python-dateutil >= 2.7.2, <3.0dev\",\n \"pyarrow >= 3.0.0, < 11.0dev\",\n \"requests >= 2.21.0, < 3.0.0dev\",\n]\nextras = {\n # Keep the no-op bqstorage extra for backward compatibility.\n # See: https://github.com/googleapis/python-bigquery/issues/757\n \"bqstorage\": [],\n \"pandas\": [\"pandas>=1.0.0\", \"db-dtypes>=0.3.0,<2.0.0dev\"],\n \"ipywidgets\": [\"ipywidgets==7.7.1\"],\n \"geopandas\": [\"geopandas>=0.9.0, <1.0dev\", \"Shapely>=1.6.0, <2.0dev\"],\n \"ipython\": [\"ipython>=7.0.1,!=8.1.0\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api >= 1.1.0\",\n \"opentelemetry-sdk >= 1.1.0\",\n \"opentelemetry-instrumentation >= 0.20b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, 
\"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.7, <3.11\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 2,210 | 91 |
gh_patches_debug_1624 | rasdani/github-patches | git_diff | pypa__cibuildwheel-977 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
on windows, setup_py_python_requires attempts to open utf-8 setup.py as Windows-1252 and fails
### Description
This [setup.py file](https://github.com/fgregg/fastcluster/blob/master/setup.py) is valid utf-8, and has a few non-ascii characters. In a windows build, `setup_py_python_requires` appears to be opening this file as if it was encoded like Windows-1252 and thus fails on some non-ascii characters.
### Build log
https://github.com/fgregg/fastcluster/runs/4660766954?check_suite_focus=true#step:5:40
### CI config
https://github.com/fgregg/fastcluster/blob/master/.github/workflows/pythonpackage.yml#L41-L47
</issue>
<code>
[start of cibuildwheel/projectfiles.py]
1 import ast
2 import sys
3 from configparser import ConfigParser
4 from pathlib import Path
5 from typing import Any, Optional
6
7 import tomli
8
9 if sys.version_info < (3, 8):
10 Constant = ast.Str
11
12 def get_constant(x: ast.Str) -> str:
13 return x.s
14
15 else:
16 Constant = ast.Constant
17
18 def get_constant(x: ast.Constant) -> Any:
19 return x.value
20
21
22 class Analyzer(ast.NodeVisitor):
23 def __init__(self) -> None:
24 self.requires_python: Optional[str] = None
25
26 def visit(self, content: ast.AST) -> None:
27 for node in ast.walk(content):
28 for child in ast.iter_child_nodes(node):
29 child.parent = node # type: ignore[attr-defined]
30 super().visit(content)
31
32 def visit_keyword(self, node: ast.keyword) -> None:
33 self.generic_visit(node)
34 if node.arg == "python_requires":
35 # Must not be nested in an if or other structure
36 # This will be Module -> Expr -> Call -> keyword
37 if not hasattr(node.parent.parent.parent, "parent") and isinstance( # type: ignore[attr-defined]
38 node.value, Constant
39 ):
40 self.requires_python = get_constant(node.value)
41
42
43 def setup_py_python_requires(content: str) -> Optional[str]:
44 try:
45 tree = ast.parse(content)
46 analyzer = Analyzer()
47 analyzer.visit(tree)
48 return analyzer.requires_python or None
49 except Exception:
50 return None
51
52
53 def get_requires_python_str(package_dir: Path) -> Optional[str]:
54 """Return the python requires string from the most canonical source available, or None"""
55
56 # Read in from pyproject.toml:project.requires-python
57 try:
58 with (package_dir / "pyproject.toml").open("rb") as f1:
59 info = tomli.load(f1)
60 return str(info["project"]["requires-python"])
61 except (FileNotFoundError, KeyError, IndexError, TypeError):
62 pass
63
64 # Read in from setup.cfg:options.python_requires
65 try:
66 config = ConfigParser()
67 config.read(package_dir / "setup.cfg")
68 return str(config["options"]["python_requires"])
69 except (FileNotFoundError, KeyError, IndexError, TypeError):
70 pass
71
72 try:
73 with (package_dir / "setup.py").open() as f2:
74 return setup_py_python_requires(f2.read())
75 except FileNotFoundError:
76 pass
77
78 return None
79
[end of cibuildwheel/projectfiles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cibuildwheel/projectfiles.py b/cibuildwheel/projectfiles.py
--- a/cibuildwheel/projectfiles.py
+++ b/cibuildwheel/projectfiles.py
@@ -70,7 +70,7 @@
pass
try:
- with (package_dir / "setup.py").open() as f2:
+ with (package_dir / "setup.py").open(encoding="utf8") as f2:
return setup_py_python_requires(f2.read())
except FileNotFoundError:
pass
| {"golden_diff": "diff --git a/cibuildwheel/projectfiles.py b/cibuildwheel/projectfiles.py\n--- a/cibuildwheel/projectfiles.py\n+++ b/cibuildwheel/projectfiles.py\n@@ -70,7 +70,7 @@\n pass\n \n try:\n- with (package_dir / \"setup.py\").open() as f2:\n+ with (package_dir / \"setup.py\").open(encoding=\"utf8\") as f2:\n return setup_py_python_requires(f2.read())\n except FileNotFoundError:\n pass\n", "issue": "on windows, setup_py_python_requires attempts to open utf-8 setup.py as Windows-1252 and fails\n### Description\r\n\r\nThis [setup.py file](https://github.com/fgregg/fastcluster/blob/master/setup.py) is valid utf-8, and has a few non-ascii characters. In a windows build, `setup_py_python_requires` appears to be opening this file as if it was encoded like Windows-1252 and thus fails on some non-ascii characters.\r\n\r\n### Build log\r\n\r\nhttps://github.com/fgregg/fastcluster/runs/4660766954?check_suite_focus=true#step:5:40\r\n\r\n### CI config\r\n\r\nhttps://github.com/fgregg/fastcluster/blob/master/.github/workflows/pythonpackage.yml#L41-L47\n", "before_files": [{"content": "import ast\nimport sys\nfrom configparser import ConfigParser\nfrom pathlib import Path\nfrom typing import Any, Optional\n\nimport tomli\n\nif sys.version_info < (3, 8):\n Constant = ast.Str\n\n def get_constant(x: ast.Str) -> str:\n return x.s\n\nelse:\n Constant = ast.Constant\n\n def get_constant(x: ast.Constant) -> Any:\n return x.value\n\n\nclass Analyzer(ast.NodeVisitor):\n def __init__(self) -> None:\n self.requires_python: Optional[str] = None\n\n def visit(self, content: ast.AST) -> None:\n for node in ast.walk(content):\n for child in ast.iter_child_nodes(node):\n child.parent = node # type: ignore[attr-defined]\n super().visit(content)\n\n def visit_keyword(self, node: ast.keyword) -> None:\n self.generic_visit(node)\n if node.arg == \"python_requires\":\n # Must not be nested in an if or other structure\n # This will be Module -> Expr -> Call -> keyword\n if not hasattr(node.parent.parent.parent, \"parent\") and isinstance( # type: ignore[attr-defined]\n node.value, Constant\n ):\n self.requires_python = get_constant(node.value)\n\n\ndef setup_py_python_requires(content: str) -> Optional[str]:\n try:\n tree = ast.parse(content)\n analyzer = Analyzer()\n analyzer.visit(tree)\n return analyzer.requires_python or None\n except Exception:\n return None\n\n\ndef get_requires_python_str(package_dir: Path) -> Optional[str]:\n \"\"\"Return the python requires string from the most canonical source available, or None\"\"\"\n\n # Read in from pyproject.toml:project.requires-python\n try:\n with (package_dir / \"pyproject.toml\").open(\"rb\") as f1:\n info = tomli.load(f1)\n return str(info[\"project\"][\"requires-python\"])\n except (FileNotFoundError, KeyError, IndexError, TypeError):\n pass\n\n # Read in from setup.cfg:options.python_requires\n try:\n config = ConfigParser()\n config.read(package_dir / \"setup.cfg\")\n return str(config[\"options\"][\"python_requires\"])\n except (FileNotFoundError, KeyError, IndexError, TypeError):\n pass\n\n try:\n with (package_dir / \"setup.py\").open() as f2:\n return setup_py_python_requires(f2.read())\n except FileNotFoundError:\n pass\n\n return None\n", "path": "cibuildwheel/projectfiles.py"}]} | 1,387 | 113 |
gh_patches_debug_1169 | rasdani/github-patches | git_diff | sosreport__sos-3483 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Obtain CNI files for containerd
Containerd uses the CNI configuration present in the defined folders by the configuration
```
[plugins."io.containerd.grpc.v1.cri".cni]
conf_dir = "/etc/cni/net.d
```
It will be very useful to obtain the cni configurations present on the folder for debugging networking related problems
https://github.com/sosreport/sos/blob/b94ced8370824bd62f3c7573ae33fcb96c5da531/sos/report/plugins/containerd.py#L12-L28
</issue>
<code>
[start of sos/report/plugins/containerd.py]
1 # This file is part of the sos project: https://github.com/sosreport/sos
2 #
3 # This copyrighted material is made available to anyone wishing to use,
4 # modify, copy, or redistribute it subject to the terms and conditions of
5 # version 2 of the GNU General Public License.
6 #
7 # See the LICENSE file in the source distribution for further information.
8
9 from sos.report.plugins import (Plugin, RedHatPlugin, UbuntuPlugin, CosPlugin)
10
11
12 class Containerd(Plugin, RedHatPlugin, UbuntuPlugin, CosPlugin):
13
14 short_desc = 'Containerd containers'
15 plugin_name = 'containerd'
16 profiles = ('container',)
17 packages = ('containerd', 'containerd.io',)
18
19 def setup(self):
20 self.add_copy_spec([
21 "/etc/containerd/",
22 ])
23
24 self.add_cmd_output('containerd config dump')
25
26 # collect the containerd logs.
27 self.add_journal(units='containerd')
28
29 # vim: set et ts=4 sw=4 :
30
[end of sos/report/plugins/containerd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sos/report/plugins/containerd.py b/sos/report/plugins/containerd.py
--- a/sos/report/plugins/containerd.py
+++ b/sos/report/plugins/containerd.py
@@ -19,6 +19,7 @@
def setup(self):
self.add_copy_spec([
"/etc/containerd/",
+ "/etc/cni/net.d/",
])
self.add_cmd_output('containerd config dump')
| {"golden_diff": "diff --git a/sos/report/plugins/containerd.py b/sos/report/plugins/containerd.py\n--- a/sos/report/plugins/containerd.py\n+++ b/sos/report/plugins/containerd.py\n@@ -19,6 +19,7 @@\n def setup(self):\n self.add_copy_spec([\n \"/etc/containerd/\",\n+ \"/etc/cni/net.d/\",\n ])\n \n self.add_cmd_output('containerd config dump')\n", "issue": "Obtain CNI files for containerd\nContainerd uses the CNI configuration present in the defined folders by the configuration\r\n\r\n```\r\n [plugins.\"io.containerd.grpc.v1.cri\".cni]\r\n conf_dir = \"/etc/cni/net.d\r\n```\r\n\r\nIt will be very useful to obtain the cni configurations present on the folder for debugging networking related problems \r\n\r\n\r\nhttps://github.com/sosreport/sos/blob/b94ced8370824bd62f3c7573ae33fcb96c5da531/sos/report/plugins/containerd.py#L12-L28\n", "before_files": [{"content": "# This file is part of the sos project: https://github.com/sosreport/sos\n#\n# This copyrighted material is made available to anyone wishing to use,\n# modify, copy, or redistribute it subject to the terms and conditions of\n# version 2 of the GNU General Public License.\n#\n# See the LICENSE file in the source distribution for further information.\n\nfrom sos.report.plugins import (Plugin, RedHatPlugin, UbuntuPlugin, CosPlugin)\n\n\nclass Containerd(Plugin, RedHatPlugin, UbuntuPlugin, CosPlugin):\n\n short_desc = 'Containerd containers'\n plugin_name = 'containerd'\n profiles = ('container',)\n packages = ('containerd', 'containerd.io',)\n\n def setup(self):\n self.add_copy_spec([\n \"/etc/containerd/\",\n ])\n\n self.add_cmd_output('containerd config dump')\n\n # collect the containerd logs.\n self.add_journal(units='containerd')\n\n# vim: set et ts=4 sw=4 :\n", "path": "sos/report/plugins/containerd.py"}]} | 936 | 92 |
gh_patches_debug_15005 | rasdani/github-patches | git_diff | webkom__lego-3013 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Migrate from CAPTCHA to Turnstile
Turnstile
* is not from *Google*.
* doesn't force users to help train Google's car AI.
* easy (?) to [migrate to](https://developers.cloudflare.com/turnstile/get-started/migrating-from-recaptcha/).
* solves the stupid issue with showing captchas so early they expire (Bc. a Turnstile token is valid for 300 seconds).
Closes #2291
</issue>
<code>
[start of lego/settings/development.py]
1 import os
2
3 import stripe
4
5 from .base import INSTALLED_APPS, MIDDLEWARE
6 from .rest_framework import REST_FRAMEWORK
7
8 DEBUG = True
9 DEVELOPMENT = True
10
11 SERVER_URL = "http://127.0.0.1:8000"
12 FRONTEND_URL = "http://127.0.0.1:3000"
13 SERVER_EMAIL = "Abakus <[email protected]>"
14 ENVIRONMENT_NAME = "development"
15
16 SECRET_KEY = "secret"
17
18 stripe.api_key = os.environ.get("STRIPE_TEST_KEY")
19 STRIPE_WEBHOOK_SECRET = os.environ.get("STRIPE_WEBHOOK_SECRET")
20 CAPTCHA_KEY = (
21 os.environ.get("CAPTCHA_KEY") or "6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe"
22 )
23
24 SESSION_COOKIE_SECURE = False
25 DATABASES = {
26 "default": {
27 "ENGINE": "django.db.backends.postgresql",
28 "HOST": "127.0.0.1",
29 "NAME": "lego",
30 "USER": "lego",
31 "PASSWORD": "",
32 "PORT": "",
33 }
34 }
35
36 CACHES = {
37 "default": {
38 "BACKEND": "django_redis.cache.RedisCache",
39 "LOCATION": "redis://127.0.0.1:6379/0",
40 "OPTIONS": {"CLIENT_CLASS": "django_redis.client.DefaultClient"},
41 }
42 }
43
44 EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
45
46 INTERNAL_IPS = ["127.0.0.1"]
47 INSTALLED_APPS += ["coverage", "debug_toolbar"]
48 MIDDLEWARE += ["debug_toolbar.middleware.DebugToolbarMiddleware"]
49 DEBUG_TOOLBAR_PANELS = [
50 "debug_toolbar.panels.versions.VersionsPanel",
51 "debug_toolbar.panels.timer.TimerPanel",
52 "debug_toolbar.panels.settings.SettingsPanel",
53 "debug_toolbar.panels.headers.HeadersPanel",
54 "debug_toolbar.panels.request.RequestPanel",
55 "debug_toolbar.panels.sql.SQLPanel",
56 "debug_toolbar.panels.staticfiles.StaticFilesPanel",
57 "debug_toolbar.panels.templates.TemplatesPanel",
58 "debug_toolbar.panels.cache.CachePanel",
59 "debug_toolbar.panels.signals.SignalsPanel",
60 "debug_toolbar.panels.logging.LoggingPanel",
61 "debug_toolbar.panels.redirects.RedirectsPanel",
62 ]
63
64 REST_FRAMEWORK["DEFAULT_RENDERER_CLASSES"] += [ # type: ignore
65 # "rest_framework.renderers.BrowsableAPIRenderer"
66 "djangorestframework_camel_case.render.CamelCaseBrowsableAPIRenderer"
67 ]
68
69 AWS_ACCESS_KEY_ID = "lego-dev"
70 AWS_SECRET_ACCESS_KEY = "lego-dev"
71 AWS_REGION = "us-east-1"
72 AWS_S3_BUCKET = "lego"
73 AWS_ENTRYPOINT = "http://127.0.0.1:9000"
74
75 THUMBOR_SERVER = "http://127.0.0.1:10000"
76 THUMBOR_SECURITY_KEY = "lego-dev"
77
78 CELERY_BROKER_URL = "redis://127.0.0.1"
79 CELERY_TASK_ALWAYS_EAGER = True
80
81 ELASTICSEARCH = "127.0.0.1"
82 SEARCH_BACKEND = os.environ.get("SEARCH_BACKEND", "postgres")
83
84 LDAP_SERVER = "127.0.0.1:389"
85 LDAP_USER = "cn=admin,dc=abakus,dc=no"
86 LDAP_PASSWORD = "admin"
87
88 CORS_ORIGIN_WHITELIST = list({"http://127.0.0.1:3000", "http://localhost:3000"})
89
90 SEARCH_INDEX = "lego-search"
91
[end of lego/settings/development.py]
[start of lego/settings/base.py]
1 import base64
2 import datetime
3 import json
4 import os
5
6 import environ
7
8 root = environ.Path(__file__) - 2
9 BASE_DIR = root()
10
11 ALLOWED_HOSTS = ["*"]
12 SHELL_PLUS = "ipython"
13
14 INSTALLED_APPS = [
15 "django.contrib.auth",
16 "django.contrib.contenttypes",
17 "django.contrib.sessions",
18 "django.contrib.messages",
19 "django.contrib.staticfiles",
20 "django.contrib.postgres",
21 "django_extensions",
22 "oauth2_provider",
23 "rest_framework",
24 "rest_framework_jwt",
25 "rest_framework_jwt.blacklist", # Not used, but needed to avoid db issues.
26 "corsheaders",
27 "mptt",
28 "channels",
29 "django_filters",
30 "push_notifications",
31 "health_check",
32 "phonenumber_field",
33 "lego.utils",
34 "lego.apps.action_handlers",
35 "lego.apps.articles",
36 "lego.apps.comments",
37 "lego.apps.companies",
38 "lego.apps.contact",
39 "lego.apps.content",
40 "lego.apps.email",
41 "lego.apps.emojis",
42 "lego.apps.events",
43 "lego.apps.external_sync",
44 "lego.apps.feeds",
45 "lego.apps.files",
46 "lego.apps.flatpages",
47 "lego.apps.followers",
48 "lego.apps.frontpage",
49 "lego.apps.gallery",
50 "lego.apps.healthchecks",
51 "lego.apps.ical",
52 "lego.apps.joblistings",
53 "lego.apps.meetings",
54 "lego.apps.notifications",
55 "lego.apps.oauth",
56 "lego.apps.permissions",
57 "lego.apps.podcasts",
58 "lego.apps.polls",
59 "lego.apps.quotes",
60 "lego.apps.reactions",
61 "lego.apps.restricted",
62 "lego.apps.search",
63 "lego.apps.stats",
64 "lego.apps.surveys",
65 "lego.apps.tags",
66 "lego.apps.users",
67 "lego.apps.websockets",
68 ]
69
70 DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
71 SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
72 AUTH_USER_MODEL = "users.User"
73 AUTHENTICATION_BACKENDS = ("lego.apps.permissions.backends.LegoPermissionBackend",)
74 LOGIN_URL = "/authorization/login/"
75 LOGOUT_URL = "/authorization/logout/"
76 AUTH_PASSWORD_VALIDATORS = [
77 {
78 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"
79 },
80 {"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator"},
81 {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
82 {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
83 ]
84
85 MIDDLEWARE = [
86 "django.middleware.security.SecurityMiddleware",
87 "django.contrib.sessions.middleware.SessionMiddleware",
88 "lego.utils.middleware.cors.CORSPatchMiddleware",
89 "corsheaders.middleware.CorsMiddleware",
90 "django.middleware.common.CommonMiddleware",
91 "django.middleware.csrf.CsrfViewMiddleware",
92 "django.contrib.auth.middleware.AuthenticationMiddleware",
93 "django.contrib.messages.middleware.MessageMiddleware",
94 "django.middleware.clickjacking.XFrameOptionsMiddleware",
95 "lego.utils.middleware.logging.LoggingMiddleware",
96 ]
97
98 TEMPLATES = [
99 {
100 "BACKEND": "django.template.backends.django.DjangoTemplates",
101 "DIRS": [os.path.join(BASE_DIR, "templates")],
102 "APP_DIRS": True,
103 "OPTIONS": {
104 "context_processors": [
105 "django.contrib.auth.context_processors.auth",
106 "django.template.context_processors.debug",
107 "django.template.context_processors.i18n",
108 "django.template.context_processors.media",
109 "django.template.context_processors.static",
110 "django.template.context_processors.tz",
111 "django.contrib.messages.context_processors.messages",
112 "lego.utils.context_processors.site",
113 ]
114 },
115 }
116 ]
117
118 JWT_AUTH = {
119 "JWT_RESPONSE_PAYLOAD_HANDLER": "lego.apps.jwt.handlers.response_handler",
120 # Tokens will expire after 14 days
121 "JWT_EXPIRATION_DELTA": datetime.timedelta(days=14),
122 # Allow refresh. Tokens can be refreshed for 180 days after initial login,
123 # so users must login ~twice a year
124 "JWT_ALLOW_REFRESH": True,
125 "JWT_REFRESH_EXPIRATION_DELTA": datetime.timedelta(days=180),
126 }
127
128 OAUTH2_PROVIDER_APPLICATION_MODEL = "oauth.APIApplication"
129 OAUTH2_PROVIDER_ACCESS_TOKEN_MODEL = "oauth2_provider.AccessToken"
130 OAUTH2_PROVIDER_REFRESH_TOKEN_MODEL = "oauth2_provider.RefreshToken"
131 OAUTH2_PROVIDER_ID_TOKEN_MODEL = "oauth2_provider.IDToken"
132 # Tokens is valid for 7 days.
133 OAUTH2_PROVIDER = {
134 "ACCESS_TOKEN_EXPIRE_SECONDS": 86400 * 7,
135 "SCOPES": {
136 "user": (
137 "Enkel brukerinfo. Dette gir lesetilgang til ditt navn, "
138 "brukernavn, profilbilde, epost, kjønn og dine medlemskap"
139 ),
140 "all": "Gir tilgang til all brukerinfo. Kan også gjøre ting som deg",
141 },
142 }
143
144 ROOT_URLCONF = "lego.urls"
145
146 WSGI_APPLICATION = "lego.wsgi.application"
147
148 SEARCH_BACKEND = "postgres"
149
150 TIME_ZONE = "UTC"
151 USE_I18N = False
152 USE_L10N = False
153 USE_TZ = True
154
155 STATICFILES_STORAGE = "django.contrib.staticfiles.storage.StaticFilesStorage"
156 STATICFILES_FINDERS = (
157 "django.contrib.staticfiles.finders.FileSystemFinder",
158 "django.contrib.staticfiles.finders.AppDirectoriesFinder",
159 )
160 STATIC_ROOT = os.path.join(BASE_DIR, "files", "static")
161 STATIC_URL = "/static/"
162 STATICFILES_DIRS = (("assets", os.path.join(BASE_DIR, "assets")),)
163
164 MEDIA_ROOT = os.path.join(BASE_DIR, "files", "media")
165 MEDIA_URL = "/media/"
166
167 ASGI_APPLICATION = "lego.apps.websockets.routing.application"
168 CHANNEL_LAYERS = {
169 "default": {
170 "BACKEND": "channels_redis.core.RedisChannelLayer",
171 "CONFIG": {"hosts": ["redis://127.0.0.1:6379"]},
172 }
173 }
174
175 LDAP_BASE_DN = "dc=abakus,dc=no"
176
177 CAPTCHA_URL = "https://www.google.com/recaptcha/api/siteverify"
178
179 PUSH_NOTIFICATIONS_SETTINGS = {
180 "APNS_USE_SANDBOX": False,
181 "UPDATE_ON_DUPLICATE_REG_ID": True,
182 "APNS_TOPIC": "no.abakus.abakus",
183 "GCM_ERROR_TIMEOUT": 30,
184 "FCM_ERROR_TIMEOUT": 30,
185 }
186
187 GSUITE_DELEGATED_ACCOUNT = os.environ.get("GSUITE_DELEGATED_ACCOUNT")
188 SMTP_SSL_ENABLE = os.environ.get("SMTP_SSL_ENABLE") or False
189 SMTP_SSL_CERTIFICATE = os.environ.get("SMTP_SSL_CERTIFICATE")
190 SMTP_SSL_KEY = os.environ.get("SMTP_SSL_KEY")
191
192 if os.environ.get("GSUITE_CREDENTIALS"):
193 GSUITE_CREDENTIALS = json.loads(
194 base64.b64decode(os.environ.get("GSUITE_CREDENTIALS")), strict=False # type: ignore
195 )
196 else:
197 GSUITE_CREDENTIALS = None
198
[end of lego/settings/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lego/settings/base.py b/lego/settings/base.py
--- a/lego/settings/base.py
+++ b/lego/settings/base.py
@@ -174,7 +174,7 @@
LDAP_BASE_DN = "dc=abakus,dc=no"
-CAPTCHA_URL = "https://www.google.com/recaptcha/api/siteverify"
+CAPTCHA_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"
PUSH_NOTIFICATIONS_SETTINGS = {
"APNS_USE_SANDBOX": False,
diff --git a/lego/settings/development.py b/lego/settings/development.py
--- a/lego/settings/development.py
+++ b/lego/settings/development.py
@@ -17,9 +17,7 @@
stripe.api_key = os.environ.get("STRIPE_TEST_KEY")
STRIPE_WEBHOOK_SECRET = os.environ.get("STRIPE_WEBHOOK_SECRET")
-CAPTCHA_KEY = (
- os.environ.get("CAPTCHA_KEY") or "6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe"
-)
+CAPTCHA_KEY = os.environ.get("CAPTCHA_KEY") or "1x0000000000000000000000000000000AA"
SESSION_COOKIE_SECURE = False
DATABASES = {
| {"golden_diff": "diff --git a/lego/settings/base.py b/lego/settings/base.py\n--- a/lego/settings/base.py\n+++ b/lego/settings/base.py\n@@ -174,7 +174,7 @@\n \n LDAP_BASE_DN = \"dc=abakus,dc=no\"\n \n-CAPTCHA_URL = \"https://www.google.com/recaptcha/api/siteverify\"\n+CAPTCHA_URL = \"https://challenges.cloudflare.com/turnstile/v0/siteverify\"\n \n PUSH_NOTIFICATIONS_SETTINGS = {\n \"APNS_USE_SANDBOX\": False,\ndiff --git a/lego/settings/development.py b/lego/settings/development.py\n--- a/lego/settings/development.py\n+++ b/lego/settings/development.py\n@@ -17,9 +17,7 @@\n \n stripe.api_key = os.environ.get(\"STRIPE_TEST_KEY\")\n STRIPE_WEBHOOK_SECRET = os.environ.get(\"STRIPE_WEBHOOK_SECRET\")\n-CAPTCHA_KEY = (\n- os.environ.get(\"CAPTCHA_KEY\") or \"6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe\"\n-)\n+CAPTCHA_KEY = os.environ.get(\"CAPTCHA_KEY\") or \"1x0000000000000000000000000000000AA\"\n \n SESSION_COOKIE_SECURE = False\n DATABASES = {\n", "issue": "Migrate from CAPTCHA to Turnstile\nTurnstile\r\n* is not from *Google*.\r\n* doesn't force users to help train Google's car AI.\r\n* easy (?) to [migrate to](https://developers.cloudflare.com/turnstile/get-started/migrating-from-recaptcha/).\r\n* solves the stupid issue with showing captchas so early they expire (Bc. a Turnstile token is valid for 300 seconds).\r\n\r\nCloses #2291 \n", "before_files": [{"content": "import os\n\nimport stripe\n\nfrom .base import INSTALLED_APPS, MIDDLEWARE\nfrom .rest_framework import REST_FRAMEWORK\n\nDEBUG = True\nDEVELOPMENT = True\n\nSERVER_URL = \"http://127.0.0.1:8000\"\nFRONTEND_URL = \"http://127.0.0.1:3000\"\nSERVER_EMAIL = \"Abakus <[email protected]>\"\nENVIRONMENT_NAME = \"development\"\n\nSECRET_KEY = \"secret\"\n\nstripe.api_key = os.environ.get(\"STRIPE_TEST_KEY\")\nSTRIPE_WEBHOOK_SECRET = os.environ.get(\"STRIPE_WEBHOOK_SECRET\")\nCAPTCHA_KEY = (\n os.environ.get(\"CAPTCHA_KEY\") or \"6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe\"\n)\n\nSESSION_COOKIE_SECURE = False\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"HOST\": \"127.0.0.1\",\n \"NAME\": \"lego\",\n \"USER\": \"lego\",\n \"PASSWORD\": \"\",\n \"PORT\": \"\",\n }\n}\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": \"redis://127.0.0.1:6379/0\",\n \"OPTIONS\": {\"CLIENT_CLASS\": \"django_redis.client.DefaultClient\"},\n }\n}\n\nEMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\nINTERNAL_IPS = [\"127.0.0.1\"]\nINSTALLED_APPS += [\"coverage\", \"debug_toolbar\"]\nMIDDLEWARE += [\"debug_toolbar.middleware.DebugToolbarMiddleware\"]\nDEBUG_TOOLBAR_PANELS = [\n \"debug_toolbar.panels.versions.VersionsPanel\",\n \"debug_toolbar.panels.timer.TimerPanel\",\n \"debug_toolbar.panels.settings.SettingsPanel\",\n \"debug_toolbar.panels.headers.HeadersPanel\",\n \"debug_toolbar.panels.request.RequestPanel\",\n \"debug_toolbar.panels.sql.SQLPanel\",\n \"debug_toolbar.panels.staticfiles.StaticFilesPanel\",\n \"debug_toolbar.panels.templates.TemplatesPanel\",\n \"debug_toolbar.panels.cache.CachePanel\",\n \"debug_toolbar.panels.signals.SignalsPanel\",\n \"debug_toolbar.panels.logging.LoggingPanel\",\n \"debug_toolbar.panels.redirects.RedirectsPanel\",\n]\n\nREST_FRAMEWORK[\"DEFAULT_RENDERER_CLASSES\"] += [ # type: ignore\n # \"rest_framework.renderers.BrowsableAPIRenderer\"\n \"djangorestframework_camel_case.render.CamelCaseBrowsableAPIRenderer\"\n]\n\nAWS_ACCESS_KEY_ID = \"lego-dev\"\nAWS_SECRET_ACCESS_KEY = \"lego-dev\"\nAWS_REGION = 
\"us-east-1\"\nAWS_S3_BUCKET = \"lego\"\nAWS_ENTRYPOINT = \"http://127.0.0.1:9000\"\n\nTHUMBOR_SERVER = \"http://127.0.0.1:10000\"\nTHUMBOR_SECURITY_KEY = \"lego-dev\"\n\nCELERY_BROKER_URL = \"redis://127.0.0.1\"\nCELERY_TASK_ALWAYS_EAGER = True\n\nELASTICSEARCH = \"127.0.0.1\"\nSEARCH_BACKEND = os.environ.get(\"SEARCH_BACKEND\", \"postgres\")\n\nLDAP_SERVER = \"127.0.0.1:389\"\nLDAP_USER = \"cn=admin,dc=abakus,dc=no\"\nLDAP_PASSWORD = \"admin\"\n\nCORS_ORIGIN_WHITELIST = list({\"http://127.0.0.1:3000\", \"http://localhost:3000\"})\n\nSEARCH_INDEX = \"lego-search\"\n", "path": "lego/settings/development.py"}, {"content": "import base64\nimport datetime\nimport json\nimport os\n\nimport environ\n\nroot = environ.Path(__file__) - 2\nBASE_DIR = root()\n\nALLOWED_HOSTS = [\"*\"]\nSHELL_PLUS = \"ipython\"\n\nINSTALLED_APPS = [\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.postgres\",\n \"django_extensions\",\n \"oauth2_provider\",\n \"rest_framework\",\n \"rest_framework_jwt\",\n \"rest_framework_jwt.blacklist\", # Not used, but needed to avoid db issues.\n \"corsheaders\",\n \"mptt\",\n \"channels\",\n \"django_filters\",\n \"push_notifications\",\n \"health_check\",\n \"phonenumber_field\",\n \"lego.utils\",\n \"lego.apps.action_handlers\",\n \"lego.apps.articles\",\n \"lego.apps.comments\",\n \"lego.apps.companies\",\n \"lego.apps.contact\",\n \"lego.apps.content\",\n \"lego.apps.email\",\n \"lego.apps.emojis\",\n \"lego.apps.events\",\n \"lego.apps.external_sync\",\n \"lego.apps.feeds\",\n \"lego.apps.files\",\n \"lego.apps.flatpages\",\n \"lego.apps.followers\",\n \"lego.apps.frontpage\",\n \"lego.apps.gallery\",\n \"lego.apps.healthchecks\",\n \"lego.apps.ical\",\n \"lego.apps.joblistings\",\n \"lego.apps.meetings\",\n \"lego.apps.notifications\",\n \"lego.apps.oauth\",\n \"lego.apps.permissions\",\n \"lego.apps.podcasts\",\n \"lego.apps.polls\",\n \"lego.apps.quotes\",\n \"lego.apps.reactions\",\n \"lego.apps.restricted\",\n \"lego.apps.search\",\n \"lego.apps.stats\",\n \"lego.apps.surveys\",\n \"lego.apps.tags\",\n \"lego.apps.users\",\n \"lego.apps.websockets\",\n]\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\nSESSION_ENGINE = \"django.contrib.sessions.backends.cached_db\"\nAUTH_USER_MODEL = \"users.User\"\nAUTHENTICATION_BACKENDS = (\"lego.apps.permissions.backends.LegoPermissionBackend\",)\nLOGIN_URL = \"/authorization/login/\"\nLOGOUT_URL = \"/authorization/logout/\"\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\"\n },\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\"},\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"lego.utils.middleware.cors.CORSPatchMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"lego.utils.middleware.logging.LoggingMiddleware\",\n]\n\nTEMPLATES = [\n {\n \"BACKEND\": 
\"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.contrib.auth.context_processors.auth\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.media\",\n \"django.template.context_processors.static\",\n \"django.template.context_processors.tz\",\n \"django.contrib.messages.context_processors.messages\",\n \"lego.utils.context_processors.site\",\n ]\n },\n }\n]\n\nJWT_AUTH = {\n \"JWT_RESPONSE_PAYLOAD_HANDLER\": \"lego.apps.jwt.handlers.response_handler\",\n # Tokens will expire after 14 days\n \"JWT_EXPIRATION_DELTA\": datetime.timedelta(days=14),\n # Allow refresh. Tokens can be refreshed for 180 days after initial login,\n # so users must login ~twice a year\n \"JWT_ALLOW_REFRESH\": True,\n \"JWT_REFRESH_EXPIRATION_DELTA\": datetime.timedelta(days=180),\n}\n\nOAUTH2_PROVIDER_APPLICATION_MODEL = \"oauth.APIApplication\"\nOAUTH2_PROVIDER_ACCESS_TOKEN_MODEL = \"oauth2_provider.AccessToken\"\nOAUTH2_PROVIDER_REFRESH_TOKEN_MODEL = \"oauth2_provider.RefreshToken\"\nOAUTH2_PROVIDER_ID_TOKEN_MODEL = \"oauth2_provider.IDToken\"\n# Tokens is valid for 7 days.\nOAUTH2_PROVIDER = {\n \"ACCESS_TOKEN_EXPIRE_SECONDS\": 86400 * 7,\n \"SCOPES\": {\n \"user\": (\n \"Enkel brukerinfo. Dette gir lesetilgang til ditt navn, \"\n \"brukernavn, profilbilde, epost, kj\u00f8nn og dine medlemskap\"\n ),\n \"all\": \"Gir tilgang til all brukerinfo. Kan ogs\u00e5 gj\u00f8re ting som deg\",\n },\n}\n\nROOT_URLCONF = \"lego.urls\"\n\nWSGI_APPLICATION = \"lego.wsgi.application\"\n\nSEARCH_BACKEND = \"postgres\"\n\nTIME_ZONE = \"UTC\"\nUSE_I18N = False\nUSE_L10N = False\nUSE_TZ = True\n\nSTATICFILES_STORAGE = \"django.contrib.staticfiles.storage.StaticFilesStorage\"\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n)\nSTATIC_ROOT = os.path.join(BASE_DIR, \"files\", \"static\")\nSTATIC_URL = \"/static/\"\nSTATICFILES_DIRS = ((\"assets\", os.path.join(BASE_DIR, \"assets\")),)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, \"files\", \"media\")\nMEDIA_URL = \"/media/\"\n\nASGI_APPLICATION = \"lego.apps.websockets.routing.application\"\nCHANNEL_LAYERS = {\n \"default\": {\n \"BACKEND\": \"channels_redis.core.RedisChannelLayer\",\n \"CONFIG\": {\"hosts\": [\"redis://127.0.0.1:6379\"]},\n }\n}\n\nLDAP_BASE_DN = \"dc=abakus,dc=no\"\n\nCAPTCHA_URL = \"https://www.google.com/recaptcha/api/siteverify\"\n\nPUSH_NOTIFICATIONS_SETTINGS = {\n \"APNS_USE_SANDBOX\": False,\n \"UPDATE_ON_DUPLICATE_REG_ID\": True,\n \"APNS_TOPIC\": \"no.abakus.abakus\",\n \"GCM_ERROR_TIMEOUT\": 30,\n \"FCM_ERROR_TIMEOUT\": 30,\n}\n\nGSUITE_DELEGATED_ACCOUNT = os.environ.get(\"GSUITE_DELEGATED_ACCOUNT\")\nSMTP_SSL_ENABLE = os.environ.get(\"SMTP_SSL_ENABLE\") or False\nSMTP_SSL_CERTIFICATE = os.environ.get(\"SMTP_SSL_CERTIFICATE\")\nSMTP_SSL_KEY = os.environ.get(\"SMTP_SSL_KEY\")\n\nif os.environ.get(\"GSUITE_CREDENTIALS\"):\n GSUITE_CREDENTIALS = json.loads(\n base64.b64decode(os.environ.get(\"GSUITE_CREDENTIALS\")), strict=False # type: ignore\n )\nelse:\n GSUITE_CREDENTIALS = None\n", "path": "lego/settings/base.py"}]} | 3,670 | 313 |
gh_patches_debug_41611 | rasdani/github-patches | git_diff | AnalogJ__lexicon-1252 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RFE: Replace use of `pkg_resources` with `importlib.metadata`
See discussions:
[astropy/astropy#11091](https://github.com/astropy/astropy/pull/11091)
[pypa/pip#7413](https://github.com/pypa/pip/issues/7413)
```console
[tkloczko@devel-g2v lexicon-3.11.0]$ grep -r pkg_resources
lexicon/discovery.py:import pkg_resources
lexicon/discovery.py: distribution = pkg_resources.get_distribution("dns-lexicon")
lexicon/discovery.py: except pkg_resources.DistributionNotFound:
lexicon/discovery.py: return pkg_resources.get_distribution("dns-lexicon").version
lexicon/discovery.py: except pkg_resources.DistributionNotFound:
lexicon/discovery.py: provider: str, distribution: pkg_resources.Distribution
lexicon/discovery.py: requirements: List[pkg_resources.Requirement] = distribution.requires(
lexicon/discovery.py: except pkg_resources.UnknownExtra:
lexicon/discovery.py: pkg_resources.get_distribution(requirement.name) # type: ignore
lexicon/discovery.py: pkg_resources.get_distribution(requirement)
lexicon/discovery.py: except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict):
t:| * 90642ca - (origin/fix-provider-disco-legacy, jamin/fix-provider-disco-legacy, giuse/fix-provider-disco-legacy) Support old versions of pkg_resources (2 years, 10 months ago) <Adrien Ferrand>
```
</issue>
<code>
[start of lexicon/client.py]
1 """Main module of Lexicon. Defines the Client class, that holds all Lexicon logic."""
2 import importlib
3 import logging
4 import os
5 from typing import Dict, List, Optional, Type, Union, cast
6
7 import tldextract # type: ignore
8
9 from lexicon import config as helper_config
10 from lexicon import discovery
11 from lexicon.exceptions import ProviderNotAvailableError
12 from lexicon.providers.base import Provider
13
14
15 class Client(object):
16 """This is the Lexicon client, that will execute all the logic."""
17
18 def __init__(
19 self, config: Optional[Union[helper_config.ConfigResolver, Dict]] = None
20 ):
21 if not config:
22 # If there is not config specified, we load a non-interactive configuration.
23 self.config = helper_config.non_interactive_config_resolver()
24 elif not isinstance(config, helper_config.ConfigResolver):
25 # If config is not a ConfigResolver, we are in a legacy situation.
26 # We protect this part of the Client API.
27 self.config = helper_config.legacy_config_resolver(config)
28 else:
29 self.config = config
30
31 # Validate configuration
32 self._validate_config()
33
34 runtime_config = {}
35
36 # Process domain, strip subdomain
37 try:
38 domain_extractor = tldextract.TLDExtract(
39 cache_dir=_get_tldextract_cache_path(), include_psl_private_domains=True
40 )
41 except TypeError:
42 domain_extractor = tldextract.TLDExtract(
43 cache_file=_get_tldextract_cache_path(), include_psl_private_domains=True # type: ignore
44 )
45 domain_parts = domain_extractor(
46 cast(str, self.config.resolve("lexicon:domain"))
47 )
48 runtime_config["domain"] = f"{domain_parts.domain}.{domain_parts.suffix}"
49
50 delegated = self.config.resolve("lexicon:delegated")
51 if delegated:
52 # handle delegated domain
53 delegated = str(delegated).rstrip(".")
54 initial_domain = str(runtime_config.get("domain"))
55 if delegated != initial_domain:
56 # convert to relative name
57 if delegated.endswith(initial_domain):
58 delegated = delegated[: -len(initial_domain)]
59 delegated = delegated.rstrip(".")
60 # update domain
61 runtime_config["domain"] = f"{delegated}.{initial_domain}"
62
63 self.action = self.config.resolve("lexicon:action")
64 self.provider_name = self.config.resolve(
65 "lexicon:provider_name"
66 ) or self.config.resolve("lexicon:provider")
67
68 if not self.provider_name:
69 raise ValueError("Could not resolve provider name.")
70
71 self.config.add_config_source(helper_config.DictConfigSource(runtime_config), 0)
72
73 provider_module = importlib.import_module(
74 "lexicon.providers." + self.provider_name
75 )
76 provider_class: Type[Provider] = getattr(provider_module, "Provider")
77 self.provider = provider_class(self.config)
78
79 def execute(self) -> Union[bool, List[Dict]]:
80 """Execute provided configuration in class constructor to the DNS records"""
81 self.provider.authenticate()
82 identifier = self.config.resolve("lexicon:identifier")
83 record_type = self.config.resolve("lexicon:type")
84 name = self.config.resolve("lexicon:name")
85 content = self.config.resolve("lexicon:content")
86
87 if self.action == "create":
88 if not record_type or not name or not content:
89 raise ValueError("Missing record_type, name or content parameters.")
90 return self.provider.create_record(record_type, name, content)
91
92 if self.action == "list":
93 return self.provider.list_records(record_type, name, content)
94
95 if self.action == "update":
96 return self.provider.update_record(identifier, record_type, name, content)
97
98 if self.action == "delete":
99 return self.provider.delete_record(identifier, record_type, name, content)
100
101 raise ValueError(f"Invalid action statement: {self.action}")
102
103 def _validate_config(self) -> None:
104 provider_name = self.config.resolve("lexicon:provider_name")
105 if not provider_name:
106 raise AttributeError("provider_name")
107
108 try:
109 available = discovery.find_providers()[provider_name]
110 except KeyError:
111 raise ProviderNotAvailableError(
112 f"This provider ({provider_name}) is not supported by Lexicon."
113 )
114 else:
115 if not available:
116 raise ProviderNotAvailableError(
117 f"This provider ({provider_name}) has required dependencies that are missing. "
118 f"Please install lexicon[{provider_name}] first."
119 )
120
121 if not self.config.resolve("lexicon:action"):
122 raise AttributeError("action")
123 if not self.config.resolve("lexicon:domain"):
124 raise AttributeError("domain")
125 if not self.config.resolve("lexicon:type"):
126 raise AttributeError("type")
127
128
129 def _get_tldextract_cache_path() -> str:
130 if os.environ.get("TLDEXTRACT_CACHE_FILE"):
131 logging.warning(
132 "TLD_EXTRACT_CACHE_FILE environment variable is deprecated, please use TLDEXTRACT_CACHE_PATH instead."
133 )
134 os.environ["TLDEXTRACT_CACHE_PATH"] = os.environ["TLDEXTRACT_CACHE_FILE"]
135
136 return os.path.expanduser(
137 os.environ.get("TLDEXTRACT_CACHE_PATH", os.path.join("~", ".lexicon_tld_set"))
138 )
139
[end of lexicon/client.py]
[start of lexicon/parser.py]
1 """Parsers definition for the Lexicon command-line interface"""
2 import argparse
3 import importlib
4 import os
5
6 from lexicon import discovery
7
8
9 def generate_base_provider_parser() -> argparse.ArgumentParser:
10 """Function that generates the base provider to be used by all dns providers."""
11 parser = argparse.ArgumentParser(add_help=False)
12 parser.add_argument(
13 "action",
14 help="specify the action to take",
15 default="list",
16 choices=["create", "list", "update", "delete"],
17 )
18 parser.add_argument(
19 "domain", help="specify the domain, supports subdomains as well"
20 )
21 parser.add_argument(
22 "type",
23 help="specify the entry type",
24 default="TXT",
25 choices=["A", "AAAA", "CNAME", "MX", "NS", "SOA", "TXT", "SRV", "LOC"],
26 )
27
28 parser.add_argument("--name", help="specify the record name")
29 parser.add_argument("--content", help="specify the record content")
30 parser.add_argument("--ttl", type=int, help="specify the record time-to-live")
31 parser.add_argument("--priority", help="specify the record priority")
32 parser.add_argument(
33 "--identifier", help="specify the record for update or delete actions"
34 )
35 parser.add_argument(
36 "--log_level",
37 help="specify the log level",
38 default="ERROR",
39 choices=["CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG", "NOTSET"],
40 )
41 parser.add_argument(
42 "--output",
43 help=(
44 "specify the type of output: by default a formatted table (TABLE), "
45 "a formatted table without header (TABLE-NO-HEADER), "
46 "a JSON string (JSON) or no output (QUIET)"
47 ),
48 default="TABLE",
49 choices=["TABLE", "TABLE-NO-HEADER", "JSON", "QUIET"],
50 )
51 return parser
52
53
54 def generate_cli_main_parser() -> argparse.ArgumentParser:
55 """Using all providers available, generate a parser that will be used by Lexicon CLI"""
56 parser = argparse.ArgumentParser(
57 description="Create, Update, Delete, List DNS entries"
58 )
59
60 parser.add_argument(
61 "--version",
62 help="show the current version of lexicon",
63 action="version",
64 version=f"%(prog)s {discovery.lexicon_version()}",
65 )
66 parser.add_argument("--delegated", help="specify the delegated domain")
67 parser.add_argument(
68 "--config-dir",
69 default=os.getcwd(),
70 help="specify the directory where to search lexicon.yml and "
71 "lexicon_[provider].yml configuration files "
72 "(default: current directory).",
73 )
74 subparsers = parser.add_subparsers(
75 dest="provider_name", help="specify the DNS provider to use"
76 )
77 subparsers.required = True
78
79 for provider, available in discovery.find_providers().items():
80 provider_module = importlib.import_module("lexicon.providers." + provider)
81 provider_parser = getattr(provider_module, "provider_parser")
82
83 subparser = subparsers.add_parser(
84 provider,
85 help=f"{provider} provider",
86 parents=[generate_base_provider_parser()],
87 )
88 provider_parser(subparser)
89
90 if not available:
91 subparser.epilog = (
92 "WARNING: some required dependencies for this provider are not "
93 f"installed. Please install lexicon[{provider}] first before using it."
94 )
95
96 return parser
97
[end of lexicon/parser.py]
[start of lexicon/discovery.py]
1 """
2 This module takes care of finding information about the runtime of Lexicon:
3 * what are the providers installed, and available
4 * what is the version of Lexicon
5 """
6 import pkgutil
7 from typing import Dict, List
8
9 import pkg_resources
10
11 from lexicon import providers
12
13
14 def find_providers() -> Dict[str, bool]:
15 """Find all providers registered in Lexicon, and their availability"""
16 providers_list = sorted(
17 {
18 modname
19 for (_, modname, _) in pkgutil.iter_modules(providers.__path__) # type: ignore
20 if modname != "base"
21 }
22 )
23
24 try:
25 distribution = pkg_resources.get_distribution("dns-lexicon")
26 except pkg_resources.DistributionNotFound:
27 return {provider: True for provider in providers_list}
28 else:
29 return {
30 provider: _resolve_requirements(provider, distribution)
31 for provider in providers_list
32 }
33
34
35 def lexicon_version() -> str:
36 """Retrieve current Lexicon version"""
37 try:
38 return pkg_resources.get_distribution("dns-lexicon").version
39 except pkg_resources.DistributionNotFound:
40 return "unknown"
41
42
43 def _resolve_requirements(
44 provider: str, distribution: pkg_resources.Distribution
45 ) -> bool:
46 try:
47 requirements: List[pkg_resources.Requirement] = distribution.requires(
48 extras=(provider,)
49 )
50 except pkg_resources.UnknownExtra:
51 # No extra for this provider
52 return True
53 else:
54 # Extra is defined
55 try:
56 for requirement in requirements:
57 if hasattr(requirement, "name"):
58 pkg_resources.get_distribution(requirement.name) # type: ignore
59 else:
60 pkg_resources.get_distribution(requirement)
61 except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict):
62 # At least one extra requirement is not fulfilled
63 return False
64
65 return True
66
[end of lexicon/discovery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lexicon/client.py b/lexicon/client.py
--- a/lexicon/client.py
+++ b/lexicon/client.py
@@ -114,8 +114,8 @@
else:
if not available:
raise ProviderNotAvailableError(
- f"This provider ({provider_name}) has required dependencies that are missing. "
- f"Please install lexicon[{provider_name}] first."
+ f"This provider ({provider_name}) has required extra dependencies that are missing. "
+ f"Please run `pip install lexicon[{provider_name}]` first before using it."
)
if not self.config.resolve("lexicon:action"):
diff --git a/lexicon/discovery.py b/lexicon/discovery.py
--- a/lexicon/discovery.py
+++ b/lexicon/discovery.py
@@ -4,9 +4,13 @@
* what is the version of Lexicon
"""
import pkgutil
-from typing import Dict, List
+import re
+from typing import Dict
-import pkg_resources
+try:
+ from importlib.metadata import Distribution, PackageNotFoundError
+except ModuleNotFoundError:
+ from importlib_metadata import Distribution, PackageNotFoundError # type: ignore[misc]
from lexicon import providers
@@ -22,8 +26,8 @@
)
try:
- distribution = pkg_resources.get_distribution("dns-lexicon")
- except pkg_resources.DistributionNotFound:
+ distribution = Distribution.from_name("dns-lexicon")
+ except PackageNotFoundError:
return {provider: True for provider in providers_list}
else:
return {
@@ -35,31 +39,32 @@
def lexicon_version() -> str:
"""Retrieve current Lexicon version"""
try:
- return pkg_resources.get_distribution("dns-lexicon").version
- except pkg_resources.DistributionNotFound:
+ return Distribution.from_name("dns-lexicon").version
+ except PackageNotFoundError:
return "unknown"
-def _resolve_requirements(
- provider: str, distribution: pkg_resources.Distribution
-) -> bool:
- try:
- requirements: List[pkg_resources.Requirement] = distribution.requires(
- extras=(provider,)
- )
- except pkg_resources.UnknownExtra:
+def _resolve_requirements(provider: str, distribution: Distribution) -> bool:
+ requires = distribution.requires
+ if requires is None:
+ raise ValueError("Error while trying finding requirements.")
+
+ requirements = [
+ re.sub(r"^(.*)\s\(.*\)(?:;.*|)$", r"\1", requirement)
+ for requirement in requires
+ if f'extra == "{provider}"' in requirement
+ ]
+
+ if not requirements:
# No extra for this provider
return True
- else:
- # Extra is defined
+
+ for requirement in requirements:
try:
- for requirement in requirements:
- if hasattr(requirement, "name"):
- pkg_resources.get_distribution(requirement.name) # type: ignore
- else:
- pkg_resources.get_distribution(requirement)
- except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict):
+ Distribution.from_name(requirement)
+ except PackageNotFoundError:
# At least one extra requirement is not fulfilled
return False
+ # All extra requirements are fulfilled
return True
diff --git a/lexicon/parser.py b/lexicon/parser.py
--- a/lexicon/parser.py
+++ b/lexicon/parser.py
@@ -90,7 +90,7 @@
if not available:
subparser.epilog = (
"WARNING: some required dependencies for this provider are not "
- f"installed. Please install lexicon[{provider}] first before using it."
+ f"installed. Please run `pip install lexicon[{provider}]` first before using it."
)
return parser
| {"golden_diff": "diff --git a/lexicon/client.py b/lexicon/client.py\n--- a/lexicon/client.py\n+++ b/lexicon/client.py\n@@ -114,8 +114,8 @@\n else:\n if not available:\n raise ProviderNotAvailableError(\n- f\"This provider ({provider_name}) has required dependencies that are missing. \"\n- f\"Please install lexicon[{provider_name}] first.\"\n+ f\"This provider ({provider_name}) has required extra dependencies that are missing. \"\n+ f\"Please run `pip install lexicon[{provider_name}]` first before using it.\"\n )\n \n if not self.config.resolve(\"lexicon:action\"):\ndiff --git a/lexicon/discovery.py b/lexicon/discovery.py\n--- a/lexicon/discovery.py\n+++ b/lexicon/discovery.py\n@@ -4,9 +4,13 @@\n * what is the version of Lexicon\n \"\"\"\n import pkgutil\n-from typing import Dict, List\n+import re\n+from typing import Dict\n \n-import pkg_resources\n+try:\n+ from importlib.metadata import Distribution, PackageNotFoundError\n+except ModuleNotFoundError:\n+ from importlib_metadata import Distribution, PackageNotFoundError # type: ignore[misc]\n \n from lexicon import providers\n \n@@ -22,8 +26,8 @@\n )\n \n try:\n- distribution = pkg_resources.get_distribution(\"dns-lexicon\")\n- except pkg_resources.DistributionNotFound:\n+ distribution = Distribution.from_name(\"dns-lexicon\")\n+ except PackageNotFoundError:\n return {provider: True for provider in providers_list}\n else:\n return {\n@@ -35,31 +39,32 @@\n def lexicon_version() -> str:\n \"\"\"Retrieve current Lexicon version\"\"\"\n try:\n- return pkg_resources.get_distribution(\"dns-lexicon\").version\n- except pkg_resources.DistributionNotFound:\n+ return Distribution.from_name(\"dns-lexicon\").version\n+ except PackageNotFoundError:\n return \"unknown\"\n \n \n-def _resolve_requirements(\n- provider: str, distribution: pkg_resources.Distribution\n-) -> bool:\n- try:\n- requirements: List[pkg_resources.Requirement] = distribution.requires(\n- extras=(provider,)\n- )\n- except pkg_resources.UnknownExtra:\n+def _resolve_requirements(provider: str, distribution: Distribution) -> bool:\n+ requires = distribution.requires\n+ if requires is None:\n+ raise ValueError(\"Error while trying finding requirements.\")\n+\n+ requirements = [\n+ re.sub(r\"^(.*)\\s\\(.*\\)(?:;.*|)$\", r\"\\1\", requirement)\n+ for requirement in requires\n+ if f'extra == \"{provider}\"' in requirement\n+ ]\n+\n+ if not requirements:\n # No extra for this provider\n return True\n- else:\n- # Extra is defined\n+\n+ for requirement in requirements:\n try:\n- for requirement in requirements:\n- if hasattr(requirement, \"name\"):\n- pkg_resources.get_distribution(requirement.name) # type: ignore\n- else:\n- pkg_resources.get_distribution(requirement)\n- except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict):\n+ Distribution.from_name(requirement)\n+ except PackageNotFoundError:\n # At least one extra requirement is not fulfilled\n return False\n \n+ # All extra requirements are fulfilled\n return True\ndiff --git a/lexicon/parser.py b/lexicon/parser.py\n--- a/lexicon/parser.py\n+++ b/lexicon/parser.py\n@@ -90,7 +90,7 @@\n if not available:\n subparser.epilog = (\n \"WARNING: some required dependencies for this provider are not \"\n- f\"installed. Please install lexicon[{provider}] first before using it.\"\n+ f\"installed. 
Please run `pip install lexicon[{provider}]` first before using it.\"\n )\n \n return parser\n", "issue": "RFE: Replace use of `pkg_resources` with `importlib.metadata`\nSee discussions:\r\n[astropy/astropy#11091](https://github.com/astropy/astropy/pull/11091)\r\n[pypa/pip#7413](https://github.com/pypa/pip/issues/7413)\r\n\r\n```console\r\n[tkloczko@devel-g2v lexicon-3.11.0]$ grep -r pkg_resources\r\nlexicon/discovery.py:import pkg_resources\r\nlexicon/discovery.py: distribution = pkg_resources.get_distribution(\"dns-lexicon\")\r\nlexicon/discovery.py: except pkg_resources.DistributionNotFound:\r\nlexicon/discovery.py: return pkg_resources.get_distribution(\"dns-lexicon\").version\r\nlexicon/discovery.py: except pkg_resources.DistributionNotFound:\r\nlexicon/discovery.py: provider: str, distribution: pkg_resources.Distribution\r\nlexicon/discovery.py: requirements: List[pkg_resources.Requirement] = distribution.requires(\r\nlexicon/discovery.py: except pkg_resources.UnknownExtra:\r\nlexicon/discovery.py: pkg_resources.get_distribution(requirement.name) # type: ignore\r\nlexicon/discovery.py: pkg_resources.get_distribution(requirement)\r\nlexicon/discovery.py: except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict):\r\nt:| * 90642ca - (origin/fix-provider-disco-legacy, jamin/fix-provider-disco-legacy, giuse/fix-provider-disco-legacy) Support old versions of pkg_resources (2 years, 10 months ago) <Adrien Ferrand>\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Main module of Lexicon. Defines the Client class, that holds all Lexicon logic.\"\"\"\nimport importlib\nimport logging\nimport os\nfrom typing import Dict, List, Optional, Type, Union, cast\n\nimport tldextract # type: ignore\n\nfrom lexicon import config as helper_config\nfrom lexicon import discovery\nfrom lexicon.exceptions import ProviderNotAvailableError\nfrom lexicon.providers.base import Provider\n\n\nclass Client(object):\n \"\"\"This is the Lexicon client, that will execute all the logic.\"\"\"\n\n def __init__(\n self, config: Optional[Union[helper_config.ConfigResolver, Dict]] = None\n ):\n if not config:\n # If there is not config specified, we load a non-interactive configuration.\n self.config = helper_config.non_interactive_config_resolver()\n elif not isinstance(config, helper_config.ConfigResolver):\n # If config is not a ConfigResolver, we are in a legacy situation.\n # We protect this part of the Client API.\n self.config = helper_config.legacy_config_resolver(config)\n else:\n self.config = config\n\n # Validate configuration\n self._validate_config()\n\n runtime_config = {}\n\n # Process domain, strip subdomain\n try:\n domain_extractor = tldextract.TLDExtract(\n cache_dir=_get_tldextract_cache_path(), include_psl_private_domains=True\n )\n except TypeError:\n domain_extractor = tldextract.TLDExtract(\n cache_file=_get_tldextract_cache_path(), include_psl_private_domains=True # type: ignore\n )\n domain_parts = domain_extractor(\n cast(str, self.config.resolve(\"lexicon:domain\"))\n )\n runtime_config[\"domain\"] = f\"{domain_parts.domain}.{domain_parts.suffix}\"\n\n delegated = self.config.resolve(\"lexicon:delegated\")\n if delegated:\n # handle delegated domain\n delegated = str(delegated).rstrip(\".\")\n initial_domain = str(runtime_config.get(\"domain\"))\n if delegated != initial_domain:\n # convert to relative name\n if delegated.endswith(initial_domain):\n delegated = delegated[: -len(initial_domain)]\n delegated = delegated.rstrip(\".\")\n # update domain\n runtime_config[\"domain\"] 
= f\"{delegated}.{initial_domain}\"\n\n self.action = self.config.resolve(\"lexicon:action\")\n self.provider_name = self.config.resolve(\n \"lexicon:provider_name\"\n ) or self.config.resolve(\"lexicon:provider\")\n\n if not self.provider_name:\n raise ValueError(\"Could not resolve provider name.\")\n\n self.config.add_config_source(helper_config.DictConfigSource(runtime_config), 0)\n\n provider_module = importlib.import_module(\n \"lexicon.providers.\" + self.provider_name\n )\n provider_class: Type[Provider] = getattr(provider_module, \"Provider\")\n self.provider = provider_class(self.config)\n\n def execute(self) -> Union[bool, List[Dict]]:\n \"\"\"Execute provided configuration in class constructor to the DNS records\"\"\"\n self.provider.authenticate()\n identifier = self.config.resolve(\"lexicon:identifier\")\n record_type = self.config.resolve(\"lexicon:type\")\n name = self.config.resolve(\"lexicon:name\")\n content = self.config.resolve(\"lexicon:content\")\n\n if self.action == \"create\":\n if not record_type or not name or not content:\n raise ValueError(\"Missing record_type, name or content parameters.\")\n return self.provider.create_record(record_type, name, content)\n\n if self.action == \"list\":\n return self.provider.list_records(record_type, name, content)\n\n if self.action == \"update\":\n return self.provider.update_record(identifier, record_type, name, content)\n\n if self.action == \"delete\":\n return self.provider.delete_record(identifier, record_type, name, content)\n\n raise ValueError(f\"Invalid action statement: {self.action}\")\n\n def _validate_config(self) -> None:\n provider_name = self.config.resolve(\"lexicon:provider_name\")\n if not provider_name:\n raise AttributeError(\"provider_name\")\n\n try:\n available = discovery.find_providers()[provider_name]\n except KeyError:\n raise ProviderNotAvailableError(\n f\"This provider ({provider_name}) is not supported by Lexicon.\"\n )\n else:\n if not available:\n raise ProviderNotAvailableError(\n f\"This provider ({provider_name}) has required dependencies that are missing. 
\"\n f\"Please install lexicon[{provider_name}] first.\"\n )\n\n if not self.config.resolve(\"lexicon:action\"):\n raise AttributeError(\"action\")\n if not self.config.resolve(\"lexicon:domain\"):\n raise AttributeError(\"domain\")\n if not self.config.resolve(\"lexicon:type\"):\n raise AttributeError(\"type\")\n\n\ndef _get_tldextract_cache_path() -> str:\n if os.environ.get(\"TLDEXTRACT_CACHE_FILE\"):\n logging.warning(\n \"TLD_EXTRACT_CACHE_FILE environment variable is deprecated, please use TLDEXTRACT_CACHE_PATH instead.\"\n )\n os.environ[\"TLDEXTRACT_CACHE_PATH\"] = os.environ[\"TLDEXTRACT_CACHE_FILE\"]\n\n return os.path.expanduser(\n os.environ.get(\"TLDEXTRACT_CACHE_PATH\", os.path.join(\"~\", \".lexicon_tld_set\"))\n )\n", "path": "lexicon/client.py"}, {"content": "\"\"\"Parsers definition for the Lexicon command-line interface\"\"\"\nimport argparse\nimport importlib\nimport os\n\nfrom lexicon import discovery\n\n\ndef generate_base_provider_parser() -> argparse.ArgumentParser:\n \"\"\"Function that generates the base provider to be used by all dns providers.\"\"\"\n parser = argparse.ArgumentParser(add_help=False)\n parser.add_argument(\n \"action\",\n help=\"specify the action to take\",\n default=\"list\",\n choices=[\"create\", \"list\", \"update\", \"delete\"],\n )\n parser.add_argument(\n \"domain\", help=\"specify the domain, supports subdomains as well\"\n )\n parser.add_argument(\n \"type\",\n help=\"specify the entry type\",\n default=\"TXT\",\n choices=[\"A\", \"AAAA\", \"CNAME\", \"MX\", \"NS\", \"SOA\", \"TXT\", \"SRV\", \"LOC\"],\n )\n\n parser.add_argument(\"--name\", help=\"specify the record name\")\n parser.add_argument(\"--content\", help=\"specify the record content\")\n parser.add_argument(\"--ttl\", type=int, help=\"specify the record time-to-live\")\n parser.add_argument(\"--priority\", help=\"specify the record priority\")\n parser.add_argument(\n \"--identifier\", help=\"specify the record for update or delete actions\"\n )\n parser.add_argument(\n \"--log_level\",\n help=\"specify the log level\",\n default=\"ERROR\",\n choices=[\"CRITICAL\", \"ERROR\", \"WARNING\", \"INFO\", \"DEBUG\", \"NOTSET\"],\n )\n parser.add_argument(\n \"--output\",\n help=(\n \"specify the type of output: by default a formatted table (TABLE), \"\n \"a formatted table without header (TABLE-NO-HEADER), \"\n \"a JSON string (JSON) or no output (QUIET)\"\n ),\n default=\"TABLE\",\n choices=[\"TABLE\", \"TABLE-NO-HEADER\", \"JSON\", \"QUIET\"],\n )\n return parser\n\n\ndef generate_cli_main_parser() -> argparse.ArgumentParser:\n \"\"\"Using all providers available, generate a parser that will be used by Lexicon CLI\"\"\"\n parser = argparse.ArgumentParser(\n description=\"Create, Update, Delete, List DNS entries\"\n )\n\n parser.add_argument(\n \"--version\",\n help=\"show the current version of lexicon\",\n action=\"version\",\n version=f\"%(prog)s {discovery.lexicon_version()}\",\n )\n parser.add_argument(\"--delegated\", help=\"specify the delegated domain\")\n parser.add_argument(\n \"--config-dir\",\n default=os.getcwd(),\n help=\"specify the directory where to search lexicon.yml and \"\n \"lexicon_[provider].yml configuration files \"\n \"(default: current directory).\",\n )\n subparsers = parser.add_subparsers(\n dest=\"provider_name\", help=\"specify the DNS provider to use\"\n )\n subparsers.required = True\n\n for provider, available in discovery.find_providers().items():\n provider_module = importlib.import_module(\"lexicon.providers.\" + provider)\n provider_parser = 
getattr(provider_module, \"provider_parser\")\n\n subparser = subparsers.add_parser(\n provider,\n help=f\"{provider} provider\",\n parents=[generate_base_provider_parser()],\n )\n provider_parser(subparser)\n\n if not available:\n subparser.epilog = (\n \"WARNING: some required dependencies for this provider are not \"\n f\"installed. Please install lexicon[{provider}] first before using it.\"\n )\n\n return parser\n", "path": "lexicon/parser.py"}, {"content": "\"\"\"\nThis module takes care of finding information about the runtime of Lexicon:\n* what are the providers installed, and available\n* what is the version of Lexicon\n\"\"\"\nimport pkgutil\nfrom typing import Dict, List\n\nimport pkg_resources\n\nfrom lexicon import providers\n\n\ndef find_providers() -> Dict[str, bool]:\n \"\"\"Find all providers registered in Lexicon, and their availability\"\"\"\n providers_list = sorted(\n {\n modname\n for (_, modname, _) in pkgutil.iter_modules(providers.__path__) # type: ignore\n if modname != \"base\"\n }\n )\n\n try:\n distribution = pkg_resources.get_distribution(\"dns-lexicon\")\n except pkg_resources.DistributionNotFound:\n return {provider: True for provider in providers_list}\n else:\n return {\n provider: _resolve_requirements(provider, distribution)\n for provider in providers_list\n }\n\n\ndef lexicon_version() -> str:\n \"\"\"Retrieve current Lexicon version\"\"\"\n try:\n return pkg_resources.get_distribution(\"dns-lexicon\").version\n except pkg_resources.DistributionNotFound:\n return \"unknown\"\n\n\ndef _resolve_requirements(\n provider: str, distribution: pkg_resources.Distribution\n) -> bool:\n try:\n requirements: List[pkg_resources.Requirement] = distribution.requires(\n extras=(provider,)\n )\n except pkg_resources.UnknownExtra:\n # No extra for this provider\n return True\n else:\n # Extra is defined\n try:\n for requirement in requirements:\n if hasattr(requirement, \"name\"):\n pkg_resources.get_distribution(requirement.name) # type: ignore\n else:\n pkg_resources.get_distribution(requirement)\n except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict):\n # At least one extra requirement is not fulfilled\n return False\n\n return True\n", "path": "lexicon/discovery.py"}]} | 3,783 | 850 |
gh_patches_debug_34753 | rasdani/github-patches | git_diff | ESMCI__cime-2956 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
case_submit.py has an inaccurate test for continuing a run
The check in case_submit.py for making sure CONTINUE_RUN makes sense is not really valid: there's a line that uses successful completion of a previous run as the criterion instead of checking for valid restart pointer files. That section of the script looks like:
# Check if CONTINUE_RUN value makes sense
if job != "case.test" and case.get_value("CONTINUE_RUN"):
rundir = case.get_value("RUNDIR")
caseroot = case.get_value("CASEROOT")
expect(os.path.isdir(rundir),
"CONTINUE_RUN is true but RUNDIR {} does not exist".format(rundir))
expect(len(glob.glob(os.path.join(rundir, "*.nc"))) > 0,
"CONTINUE_RUN is true but this case does not appear to have been run before (no .nc files in RUNDIR)")
expect(does_file_have_string(os.path.join(caseroot, "CaseStatus"), "case.run {}".format(CASE_SUCCESS)),
"CONTINUE_RUN is true but this case does not appear to have ever run successfully")
The last line should either be modified to actually check the requirements of a continued run, or be removed.
Note that we often start production runs that are meant to go many years, dropping sets of restart files along the way. If such a run ever times out, this check fails and makes it difficult to restart, even though valid sets of restart files exist.
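A sketch of what a restart-aware check could look like, reusing the same `expect`/`case.get_value` helpers (the `rpointer.drv` file name and its one-filename-per-line format are assumptions about the driver's restart pointer convention, not a confirmed design):

    # Hypothetical replacement: require a staged restart pointer file instead of
    # a previously successful "case.run" entry in CaseStatus.
    if job != "case.test" and case.get_value("CONTINUE_RUN"):
        rundir = case.get_value("RUNDIR")
        casename = case.get_value("CASE")
        expect(os.path.isdir(rundir),
               "CONTINUE_RUN is true but RUNDIR {} does not exist".format(rundir))
        rpointer = os.path.join(rundir, "rpointer.drv")
        expect(os.path.exists(rpointer),
               "CONTINUE_RUN is true but no restart pointer file is staged in {}".format(rundir))
        with open(rpointer) as fd:
            ncfile = fd.readline().strip()
        expect(ncfile.startswith(casename) and os.path.exists(os.path.join(rundir, ncfile)),
               "Restart file {} is missing or does not belong to case {}".format(ncfile, casename))

A check like this keys off the files a restart actually needs, so a long production run that timed out but left valid restart sets behind would still be allowed to continue.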
</issue>
<code>
[start of scripts/lib/CIME/case/case_submit.py]
1 #!/usr/bin/env python
2
3 """
4 case.submit - Submit a cesm workflow to the queueing system or run it
5 if there is no queueing system. A cesm workflow may include multiple
6 jobs.
7 submit, check_case and check_da_settings are members of class Case in file case.py
8 """
9 from six.moves import configparser
10 from CIME.XML.standard_module_setup import *
11 from CIME.utils import expect, run_and_log_case_status, verbatim_success_msg, CASE_SUCCESS, does_file_have_string, CIMEError
12 from CIME.locked_files import unlock_file, lock_file
13 from CIME.test_status import *
14
15 import socket, glob
16
17 logger = logging.getLogger(__name__)
18
19 def _build_prereq_str(case, prev_job_ids):
20 delimiter = case.get_value("depend_separator")
21 prereq_str = ""
22 for job_id in prev_job_ids.values():
23 prereq_str += str(job_id) + delimiter
24 return prereq_str[:-1]
25
26 def _submit(case, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,
27 resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,
28 batch_args=None):
29 if job is None:
30 job = case.get_primary_job()
31
32 # Check if CONTINUE_RUN value makes sense
33 if job != "case.test" and case.get_value("CONTINUE_RUN"):
34 rundir = case.get_value("RUNDIR")
35 caseroot = case.get_value("CASEROOT")
36 expect(os.path.isdir(rundir),
37 "CONTINUE_RUN is true but RUNDIR {} does not exist".format(rundir))
38 expect(len(glob.glob(os.path.join(rundir, "*.nc"))) > 0,
39 "CONTINUE_RUN is true but this case does not appear to have been run before (no .nc files in RUNDIR)")
40 expect(does_file_have_string(os.path.join(caseroot, "CaseStatus"), "case.run {}".format(CASE_SUCCESS)),
41 "CONTINUE_RUN is true but this case does not appear to have ever run successfully")
42
43 # if case.submit is called with the no_batch flag then we assume that this
44 # flag will stay in effect for the duration of the RESUBMITs
45 env_batch = case.get_env("batch")
46 if resubmit:
47 if env_batch.get_batch_system_type() == "none":
48 no_batch = True
49
50 # This is a resubmission, do not reinitialize test values
51 if job == "case.test":
52 case.set_value("IS_FIRST_RUN", False)
53
54 resub = case.get_value("RESUBMIT")
55 logger.info("Submitting job '{}', resubmit={:d}".format(job, resub))
56 case.set_value("RESUBMIT", resub-1)
57 if case.get_value("RESUBMIT_SETS_CONTINUE_RUN"):
58 case.set_value("CONTINUE_RUN", True)
59
60 else:
61 if job == "case.test":
62 case.set_value("IS_FIRST_RUN", True)
63
64 if no_batch:
65 batch_system = "none"
66 else:
67 batch_system = env_batch.get_batch_system_type()
68
69 case.set_value("BATCH_SYSTEM", batch_system)
70
71 env_batch_has_changed = False
72 try:
73 case.check_lockedfile(os.path.basename(env_batch.filename))
74 except CIMEError:
75 env_batch_has_changed = True
76
77 if env_batch.get_batch_system_type() != "none" and env_batch_has_changed:
78 # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
79 logger.warning(\
80 """
81 env_batch.xml appears to have changed, regenerating batch scripts
82 manual edits to these file will be lost!
83 """)
84 env_batch.make_all_batch_files(case)
85
86 unlock_file(os.path.basename(env_batch.filename))
87 lock_file(os.path.basename(env_batch.filename))
88
89 if job == case.get_primary_job():
90 case.check_case()
91 case.check_DA_settings()
92 if case.get_value("MACH") == "mira":
93 with open(".original_host", "w") as fd:
94 fd.write( socket.gethostname())
95
96 #Load Modules
97 case.load_env()
98
99 case.flush()
100
101 logger.warning("submit_jobs {}".format(job))
102 job_ids = case.submit_jobs(no_batch=no_batch, job=job, prereq=prereq,
103 skip_pnl=skip_pnl, resubmit_immediate=resubmit_immediate,
104 allow_fail=allow_fail, mail_user=mail_user,
105 mail_type=mail_type, batch_args=batch_args)
106
107 xml_jobids = []
108 for jobname, jobid in job_ids.items():
109 logger.info("Submitted job {} with id {}".format(jobname, jobid))
110 if jobid:
111 xml_jobids.append("{}:{}".format(jobname, jobid))
112
113 xml_jobid_text = ", ".join(xml_jobids)
114 if xml_jobid_text:
115 case.set_value("JOB_IDS", xml_jobid_text)
116
117 return xml_jobid_text
118
119 def submit(self, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,
120 resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,
121 batch_args=None):
122 if resubmit_immediate and self.get_value("MACH") in ['mira', 'cetus']:
123 logger.warning("resubmit_immediate does not work on Mira/Cetus, submitting normally")
124 resubmit_immediate = False
125
126 caseroot = self.get_value("CASEROOT")
127 if self.get_value("TEST"):
128 casebaseid = self.get_value("CASEBASEID")
129 # This should take care of the race condition where the submitted job
130 # begins immediately and tries to set RUN phase. We proactively assume
131 # a passed SUBMIT phase. If this state is already PASS, don't set it again
132 # because then we'll lose RUN phase info if it's there. This info is important
133 # for system_tests_common to know if it needs to reinitialize the test or not.
134 with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
135 phase_status = ts.get_status(SUBMIT_PHASE)
136 if phase_status != TEST_PASS_STATUS:
137 ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)
138
139 # If this is a resubmit check the hidden file .submit_options for
140 # any submit options used on the original submit and use them again
141 submit_options = os.path.join(caseroot, ".submit_options")
142 if resubmit and os.path.exists(submit_options):
143 config = configparser.SafeConfigParser()
144 config.read(submit_options)
145 if not skip_pnl and config.has_option('SubmitOptions','skip_pnl'):
146 skip_pnl = config.getboolean('SubmitOptions', 'skip_pnl')
147 if mail_user is None and config.has_option('SubmitOptions', 'mail_user'):
148 mail_user = config.get('SubmitOptions', 'mail_user')
149 if mail_type is None and config.has_option('SubmitOptions', 'mail_type'):
150 mail_type = str(config.get('SubmitOptions', 'mail_type')).split(',')
151 if batch_args is None and config.has_option('SubmitOptions', 'batch_args'):
152 batch_args = config.get('SubmitOptions', 'batch_args')
153
154 try:
155 functor = lambda: _submit(self, job=job, no_batch=no_batch, prereq=prereq,
156 allow_fail=allow_fail, resubmit=resubmit,
157 resubmit_immediate=resubmit_immediate, skip_pnl=skip_pnl,
158 mail_user=mail_user, mail_type=mail_type,
159 batch_args=batch_args)
160 run_and_log_case_status(functor, "case.submit", caseroot=caseroot,
161 custom_success_msg_functor=verbatim_success_msg)
162 except BaseException: # Want to catch KeyboardInterrupt too
163 # If something failed in the batch system, make sure to mark
164 # the test as failed if we are running a test.
165 if self.get_value("TEST"):
166 with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
167 ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)
168
169 raise
170
171 def check_case(self):
172 self.check_lockedfiles()
173 self.create_namelists() # Must be called before check_all_input_data
174 logger.info("Checking that inputdata is available as part of case submission")
175 self.check_all_input_data()
176
177 if self.get_value('COMP_WAV') == 'ww':
178 # the ww3 buildnml has dependancies on inputdata so we must run it again
179 self.create_namelists(component='WAV')
180
181
182 expect(self.get_value("BUILD_COMPLETE"), "Build complete is "
183 "not True please rebuild the model by calling case.build")
184 logger.info("Check case OK")
185
186 def check_DA_settings(self):
187 script = self.get_value("DATA_ASSIMILATION_SCRIPT")
188 cycles = self.get_value("DATA_ASSIMILATION_CYCLES")
189 if len(script) > 0 and os.path.isfile(script) and cycles > 0:
190 logger.info("Data Assimilation enabled using script {} with {:d} cycles".format(script,
191 cycles))
192
[end of scripts/lib/CIME/case/case_submit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/lib/CIME/case/case_submit.py b/scripts/lib/CIME/case/case_submit.py
--- a/scripts/lib/CIME/case/case_submit.py
+++ b/scripts/lib/CIME/case/case_submit.py
@@ -8,11 +8,11 @@
"""
from six.moves import configparser
from CIME.XML.standard_module_setup import *
-from CIME.utils import expect, run_and_log_case_status, verbatim_success_msg, CASE_SUCCESS, does_file_have_string, CIMEError
+from CIME.utils import expect, run_and_log_case_status, verbatim_success_msg, CIMEError
from CIME.locked_files import unlock_file, lock_file
from CIME.test_status import *
-import socket, glob
+import socket
logger = logging.getLogger(__name__)
@@ -32,13 +32,18 @@
# Check if CONTINUE_RUN value makes sense
if job != "case.test" and case.get_value("CONTINUE_RUN"):
rundir = case.get_value("RUNDIR")
- caseroot = case.get_value("CASEROOT")
expect(os.path.isdir(rundir),
"CONTINUE_RUN is true but RUNDIR {} does not exist".format(rundir))
- expect(len(glob.glob(os.path.join(rundir, "*.nc"))) > 0,
- "CONTINUE_RUN is true but this case does not appear to have been run before (no .nc files in RUNDIR)")
- expect(does_file_have_string(os.path.join(caseroot, "CaseStatus"), "case.run {}".format(CASE_SUCCESS)),
- "CONTINUE_RUN is true but this case does not appear to have ever run successfully")
+ expect(os.path.exists(os.path.join(rundir,"rpointer.drv")),
+ "CONTINUE_RUN is true but this case does not appear to have restart files staged in {}".format(rundir))
+ # Finally we open the rpointer.drv file and check that it's correct
+ casename = case.get_value("CASE")
+ with open(os.path.join(rundir,"rpointer.drv"), "r") as fd:
+ ncfile = fd.readline().strip()
+ expect(ncfile.startswith(casename) and
+ os.path.exists(os.path.join(rundir,ncfile)),
+ "File {ncfile} not present or does not match case {casename}".
+ format(ncfile=os.path.join(rundir,ncfile),casename=casename))
# if case.submit is called with the no_batch flag then we assume that this
# flag will stay in effect for the duration of the RESUBMITs
| {"golden_diff": "diff --git a/scripts/lib/CIME/case/case_submit.py b/scripts/lib/CIME/case/case_submit.py\n--- a/scripts/lib/CIME/case/case_submit.py\n+++ b/scripts/lib/CIME/case/case_submit.py\n@@ -8,11 +8,11 @@\n \"\"\"\n from six.moves import configparser\n from CIME.XML.standard_module_setup import *\n-from CIME.utils import expect, run_and_log_case_status, verbatim_success_msg, CASE_SUCCESS, does_file_have_string, CIMEError\n+from CIME.utils import expect, run_and_log_case_status, verbatim_success_msg, CIMEError\n from CIME.locked_files import unlock_file, lock_file\n from CIME.test_status import *\n \n-import socket, glob\n+import socket\n \n logger = logging.getLogger(__name__)\n \n@@ -32,13 +32,18 @@\n # Check if CONTINUE_RUN value makes sense\n if job != \"case.test\" and case.get_value(\"CONTINUE_RUN\"):\n rundir = case.get_value(\"RUNDIR\")\n- caseroot = case.get_value(\"CASEROOT\")\n expect(os.path.isdir(rundir),\n \"CONTINUE_RUN is true but RUNDIR {} does not exist\".format(rundir))\n- expect(len(glob.glob(os.path.join(rundir, \"*.nc\"))) > 0,\n- \"CONTINUE_RUN is true but this case does not appear to have been run before (no .nc files in RUNDIR)\")\n- expect(does_file_have_string(os.path.join(caseroot, \"CaseStatus\"), \"case.run {}\".format(CASE_SUCCESS)),\n- \"CONTINUE_RUN is true but this case does not appear to have ever run successfully\")\n+ expect(os.path.exists(os.path.join(rundir,\"rpointer.drv\")),\n+ \"CONTINUE_RUN is true but this case does not appear to have restart files staged in {}\".format(rundir))\n+ # Finally we open the rpointer.drv file and check that it's correct\n+ casename = case.get_value(\"CASE\")\n+ with open(os.path.join(rundir,\"rpointer.drv\"), \"r\") as fd:\n+ ncfile = fd.readline().strip()\n+ expect(ncfile.startswith(casename) and\n+ os.path.exists(os.path.join(rundir,ncfile)),\n+ \"File {ncfile} not present or does not match case {casename}\".\n+ format(ncfile=os.path.join(rundir,ncfile),casename=casename))\n \n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n", "issue": "case_submit.py has an inaccurate test for continuing a run\nThe test for making sure continue_run makes sense in case_submit.py is not really valid. There's a line that uses successful completion of a previous run as the test instead of checking for valid pointer files. That section of the script looks like:\r\n\r\n # Check if CONTINUE_RUN value makes sense\r\n if job != \"case.test\" and case.get_value(\"CONTINUE_RUN\"):\r\n rundir = case.get_value(\"RUNDIR\")\r\n caseroot = case.get_value(\"CASEROOT\")\r\n expect(os.path.isdir(rundir),\r\n \"CONTINUE_RUN is true but RUNDIR {} does not exist\".format(rundir))\r\n expect(len(glob.glob(os.path.join(rundir, \"*.nc\"))) > 0,\r\n \"CONTINUE_RUN is true but this case does not appear to have been run before (no .nc files in RUNDIR)\")\r\n expect(does_file_have_string(os.path.join(caseroot, \"CaseStatus\"), \"case.run {}\".format(CASE_SUCCESS)),\r\n \"CONTINUE_RUN is true but this case does not appear to have ever run successfully\")\r\nThe last line either should be modified to really check for the requirements of a continued run or removed.\r\n\r\nNote that we often start production runs to go many years and drop sets of restart files along the way. 
If they ever time out, this test fails and makes it difficult to restart, even thought there are valid sets of restart files.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\ncase.submit - Submit a cesm workflow to the queueing system or run it\nif there is no queueing system. A cesm workflow may include multiple\njobs.\nsubmit, check_case and check_da_settings are members of class Case in file case.py\n\"\"\"\nfrom six.moves import configparser\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_and_log_case_status, verbatim_success_msg, CASE_SUCCESS, does_file_have_string, CIMEError\nfrom CIME.locked_files import unlock_file, lock_file\nfrom CIME.test_status import *\n\nimport socket, glob\n\nlogger = logging.getLogger(__name__)\n\ndef _build_prereq_str(case, prev_job_ids):\n delimiter = case.get_value(\"depend_separator\")\n prereq_str = \"\"\n for job_id in prev_job_ids.values():\n prereq_str += str(job_id) + delimiter\n return prereq_str[:-1]\n\ndef _submit(case, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,\n resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,\n batch_args=None):\n if job is None:\n job = case.get_primary_job()\n\n # Check if CONTINUE_RUN value makes sense\n if job != \"case.test\" and case.get_value(\"CONTINUE_RUN\"):\n rundir = case.get_value(\"RUNDIR\")\n caseroot = case.get_value(\"CASEROOT\")\n expect(os.path.isdir(rundir),\n \"CONTINUE_RUN is true but RUNDIR {} does not exist\".format(rundir))\n expect(len(glob.glob(os.path.join(rundir, \"*.nc\"))) > 0,\n \"CONTINUE_RUN is true but this case does not appear to have been run before (no .nc files in RUNDIR)\")\n expect(does_file_have_string(os.path.join(caseroot, \"CaseStatus\"), \"case.run {}\".format(CASE_SUCCESS)),\n \"CONTINUE_RUN is true but this case does not appear to have ever run successfully\")\n\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n if resubmit:\n if env_batch.get_batch_system_type() == \"none\":\n no_batch = True\n\n # This is a resubmission, do not reinitialize test values\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", False)\n\n resub = case.get_value(\"RESUBMIT\")\n logger.info(\"Submitting job '{}', resubmit={:d}\".format(job, resub))\n case.set_value(\"RESUBMIT\", resub-1)\n if case.get_value(\"RESUBMIT_SETS_CONTINUE_RUN\"):\n case.set_value(\"CONTINUE_RUN\", True)\n\n else:\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", True)\n\n if no_batch:\n batch_system = \"none\"\n else:\n batch_system = env_batch.get_batch_system_type()\n\n case.set_value(\"BATCH_SYSTEM\", batch_system)\n\n env_batch_has_changed = False\n try:\n case.check_lockedfile(os.path.basename(env_batch.filename))\n except CIMEError:\n env_batch_has_changed = True\n\n if env_batch.get_batch_system_type() != \"none\" and env_batch_has_changed:\n # May need to regen batch files if user made batch setting changes (e.g. 
walltime, queue, etc)\n logger.warning(\\\n\"\"\"\nenv_batch.xml appears to have changed, regenerating batch scripts\nmanual edits to these file will be lost!\n\"\"\")\n env_batch.make_all_batch_files(case)\n\n unlock_file(os.path.basename(env_batch.filename))\n lock_file(os.path.basename(env_batch.filename))\n\n if job == case.get_primary_job():\n case.check_case()\n case.check_DA_settings()\n if case.get_value(\"MACH\") == \"mira\":\n with open(\".original_host\", \"w\") as fd:\n fd.write( socket.gethostname())\n\n #Load Modules\n case.load_env()\n\n case.flush()\n\n logger.warning(\"submit_jobs {}\".format(job))\n job_ids = case.submit_jobs(no_batch=no_batch, job=job, prereq=prereq,\n skip_pnl=skip_pnl, resubmit_immediate=resubmit_immediate,\n allow_fail=allow_fail, mail_user=mail_user,\n mail_type=mail_type, batch_args=batch_args)\n\n xml_jobids = []\n for jobname, jobid in job_ids.items():\n logger.info(\"Submitted job {} with id {}\".format(jobname, jobid))\n if jobid:\n xml_jobids.append(\"{}:{}\".format(jobname, jobid))\n\n xml_jobid_text = \", \".join(xml_jobids)\n if xml_jobid_text:\n case.set_value(\"JOB_IDS\", xml_jobid_text)\n\n return xml_jobid_text\n\ndef submit(self, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,\n resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,\n batch_args=None):\n if resubmit_immediate and self.get_value(\"MACH\") in ['mira', 'cetus']:\n logger.warning(\"resubmit_immediate does not work on Mira/Cetus, submitting normally\")\n resubmit_immediate = False\n\n caseroot = self.get_value(\"CASEROOT\")\n if self.get_value(\"TEST\"):\n casebaseid = self.get_value(\"CASEBASEID\")\n # This should take care of the race condition where the submitted job\n # begins immediately and tries to set RUN phase. We proactively assume\n # a passed SUBMIT phase. If this state is already PASS, don't set it again\n # because then we'll lose RUN phase info if it's there. 
This info is important\n # for system_tests_common to know if it needs to reinitialize the test or not.\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n phase_status = ts.get_status(SUBMIT_PHASE)\n if phase_status != TEST_PASS_STATUS:\n ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)\n\n # If this is a resubmit check the hidden file .submit_options for\n # any submit options used on the original submit and use them again\n submit_options = os.path.join(caseroot, \".submit_options\")\n if resubmit and os.path.exists(submit_options):\n config = configparser.SafeConfigParser()\n config.read(submit_options)\n if not skip_pnl and config.has_option('SubmitOptions','skip_pnl'):\n skip_pnl = config.getboolean('SubmitOptions', 'skip_pnl')\n if mail_user is None and config.has_option('SubmitOptions', 'mail_user'):\n mail_user = config.get('SubmitOptions', 'mail_user')\n if mail_type is None and config.has_option('SubmitOptions', 'mail_type'):\n mail_type = str(config.get('SubmitOptions', 'mail_type')).split(',')\n if batch_args is None and config.has_option('SubmitOptions', 'batch_args'):\n batch_args = config.get('SubmitOptions', 'batch_args')\n\n try:\n functor = lambda: _submit(self, job=job, no_batch=no_batch, prereq=prereq,\n allow_fail=allow_fail, resubmit=resubmit,\n resubmit_immediate=resubmit_immediate, skip_pnl=skip_pnl,\n mail_user=mail_user, mail_type=mail_type,\n batch_args=batch_args)\n run_and_log_case_status(functor, \"case.submit\", caseroot=caseroot,\n custom_success_msg_functor=verbatim_success_msg)\n except BaseException: # Want to catch KeyboardInterrupt too\n # If something failed in the batch system, make sure to mark\n # the test as failed if we are running a test.\n if self.get_value(\"TEST\"):\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)\n\n raise\n\ndef check_case(self):\n self.check_lockedfiles()\n self.create_namelists() # Must be called before check_all_input_data\n logger.info(\"Checking that inputdata is available as part of case submission\")\n self.check_all_input_data()\n\n if self.get_value('COMP_WAV') == 'ww':\n # the ww3 buildnml has dependancies on inputdata so we must run it again\n self.create_namelists(component='WAV')\n\n\n expect(self.get_value(\"BUILD_COMPLETE\"), \"Build complete is \"\n \"not True please rebuild the model by calling case.build\")\n logger.info(\"Check case OK\")\n\ndef check_DA_settings(self):\n script = self.get_value(\"DATA_ASSIMILATION_SCRIPT\")\n cycles = self.get_value(\"DATA_ASSIMILATION_CYCLES\")\n if len(script) > 0 and os.path.isfile(script) and cycles > 0:\n logger.info(\"Data Assimilation enabled using script {} with {:d} cycles\".format(script,\n cycles))\n", "path": "scripts/lib/CIME/case/case_submit.py"}]} | 3,312 | 577 |
gh_patches_debug_17065 | rasdani/github-patches | git_diff | feast-dev__feast-2031 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in requirement definition
After installing feast in a conda environment, exporting the environment to a .yml fails. This is probably related to a missing comma at the end of this line:
https://github.com/feast-dev/feast/blob/63680bad344f55e41a281d643bef5972f2ea28da/sdk/python/setup.py#L50
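If that is indeed the cause, the fix would be a single trailing comma; without it, Python's implicit string concatenation silently merges the two adjacent list entries into one invalid requirement string (hypothetical minimal excerpt of the REQUIRED list in setup.py):

    REQUIRED = [
        # ...
        "grpcio>=1.34.0",
        "grpcio-reflection>=1.34.0",  # trailing comma added; previously this string
        "Jinja2>=2.0.0",              # and the next one were concatenated into
        # ...                         # "grpcio-reflection>=1.34.0Jinja2>=2.0.0"
    ]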
</issue>
<code>
[start of sdk/python/setup.py]
1 # Copyright 2019 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import glob
15 import os
16 import re
17 import shutil
18 import subprocess
19 import pathlib
20
21 from distutils.cmd import Command
22 from setuptools import find_packages
23
24 try:
25 from setuptools import setup
26 from setuptools.command.install import install
27 from setuptools.command.develop import develop
28 from setuptools.command.egg_info import egg_info
29 from setuptools.command.sdist import sdist
30 from setuptools.command.build_py import build_py
31 except ImportError:
32 from distutils.core import setup
33 from distutils.command.install import install
34 from distutils.command.build_py import build_py
35
36 NAME = "feast"
37 DESCRIPTION = "Python SDK for Feast"
38 URL = "https://github.com/feast-dev/feast"
39 AUTHOR = "Feast"
40 REQUIRES_PYTHON = ">=3.7.0"
41
42 REQUIRED = [
43 "Click==7.*",
44 "colorama>=0.3.9",
45 "dill==0.3.*",
46 "fastavro>=1.1.0",
47 "google-api-core>=1.23.0",
48 "googleapis-common-protos==1.52.*",
49 "grpcio>=1.34.0",
50 "grpcio-reflection>=1.34.0"
51 "Jinja2>=2.0.0",
52 "jsonschema",
53 "mmh3",
54 "pandas>=1.0.0",
55 "pandavro==1.5.*",
56 "protobuf>=3.10",
57 "proto-plus",
58 "pyarrow>=4.0.0",
59 "pydantic>=1.0.0",
60 "PyYAML>=5.4.*",
61 "tabulate==0.8.*",
62 "tenacity>=7.*",
63 "toml==0.10.*",
64 "tqdm==4.*",
65 "fastapi>=0.68.0",
66 "uvicorn[standard]>=0.14.0",
67 ]
68
69 GCP_REQUIRED = [
70 "proto-plus<1.19.7",
71 "google-cloud-bigquery>=2.28.1",
72 "google-cloud-bigquery-storage >= 2.0.0",
73 "google-cloud-datastore>=2.1.*",
74 "google-cloud-storage>=1.34.*,<1.41",
75 "google-cloud-core==1.4.*",
76 ]
77
78 REDIS_REQUIRED = [
79 "redis-py-cluster==2.1.2",
80 ]
81
82 AWS_REQUIRED = [
83 "boto3==1.17.*",
84 "docker>=5.0.2",
85 ]
86
87 CI_REQUIRED = [
88 "cryptography==3.3.2",
89 "flake8",
90 "black==19.10b0",
91 "isort>=5",
92 "grpcio-tools==1.34.0",
93 "grpcio-testing==1.34.0",
94 "minio==7.1.0",
95 "mock==2.0.0",
96 "moto",
97 "mypy==0.790",
98 "mypy-protobuf==1.24",
99 "avro==1.10.0",
100 "gcsfs",
101 "urllib3>=1.25.4",
102 "pytest==6.0.0",
103 "pytest-cov",
104 "pytest-xdist",
105 "pytest-benchmark>=3.4.1",
106 "pytest-lazy-fixture==0.6.3",
107 "pytest-timeout==1.4.2",
108 "pytest-ordering==0.6.*",
109 "pytest-mock==1.10.4",
110 "Sphinx!=4.0.0",
111 "sphinx-rtd-theme",
112 "testcontainers==3.4.2",
113 "adlfs==0.5.9",
114 "firebase-admin==4.5.2",
115 "pre-commit",
116 "assertpy==1.1",
117 "pip-tools"
118 ] + GCP_REQUIRED + REDIS_REQUIRED + AWS_REQUIRED
119
120 DEV_REQUIRED = ["mypy-protobuf==1.*", "grpcio-testing==1.*"] + CI_REQUIRED
121
122 # Get git repo root directory
123 repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)
124
125 # README file from Feast repo root directory
126 README_FILE = os.path.join(repo_root, "README.md")
127 with open(README_FILE, "r") as f:
128 LONG_DESCRIPTION = f.read()
129
130 # Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.
131 # Regex modified from default tag regex in:
132 # https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9
133 TAG_REGEX = re.compile(
134 r"^(?:[\/\w-]+)?(?P<version>[vV]?\d+(?:\.\d+){0,2}[^\+]*)(?:\+.*)?$"
135 )
136
137 # Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)
138 if shutil.which("git"):
139 use_scm_version = {"root": "../..", "relative_to": __file__, "tag_regex": TAG_REGEX}
140 else:
141 use_scm_version = None
142
143
144 class BuildProtoCommand(Command):
145 description = "Builds the proto files into python files."
146
147 def initialize_options(self):
148 self.protoc = ["python", "-m", "grpc_tools.protoc"] # find_executable("protoc")
149 self.proto_folder = os.path.join(repo_root, "protos")
150 self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')
151 self.sub_folders = ["core", "serving", "types", "storage"]
152
153 def finalize_options(self):
154 pass
155
156 def _generate_protos(self, path):
157 proto_files = glob.glob(os.path.join(self.proto_folder, path))
158
159 subprocess.check_call(self.protoc + [
160 '-I', self.proto_folder,
161 '--python_out', self.this_package,
162 '--grpc_python_out', self.this_package,
163 '--mypy_out', self.this_package] + proto_files)
164
165 def run(self):
166 for sub_folder in self.sub_folders:
167 self._generate_protos(f'feast/{sub_folder}/*.proto')
168
169 from pathlib import Path
170
171 for path in Path('feast/protos').rglob('*.py'):
172 for folder in self.sub_folders:
173 # Read in the file
174 with open(path, 'r') as file:
175 filedata = file.read()
176
177 # Replace the target string
178 filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')
179
180 # Write the file out again
181 with open(path, 'w') as file:
182 file.write(filedata)
183
184
185 class BuildCommand(build_py):
186 """Custom build command."""
187
188 def run(self):
189 self.run_command('build_proto')
190 build_py.run(self)
191
192
193 class DevelopCommand(develop):
194 """Custom develop command."""
195
196 def run(self):
197 self.run_command('build_proto')
198 develop.run(self)
199
200
201 setup(
202 name=NAME,
203 author=AUTHOR,
204 description=DESCRIPTION,
205 long_description=LONG_DESCRIPTION,
206 long_description_content_type="text/markdown",
207 python_requires=REQUIRES_PYTHON,
208 url=URL,
209 packages=find_packages(exclude=("tests",)),
210 install_requires=REQUIRED,
211 # https://stackoverflow.com/questions/28509965/setuptools-development-requirements
212 # Install dev requirements with: pip install -e .[dev]
213 extras_require={
214 "dev": DEV_REQUIRED,
215 "ci": CI_REQUIRED,
216 "gcp": GCP_REQUIRED,
217 "aws": AWS_REQUIRED,
218 "redis": REDIS_REQUIRED,
219 },
220 include_package_data=True,
221 license="Apache",
222 classifiers=[
223 # Trove classifiers
224 # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
225 "License :: OSI Approved :: Apache Software License",
226 "Programming Language :: Python",
227 "Programming Language :: Python :: 3",
228 "Programming Language :: Python :: 3.7",
229 ],
230 entry_points={"console_scripts": ["feast=feast.cli:cli"]},
231 use_scm_version=use_scm_version,
232 setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==1.*", "sphinx!=4.0.0"],
233 package_data={
234 "": [
235 "protos/feast/**/*.proto",
236 "protos/feast/third_party/grpc/health/v1/*.proto",
237 "protos/tensorflow_metadata/proto/v0/*.proto",
238 "feast/protos/feast/**/*.py",
239 "tensorflow_metadata/proto/v0/*.py"
240 ],
241 },
242 cmdclass={
243 "build_proto": BuildProtoCommand,
244 "build_py": BuildCommand,
245 "develop": DevelopCommand,
246 },
247 )
248
[end of sdk/python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -47,14 +47,14 @@
"google-api-core>=1.23.0",
"googleapis-common-protos==1.52.*",
"grpcio>=1.34.0",
- "grpcio-reflection>=1.34.0"
+ "grpcio-reflection>=1.34.0",
"Jinja2>=2.0.0",
"jsonschema",
"mmh3",
"pandas>=1.0.0",
"pandavro==1.5.*",
"protobuf>=3.10",
- "proto-plus",
+ "proto-plus<1.19.7",
"pyarrow>=4.0.0",
"pydantic>=1.0.0",
"PyYAML>=5.4.*",
@@ -67,7 +67,6 @@
]
GCP_REQUIRED = [
- "proto-plus<1.19.7",
"google-cloud-bigquery>=2.28.1",
"google-cloud-bigquery-storage >= 2.0.0",
"google-cloud-datastore>=2.1.*",
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -47,14 +47,14 @@\n \"google-api-core>=1.23.0\",\n \"googleapis-common-protos==1.52.*\",\n \"grpcio>=1.34.0\",\n- \"grpcio-reflection>=1.34.0\"\n+ \"grpcio-reflection>=1.34.0\",\n \"Jinja2>=2.0.0\",\n \"jsonschema\",\n \"mmh3\",\n \"pandas>=1.0.0\",\n \"pandavro==1.5.*\",\n \"protobuf>=3.10\",\n- \"proto-plus\",\n+ \"proto-plus<1.19.7\",\n \"pyarrow>=4.0.0\",\n \"pydantic>=1.0.0\",\n \"PyYAML>=5.4.*\",\n@@ -67,7 +67,6 @@\n ]\n \n GCP_REQUIRED = [\n- \"proto-plus<1.19.7\",\n \"google-cloud-bigquery>=2.28.1\",\n \"google-cloud-bigquery-storage >= 2.0.0\",\n \"google-cloud-datastore>=2.1.*\",\n", "issue": "Error in requirement definition \nAfter installing feast in a conda environment, exporting the environment to a .yml fails. This is probably related to a missing comma at the end of this line:\r\nhttps://github.com/feast-dev/feast/blob/63680bad344f55e41a281d643bef5972f2ea28da/sdk/python/setup.py#L50\r\n\n", "before_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport glob\nimport os\nimport re\nimport shutil\nimport subprocess\nimport pathlib\n\nfrom distutils.cmd import Command\nfrom setuptools import find_packages\n\ntry:\n from setuptools import setup\n from setuptools.command.install import install\n from setuptools.command.develop import develop\n from setuptools.command.egg_info import egg_info\n from setuptools.command.sdist import sdist\n from setuptools.command.build_py import build_py\nexcept ImportError:\n from distutils.core import setup\n from distutils.command.install import install\n from distutils.command.build_py import build_py\n\nNAME = \"feast\"\nDESCRIPTION = \"Python SDK for Feast\"\nURL = \"https://github.com/feast-dev/feast\"\nAUTHOR = \"Feast\"\nREQUIRES_PYTHON = \">=3.7.0\"\n\nREQUIRED = [\n \"Click==7.*\",\n \"colorama>=0.3.9\",\n \"dill==0.3.*\",\n \"fastavro>=1.1.0\",\n \"google-api-core>=1.23.0\",\n \"googleapis-common-protos==1.52.*\",\n \"grpcio>=1.34.0\",\n \"grpcio-reflection>=1.34.0\"\n \"Jinja2>=2.0.0\",\n \"jsonschema\",\n \"mmh3\",\n \"pandas>=1.0.0\",\n \"pandavro==1.5.*\",\n \"protobuf>=3.10\",\n \"proto-plus\",\n \"pyarrow>=4.0.0\",\n \"pydantic>=1.0.0\",\n \"PyYAML>=5.4.*\",\n \"tabulate==0.8.*\",\n \"tenacity>=7.*\",\n \"toml==0.10.*\",\n \"tqdm==4.*\",\n \"fastapi>=0.68.0\",\n \"uvicorn[standard]>=0.14.0\",\n]\n\nGCP_REQUIRED = [\n \"proto-plus<1.19.7\",\n \"google-cloud-bigquery>=2.28.1\",\n \"google-cloud-bigquery-storage >= 2.0.0\",\n \"google-cloud-datastore>=2.1.*\",\n \"google-cloud-storage>=1.34.*,<1.41\",\n \"google-cloud-core==1.4.*\",\n]\n\nREDIS_REQUIRED = [\n \"redis-py-cluster==2.1.2\",\n]\n\nAWS_REQUIRED = [\n \"boto3==1.17.*\",\n \"docker>=5.0.2\",\n]\n\nCI_REQUIRED = [\n \"cryptography==3.3.2\",\n \"flake8\",\n \"black==19.10b0\",\n \"isort>=5\",\n \"grpcio-tools==1.34.0\",\n \"grpcio-testing==1.34.0\",\n \"minio==7.1.0\",\n \"mock==2.0.0\",\n 
\"moto\",\n \"mypy==0.790\",\n \"mypy-protobuf==1.24\",\n \"avro==1.10.0\",\n \"gcsfs\",\n \"urllib3>=1.25.4\",\n \"pytest==6.0.0\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-benchmark>=3.4.1\",\n \"pytest-lazy-fixture==0.6.3\",\n \"pytest-timeout==1.4.2\",\n \"pytest-ordering==0.6.*\",\n \"pytest-mock==1.10.4\",\n \"Sphinx!=4.0.0\",\n \"sphinx-rtd-theme\",\n \"testcontainers==3.4.2\",\n \"adlfs==0.5.9\",\n \"firebase-admin==4.5.2\",\n \"pre-commit\",\n \"assertpy==1.1\",\n \"pip-tools\"\n] + GCP_REQUIRED + REDIS_REQUIRED + AWS_REQUIRED\n\nDEV_REQUIRED = [\"mypy-protobuf==1.*\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n\n# Get git repo root directory\nrepo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)\n\n# README file from Feast repo root directory\nREADME_FILE = os.path.join(repo_root, \"README.md\")\nwith open(README_FILE, \"r\") as f:\n LONG_DESCRIPTION = f.read()\n\n# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.\n# Regex modified from default tag regex in:\n# https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9\nTAG_REGEX = re.compile(\n r\"^(?:[\\/\\w-]+)?(?P<version>[vV]?\\d+(?:\\.\\d+){0,2}[^\\+]*)(?:\\+.*)?$\"\n)\n\n# Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)\nif shutil.which(\"git\"):\n use_scm_version = {\"root\": \"../..\", \"relative_to\": __file__, \"tag_regex\": TAG_REGEX}\nelse:\n use_scm_version = None\n\n\nclass BuildProtoCommand(Command):\n description = \"Builds the proto files into python files.\"\n\n def initialize_options(self):\n self.protoc = [\"python\", \"-m\", \"grpc_tools.protoc\"] # find_executable(\"protoc\")\n self.proto_folder = os.path.join(repo_root, \"protos\")\n self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')\n self.sub_folders = [\"core\", \"serving\", \"types\", \"storage\"]\n\n def finalize_options(self):\n pass\n\n def _generate_protos(self, path):\n proto_files = glob.glob(os.path.join(self.proto_folder, path))\n\n subprocess.check_call(self.protoc + [\n '-I', self.proto_folder,\n '--python_out', self.this_package,\n '--grpc_python_out', self.this_package,\n '--mypy_out', self.this_package] + proto_files)\n\n def run(self):\n for sub_folder in self.sub_folders:\n self._generate_protos(f'feast/{sub_folder}/*.proto')\n\n from pathlib import Path\n\n for path in Path('feast/protos').rglob('*.py'):\n for folder in self.sub_folders:\n # Read in the file\n with open(path, 'r') as file:\n filedata = file.read()\n\n # Replace the target string\n filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')\n\n # Write the file out again\n with open(path, 'w') as file:\n file.write(filedata)\n\n\nclass BuildCommand(build_py):\n \"\"\"Custom build command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n build_py.run(self)\n\n\nclass DevelopCommand(develop):\n \"\"\"Custom develop command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n develop.run(self)\n\n\nsetup(\n name=NAME,\n author=AUTHOR,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=(\"tests\",)),\n install_requires=REQUIRED,\n # https://stackoverflow.com/questions/28509965/setuptools-development-requirements\n # Install dev requirements with: pip install 
-e .[dev]\n extras_require={\n \"dev\": DEV_REQUIRED,\n \"ci\": CI_REQUIRED,\n \"gcp\": GCP_REQUIRED,\n \"aws\": AWS_REQUIRED,\n \"redis\": REDIS_REQUIRED,\n },\n include_package_data=True,\n license=\"Apache\",\n classifiers=[\n # Trove classifiers\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n ],\n entry_points={\"console_scripts\": [\"feast=feast.cli:cli\"]},\n use_scm_version=use_scm_version,\n setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==1.*\", \"sphinx!=4.0.0\"],\n package_data={\n \"\": [\n \"protos/feast/**/*.proto\",\n \"protos/feast/third_party/grpc/health/v1/*.proto\",\n \"protos/tensorflow_metadata/proto/v0/*.proto\",\n \"feast/protos/feast/**/*.py\",\n \"tensorflow_metadata/proto/v0/*.py\"\n ],\n },\n cmdclass={\n \"build_proto\": BuildProtoCommand,\n \"build_py\": BuildCommand,\n \"develop\": DevelopCommand,\n },\n)\n", "path": "sdk/python/setup.py"}]} | 3,401 | 293 |
gh_patches_debug_35606 | rasdani/github-patches | git_diff | Kinto__kinto-972 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a configuration of the collections that the history plugin needs to keep track of
Today the history plugin applies to every collection, but most of them don't need it.
For instance, with the kinto-signer plugin we don't want to track the history of changes in the preview and signed collections.
The same goes for the kinto-changes plugin, where we don't want to track modifications of the monitor/changes records.
In the same way that we can configure which resources kinto-signer tracks, we should be able to configure the list of collections that the history plugin tracks.
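One possible shape for this, assuming a deployment-level setting rather than per-collection metadata (the setting name, ini prefix and URIs below are illustrative assumptions, not an agreed-on API), is an exclusion list that the listener consults before writing a history entry:

    # kinto.ini (hypothetical):
    #   kinto.history.exclude_resources = /buckets/monitor
    #                                     /buckets/staging/collections/signed
    #
    # Sketch of the listener bailing out early for excluded resources:
    from pyramid.settings import aslist

    def is_history_excluded(request, *uris):
        settings = request.registry.settings
        excluded = aslist(settings.get('history.exclude_resources', ''))
        return any(uri in excluded for uri in uris if uri is not None)

An opt-in (include) list would work just as well; either way the listener can return early for non-tracked buckets, collections or records.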
</issue>
<code>
[start of kinto/plugins/history/listener.py]
1 from kinto.core.utils import instance_uri
2 from datetime import datetime
3
4
5 def on_resource_changed(event):
6 """
7 Everytime an object is created/changed/deleted, we create an entry in the
8 ``history`` resource. The entries are served as read-only in the
9 :mod:`kinto.plugins.history.views` module.
10 """
11 payload = event.payload
12 resource_name = payload['resource_name']
13 event_uri = payload['uri']
14
15 bucket_id = None
16 bucket_uri = None
17 collection_uri = None
18
19 storage = event.request.registry.storage
20 permission = event.request.registry.permission
21
22 targets = []
23 for impacted in event.impacted_records:
24 target = impacted['new']
25 obj_id = target['id']
26
27 try:
28 bucket_id = payload['bucket_id']
29 except KeyError:
30 # e.g. DELETE /buckets
31 bucket_id = obj_id
32 bucket_uri = instance_uri(event.request, 'bucket', id=bucket_id)
33
34 if 'collection_id' in payload:
35 collection_id = payload['collection_id']
36 collection_uri = instance_uri(event.request,
37 'collection',
38 bucket_id=bucket_id,
39 id=collection_id)
40
41 # On POST .../records, the URI does not contain the newly created
42 # record id.
43 parts = event_uri.split('/')
44 if resource_name in parts[-1]:
45 parts.append(obj_id)
46 else:
47 # Make sure the id is correct on grouped events.
48 parts[-1] = obj_id
49 uri = '/'.join(parts)
50 targets.append((uri, target))
51
52 # Prepare a list of object ids to be fetched from permission backend,
53 # and fetch them all at once. Use a mapping for later convenience.
54 all_perms_objects_ids = [oid for (oid, _) in targets]
55 all_perms_objects_ids.append(bucket_uri)
56 if collection_uri is not None:
57 all_perms_objects_ids.append(collection_uri)
58 all_perms_objects_ids = list(set(all_perms_objects_ids))
59 all_permissions = permission.get_objects_permissions(all_perms_objects_ids)
60 perms_by_object_id = dict(zip(all_perms_objects_ids, all_permissions))
61
62 bucket_perms = perms_by_object_id[bucket_uri]
63 collection_perms = {}
64 if collection_uri is not None:
65 collection_perms = perms_by_object_id[collection_uri]
66
67 # The principals allowed to read the bucket and collection.
68 # (Note: ``write`` means ``read``)
69 read_principals = set(bucket_perms.get('read', []))
70 read_principals.update(bucket_perms.get('write', []))
71 read_principals.update(collection_perms.get('read', []))
72 read_principals.update(collection_perms.get('write', []))
73
74 # Create a history entry for each impacted record.
75 for (uri, target) in targets:
76 obj_id = target['id']
77 # Prepare the history entry attributes.
78 perms = {k: list(v) for k, v in perms_by_object_id[uri].items()}
79 eventattrs = dict(**payload)
80 eventattrs.pop('timestamp', None) # Already in target `last_modified`.
81 eventattrs.pop('bucket_id', None)
82 eventattrs['%s_id' % resource_name] = obj_id
83 eventattrs['uri'] = uri
84 attrs = dict(date=datetime.now().isoformat(),
85 target={'data': target, 'permissions': perms},
86 **eventattrs)
87
88 # Create a record for the 'history' resource, whose parent_id is
89 # the bucket URI (c.f. views.py).
90 # Note: this will be rolledback if the transaction is rolledback.
91 entry = storage.create(parent_id=bucket_uri,
92 collection_id='history',
93 record=attrs)
94
95 # The read permission on the newly created history entry is the union
96 # of the record permissions with the one from bucket and collection.
97 entry_principals = set(read_principals)
98 entry_principals.update(perms.get('read', []))
99 entry_principals.update(perms.get('write', []))
100 entry_perms = {'read': list(entry_principals)}
101 # /buckets/{id}/history is the URI for the list of history entries.
102 entry_perm_id = '/buckets/%s/history/%s' % (bucket_id, entry['id'])
103 permission.replace_object_permissions(entry_perm_id, entry_perms)
104
[end of kinto/plugins/history/listener.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/plugins/history/listener.py b/kinto/plugins/history/listener.py
--- a/kinto/plugins/history/listener.py
+++ b/kinto/plugins/history/listener.py
@@ -1,3 +1,5 @@
+from pyramid.settings import aslist
+
from kinto.core.utils import instance_uri
from datetime import datetime
@@ -18,6 +20,9 @@
storage = event.request.registry.storage
permission = event.request.registry.permission
+ settings = event.request.registry.settings
+
+ excluded_resources = aslist(settings.get('history.exclude_resources', ''))
targets = []
for impacted in event.impacted_records:
@@ -31,12 +36,17 @@
bucket_id = obj_id
bucket_uri = instance_uri(event.request, 'bucket', id=bucket_id)
+ if bucket_uri in excluded_resources:
+ continue
+
if 'collection_id' in payload:
collection_id = payload['collection_id']
collection_uri = instance_uri(event.request,
'collection',
bucket_id=bucket_id,
id=collection_id)
+ if collection_uri in excluded_resources:
+ continue
# On POST .../records, the URI does not contain the newly created
# record id.
@@ -47,8 +57,15 @@
# Make sure the id is correct on grouped events.
parts[-1] = obj_id
uri = '/'.join(parts)
+
+ if uri in excluded_resources:
+ continue
+
targets.append((uri, target))
+ if not targets:
+ return # Nothing to do.
+
# Prepare a list of object ids to be fetched from permission backend,
# and fetch them all at once. Use a mapping for later convenience.
all_perms_objects_ids = [oid for (oid, _) in targets]
| {"golden_diff": "diff --git a/kinto/plugins/history/listener.py b/kinto/plugins/history/listener.py\n--- a/kinto/plugins/history/listener.py\n+++ b/kinto/plugins/history/listener.py\n@@ -1,3 +1,5 @@\n+from pyramid.settings import aslist\n+\n from kinto.core.utils import instance_uri\n from datetime import datetime\n \n@@ -18,6 +20,9 @@\n \n storage = event.request.registry.storage\n permission = event.request.registry.permission\n+ settings = event.request.registry.settings\n+\n+ excluded_resources = aslist(settings.get('history.exclude_resources', ''))\n \n targets = []\n for impacted in event.impacted_records:\n@@ -31,12 +36,17 @@\n bucket_id = obj_id\n bucket_uri = instance_uri(event.request, 'bucket', id=bucket_id)\n \n+ if bucket_uri in excluded_resources:\n+ continue\n+\n if 'collection_id' in payload:\n collection_id = payload['collection_id']\n collection_uri = instance_uri(event.request,\n 'collection',\n bucket_id=bucket_id,\n id=collection_id)\n+ if collection_uri in excluded_resources:\n+ continue\n \n # On POST .../records, the URI does not contain the newly created\n # record id.\n@@ -47,8 +57,15 @@\n # Make sure the id is correct on grouped events.\n parts[-1] = obj_id\n uri = '/'.join(parts)\n+\n+ if uri in excluded_resources:\n+ continue\n+\n targets.append((uri, target))\n \n+ if not targets:\n+ return # Nothing to do.\n+\n # Prepare a list of object ids to be fetched from permission backend,\n # and fetch them all at once. Use a mapping for later convenience.\n all_perms_objects_ids = [oid for (oid, _) in targets]\n", "issue": "Add a configuration of collections that the history plugin needs to keep track on\nToday the history plugin applies to all the collection but most of them don't need it.\r\nFor instance with the kinto-signer plugin we don't want to track history of changes in the preview and signed collection.\r\nThe same goes with the kinto-changes plugin when we don't want to track monitor changes modifications.\r\n\r\nThe same way we can configure the kinto-signer resources we want to track, we should be able to configure the list of collections we want the history plugin to track.\nAdd a configuration of collections that the history plugin needs to keep track on\nToday the history plugin applies to all the collection but most of them don't need it.\r\nFor instance with the kinto-signer plugin we don't want to track history of changes in the preview and signed collection.\r\nThe same goes with the kinto-changes plugin when we don't want to track monitor changes modifications.\r\n\r\nThe same way we can configure the kinto-signer resources we want to track, we should be able to configure the list of collections we want the history plugin to track.\n", "before_files": [{"content": "from kinto.core.utils import instance_uri\nfrom datetime import datetime\n\n\ndef on_resource_changed(event):\n \"\"\"\n Everytime an object is created/changed/deleted, we create an entry in the\n ``history`` resource. The entries are served as read-only in the\n :mod:`kinto.plugins.history.views` module.\n \"\"\"\n payload = event.payload\n resource_name = payload['resource_name']\n event_uri = payload['uri']\n\n bucket_id = None\n bucket_uri = None\n collection_uri = None\n\n storage = event.request.registry.storage\n permission = event.request.registry.permission\n\n targets = []\n for impacted in event.impacted_records:\n target = impacted['new']\n obj_id = target['id']\n\n try:\n bucket_id = payload['bucket_id']\n except KeyError:\n # e.g. 
DELETE /buckets\n bucket_id = obj_id\n bucket_uri = instance_uri(event.request, 'bucket', id=bucket_id)\n\n if 'collection_id' in payload:\n collection_id = payload['collection_id']\n collection_uri = instance_uri(event.request,\n 'collection',\n bucket_id=bucket_id,\n id=collection_id)\n\n # On POST .../records, the URI does not contain the newly created\n # record id.\n parts = event_uri.split('/')\n if resource_name in parts[-1]:\n parts.append(obj_id)\n else:\n # Make sure the id is correct on grouped events.\n parts[-1] = obj_id\n uri = '/'.join(parts)\n targets.append((uri, target))\n\n # Prepare a list of object ids to be fetched from permission backend,\n # and fetch them all at once. Use a mapping for later convenience.\n all_perms_objects_ids = [oid for (oid, _) in targets]\n all_perms_objects_ids.append(bucket_uri)\n if collection_uri is not None:\n all_perms_objects_ids.append(collection_uri)\n all_perms_objects_ids = list(set(all_perms_objects_ids))\n all_permissions = permission.get_objects_permissions(all_perms_objects_ids)\n perms_by_object_id = dict(zip(all_perms_objects_ids, all_permissions))\n\n bucket_perms = perms_by_object_id[bucket_uri]\n collection_perms = {}\n if collection_uri is not None:\n collection_perms = perms_by_object_id[collection_uri]\n\n # The principals allowed to read the bucket and collection.\n # (Note: ``write`` means ``read``)\n read_principals = set(bucket_perms.get('read', []))\n read_principals.update(bucket_perms.get('write', []))\n read_principals.update(collection_perms.get('read', []))\n read_principals.update(collection_perms.get('write', []))\n\n # Create a history entry for each impacted record.\n for (uri, target) in targets:\n obj_id = target['id']\n # Prepare the history entry attributes.\n perms = {k: list(v) for k, v in perms_by_object_id[uri].items()}\n eventattrs = dict(**payload)\n eventattrs.pop('timestamp', None) # Already in target `last_modified`.\n eventattrs.pop('bucket_id', None)\n eventattrs['%s_id' % resource_name] = obj_id\n eventattrs['uri'] = uri\n attrs = dict(date=datetime.now().isoformat(),\n target={'data': target, 'permissions': perms},\n **eventattrs)\n\n # Create a record for the 'history' resource, whose parent_id is\n # the bucket URI (c.f. views.py).\n # Note: this will be rolledback if the transaction is rolledback.\n entry = storage.create(parent_id=bucket_uri,\n collection_id='history',\n record=attrs)\n\n # The read permission on the newly created history entry is the union\n # of the record permissions with the one from bucket and collection.\n entry_principals = set(read_principals)\n entry_principals.update(perms.get('read', []))\n entry_principals.update(perms.get('write', []))\n entry_perms = {'read': list(entry_principals)}\n # /buckets/{id}/history is the URI for the list of history entries.\n entry_perm_id = '/buckets/%s/history/%s' % (bucket_id, entry['id'])\n permission.replace_object_permissions(entry_perm_id, entry_perms)\n", "path": "kinto/plugins/history/listener.py"}]} | 1,902 | 402 |
gh_patches_debug_39634 | rasdani/github-patches | git_diff | python-trio__trio-440 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add tests for fspath
</issue>
<code>
[start of trio/_util.py]
1 # Little utilities we use internally
2
3 import os
4 import sys
5 import pathlib
6 from functools import wraps
7 import typing as t
8
9 import async_generator
10
11 # There's a dependency loop here... _core is allowed to use this file (in fact
12 # it's the *only* file in the main trio/ package it's allowed to use), but
13 # ConflictDetector needs checkpoint so it also has to import
14 # _core. Possibly we should split this file into two: one for true generic
15 # low-level utility code, and one for higher level helpers?
16 from . import _core
17
18 __all__ = [
19 "signal_raise", "aiter_compat", "acontextmanager", "ConflictDetector",
20 "fixup_module_metadata", "fspath"
21 ]
22
23 # Equivalent to the C function raise(), which Python doesn't wrap
24 if os.name == "nt":
25 # On windows, os.kill exists but is really weird.
26 #
27 # If you give it CTRL_C_EVENT or CTRL_BREAK_EVENT, it tries to deliver
28 # those using GenerateConsoleCtrlEvent. But I found that when I tried
29 # to run my test normally, it would freeze waiting... unless I added
30 # print statements, in which case the test suddenly worked. So I guess
31 # these signals are only delivered if/when you access the console? I
32 # don't really know what was going on there. From reading the
33 # GenerateConsoleCtrlEvent docs I don't know how it worked at all.
34 #
35 # I later spent a bunch of time trying to make GenerateConsoleCtrlEvent
36 # work for creating synthetic control-C events, and... failed
37 # utterly. There are lots of details in the code and comments
38 # removed/added at this commit:
39 # https://github.com/python-trio/trio/commit/95843654173e3e826c34d70a90b369ba6edf2c23
40 #
41 # OTOH, if you pass os.kill any *other* signal number... then CPython
42 # just calls TerminateProcess (wtf).
43 #
44 # So, anyway, os.kill is not so useful for testing purposes. Instead
45 # we use raise():
46 #
47 # https://msdn.microsoft.com/en-us/library/dwwzkt4c.aspx
48 #
49 # Have to import cffi inside the 'if os.name' block because we don't
50 # depend on cffi on non-Windows platforms. (It would be easy to switch
51 # this to ctypes though if we ever remove the cffi dependency.)
52 #
53 # Some more information:
54 # https://bugs.python.org/issue26350
55 #
56 # Anyway, we use this for two things:
57 # - redelivering unhandled signals
58 # - generating synthetic signals for tests
59 # and for both of those purposes, 'raise' works fine.
60 import cffi
61
62 _ffi = cffi.FFI()
63 _ffi.cdef("int raise(int);")
64 _lib = _ffi.dlopen("api-ms-win-crt-runtime-l1-1-0.dll")
65 signal_raise = getattr(_lib, "raise")
66 else:
67
68 def signal_raise(signum):
69 os.kill(os.getpid(), signum)
70
71
72 # Decorator to handle the change to __aiter__ in 3.5.2
73 def aiter_compat(aiter_impl):
74 if sys.version_info < (3, 5, 2):
75
76 @wraps(aiter_impl)
77 async def __aiter__(*args, **kwargs):
78 return aiter_impl(*args, **kwargs)
79
80 return __aiter__
81 else:
82 return aiter_impl
83
84
85 # Very much derived from the one in contextlib, by copy/pasting and then
86 # asyncifying everything. (Also I dropped the obscure support for using
87 # context managers as function decorators. It could be re-added; I just
88 # couldn't be bothered.)
89 # So this is a derivative work licensed under the PSF License, which requires
90 # the following notice:
91 #
92 # Copyright © 2001-2017 Python Software Foundation; All Rights Reserved
93 class _AsyncGeneratorContextManager:
94 def __init__(self, func, args, kwds):
95 self._func_name = func.__name__
96 self._agen = func(*args, **kwds).__aiter__()
97
98 async def __aenter__(self):
99 if sys.version_info < (3, 5, 2):
100 self._agen = await self._agen
101 try:
102 return await self._agen.asend(None)
103 except StopAsyncIteration:
104 raise RuntimeError("async generator didn't yield") from None
105
106 async def __aexit__(self, type, value, traceback):
107 if type is None:
108 try:
109 await self._agen.asend(None)
110 except StopAsyncIteration:
111 return False
112 else:
113 raise RuntimeError("async generator didn't stop")
114 else:
115 # It used to be possible to have type != None, value == None:
116 # https://bugs.python.org/issue1705170
117 # but AFAICT this can't happen anymore.
118 assert value is not None
119 try:
120 await self._agen.athrow(type, value, traceback)
121 raise RuntimeError(
122 "async generator didn't stop after athrow()"
123 )
124 except StopAsyncIteration as exc:
125 # Suppress StopIteration *unless* it's the same exception that
126 # was passed to throw(). This prevents a StopIteration
127 # raised inside the "with" statement from being suppressed.
128 return (exc is not value)
129 except RuntimeError as exc:
130 # Don't re-raise the passed in exception. (issue27112)
131 if exc is value:
132 return False
133 # Likewise, avoid suppressing if a StopIteration exception
134 # was passed to throw() and later wrapped into a RuntimeError
135 # (see PEP 479).
136 if (
137 isinstance(value,
138 (StopIteration, StopAsyncIteration))
139 and exc.__cause__ is value
140 ):
141 return False
142 raise
143 except:
144 # only re-raise if it's *not* the exception that was
145 # passed to throw(), because __exit__() must not raise
146 # an exception unless __exit__() itself failed. But throw()
147 # has to raise the exception to signal propagation, so this
148 # fixes the impedance mismatch between the throw() protocol
149 # and the __exit__() protocol.
150 #
151 if sys.exc_info()[1] is value:
152 return False
153 raise
154
155 def __enter__(self):
156 raise RuntimeError(
157 "use 'async with {func_name}(...)', not 'with {func_name}(...)'".
158 format(func_name=self._func_name)
159 )
160
161 def __exit__(self): # pragma: no cover
162 assert False, """Never called, but should be defined"""
163
164
165 def acontextmanager(func):
166 """Like @contextmanager, but async."""
167 if not async_generator.isasyncgenfunction(func):
168 raise TypeError(
169 "must be an async generator (native or from async_generator; "
170 "if using @async_generator then @acontextmanager must be on top."
171 )
172
173 @wraps(func)
174 def helper(*args, **kwds):
175 return _AsyncGeneratorContextManager(func, args, kwds)
176
177 # A hint for sphinxcontrib-trio:
178 helper.__returns_acontextmanager__ = True
179 return helper
180
181
182 class _ConflictDetectorSync:
183 def __init__(self, msg):
184 self._msg = msg
185 self._held = False
186
187 def __enter__(self):
188 if self._held:
189 raise _core.ResourceBusyError(self._msg)
190 else:
191 self._held = True
192
193 def __exit__(self, *args):
194 self._held = False
195
196
197 class ConflictDetector:
198 """Detect when two tasks are about to perform operations that would
199 conflict.
200
201 Use as an async context manager; if two tasks enter it at the same
202 time then the second one raises an error. You can use it when there are
203 two pieces of code that *would* collide and need a lock if they ever were
204 called at the same time, but that should never happen.
205
206 We use this in particular for things like, making sure that two different
207 tasks don't call sendall simultaneously on the same stream.
208
209 This executes a checkpoint on entry. That's the only reason it's async.
210
211 To use from sync code, do ``with cd.sync``; this is just like ``async with
212 cd`` except that it doesn't execute a checkpoint.
213
214 """
215
216 def __init__(self, msg):
217 self.sync = _ConflictDetectorSync(msg)
218
219 async def __aenter__(self):
220 await _core.checkpoint()
221 return self.sync.__enter__()
222
223 async def __aexit__(self, *args):
224 return self.sync.__exit__()
225
226
227 def async_wraps(cls, wrapped_cls, attr_name):
228 """Similar to wraps, but for async wrappers of non-async functions.
229
230 """
231
232 def decorator(func):
233 func.__name__ = attr_name
234 func.__qualname__ = '.'.join((cls.__qualname__, attr_name))
235
236 func.__doc__ = """Like :meth:`~{}.{}.{}`, but async.
237
238 """.format(
239 wrapped_cls.__module__, wrapped_cls.__qualname__, attr_name
240 )
241
242 return func
243
244 return decorator
245
246
247 def fixup_module_metadata(module_name, namespace):
248 def fix_one(obj):
249 mod = getattr(obj, "__module__", None)
250 if mod is not None and mod.startswith("trio."):
251 obj.__module__ = module_name
252 if isinstance(obj, type):
253 for attr_value in obj.__dict__.values():
254 fix_one(attr_value)
255
256 for objname in namespace["__all__"]:
257 obj = namespace[objname]
258 fix_one(obj)
259
260
261 # This is based on the PEP 519 fspath implementation.
262 # The function has been adapted to work with pathlib objects on python 3.5
263 # The input typehint is removed as there is no os.PathLike on 3.5.
264 # See: https://www.python.org/dev/peps/pep-0519/#os
265
266
267 def fspath(path) -> t.Union[str, bytes]:
268 """Return the string representation of the path.
269
270 If str or bytes is passed in, it is returned unchanged. If a pre-python 3.6
271 pathlib object is passed, its string representation is returned. If
272 __fspath__() returns something other than str or bytes then TypeError is
273 raised. If this function is given something that is not str, bytes,
274 pathlib.PurePath or os.PathLike then TypeError is raised.
275 """
276 if isinstance(path, (str, bytes)):
277 return path
278
279 # Work from the object's type to match method resolution of other magic
280 # methods.
281 path_type = type(path)
282 try:
283 path = path_type.__fspath__(path)
284 except AttributeError:
285 if hasattr(path_type, '__fspath__'):
286 raise
287 # On python 3.5 pathlib objects don't have an __fspath__ method.
288 # but we still want to get their string representation.
289 if isinstance(path, pathlib.PurePath):
290 return str(path)
291 else:
292 if isinstance(path, (str, bytes)):
293 return path
294 else:
295 raise TypeError(
296 "expected __fspath__() to return str or bytes, "
297 "not " + type(path).__name__
298 )
299
300 raise TypeError(
301 "expected str, bytes or os.PathLike object, not " + path_type.__name__
302 )
303
304
305 if hasattr(os, "fspath"):
306 fspath = os.fspath
307
[end of trio/_util.py]
</code>
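The issue itself is terse, so as a rough illustration of what "tests for fspath" could cover, here is a pytest-style sketch exercising the `fspath` helper shown above (test names and cases are illustrative, not taken from trio's actual test suite):

```python
import pathlib

import pytest

from trio._util import fspath


class _FakePathLike:
    """Minimal object implementing the os.PathLike protocol for testing."""

    def __init__(self, value):
        self._value = value

    def __fspath__(self):
        return self._value


def test_fspath_passes_through_str_and_bytes():
    assert fspath("foo") == "foo"
    assert fspath(b"foo") == b"foo"


def test_fspath_accepts_pathlib_objects():
    p = pathlib.PurePath("foo", "bar")
    assert fspath(p) == str(p)


def test_fspath_uses_dunder_fspath():
    assert fspath(_FakePathLike("foo")) == "foo"


def test_fspath_rejects_non_pathlike_types():
    with pytest.raises(TypeError):
        fspath(3)


def test_fspath_rejects_bad_fspath_return_type():
    with pytest.raises(TypeError):
        fspath(_FakePathLike(3))
```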
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/trio/_util.py b/trio/_util.py
--- a/trio/_util.py
+++ b/trio/_util.py
@@ -258,48 +258,60 @@
fix_one(obj)
-# This is based on the PEP 519 fspath implementation.
-# The function has been adapted to work with pathlib objects on python 3.5
+# os.fspath is defined on Python 3.6+ but we need to support Python 3.5 too
+# This is why we provide our own implementation. On Python 3.6+ we use the
+# StdLib's version and on Python 3.5 our own version.
+# Our own implementation implementation is based on PEP 519 while it has also
+# been adapted to work with pathlib objects on python 3.5
# The input typehint is removed as there is no os.PathLike on 3.5.
# See: https://www.python.org/dev/peps/pep-0519/#os
def fspath(path) -> t.Union[str, bytes]:
- """Return the string representation of the path.
-
- If str or bytes is passed in, it is returned unchanged. If a pre-python 3.6
- pathlib object is passed, its string representation is returned. If
- __fspath__() returns something other than str or bytes then TypeError is
- raised. If this function is given something that is not str, bytes,
- pathlib.PurePath or os.PathLike then TypeError is raised.
+ """Return the path representation of a path-like object.
+
+ Returns
+ -------
+ - If str or bytes is passed in, it is returned unchanged.
+ - If the os.PathLike interface is implemented it is used to get the path
+ representation.
+ - If the python version is 3.5 or earlier and a pathlib object is passed,
+ the object's string representation is returned.
+
+ Raises
+ ------
+ - Regardless of the input, if the path representation (e.g. the value
+ returned from __fspath__) is not str or bytes, TypeError is raised.
+ - If the provided path is not str, bytes, pathlib.PurePath or os.PathLike,
+ TypeError is raised.
"""
if isinstance(path, (str, bytes)):
return path
-
# Work from the object's type to match method resolution of other magic
# methods.
path_type = type(path)
+ # On python 3.5, pathlib objects don't have the __fspath__ method,
+ # but we still want to get their string representation.
+ if issubclass(path_type, pathlib.PurePath):
+ return str(path)
try:
- path = path_type.__fspath__(path)
+ path_repr = path_type.__fspath__(path)
except AttributeError:
if hasattr(path_type, '__fspath__'):
raise
- # On python 3.5 pathlib objects don't have an __fspath__ method.
- # but we still want to get their string representation.
- if isinstance(path, pathlib.PurePath):
- return str(path)
- else:
- if isinstance(path, (str, bytes)):
- return path
else:
raise TypeError(
- "expected __fspath__() to return str or bytes, "
- "not " + type(path).__name__
+ "expected str, bytes or os.PathLike object, "
+ "not " + path_type.__name__
)
-
- raise TypeError(
- "expected str, bytes or os.PathLike object, not " + path_type.__name__
- )
+ if isinstance(path_repr, (str, bytes)):
+ return path_repr
+ else:
+ raise TypeError(
+ "expected {}.__fspath__() to return str or bytes, "
+ "not {}".format(path_type.__name__,
+ type(path_repr).__name__)
+ )
if hasattr(os, "fspath"):
| {"golden_diff": "diff --git a/trio/_util.py b/trio/_util.py\n--- a/trio/_util.py\n+++ b/trio/_util.py\n@@ -258,48 +258,60 @@\n fix_one(obj)\n \n \n-# This is based on the PEP 519 fspath implementation.\n-# The function has been adapted to work with pathlib objects on python 3.5\n+# os.fspath is defined on Python 3.6+ but we need to support Python 3.5 too\n+# This is why we provide our own implementation. On Python 3.6+ we use the\n+# StdLib's version and on Python 3.5 our own version.\n+# Our own implementation implementation is based on PEP 519 while it has also\n+# been adapted to work with pathlib objects on python 3.5\n # The input typehint is removed as there is no os.PathLike on 3.5.\n # See: https://www.python.org/dev/peps/pep-0519/#os\n \n \n def fspath(path) -> t.Union[str, bytes]:\n- \"\"\"Return the string representation of the path.\n-\n- If str or bytes is passed in, it is returned unchanged. If a pre-python 3.6\n- pathlib object is passed, its string representation is returned. If\n- __fspath__() returns something other than str or bytes then TypeError is\n- raised. If this function is given something that is not str, bytes,\n- pathlib.PurePath or os.PathLike then TypeError is raised.\n+ \"\"\"Return the path representation of a path-like object.\n+\n+ Returns\n+ -------\n+ - If str or bytes is passed in, it is returned unchanged.\n+ - If the os.PathLike interface is implemented it is used to get the path\n+ representation.\n+ - If the python version is 3.5 or earlier and a pathlib object is passed,\n+ the object's string representation is returned.\n+\n+ Raises\n+ ------\n+ - Regardless of the input, if the path representation (e.g. the value\n+ returned from __fspath__) is not str or bytes, TypeError is raised.\n+ - If the provided path is not str, bytes, pathlib.PurePath or os.PathLike,\n+ TypeError is raised.\n \"\"\"\n if isinstance(path, (str, bytes)):\n return path\n-\n # Work from the object's type to match method resolution of other magic\n # methods.\n path_type = type(path)\n+ # On python 3.5, pathlib objects don't have the __fspath__ method,\n+ # but we still want to get their string representation.\n+ if issubclass(path_type, pathlib.PurePath):\n+ return str(path)\n try:\n- path = path_type.__fspath__(path)\n+ path_repr = path_type.__fspath__(path)\n except AttributeError:\n if hasattr(path_type, '__fspath__'):\n raise\n- # On python 3.5 pathlib objects don't have an __fspath__ method.\n- # but we still want to get their string representation.\n- if isinstance(path, pathlib.PurePath):\n- return str(path)\n- else:\n- if isinstance(path, (str, bytes)):\n- return path\n else:\n raise TypeError(\n- \"expected __fspath__() to return str or bytes, \"\n- \"not \" + type(path).__name__\n+ \"expected str, bytes or os.PathLike object, \"\n+ \"not \" + path_type.__name__\n )\n-\n- raise TypeError(\n- \"expected str, bytes or os.PathLike object, not \" + path_type.__name__\n- )\n+ if isinstance(path_repr, (str, bytes)):\n+ return path_repr\n+ else:\n+ raise TypeError(\n+ \"expected {}.__fspath__() to return str or bytes, \"\n+ \"not {}\".format(path_type.__name__,\n+ type(path_repr).__name__)\n+ )\n \n \n if hasattr(os, \"fspath\"):\n", "issue": "Add tests for fspath\n\n", "before_files": [{"content": "# Little utilities we use internally\n\nimport os\nimport sys\nimport pathlib\nfrom functools import wraps\nimport typing as t\n\nimport async_generator\n\n# There's a dependency loop here... 
_core is allowed to use this file (in fact\n# it's the *only* file in the main trio/ package it's allowed to use), but\n# ConflictDetector needs checkpoint so it also has to import\n# _core. Possibly we should split this file into two: one for true generic\n# low-level utility code, and one for higher level helpers?\nfrom . import _core\n\n__all__ = [\n \"signal_raise\", \"aiter_compat\", \"acontextmanager\", \"ConflictDetector\",\n \"fixup_module_metadata\", \"fspath\"\n]\n\n# Equivalent to the C function raise(), which Python doesn't wrap\nif os.name == \"nt\":\n # On windows, os.kill exists but is really weird.\n #\n # If you give it CTRL_C_EVENT or CTRL_BREAK_EVENT, it tries to deliver\n # those using GenerateConsoleCtrlEvent. But I found that when I tried\n # to run my test normally, it would freeze waiting... unless I added\n # print statements, in which case the test suddenly worked. So I guess\n # these signals are only delivered if/when you access the console? I\n # don't really know what was going on there. From reading the\n # GenerateConsoleCtrlEvent docs I don't know how it worked at all.\n #\n # I later spent a bunch of time trying to make GenerateConsoleCtrlEvent\n # work for creating synthetic control-C events, and... failed\n # utterly. There are lots of details in the code and comments\n # removed/added at this commit:\n # https://github.com/python-trio/trio/commit/95843654173e3e826c34d70a90b369ba6edf2c23\n #\n # OTOH, if you pass os.kill any *other* signal number... then CPython\n # just calls TerminateProcess (wtf).\n #\n # So, anyway, os.kill is not so useful for testing purposes. Instead\n # we use raise():\n #\n # https://msdn.microsoft.com/en-us/library/dwwzkt4c.aspx\n #\n # Have to import cffi inside the 'if os.name' block because we don't\n # depend on cffi on non-Windows platforms. (It would be easy to switch\n # this to ctypes though if we ever remove the cffi dependency.)\n #\n # Some more information:\n # https://bugs.python.org/issue26350\n #\n # Anyway, we use this for two things:\n # - redelivering unhandled signals\n # - generating synthetic signals for tests\n # and for both of those purposes, 'raise' works fine.\n import cffi\n\n _ffi = cffi.FFI()\n _ffi.cdef(\"int raise(int);\")\n _lib = _ffi.dlopen(\"api-ms-win-crt-runtime-l1-1-0.dll\")\n signal_raise = getattr(_lib, \"raise\")\nelse:\n\n def signal_raise(signum):\n os.kill(os.getpid(), signum)\n\n\n# Decorator to handle the change to __aiter__ in 3.5.2\ndef aiter_compat(aiter_impl):\n if sys.version_info < (3, 5, 2):\n\n @wraps(aiter_impl)\n async def __aiter__(*args, **kwargs):\n return aiter_impl(*args, **kwargs)\n\n return __aiter__\n else:\n return aiter_impl\n\n\n# Very much derived from the one in contextlib, by copy/pasting and then\n# asyncifying everything. (Also I dropped the obscure support for using\n# context managers as function decorators. 
It could be re-added; I just\n# couldn't be bothered.)\n# So this is a derivative work licensed under the PSF License, which requires\n# the following notice:\n#\n# Copyright \u00a9 2001-2017 Python Software Foundation; All Rights Reserved\nclass _AsyncGeneratorContextManager:\n def __init__(self, func, args, kwds):\n self._func_name = func.__name__\n self._agen = func(*args, **kwds).__aiter__()\n\n async def __aenter__(self):\n if sys.version_info < (3, 5, 2):\n self._agen = await self._agen\n try:\n return await self._agen.asend(None)\n except StopAsyncIteration:\n raise RuntimeError(\"async generator didn't yield\") from None\n\n async def __aexit__(self, type, value, traceback):\n if type is None:\n try:\n await self._agen.asend(None)\n except StopAsyncIteration:\n return False\n else:\n raise RuntimeError(\"async generator didn't stop\")\n else:\n # It used to be possible to have type != None, value == None:\n # https://bugs.python.org/issue1705170\n # but AFAICT this can't happen anymore.\n assert value is not None\n try:\n await self._agen.athrow(type, value, traceback)\n raise RuntimeError(\n \"async generator didn't stop after athrow()\"\n )\n except StopAsyncIteration as exc:\n # Suppress StopIteration *unless* it's the same exception that\n # was passed to throw(). This prevents a StopIteration\n # raised inside the \"with\" statement from being suppressed.\n return (exc is not value)\n except RuntimeError as exc:\n # Don't re-raise the passed in exception. (issue27112)\n if exc is value:\n return False\n # Likewise, avoid suppressing if a StopIteration exception\n # was passed to throw() and later wrapped into a RuntimeError\n # (see PEP 479).\n if (\n isinstance(value,\n (StopIteration, StopAsyncIteration))\n and exc.__cause__ is value\n ):\n return False\n raise\n except:\n # only re-raise if it's *not* the exception that was\n # passed to throw(), because __exit__() must not raise\n # an exception unless __exit__() itself failed. But throw()\n # has to raise the exception to signal propagation, so this\n # fixes the impedance mismatch between the throw() protocol\n # and the __exit__() protocol.\n #\n if sys.exc_info()[1] is value:\n return False\n raise\n\n def __enter__(self):\n raise RuntimeError(\n \"use 'async with {func_name}(...)', not 'with {func_name}(...)'\".\n format(func_name=self._func_name)\n )\n\n def __exit__(self): # pragma: no cover\n assert False, \"\"\"Never called, but should be defined\"\"\"\n\n\ndef acontextmanager(func):\n \"\"\"Like @contextmanager, but async.\"\"\"\n if not async_generator.isasyncgenfunction(func):\n raise TypeError(\n \"must be an async generator (native or from async_generator; \"\n \"if using @async_generator then @acontextmanager must be on top.\"\n )\n\n @wraps(func)\n def helper(*args, **kwds):\n return _AsyncGeneratorContextManager(func, args, kwds)\n\n # A hint for sphinxcontrib-trio:\n helper.__returns_acontextmanager__ = True\n return helper\n\n\nclass _ConflictDetectorSync:\n def __init__(self, msg):\n self._msg = msg\n self._held = False\n\n def __enter__(self):\n if self._held:\n raise _core.ResourceBusyError(self._msg)\n else:\n self._held = True\n\n def __exit__(self, *args):\n self._held = False\n\n\nclass ConflictDetector:\n \"\"\"Detect when two tasks are about to perform operations that would\n conflict.\n\n Use as an async context manager; if two tasks enter it at the same\n time then the second one raises an error. 
You can use it when there are\n two pieces of code that *would* collide and need a lock if they ever were\n called at the same time, but that should never happen.\n\n We use this in particular for things like, making sure that two different\n tasks don't call sendall simultaneously on the same stream.\n\n This executes a checkpoint on entry. That's the only reason it's async.\n\n To use from sync code, do ``with cd.sync``; this is just like ``async with\n cd`` except that it doesn't execute a checkpoint.\n\n \"\"\"\n\n def __init__(self, msg):\n self.sync = _ConflictDetectorSync(msg)\n\n async def __aenter__(self):\n await _core.checkpoint()\n return self.sync.__enter__()\n\n async def __aexit__(self, *args):\n return self.sync.__exit__()\n\n\ndef async_wraps(cls, wrapped_cls, attr_name):\n \"\"\"Similar to wraps, but for async wrappers of non-async functions.\n\n \"\"\"\n\n def decorator(func):\n func.__name__ = attr_name\n func.__qualname__ = '.'.join((cls.__qualname__, attr_name))\n\n func.__doc__ = \"\"\"Like :meth:`~{}.{}.{}`, but async.\n\n \"\"\".format(\n wrapped_cls.__module__, wrapped_cls.__qualname__, attr_name\n )\n\n return func\n\n return decorator\n\n\ndef fixup_module_metadata(module_name, namespace):\n def fix_one(obj):\n mod = getattr(obj, \"__module__\", None)\n if mod is not None and mod.startswith(\"trio.\"):\n obj.__module__ = module_name\n if isinstance(obj, type):\n for attr_value in obj.__dict__.values():\n fix_one(attr_value)\n\n for objname in namespace[\"__all__\"]:\n obj = namespace[objname]\n fix_one(obj)\n\n\n# This is based on the PEP 519 fspath implementation.\n# The function has been adapted to work with pathlib objects on python 3.5\n# The input typehint is removed as there is no os.PathLike on 3.5.\n# See: https://www.python.org/dev/peps/pep-0519/#os\n\n\ndef fspath(path) -> t.Union[str, bytes]:\n \"\"\"Return the string representation of the path.\n\n If str or bytes is passed in, it is returned unchanged. If a pre-python 3.6\n pathlib object is passed, its string representation is returned. If\n __fspath__() returns something other than str or bytes then TypeError is\n raised. If this function is given something that is not str, bytes,\n pathlib.PurePath or os.PathLike then TypeError is raised.\n \"\"\"\n if isinstance(path, (str, bytes)):\n return path\n\n # Work from the object's type to match method resolution of other magic\n # methods.\n path_type = type(path)\n try:\n path = path_type.__fspath__(path)\n except AttributeError:\n if hasattr(path_type, '__fspath__'):\n raise\n # On python 3.5 pathlib objects don't have an __fspath__ method.\n # but we still want to get their string representation.\n if isinstance(path, pathlib.PurePath):\n return str(path)\n else:\n if isinstance(path, (str, bytes)):\n return path\n else:\n raise TypeError(\n \"expected __fspath__() to return str or bytes, \"\n \"not \" + type(path).__name__\n )\n\n raise TypeError(\n \"expected str, bytes or os.PathLike object, not \" + path_type.__name__\n )\n\n\nif hasattr(os, \"fspath\"):\n fspath = os.fspath\n", "path": "trio/_util.py"}]} | 3,947 | 887 |
gh_patches_debug_25576 | rasdani/github-patches | git_diff | sublimelsp__LSP-1772 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
On Windows, the drive letter in file URIs returned by the server is lowercase.
**Describe the bug**
I tried both intelephense and pyright; both return a lowercased drive letter, so I suspect it's standard behavior (or maybe VSCode's LSP library does it).
https://user-images.githubusercontent.com/6594915/123961095-96286c80-d9e2-11eb-8ada-0da9af754a55.mp4
In "Goto Definition...", this causes ST to open a file whose drive letter is lowercase, and that may cause various mysterious problems. Or maybe this should be fixed in ST core.
**To Reproduce**
Steps to reproduce the behavior:
1. Install LSP-intelephense with a Windows build ST
2. Open a PHP project
3. Make sure the definition file is not opened in a tab already
4. Do "Goto Definition"
5. Observe that the newly opened tab has a lowercase drive letter
**Expected behavior**
The drive letter should be uppercase.
**Environment (please complete the following information):**
- OS: Win10 21H1 x64
- Sublime Text version: 4109
- LSP version: 4070-1.6.1
- Language servers used: intelephense, pyright
**Additional context**
This is a Windows-only issue, since Windows paths are case-insensitive.
</issue>
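The normalization the reporter is asking for boils down to upper-casing the drive letter whenever a file URI is turned back into a local path. A tiny, standalone sketch of that idea (not the plugin's actual code):

```python
import re


def normalize_drive_letter(path: str) -> str:
    """Upper-case a leading Windows drive letter; leave other paths unchanged."""
    return re.sub(r"^([a-z]):", lambda m: m.group(1).upper() + ":", path)


assert normalize_drive_letter("c:\\Users\\me\\file.php") == "C:\\Users\\me\\file.php"
```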
<code>
[start of plugin/core/url.py]
1 from .typing import Any, Tuple
2 from urllib.parse import quote
3 from urllib.parse import urljoin
4 from urllib.parse import urlparse
5 from urllib.request import pathname2url
6 from urllib.request import url2pathname
7 import os
8 import re
9
10 import sublime
11
12
13 def filename_to_uri(file_name: str) -> str:
14 """
15 Convert a file name obtained from view.file_name() into an URI
16 """
17 prefix = sublime.installed_packages_path()
18 if file_name.startswith(prefix):
19 return _to_resource_uri(file_name, prefix)
20 prefix = sublime.packages_path()
21 if file_name.startswith(prefix) and not os.path.exists(file_name):
22 return _to_resource_uri(file_name, prefix)
23 path = pathname2url(file_name)
24 re.sub(r"^([A-Z]):/", _lowercase_driveletter, path)
25 return urljoin("file:", path)
26
27
28 def view_to_uri(view: sublime.View) -> str:
29 file_name = view.file_name()
30 if not file_name:
31 return "buffer://sublime/{}".format(view.buffer_id())
32 return filename_to_uri(file_name)
33
34
35 def uri_to_filename(uri: str) -> str:
36 """
37 DEPRECATED: An URI associated to a view does not necessarily have a "file:" scheme.
38 Use urllib.parse.urlparse to determine the scheme and go from there.
39 Use urllib.parse.unquote to unquote the path.
40 """
41 parsed = urlparse(uri)
42 assert parsed.scheme == "file"
43 if os.name == 'nt':
44 # url2pathname does not understand %3A (VS Code's encoding forced on all servers :/)
45 return url2pathname(parsed.path).strip('\\')
46 else:
47 return url2pathname(parsed.path)
48
49
50 def parse_uri(uri: str) -> Tuple[str, str]:
51 """
52 Parses an URI into a tuple where the first element is the URI scheme. The
53 second element is the local filesystem path if the URI is a file URI,
54 otherwise the second element is the original URI.
55 """
56 parsed = urlparse(uri)
57 if parsed.scheme == "file":
58 if os.name == 'nt':
59 # TODO: this is wrong for UNC paths
60 return parsed.scheme, url2pathname(parsed.path).strip('\\')
61 return parsed.scheme, url2pathname(parsed.path)
62 return parsed.scheme, uri
63
64
65 def _to_resource_uri(path: str, prefix: str) -> str:
66 """
67 Terrible hacks from ST core leak into packages as well.
68
69 See: https://github.com/sublimehq/sublime_text/issues/3742
70 """
71 return "res://Packages{}".format(quote(path[len(prefix):]))
72
73
74 def _lowercase_driveletter(match: Any) -> str:
75 """
76 For compatibility with certain other language clients.
77 """
78 return "{}:/".format(match.group(1).lower())
79
[end of plugin/core/url.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/core/url.py b/plugin/core/url.py
--- a/plugin/core/url.py
+++ b/plugin/core/url.py
@@ -21,7 +21,6 @@
if file_name.startswith(prefix) and not os.path.exists(file_name):
return _to_resource_uri(file_name, prefix)
path = pathname2url(file_name)
- re.sub(r"^([A-Z]):/", _lowercase_driveletter, path)
return urljoin("file:", path)
@@ -42,7 +41,8 @@
assert parsed.scheme == "file"
if os.name == 'nt':
# url2pathname does not understand %3A (VS Code's encoding forced on all servers :/)
- return url2pathname(parsed.path).strip('\\')
+ path = url2pathname(parsed.path).strip('\\')
+ return re.sub(r"^([a-z]):", _uppercase_driveletter, path)
else:
return url2pathname(parsed.path)
@@ -71,8 +71,8 @@
return "res://Packages{}".format(quote(path[len(prefix):]))
-def _lowercase_driveletter(match: Any) -> str:
+def _uppercase_driveletter(match: Any) -> str:
"""
- For compatibility with certain other language clients.
+ For compatibility with Sublime's VCS status in the status bar.
"""
- return "{}:/".format(match.group(1).lower())
+ return "{}:".format(match.group(1).upper())
| {"golden_diff": "diff --git a/plugin/core/url.py b/plugin/core/url.py\n--- a/plugin/core/url.py\n+++ b/plugin/core/url.py\n@@ -21,7 +21,6 @@\n if file_name.startswith(prefix) and not os.path.exists(file_name):\n return _to_resource_uri(file_name, prefix)\n path = pathname2url(file_name)\n- re.sub(r\"^([A-Z]):/\", _lowercase_driveletter, path)\n return urljoin(\"file:\", path)\n \n \n@@ -42,7 +41,8 @@\n assert parsed.scheme == \"file\"\n if os.name == 'nt':\n # url2pathname does not understand %3A (VS Code's encoding forced on all servers :/)\n- return url2pathname(parsed.path).strip('\\\\')\n+ path = url2pathname(parsed.path).strip('\\\\')\n+ return re.sub(r\"^([a-z]):\", _uppercase_driveletter, path)\n else:\n return url2pathname(parsed.path)\n \n@@ -71,8 +71,8 @@\n return \"res://Packages{}\".format(quote(path[len(prefix):]))\n \n \n-def _lowercase_driveletter(match: Any) -> str:\n+def _uppercase_driveletter(match: Any) -> str:\n \"\"\"\n- For compatibility with certain other language clients.\n+ For compatibility with Sublime's VCS status in the status bar.\n \"\"\"\n- return \"{}:/\".format(match.group(1).lower())\n+ return \"{}:\".format(match.group(1).upper())\n", "issue": "On Windows, the drive letter in server responsed file URIs are lowercase.\n**Describe the bug**\r\n\r\nI tried both intelephense and pyright, they both returned lowercased drive letter thus I suspect it's a standard. (or maybe VSCode's LSP lib does it)\r\n\r\nhttps://user-images.githubusercontent.com/6594915/123961095-96286c80-d9e2-11eb-8ada-0da9af754a55.mp4\r\n\r\nIn \"Goto Definition...\", this causes ST to open a file whose drive letter is in lowercase. And that may cause various mysterious problem sometimes... Or maybe, this should be fixed in ST core.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Install LSP-intelephense with a Windows build ST\r\n2. Open a PHP project\r\n3. Make sure the definition file is not opened in a tab already\r\n4. Do \"Goto Definition\"\r\n5. 
The newly opened tab should have a lower drive letter\r\n\r\n**Expected behavior**\r\n\r\nThe drive letter should be uppercase.\r\n\r\n**Environment (please complete the following information):**\r\n- OS: Win10 21H1 x64\r\n- Sublime Text version: 4109\r\n- LSP version: 4070-1.6.1\r\n- Language servers used: intelephense, pyright\r\n\r\n**Additional context**\r\n\r\nThis is a Windows-only issue as it's case-insensitive.\r\n\n", "before_files": [{"content": "from .typing import Any, Tuple\nfrom urllib.parse import quote\nfrom urllib.parse import urljoin\nfrom urllib.parse import urlparse\nfrom urllib.request import pathname2url\nfrom urllib.request import url2pathname\nimport os\nimport re\n\nimport sublime\n\n\ndef filename_to_uri(file_name: str) -> str:\n \"\"\"\n Convert a file name obtained from view.file_name() into an URI\n \"\"\"\n prefix = sublime.installed_packages_path()\n if file_name.startswith(prefix):\n return _to_resource_uri(file_name, prefix)\n prefix = sublime.packages_path()\n if file_name.startswith(prefix) and not os.path.exists(file_name):\n return _to_resource_uri(file_name, prefix)\n path = pathname2url(file_name)\n re.sub(r\"^([A-Z]):/\", _lowercase_driveletter, path)\n return urljoin(\"file:\", path)\n\n\ndef view_to_uri(view: sublime.View) -> str:\n file_name = view.file_name()\n if not file_name:\n return \"buffer://sublime/{}\".format(view.buffer_id())\n return filename_to_uri(file_name)\n\n\ndef uri_to_filename(uri: str) -> str:\n \"\"\"\n DEPRECATED: An URI associated to a view does not necessarily have a \"file:\" scheme.\n Use urllib.parse.urlparse to determine the scheme and go from there.\n Use urllib.parse.unquote to unquote the path.\n \"\"\"\n parsed = urlparse(uri)\n assert parsed.scheme == \"file\"\n if os.name == 'nt':\n # url2pathname does not understand %3A (VS Code's encoding forced on all servers :/)\n return url2pathname(parsed.path).strip('\\\\')\n else:\n return url2pathname(parsed.path)\n\n\ndef parse_uri(uri: str) -> Tuple[str, str]:\n \"\"\"\n Parses an URI into a tuple where the first element is the URI scheme. The\n second element is the local filesystem path if the URI is a file URI,\n otherwise the second element is the original URI.\n \"\"\"\n parsed = urlparse(uri)\n if parsed.scheme == \"file\":\n if os.name == 'nt':\n # TODO: this is wrong for UNC paths\n return parsed.scheme, url2pathname(parsed.path).strip('\\\\')\n return parsed.scheme, url2pathname(parsed.path)\n return parsed.scheme, uri\n\n\ndef _to_resource_uri(path: str, prefix: str) -> str:\n \"\"\"\n Terrible hacks from ST core leak into packages as well.\n\n See: https://github.com/sublimehq/sublime_text/issues/3742\n \"\"\"\n return \"res://Packages{}\".format(quote(path[len(prefix):]))\n\n\ndef _lowercase_driveletter(match: Any) -> str:\n \"\"\"\n For compatibility with certain other language clients.\n \"\"\"\n return \"{}:/\".format(match.group(1).lower())\n", "path": "plugin/core/url.py"}]} | 1,611 | 321 |
gh_patches_debug_7213 | rasdani/github-patches | git_diff | great-expectations__great_expectations-2586 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Asset Name is not populated in Data Docs for SimpleSqlalchemy data source
**Describe the bug**
When using a SimpleSqlalchemy data source, the Asset Name column is not populated in the Validations index of the data docs. Similarly, the Data Asset entry in the corresponding Validation result is None.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure a SimpleSqlalchemy datasource
```yaml
datasources:
db:
credentials: ${my_postgres_db}
class_name: SimpleSqlalchemyDatasource
introspection:
whole_table: {}
```
2. Configure an expectation suite
3. Configure a SimpleCheckpoint
```yaml
name: my_simple_checkpoint
config_version: 1.0
class_name: SimpleCheckpoint
expectation_suite_name: ge_validations_store.warning
validations:
- batch_request:
datasource_name: db
data_connector_name: whole_table
data_asset_name: ge_validations_store
partition_request:
index: -1
```
4. Run checkpoint my_simple_checkpoint
5. The generated data docs do not contain properly populated data asset entries
**Expected behavior**
The generated data docs should be populated the same way when using the v3 API as when using the v2 API
**Environment (please complete the following information):**
- Operating System: Linux, MacOS
- Great Expectations Version: develop branch
</issue>
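For reference, step 4 above ("Run checkpoint my_simple_checkpoint") can be driven from Python roughly as follows; the method names are the 0.13.x-era API and should be treated as a sketch rather than an exact recipe:

```python
import great_expectations as ge

# Load the project's great_expectations.yml from the current directory tree.
context = ge.data_context.DataContext()

# Run the SimpleCheckpoint configured in the issue above.
context.run_checkpoint(checkpoint_name="my_simple_checkpoint")

# Then inspect the "Asset Name" column in the Validations index.
context.open_data_docs()
```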
<code>
[start of great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py]
1 from typing import Dict, List, Optional
2
3 from great_expectations.core.batch import (
4 BatchDefinition,
5 BatchRequest,
6 PartitionDefinition,
7 )
8 from great_expectations.datasource.data_connector.data_connector import DataConnector
9 from great_expectations.datasource.data_connector.util import (
10 batch_definition_matches_batch_request,
11 )
12 from great_expectations.execution_engine import ExecutionEngine
13
14 try:
15 import sqlalchemy as sa
16 except ImportError:
17 sa = None
18
19
20 class ConfiguredAssetSqlDataConnector(DataConnector):
21 """
22 A DataConnector that requires explicit listing of SQL tables you want to connect to.
23
24 Args:
25 name (str): The name of this DataConnector
26 datasource_name (str): The name of the Datasource that contains it
27 execution_engine (ExecutionEngine): An ExecutionEngine
28 data_assets (str): data_assets
29 """
30
31 def __init__(
32 self,
33 name: str,
34 datasource_name: str,
35 execution_engine: Optional[ExecutionEngine] = None,
36 data_assets: Optional[Dict[str, dict]] = None,
37 ):
38 if data_assets is None:
39 data_assets = {}
40 self._data_assets = data_assets
41
42 super().__init__(
43 name=name,
44 datasource_name=datasource_name,
45 execution_engine=execution_engine,
46 )
47
48 @property
49 def data_assets(self) -> Dict[str, dict]:
50 return self._data_assets
51
52 def add_data_asset(
53 self,
54 name: str,
55 config: dict,
56 ):
57 """
58 Add data_asset to DataConnector using data_asset name as key, and data_asset configuration as value.
59 """
60 self._data_assets[name] = config
61
62 def _get_partition_definition_list_from_data_asset_config(
63 self,
64 data_asset_name,
65 data_asset_config,
66 ):
67 if "table_name" in data_asset_config:
68 table_name = data_asset_config["table_name"]
69 else:
70 table_name = data_asset_name
71
72 if "splitter_method" in data_asset_config:
73 splitter_fn = getattr(self, data_asset_config["splitter_method"])
74 split_query = splitter_fn(
75 table_name=table_name, **data_asset_config["splitter_kwargs"]
76 )
77
78 rows = self._execution_engine.engine.execute(split_query).fetchall()
79
80 # Zip up split parameters with column names
81 column_names = self._get_column_names_from_splitter_kwargs(
82 data_asset_config["splitter_kwargs"]
83 )
84 partition_definition_list = [dict(zip(column_names, row)) for row in rows]
85
86 else:
87 partition_definition_list = [{}]
88
89 return partition_definition_list
90
91 def _refresh_data_references_cache(self):
92 self._data_references_cache = {}
93
94 for data_asset_name in self.data_assets:
95 data_asset = self.data_assets[data_asset_name]
96 partition_definition_list = (
97 self._get_partition_definition_list_from_data_asset_config(
98 data_asset_name,
99 data_asset,
100 )
101 )
102
103 # TODO Abe 20201029 : Apply sorters to partition_definition_list here
104 # TODO Will 20201102 : add sorting code here
105 self._data_references_cache[data_asset_name] = partition_definition_list
106
107 def _get_column_names_from_splitter_kwargs(self, splitter_kwargs) -> List[str]:
108 column_names: List[str] = []
109
110 if "column_names" in splitter_kwargs:
111 column_names = splitter_kwargs["column_names"]
112 elif "column_name" in splitter_kwargs:
113 column_names = [splitter_kwargs["column_name"]]
114
115 return column_names
116
117 def get_available_data_asset_names(self):
118 """
119 Return the list of asset names known by this DataConnector.
120
121 Returns:
122 A list of available names
123 """
124 return list(self.data_assets.keys())
125
126 def get_unmatched_data_references(self) -> List[str]:
127 """
128 Returns the list of data_references unmatched by configuration by looping through items in _data_references_cache
129 and returning data_reference that do not have an associated data_asset.
130
131 Returns:
132 list of data_references that are not matched by configuration.
133 """
134 return []
135
136 def get_batch_definition_list_from_batch_request(self, batch_request: BatchRequest):
137 self._validate_batch_request(batch_request=batch_request)
138
139 if len(self._data_references_cache) == 0:
140 self._refresh_data_references_cache()
141
142 batch_definition_list: List[BatchDefinition] = []
143 try:
144 sub_cache = self._data_references_cache[batch_request.data_asset_name]
145 except KeyError as e:
146 raise KeyError(
147 f"data_asset_name {batch_request.data_asset_name} is not recognized."
148 )
149
150 for partition_definition in sub_cache:
151 batch_definition: BatchDefinition = BatchDefinition(
152 datasource_name=self.datasource_name,
153 data_connector_name=self.name,
154 data_asset_name=batch_request.data_asset_name,
155 partition_definition=PartitionDefinition(partition_definition),
156 )
157 if batch_definition_matches_batch_request(batch_definition, batch_request):
158 batch_definition_list.append(batch_definition)
159
160 return batch_definition_list
161
162 def _get_data_reference_list_from_cache_by_data_asset_name(
163 self, data_asset_name: str
164 ) -> List[str]:
165 return self._data_references_cache[data_asset_name]
166
167 def _map_data_reference_to_batch_definition_list(
168 self, data_reference, data_asset_name: Optional[str] = None #: Any,
169 ) -> Optional[List[BatchDefinition]]:
170 # Note: This is a bit hacky, but it works. In sql_data_connectors, data references *are* dictionaries,
171 # allowing us to invoke `PartitionDefinition(data_reference)`
172 return [
173 BatchDefinition(
174 datasource_name=self.datasource_name,
175 data_connector_name=self.name,
176 data_asset_name=data_asset_name,
177 partition_definition=PartitionDefinition(data_reference),
178 )
179 ]
180
181 def _generate_batch_spec_parameters_from_batch_definition(
182 self, batch_definition: BatchDefinition
183 ) -> Dict:
184 """
185 Build BatchSpec parameters from batch_definition with the following components:
186 1. data_asset_name from batch_definition
187 2. partition_definition from batch_definition
188 3. data_asset from data_connector
189
190 Args:
191 batch_definition (BatchDefinition): to be used to build batch_spec
192
193 Returns:
194 dict built from batch_definition
195 """
196 data_asset_name: str = batch_definition.data_asset_name
197 return {
198 "table_name": data_asset_name,
199 "partition_definition": batch_definition.partition_definition,
200 **self.data_assets[data_asset_name],
201 }
202
203 # Splitter methods for listing partitions
204
205 def _split_on_whole_table(
206 self,
207 table_name: str,
208 ):
209 """'Split' by returning the whole table
210
211 Note: the table_name parameter is a required to keep the signature of this method consistent with other methods.
212 """
213
214 return sa.select([sa.true()])
215
216 def _split_on_column_value(
217 self,
218 table_name: str,
219 column_name: str,
220 ):
221 """Split using the values in the named column"""
222 # query = f"SELECT DISTINCT(\"{self.column_name}\") FROM {self.table_name}"
223
224 return sa.select([sa.func.distinct(sa.column(column_name))]).select_from(
225 sa.text(table_name)
226 )
227
228 def _split_on_converted_datetime(
229 self,
230 table_name: str,
231 column_name: str,
232 date_format_string: str = "%Y-%m-%d",
233 ):
234 """Convert the values in the named column to the given date_format, and split on that"""
235 # query = f"SELECT DISTINCT( strftime(\"{date_format_string}\", \"{self.column_name}\")) as my_var FROM {self.table_name}"
236
237 return sa.select(
238 [
239 sa.func.distinct(
240 sa.func.strftime(
241 date_format_string,
242 sa.column(column_name),
243 )
244 )
245 ]
246 ).select_from(sa.text(table_name))
247
248 def _split_on_divided_integer(
249 self, table_name: str, column_name: str, divisor: int
250 ):
251 """Divide the values in the named column by `divisor`, and split on that"""
252 # query = f"SELECT DISTINCT(\"{self.column_name}\" / {divisor}) AS my_var FROM {self.table_name}"
253
254 return sa.select(
255 [sa.func.distinct(sa.cast(sa.column(column_name) / divisor, sa.Integer))]
256 ).select_from(sa.text(table_name))
257
258 def _split_on_mod_integer(self, table_name: str, column_name: str, mod: int):
259 """Divide the values in the named column by `divisor`, and split on that"""
260 # query = f"SELECT DISTINCT(\"{self.column_name}\" / {divisor}) AS my_var FROM {self.table_name}"
261
262 return sa.select(
263 [sa.func.distinct(sa.cast(sa.column(column_name) % mod, sa.Integer))]
264 ).select_from(sa.text(table_name))
265
266 def _split_on_multi_column_values(
267 self,
268 table_name: str,
269 column_names: List[str],
270 ):
271 """Split on the joint values in the named columns"""
272 # query = f"SELECT DISTINCT(\"{self.column_name}\") FROM {self.table_name}"
273
274 return (
275 sa.select([sa.column(column_name) for column_name in column_names])
276 .distinct()
277 .select_from(sa.text(table_name))
278 )
279
280 def _split_on_hashed_column(
281 self,
282 table_name: str,
283 column_name: str,
284 hash_digits: int,
285 ):
286 """Note: this method is experimental. It does not work with all SQL dialects."""
287 # query = f"SELECT MD5(\"{self.column_name}\") = {matching_hash}) AS hashed_var FROM {self.table_name}"
288
289 return sa.select([sa.func.md5(sa.column(column_name))]).select_from(
290 sa.text(table_name)
291 )
292
[end of great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py]
</code>
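A note on where the asset name appears to get lost: `_generate_batch_spec_parameters_from_batch_definition` in the listing above builds the batch-spec dict with a `table_name` key but no `data_asset_name` key, so for the issue's example the downstream docs renderer has roughly this to work with (values are illustrative):

```python
# Illustrative only -- the shape of the dict returned for the issue's example asset:
batch_spec_parameters = {
    "table_name": "ge_validations_store",
    "partition_definition": {},  # empty for a whole_table batch
    # ...plus whatever is configured in self.data_assets["ge_validations_store"],
    # but note the absence of a "data_asset_name" key.
}
```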
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py b/great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py
--- a/great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py
+++ b/great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py
@@ -195,6 +195,7 @@
"""
data_asset_name: str = batch_definition.data_asset_name
return {
+ "data_asset_name": data_asset_name,
"table_name": data_asset_name,
"partition_definition": batch_definition.partition_definition,
**self.data_assets[data_asset_name],
| {"golden_diff": "diff --git a/great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py b/great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py\n--- a/great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py\n+++ b/great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py\n@@ -195,6 +195,7 @@\n \"\"\"\n data_asset_name: str = batch_definition.data_asset_name\n return {\n+ \"data_asset_name\": data_asset_name,\n \"table_name\": data_asset_name,\n \"partition_definition\": batch_definition.partition_definition,\n **self.data_assets[data_asset_name],\n", "issue": "Asset Name is not populated in Data Docs for SimpleSqlalchemy data source\n**Describe the bug**\r\nWhen using a SimpleSqlalchemy data source, the Asset Name column is not populated in the Validations index of the data docs. SImilarly, the Data Asset entry in the corresponding Validation result is None. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Configure a SimpleSqlalchemy datasource\r\n```yaml\r\ndatasources:\r\n db:\r\n credentials: ${my_postgres_db}\r\n class_name: SimpleSqlalchemyDatasource\r\n introspection:\r\n whole_table: {}\r\n``` \r\n2. Configure an expectation suite\r\n3. Configure a SimpleCheckpoint \r\n```yaml\r\nname: my_simple_checkpoint\r\nconfig_version: 1.0\r\nclass_name: SimpleCheckpoint\r\nexpectation_suite_name: ge_validations_store.warning\r\nvalidations:\r\n - batch_request:\r\n datasource_name: db\r\n data_connector_name: whole_table\r\n data_asset_name: ge_validations_store\r\n partition_request:\r\n index: -1\r\n```\r\n4. Run checkpoint my_simple_checkpoint\r\n5. The generated data docs do not contain properly populated data asset entries\r\n\r\n**Expected behavior**\r\nThe generated data docs should be populated the same way when using the v3 API as when using the v2 API\r\n\r\n**Environment (please complete the following information):**\r\n - Operating System: Linux, MacOS\r\n - Great Expectations Version: develop branch\r\n\n", "before_files": [{"content": "from typing import Dict, List, Optional\n\nfrom great_expectations.core.batch import (\n BatchDefinition,\n BatchRequest,\n PartitionDefinition,\n)\nfrom great_expectations.datasource.data_connector.data_connector import DataConnector\nfrom great_expectations.datasource.data_connector.util import (\n batch_definition_matches_batch_request,\n)\nfrom great_expectations.execution_engine import ExecutionEngine\n\ntry:\n import sqlalchemy as sa\nexcept ImportError:\n sa = None\n\n\nclass ConfiguredAssetSqlDataConnector(DataConnector):\n \"\"\"\n A DataConnector that requires explicit listing of SQL tables you want to connect to.\n\n Args:\n name (str): The name of this DataConnector\n datasource_name (str): The name of the Datasource that contains it\n execution_engine (ExecutionEngine): An ExecutionEngine\n data_assets (str): data_assets\n \"\"\"\n\n def __init__(\n self,\n name: str,\n datasource_name: str,\n execution_engine: Optional[ExecutionEngine] = None,\n data_assets: Optional[Dict[str, dict]] = None,\n ):\n if data_assets is None:\n data_assets = {}\n self._data_assets = data_assets\n\n super().__init__(\n name=name,\n datasource_name=datasource_name,\n execution_engine=execution_engine,\n )\n\n @property\n def data_assets(self) -> Dict[str, dict]:\n return self._data_assets\n\n def add_data_asset(\n self,\n name: str,\n config: dict,\n ):\n \"\"\"\n Add data_asset to DataConnector using data_asset name as key, 
and data_asset configuration as value.\n \"\"\"\n self._data_assets[name] = config\n\n def _get_partition_definition_list_from_data_asset_config(\n self,\n data_asset_name,\n data_asset_config,\n ):\n if \"table_name\" in data_asset_config:\n table_name = data_asset_config[\"table_name\"]\n else:\n table_name = data_asset_name\n\n if \"splitter_method\" in data_asset_config:\n splitter_fn = getattr(self, data_asset_config[\"splitter_method\"])\n split_query = splitter_fn(\n table_name=table_name, **data_asset_config[\"splitter_kwargs\"]\n )\n\n rows = self._execution_engine.engine.execute(split_query).fetchall()\n\n # Zip up split parameters with column names\n column_names = self._get_column_names_from_splitter_kwargs(\n data_asset_config[\"splitter_kwargs\"]\n )\n partition_definition_list = [dict(zip(column_names, row)) for row in rows]\n\n else:\n partition_definition_list = [{}]\n\n return partition_definition_list\n\n def _refresh_data_references_cache(self):\n self._data_references_cache = {}\n\n for data_asset_name in self.data_assets:\n data_asset = self.data_assets[data_asset_name]\n partition_definition_list = (\n self._get_partition_definition_list_from_data_asset_config(\n data_asset_name,\n data_asset,\n )\n )\n\n # TODO Abe 20201029 : Apply sorters to partition_definition_list here\n # TODO Will 20201102 : add sorting code here\n self._data_references_cache[data_asset_name] = partition_definition_list\n\n def _get_column_names_from_splitter_kwargs(self, splitter_kwargs) -> List[str]:\n column_names: List[str] = []\n\n if \"column_names\" in splitter_kwargs:\n column_names = splitter_kwargs[\"column_names\"]\n elif \"column_name\" in splitter_kwargs:\n column_names = [splitter_kwargs[\"column_name\"]]\n\n return column_names\n\n def get_available_data_asset_names(self):\n \"\"\"\n Return the list of asset names known by this DataConnector.\n\n Returns:\n A list of available names\n \"\"\"\n return list(self.data_assets.keys())\n\n def get_unmatched_data_references(self) -> List[str]:\n \"\"\"\n Returns the list of data_references unmatched by configuration by looping through items in _data_references_cache\n and returning data_reference that do not have an associated data_asset.\n\n Returns:\n list of data_references that are not matched by configuration.\n \"\"\"\n return []\n\n def get_batch_definition_list_from_batch_request(self, batch_request: BatchRequest):\n self._validate_batch_request(batch_request=batch_request)\n\n if len(self._data_references_cache) == 0:\n self._refresh_data_references_cache()\n\n batch_definition_list: List[BatchDefinition] = []\n try:\n sub_cache = self._data_references_cache[batch_request.data_asset_name]\n except KeyError as e:\n raise KeyError(\n f\"data_asset_name {batch_request.data_asset_name} is not recognized.\"\n )\n\n for partition_definition in sub_cache:\n batch_definition: BatchDefinition = BatchDefinition(\n datasource_name=self.datasource_name,\n data_connector_name=self.name,\n data_asset_name=batch_request.data_asset_name,\n partition_definition=PartitionDefinition(partition_definition),\n )\n if batch_definition_matches_batch_request(batch_definition, batch_request):\n batch_definition_list.append(batch_definition)\n\n return batch_definition_list\n\n def _get_data_reference_list_from_cache_by_data_asset_name(\n self, data_asset_name: str\n ) -> List[str]:\n return self._data_references_cache[data_asset_name]\n\n def _map_data_reference_to_batch_definition_list(\n self, data_reference, data_asset_name: Optional[str] = None #: 
Any,\n ) -> Optional[List[BatchDefinition]]:\n # Note: This is a bit hacky, but it works. In sql_data_connectors, data references *are* dictionaries,\n # allowing us to invoke `PartitionDefinition(data_reference)`\n return [\n BatchDefinition(\n datasource_name=self.datasource_name,\n data_connector_name=self.name,\n data_asset_name=data_asset_name,\n partition_definition=PartitionDefinition(data_reference),\n )\n ]\n\n def _generate_batch_spec_parameters_from_batch_definition(\n self, batch_definition: BatchDefinition\n ) -> Dict:\n \"\"\"\n Build BatchSpec parameters from batch_definition with the following components:\n 1. data_asset_name from batch_definition\n 2. partition_definition from batch_definition\n 3. data_asset from data_connector\n\n Args:\n batch_definition (BatchDefinition): to be used to build batch_spec\n\n Returns:\n dict built from batch_definition\n \"\"\"\n data_asset_name: str = batch_definition.data_asset_name\n return {\n \"table_name\": data_asset_name,\n \"partition_definition\": batch_definition.partition_definition,\n **self.data_assets[data_asset_name],\n }\n\n # Splitter methods for listing partitions\n\n def _split_on_whole_table(\n self,\n table_name: str,\n ):\n \"\"\"'Split' by returning the whole table\n\n Note: the table_name parameter is a required to keep the signature of this method consistent with other methods.\n \"\"\"\n\n return sa.select([sa.true()])\n\n def _split_on_column_value(\n self,\n table_name: str,\n column_name: str,\n ):\n \"\"\"Split using the values in the named column\"\"\"\n # query = f\"SELECT DISTINCT(\\\"{self.column_name}\\\") FROM {self.table_name}\"\n\n return sa.select([sa.func.distinct(sa.column(column_name))]).select_from(\n sa.text(table_name)\n )\n\n def _split_on_converted_datetime(\n self,\n table_name: str,\n column_name: str,\n date_format_string: str = \"%Y-%m-%d\",\n ):\n \"\"\"Convert the values in the named column to the given date_format, and split on that\"\"\"\n # query = f\"SELECT DISTINCT( strftime(\\\"{date_format_string}\\\", \\\"{self.column_name}\\\")) as my_var FROM {self.table_name}\"\n\n return sa.select(\n [\n sa.func.distinct(\n sa.func.strftime(\n date_format_string,\n sa.column(column_name),\n )\n )\n ]\n ).select_from(sa.text(table_name))\n\n def _split_on_divided_integer(\n self, table_name: str, column_name: str, divisor: int\n ):\n \"\"\"Divide the values in the named column by `divisor`, and split on that\"\"\"\n # query = f\"SELECT DISTINCT(\\\"{self.column_name}\\\" / {divisor}) AS my_var FROM {self.table_name}\"\n\n return sa.select(\n [sa.func.distinct(sa.cast(sa.column(column_name) / divisor, sa.Integer))]\n ).select_from(sa.text(table_name))\n\n def _split_on_mod_integer(self, table_name: str, column_name: str, mod: int):\n \"\"\"Divide the values in the named column by `divisor`, and split on that\"\"\"\n # query = f\"SELECT DISTINCT(\\\"{self.column_name}\\\" / {divisor}) AS my_var FROM {self.table_name}\"\n\n return sa.select(\n [sa.func.distinct(sa.cast(sa.column(column_name) % mod, sa.Integer))]\n ).select_from(sa.text(table_name))\n\n def _split_on_multi_column_values(\n self,\n table_name: str,\n column_names: List[str],\n ):\n \"\"\"Split on the joint values in the named columns\"\"\"\n # query = f\"SELECT DISTINCT(\\\"{self.column_name}\\\") FROM {self.table_name}\"\n\n return (\n sa.select([sa.column(column_name) for column_name in column_names])\n .distinct()\n .select_from(sa.text(table_name))\n )\n\n def _split_on_hashed_column(\n self,\n table_name: str,\n column_name: 
str,\n hash_digits: int,\n ):\n \"\"\"Note: this method is experimental. It does not work with all SQL dialects.\"\"\"\n # query = f\"SELECT MD5(\\\"{self.column_name}\\\") = {matching_hash}) AS hashed_var FROM {self.table_name}\"\n\n return sa.select([sa.func.md5(sa.column(column_name))]).select_from(\n sa.text(table_name)\n )\n", "path": "great_expectations/datasource/data_connector/configured_asset_sql_data_connector.py"}]} | 3,724 | 147 |
gh_patches_debug_20591 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-296 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PAM/AD auth doesn't create homedir on first login
So from what I can see in auth.py, the 'login' PAM service is used to authenticate the user. For some reason I don't think the 'session' part of this service is honoured. I've tested this by using pamtester with the login service (see below) and the home directory is created.
```
pamtester login <username> open_session close_session
```
Any idea why this may be? I think it has something to do with the simplepam Python module's functionality, but I'm not certain.
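For comparison, the `pamela` bindings expose the session phase as explicit calls, so the same check can be sketched from Python. This is only an illustrative sketch (service name and credentials are placeholders), not a proposed fix:

```python
# Sketch mirroring the pamtester check above, with an authenticate step added.
# Assumes the `pamela` package, which exposes authenticate(), open_session() and close_session().
import pamela

def pam_session_check(username, password, service="login"):
    try:
        pamela.authenticate(username, password, service=service)  # authentication phase
        pamela.open_session(username, service=service)            # runs the service's session stack
        pamela.close_session(username, service=service)
    except pamela.PAMError as err:
        print("PAM call failed:", err)
        return False
    return True
```

If the home directory only appears once `open_session` is called, that would confirm it is the session stack (e.g. `pam_mkhomedir`), not authentication, that creates it.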
Any help is appreciated. Cheers.
</issue>
<code>
[start of jupyterhub/auth.py]
1 """Simple PAM authenticator"""
2
3 # Copyright (c) IPython Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from grp import getgrnam
7 import pwd
8 from subprocess import check_call, check_output, CalledProcessError
9
10 from tornado import gen
11 import simplepam
12
13 from traitlets.config import LoggingConfigurable
14 from traitlets import Bool, Set, Unicode, Any
15
16 from .handlers.login import LoginHandler
17 from .utils import url_path_join
18
19 class Authenticator(LoggingConfigurable):
20 """A class for authentication.
21
22 The API is one method, `authenticate`, a tornado gen.coroutine.
23 """
24
25 db = Any()
26 admin_users = Set(config=True,
27 help="""set of usernames of admin users
28
29 If unspecified, only the user that launches the server will be admin.
30 """
31 )
32 whitelist = Set(config=True,
33 help="""Username whitelist.
34
35 Use this to restrict which users can login.
36 If empty, allow any user to attempt login.
37 """
38 )
39 custom_html = Unicode('',
40 help="""HTML login form for custom handlers.
41 Override in form-based custom authenticators
42 that don't use username+password,
43 or need custom branding.
44 """
45 )
46 login_service = Unicode('',
47 help="""Name of the login service for external
48 login services (e.g. 'GitHub').
49 """
50 )
51
52 @gen.coroutine
53 def authenticate(self, handler, data):
54 """Authenticate a user with login form data.
55
56 This must be a tornado gen.coroutine.
57 It must return the username on successful authentication,
58 and return None on failed authentication.
59 """
60
61 def check_whitelist(self, user):
62 """
63 Return True if the whitelist is empty or user is in the whitelist.
64 """
65 # Parens aren't necessary here, but they make this easier to parse.
66 return (not self.whitelist) or (user in self.whitelist)
67
68 def add_user(self, user):
69 """Add a new user
70
71 By default, this just adds the user to the whitelist.
72
73 Subclasses may do more extensive things,
74 such as adding actual unix users.
75 """
76 if self.whitelist:
77 self.whitelist.add(user.name)
78
79 def delete_user(self, user):
80 """Triggered when a user is deleted.
81
82 Removes the user from the whitelist.
83 """
84 self.whitelist.discard(user.name)
85
86 def login_url(self, base_url):
87 """Override to register a custom login handler"""
88 return url_path_join(base_url, 'login')
89
90 def logout_url(self, base_url):
91 """Override to register a custom logout handler"""
92 return url_path_join(base_url, 'logout')
93
94 def get_handlers(self, app):
95 """Return any custom handlers the authenticator needs to register
96
97 (e.g. for OAuth)
98 """
99 return [
100 ('/login', LoginHandler),
101 ]
102
103 class LocalAuthenticator(Authenticator):
104 """Base class for Authenticators that work with local *ix users
105
106 Checks for local users, and can attempt to create them if they exist.
107 """
108
109 create_system_users = Bool(False, config=True,
110 help="""If a user is added that doesn't exist on the system,
111 should I try to create the system user?
112 """
113 )
114
115 group_whitelist = Set(
116 config=True,
117 help="Automatically whitelist anyone in this group.",
118 )
119
120 def _group_whitelist_changed(self, name, old, new):
121 if self.whitelist:
122 self.log.warn(
123 "Ignoring username whitelist because group whitelist supplied!"
124 )
125
126 def check_whitelist(self, username):
127 if self.group_whitelist:
128 return self.check_group_whitelist(username)
129 else:
130 return super().check_whitelist(username)
131
132 def check_group_whitelist(self, username):
133 if not self.group_whitelist:
134 return False
135 for grnam in self.group_whitelist:
136 try:
137 group = getgrnam(grnam)
138 except KeyError:
139 self.log.error('No such group: [%s]' % grnam)
140 continue
141 if username in group.gr_mem:
142 return True
143 return False
144
145 @gen.coroutine
146 def add_user(self, user):
147 """Add a new user
148
149 By default, this just adds the user to the whitelist.
150
151 Subclasses may do more extensive things,
152 such as adding actual unix users.
153 """
154 user_exists = yield gen.maybe_future(self.system_user_exists(user))
155 if not user_exists:
156 if self.create_system_users:
157 yield gen.maybe_future(self.add_system_user(user))
158 else:
159 raise KeyError("User %s does not exist." % user.name)
160
161 yield gen.maybe_future(super().add_user(user))
162
163 @staticmethod
164 def system_user_exists(user):
165 """Check if the user exists on the system"""
166 try:
167 pwd.getpwnam(user.name)
168 except KeyError:
169 return False
170 else:
171 return True
172
173 @staticmethod
174 def add_system_user(user):
175 """Create a new *ix user on the system. Works on FreeBSD and Linux, at least."""
176 name = user.name
177 for useradd in (
178 ['pw', 'useradd', '-m'],
179 ['useradd', '-m'],
180 ):
181 try:
182 check_output(['which', useradd[0]])
183 except CalledProcessError:
184 continue
185 else:
186 break
187 else:
188 raise RuntimeError("I don't know how to add users on this system.")
189
190 check_call(useradd + [name])
191
192
193 class PAMAuthenticator(LocalAuthenticator):
194 """Authenticate local *ix users with PAM"""
195 encoding = Unicode('utf8', config=True,
196 help="""The encoding to use for PAM"""
197 )
198 service = Unicode('login', config=True,
199 help="""The PAM service to use for authentication."""
200 )
201
202 @gen.coroutine
203 def authenticate(self, handler, data):
204 """Authenticate with PAM, and return the username if login is successful.
205
206 Return None otherwise.
207 """
208 username = data['username']
209 if not self.check_whitelist(username):
210 return
211 # simplepam wants bytes, not unicode
212 # see simplepam#3
213 busername = username.encode(self.encoding)
214 bpassword = data['password'].encode(self.encoding)
215 if simplepam.authenticate(busername, bpassword, service=self.service):
216 return username
217
218
[end of jupyterhub/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jupyterhub/auth.py b/jupyterhub/auth.py
--- a/jupyterhub/auth.py
+++ b/jupyterhub/auth.py
@@ -8,7 +8,7 @@
from subprocess import check_call, check_output, CalledProcessError
from tornado import gen
-import simplepam
+import pamela
from traitlets.config import LoggingConfigurable
from traitlets import Bool, Set, Unicode, Any
@@ -208,10 +208,11 @@
username = data['username']
if not self.check_whitelist(username):
return
- # simplepam wants bytes, not unicode
- # see simplepam#3
- busername = username.encode(self.encoding)
- bpassword = data['password'].encode(self.encoding)
- if simplepam.authenticate(busername, bpassword, service=self.service):
+ try:
+ pamela.authenticate(username, data['password'], service=self.service)
+ pamela.open_session(username, service=self.service)
+ except pamela.PAMError as e:
+ self.log.warn("PAM Authentication failed: %s", e)
+ else:
return username
| {"golden_diff": "diff --git a/jupyterhub/auth.py b/jupyterhub/auth.py\n--- a/jupyterhub/auth.py\n+++ b/jupyterhub/auth.py\n@@ -8,7 +8,7 @@\n from subprocess import check_call, check_output, CalledProcessError\n \n from tornado import gen\n-import simplepam\n+import pamela\n \n from traitlets.config import LoggingConfigurable\n from traitlets import Bool, Set, Unicode, Any\n@@ -208,10 +208,11 @@\n username = data['username']\n if not self.check_whitelist(username):\n return\n- # simplepam wants bytes, not unicode\n- # see simplepam#3\n- busername = username.encode(self.encoding)\n- bpassword = data['password'].encode(self.encoding)\n- if simplepam.authenticate(busername, bpassword, service=self.service):\n+ try:\n+ pamela.authenticate(username, data['password'], service=self.service)\n+ pamela.open_session(username, service=self.service)\n+ except pamela.PAMError as e:\n+ self.log.warn(\"PAM Authentication failed: %s\", e)\n+ else:\n return username\n", "issue": "PAM/AD auth doesn't create homedir on first login\nSo from what i can see in auth.py, the 'login' pam service is used to authenticate the user. For some reason I don't think the 'session' part of this service is honoured. I've tested this by using pamtester with the login service (see below) and the home directory is created.\n\npamtester login <username> open_session close_session\n\nAny idea why this may be? I think it has something to do with the simplepam python module functionality but i'm not certain.\n\nany help is appreciated. Cheers\n\n", "before_files": [{"content": "\"\"\"Simple PAM authenticator\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom grp import getgrnam\nimport pwd\nfrom subprocess import check_call, check_output, CalledProcessError\n\nfrom tornado import gen\nimport simplepam\n\nfrom traitlets.config import LoggingConfigurable\nfrom traitlets import Bool, Set, Unicode, Any\n\nfrom .handlers.login import LoginHandler\nfrom .utils import url_path_join\n\nclass Authenticator(LoggingConfigurable):\n \"\"\"A class for authentication.\n \n The API is one method, `authenticate`, a tornado gen.coroutine.\n \"\"\"\n \n db = Any()\n admin_users = Set(config=True,\n help=\"\"\"set of usernames of admin users\n\n If unspecified, only the user that launches the server will be admin.\n \"\"\"\n )\n whitelist = Set(config=True,\n help=\"\"\"Username whitelist.\n \n Use this to restrict which users can login.\n If empty, allow any user to attempt login.\n \"\"\"\n )\n custom_html = Unicode('',\n help=\"\"\"HTML login form for custom handlers.\n Override in form-based custom authenticators\n that don't use username+password,\n or need custom branding.\n \"\"\"\n )\n login_service = Unicode('',\n help=\"\"\"Name of the login service for external\n login services (e.g. 
'GitHub').\n \"\"\"\n )\n \n @gen.coroutine\n def authenticate(self, handler, data):\n \"\"\"Authenticate a user with login form data.\n \n This must be a tornado gen.coroutine.\n It must return the username on successful authentication,\n and return None on failed authentication.\n \"\"\"\n\n def check_whitelist(self, user):\n \"\"\"\n Return True if the whitelist is empty or user is in the whitelist.\n \"\"\"\n # Parens aren't necessary here, but they make this easier to parse.\n return (not self.whitelist) or (user in self.whitelist)\n\n def add_user(self, user):\n \"\"\"Add a new user\n \n By default, this just adds the user to the whitelist.\n \n Subclasses may do more extensive things,\n such as adding actual unix users.\n \"\"\"\n if self.whitelist:\n self.whitelist.add(user.name)\n \n def delete_user(self, user):\n \"\"\"Triggered when a user is deleted.\n \n Removes the user from the whitelist.\n \"\"\"\n self.whitelist.discard(user.name)\n \n def login_url(self, base_url):\n \"\"\"Override to register a custom login handler\"\"\"\n return url_path_join(base_url, 'login')\n \n def logout_url(self, base_url):\n \"\"\"Override to register a custom logout handler\"\"\"\n return url_path_join(base_url, 'logout')\n \n def get_handlers(self, app):\n \"\"\"Return any custom handlers the authenticator needs to register\n \n (e.g. for OAuth)\n \"\"\"\n return [\n ('/login', LoginHandler),\n ]\n\nclass LocalAuthenticator(Authenticator):\n \"\"\"Base class for Authenticators that work with local *ix users\n \n Checks for local users, and can attempt to create them if they exist.\n \"\"\"\n \n create_system_users = Bool(False, config=True,\n help=\"\"\"If a user is added that doesn't exist on the system,\n should I try to create the system user?\n \"\"\"\n )\n\n group_whitelist = Set(\n config=True,\n help=\"Automatically whitelist anyone in this group.\",\n )\n\n def _group_whitelist_changed(self, name, old, new):\n if self.whitelist:\n self.log.warn(\n \"Ignoring username whitelist because group whitelist supplied!\"\n )\n\n def check_whitelist(self, username):\n if self.group_whitelist:\n return self.check_group_whitelist(username)\n else:\n return super().check_whitelist(username)\n\n def check_group_whitelist(self, username):\n if not self.group_whitelist:\n return False\n for grnam in self.group_whitelist:\n try:\n group = getgrnam(grnam)\n except KeyError:\n self.log.error('No such group: [%s]' % grnam)\n continue\n if username in group.gr_mem:\n return True\n return False\n\n @gen.coroutine\n def add_user(self, user):\n \"\"\"Add a new user\n \n By default, this just adds the user to the whitelist.\n \n Subclasses may do more extensive things,\n such as adding actual unix users.\n \"\"\"\n user_exists = yield gen.maybe_future(self.system_user_exists(user))\n if not user_exists:\n if self.create_system_users:\n yield gen.maybe_future(self.add_system_user(user))\n else:\n raise KeyError(\"User %s does not exist.\" % user.name)\n \n yield gen.maybe_future(super().add_user(user))\n \n @staticmethod\n def system_user_exists(user):\n \"\"\"Check if the user exists on the system\"\"\"\n try:\n pwd.getpwnam(user.name)\n except KeyError:\n return False\n else:\n return True\n \n @staticmethod\n def add_system_user(user):\n \"\"\"Create a new *ix user on the system. 
Works on FreeBSD and Linux, at least.\"\"\"\n name = user.name\n for useradd in (\n ['pw', 'useradd', '-m'],\n ['useradd', '-m'],\n ):\n try:\n check_output(['which', useradd[0]])\n except CalledProcessError:\n continue\n else:\n break\n else:\n raise RuntimeError(\"I don't know how to add users on this system.\")\n \n check_call(useradd + [name])\n\n\nclass PAMAuthenticator(LocalAuthenticator):\n \"\"\"Authenticate local *ix users with PAM\"\"\"\n encoding = Unicode('utf8', config=True,\n help=\"\"\"The encoding to use for PAM\"\"\"\n )\n service = Unicode('login', config=True,\n help=\"\"\"The PAM service to use for authentication.\"\"\"\n )\n \n @gen.coroutine\n def authenticate(self, handler, data):\n \"\"\"Authenticate with PAM, and return the username if login is successful.\n \n Return None otherwise.\n \"\"\"\n username = data['username']\n if not self.check_whitelist(username):\n return\n # simplepam wants bytes, not unicode\n # see simplepam#3\n busername = username.encode(self.encoding)\n bpassword = data['password'].encode(self.encoding)\n if simplepam.authenticate(busername, bpassword, service=self.service):\n return username\n \n", "path": "jupyterhub/auth.py"}]} | 2,599 | 256 |
gh_patches_debug_22480 | rasdani/github-patches | git_diff | ivy-llc__ivy-25886 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ihfft2
</issue>
<code>
[start of ivy/functional/frontends/paddle/fft.py]
1 # global
2 import ivy
3 from ivy.func_wrapper import with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_supported_dtypes(
10 {"2.5.1 and below": ("complex64", "complex128")},
11 "paddle",
12 )
13 @to_ivy_arrays_and_back
14 def fft(x, n=None, axis=-1.0, norm="backward", name=None):
15 ret = ivy.fft(ivy.astype(x, "complex128"), axis, norm=norm, n=n)
16 return ivy.astype(ret, x.dtype)
17
18
19 @with_supported_dtypes(
20 {
21 "2.5.1 and below": (
22 "int32",
23 "int64",
24 "float32",
25 "float64",
26 )
27 },
28 "paddle",
29 )
30 @to_ivy_arrays_and_back
31 def fftfreq(n, d=1.0, dtype=None, name=None):
32 if d * n == 0:
33 raise ValueError("d or n should not be 0.")
34
35 if dtype is None:
36 dtype = ivy.default_dtype()
37 val = 1.0 / (n * d)
38 pos_max = (n + 1) // 2
39 neg_max = n // 2
40 indices = ivy.arange(-neg_max, pos_max, dtype=dtype)
41 indices = ivy.roll(indices, -neg_max)
42 return ivy.multiply(indices, val)
43
44
45 @with_supported_dtypes(
46 {
47 "2.5.1 and below": (
48 "int32",
49 "int64",
50 "float32",
51 "float64",
52 "complex64",
53 "complex128",
54 )
55 },
56 "paddle",
57 )
58 @to_ivy_arrays_and_back
59 def fftshift(x, axes=None, name=None):
60 shape = x.shape
61
62 if axes is None:
63 axes = tuple(range(x.ndim))
64 shifts = [(dim // 2) for dim in shape]
65 elif isinstance(axes, int):
66 shifts = shape[axes] // 2
67 else:
68 shifts = ivy.concat([shape[ax] // 2 for ax in axes])
69
70 roll = ivy.roll(x, shifts, axis=axes)
71
72 return roll
73
74
75 @with_supported_dtypes(
76 {"2.5.1 and below": ("complex64", "complex128")},
77 "paddle",
78 )
79 @to_ivy_arrays_and_back
80 def hfft(x, n=None, axes=-1, norm="backward", name=None):
81 """Compute the FFT of a signal that has Hermitian symmetry, resulting in a real
82 spectrum."""
83 # Determine the input shape and axis length
84 input_shape = x.shape
85 input_len = input_shape[axes]
86
87 # Calculate n if not provided
88 if n is None:
89 n = 2 * (input_len - 1)
90
91 # Perform the FFT along the specified axis
92 result = ivy.fft(x, axes, n=n, norm=norm)
93
94 return ivy.real(result)
95
96
97 @with_supported_dtypes(
98 {"2.5.1 and below": "complex64"},
99 "paddle",
100 )
101 @to_ivy_arrays_and_back
102 def hfft2(x, s=None, axis=(-2, -1), norm="backward"):
103 # check if the input tensor x is a hermitian complex
104 if not ivy.allclose(ivy.conj(ivy.matrix_transpose(x)), x):
105 raise ValueError("Input tensor x must be Hermitian complex.")
106
107 fft_result = ivy.fft2(x, s=s, dim=axis, norm=norm)
108
109 # Depending on the norm, apply scaling and normalization
110 if norm == "forward":
111 fft_result /= ivy.sqrt(ivy.prod(ivy.shape(fft_result)))
112 elif norm == "ortho":
113 fft_result /= ivy.sqrt(ivy.prod(ivy.shape(x)))
114
115 return ivy.real(fft_result) # Return the real part of the result
116
117
118 @with_supported_dtypes(
119 {"2.5.1 and below": ("complex64", "complex128")},
120 "paddle",
121 )
122 @to_ivy_arrays_and_back
123 def ifft(x, n=None, axis=-1.0, norm="backward", name=None):
124 ret = ivy.ifft(ivy.astype(x, "complex128"), axis, norm=norm, n=n)
125 return ivy.astype(ret, x.dtype)
126
127
128 @with_supported_dtypes(
129 {
130 "2.5.1 and below": (
131 "int32",
132 "int64",
133 "float32",
134 "float64",
135 )
136 },
137 "paddle",
138 )
139 @to_ivy_arrays_and_back
140 def ifftshift(x, axes=None, name=None):
141 shape = x.shape
142
143 if axes is None:
144 axes = tuple(range(x.ndim))
145 shifts = [-(dim // 2) for dim in shape]
146 elif isinstance(axes, int):
147 shifts = -(shape[axes] // 2)
148 else:
149 shifts = ivy.concat([-shape[ax] // 2 for ax in axes])
150
151 roll = ivy.roll(x, shifts, axis=axes)
152
153 return roll
154
155
156 @with_supported_dtypes(
157 {"2.5.1 and below": ("complex64", "complex128")},
158 "paddle",
159 )
160 @to_ivy_arrays_and_back
161 def irfft(x, n=None, axis=-1.0, norm="backward", name=None):
162 if n is None:
163 n = 2 * (x.shape[axis] - 1)
164
165 pos_freq_terms = ivy.take_along_axis(x, range(n // 2 + 1), axis)
166 neg_freq_terms = ivy.conj(pos_freq_terms[1:-1][::-1])
167 combined_freq_terms = ivy.concat((pos_freq_terms, neg_freq_terms), axis=axis)
168 time_domain = ivy.ifft(combined_freq_terms, axis, norm=norm, n=n)
169 if ivy.isreal(x):
170 time_domain = ivy.real(time_domain)
171 return time_domain
172
173
174 @with_supported_dtypes(
175 {
176 "2.5.1 and below": (
177 "int32",
178 "int64",
179 "float16",
180 "float32",
181 "float64",
182 "complex64",
183 "complex128",
184 )
185 },
186 "paddle",
187 )
188 @to_ivy_arrays_and_back
189 def irfft2(x, s=None, axes=(-2, -1), norm="backward"):
190 # Handle values if None
191 if s is None:
192 s = x.shape
193 if axes is None:
194 axes = (-2, -1)
195
196 # Calculate the normalization factor 'n' based on the shape 's'
197 n = ivy.prod(ivy.array(s))
198
199 result = ivy.ifftn(x, dim=axes[0], norm=norm)
200
201 # Normalize the result based on the 'norm' parameter
202 if norm == "backward":
203 result /= n
204 elif norm == "forward":
205 result *= n
206 elif norm == "ortho":
207 result /= ivy.sqrt(n)
208 return result
209
210
211 @with_supported_dtypes(
212 {"2.5.1 and below": ("complex64", "complex128")},
213 "paddle",
214 )
215 @to_ivy_arrays_and_back
216 def irfftn(x, s=None, axes=None, norm="backward", name=None):
217 x = ivy.array(x)
218
219 if axes is None:
220 axes = list(range(len(x.shape)))
221
222 include_last_axis = len(x.shape) - 1 in axes
223
224 if s is None:
225 s = [
226 x.shape[axis] if axis != (len(x.shape) - 1) else 2 * (x.shape[axis] - 1)
227 for axis in axes
228 ]
229
230 real_result = x
231 remaining_axes = [axis for axis in axes if axis != (len(x.shape) - 1)]
232
233 if remaining_axes:
234 real_result = ivy.ifftn(
235 x,
236 s=[s[axes.index(axis)] for axis in remaining_axes],
237 axes=remaining_axes,
238 norm=norm,
239 )
240
241 if include_last_axis:
242 axis = len(x.shape) - 1
243 size = s[axes.index(axis)]
244 freq_domain = ivy.moveaxis(real_result, axis, -1)
245 slices = [slice(None)] * ivy.get_num_dims(freq_domain)
246 slices[-1] = slice(0, size // 2 + 1)
247 pos_freq_terms = freq_domain[tuple(slices)]
248 slices[-1] = slice(1, -1)
249 neg_freq_terms = ivy.conj(pos_freq_terms[tuple(slices)][..., ::-1])
250 combined_freq_terms = ivy.concat((pos_freq_terms, neg_freq_terms), axis=-1)
251 real_result = ivy.ifftn(combined_freq_terms, s=[size], axes=[-1], norm=norm)
252 real_result = ivy.moveaxis(real_result, -1, axis)
253
254 if ivy.is_complex_dtype(x.dtype):
255 output_dtype = "float32" if x.dtype == "complex64" else "float64"
256 else:
257 output_dtype = "float32"
258
259 result_t = ivy.astype(real_result, output_dtype)
260 return result_t
261
262
263 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
264 @to_ivy_arrays_and_back
265 def rfft(x, n=None, axis=-1, norm="backward", name=None):
266 return ivy.dft(x, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)
267
268
269 @to_ivy_arrays_and_back
270 def rfftfreq(n, d=1.0, dtype=None, name=None):
271 dtype = ivy.default_dtype()
272 val = 1.0 / (n * d)
273 pos_max = n // 2 + 1
274 indices = ivy.arange(0, pos_max, dtype=dtype)
275 return indices * val
276
[end of ivy/functional/frontends/paddle/fft.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/paddle/fft.py b/ivy/functional/frontends/paddle/fft.py
--- a/ivy/functional/frontends/paddle/fft.py
+++ b/ivy/functional/frontends/paddle/fft.py
@@ -153,6 +153,41 @@
return roll
+@with_supported_dtypes(
+ {
+ "2.5.1 and below": (
+ "int32",
+ "int64",
+ "float32",
+ "float64",
+ )
+ },
+ "paddle",
+)
+@to_ivy_arrays_and_back
+def ihfft2(x, s=None, axes=(-2, -1), norm="backward", name=None):
+ # check if the input array is two-dimensional and real
+ if len(ivy.array(x).shape) != 2 or ivy.is_complex_dtype(x):
+ raise ValueError("input must be a two-dimensional real array")
+
+ # cast the input to the same float64 type so that there are no backend issues
+ x_ = ivy.astype(x, ivy.float64)
+
+ ihfft2_result = 0
+ # Compute the complex conjugate of the 2-dimensional discrete Fourier Transform
+ if norm == "backward":
+ ihfft2_result = ivy.conj(ivy.rfftn(x_, s=s, axes=axes, norm="forward"))
+ if norm == "forward":
+ ihfft2_result = ivy.conj(ivy.rfftn(x_, s=s, axes=axes, norm="backward"))
+ if norm == "ortho":
+ ihfft2_result = ivy.conj(ivy.rfftn(x_, s=s, axes=axes, norm="ortho"))
+
+ if x.dtype == ivy.float32 or x.dtype == ivy.int32 or x.dtype == ivy.int64:
+ return ivy.astype(ihfft2_result, ivy.complex64)
+ if x.dtype == ivy.float64:
+ return ivy.astype(ihfft2_result, ivy.complex128)
+
+
@with_supported_dtypes(
{"2.5.1 and below": ("complex64", "complex128")},
"paddle",
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/fft.py b/ivy/functional/frontends/paddle/fft.py\n--- a/ivy/functional/frontends/paddle/fft.py\n+++ b/ivy/functional/frontends/paddle/fft.py\n@@ -153,6 +153,41 @@\n return roll\n \n \n+@with_supported_dtypes(\n+ {\n+ \"2.5.1 and below\": (\n+ \"int32\",\n+ \"int64\",\n+ \"float32\",\n+ \"float64\",\n+ )\n+ },\n+ \"paddle\",\n+)\n+@to_ivy_arrays_and_back\n+def ihfft2(x, s=None, axes=(-2, -1), norm=\"backward\", name=None):\n+ # check if the input array is two-dimensional and real\n+ if len(ivy.array(x).shape) != 2 or ivy.is_complex_dtype(x):\n+ raise ValueError(\"input must be a two-dimensional real array\")\n+\n+ # cast the input to the same float64 type so that there are no backend issues\n+ x_ = ivy.astype(x, ivy.float64)\n+\n+ ihfft2_result = 0\n+ # Compute the complex conjugate of the 2-dimensional discrete Fourier Transform\n+ if norm == \"backward\":\n+ ihfft2_result = ivy.conj(ivy.rfftn(x_, s=s, axes=axes, norm=\"forward\"))\n+ if norm == \"forward\":\n+ ihfft2_result = ivy.conj(ivy.rfftn(x_, s=s, axes=axes, norm=\"backward\"))\n+ if norm == \"ortho\":\n+ ihfft2_result = ivy.conj(ivy.rfftn(x_, s=s, axes=axes, norm=\"ortho\"))\n+\n+ if x.dtype == ivy.float32 or x.dtype == ivy.int32 or x.dtype == ivy.int64:\n+ return ivy.astype(ihfft2_result, ivy.complex64)\n+ if x.dtype == ivy.float64:\n+ return ivy.astype(ihfft2_result, ivy.complex128)\n+\n+\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n", "issue": "ihfft2\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef fft(x, n=None, axis=-1.0, norm=\"backward\", name=None):\n ret = ivy.fft(ivy.astype(x, \"complex128\"), axis, norm=norm, n=n)\n return ivy.astype(ret, x.dtype)\n\n\n@with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef fftfreq(n, d=1.0, dtype=None, name=None):\n if d * n == 0:\n raise ValueError(\"d or n should not be 0.\")\n\n if dtype is None:\n dtype = ivy.default_dtype()\n val = 1.0 / (n * d)\n pos_max = (n + 1) // 2\n neg_max = n // 2\n indices = ivy.arange(-neg_max, pos_max, dtype=dtype)\n indices = ivy.roll(indices, -neg_max)\n return ivy.multiply(indices, val)\n\n\n@with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef fftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = shape[axes] // 2\n else:\n shifts = ivy.concat([shape[ax] // 2 for ax in axes])\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef hfft(x, n=None, axes=-1, norm=\"backward\", name=None):\n \"\"\"Compute the FFT of a signal that has Hermitian symmetry, resulting in a real\n spectrum.\"\"\"\n # Determine the input shape and axis length\n input_shape = x.shape\n input_len = input_shape[axes]\n\n # Calculate n if not provided\n if n is None:\n n = 2 * 
(input_len - 1)\n\n # Perform the FFT along the specified axis\n result = ivy.fft(x, axes, n=n, norm=norm)\n\n return ivy.real(result)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": \"complex64\"},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef hfft2(x, s=None, axis=(-2, -1), norm=\"backward\"):\n # check if the input tensor x is a hermitian complex\n if not ivy.allclose(ivy.conj(ivy.matrix_transpose(x)), x):\n raise ValueError(\"Input tensor x must be Hermitian complex.\")\n\n fft_result = ivy.fft2(x, s=s, dim=axis, norm=norm)\n\n # Depending on the norm, apply scaling and normalization\n if norm == \"forward\":\n fft_result /= ivy.sqrt(ivy.prod(ivy.shape(fft_result)))\n elif norm == \"ortho\":\n fft_result /= ivy.sqrt(ivy.prod(ivy.shape(x)))\n\n return ivy.real(fft_result) # Return the real part of the result\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef ifft(x, n=None, axis=-1.0, norm=\"backward\", name=None):\n ret = ivy.ifft(ivy.astype(x, \"complex128\"), axis, norm=norm, n=n)\n return ivy.astype(ret, x.dtype)\n\n\n@with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef ifftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [-(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = -(shape[axes] // 2)\n else:\n shifts = ivy.concat([-shape[ax] // 2 for ax in axes])\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef irfft(x, n=None, axis=-1.0, norm=\"backward\", name=None):\n if n is None:\n n = 2 * (x.shape[axis] - 1)\n\n pos_freq_terms = ivy.take_along_axis(x, range(n // 2 + 1), axis)\n neg_freq_terms = ivy.conj(pos_freq_terms[1:-1][::-1])\n combined_freq_terms = ivy.concat((pos_freq_terms, neg_freq_terms), axis=axis)\n time_domain = ivy.ifft(combined_freq_terms, axis, norm=norm, n=n)\n if ivy.isreal(x):\n time_domain = ivy.real(time_domain)\n return time_domain\n\n\n@with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef irfft2(x, s=None, axes=(-2, -1), norm=\"backward\"):\n # Handle values if None\n if s is None:\n s = x.shape\n if axes is None:\n axes = (-2, -1)\n\n # Calculate the normalization factor 'n' based on the shape 's'\n n = ivy.prod(ivy.array(s))\n\n result = ivy.ifftn(x, dim=axes[0], norm=norm)\n\n # Normalize the result based on the 'norm' parameter\n if norm == \"backward\":\n result /= n\n elif norm == \"forward\":\n result *= n\n elif norm == \"ortho\":\n result /= ivy.sqrt(n)\n return result\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef irfftn(x, s=None, axes=None, norm=\"backward\", name=None):\n x = ivy.array(x)\n\n if axes is None:\n axes = list(range(len(x.shape)))\n\n include_last_axis = len(x.shape) - 1 in axes\n\n if s is None:\n s = [\n x.shape[axis] if axis != (len(x.shape) - 1) else 2 * (x.shape[axis] - 1)\n for axis in axes\n ]\n\n real_result = x\n remaining_axes = [axis for axis in axes if axis != (len(x.shape) - 1)]\n\n if remaining_axes:\n real_result = ivy.ifftn(\n x,\n s=[s[axes.index(axis)] for 
axis in remaining_axes],\n axes=remaining_axes,\n norm=norm,\n )\n\n if include_last_axis:\n axis = len(x.shape) - 1\n size = s[axes.index(axis)]\n freq_domain = ivy.moveaxis(real_result, axis, -1)\n slices = [slice(None)] * ivy.get_num_dims(freq_domain)\n slices[-1] = slice(0, size // 2 + 1)\n pos_freq_terms = freq_domain[tuple(slices)]\n slices[-1] = slice(1, -1)\n neg_freq_terms = ivy.conj(pos_freq_terms[tuple(slices)][..., ::-1])\n combined_freq_terms = ivy.concat((pos_freq_terms, neg_freq_terms), axis=-1)\n real_result = ivy.ifftn(combined_freq_terms, s=[size], axes=[-1], norm=norm)\n real_result = ivy.moveaxis(real_result, -1, axis)\n\n if ivy.is_complex_dtype(x.dtype):\n output_dtype = \"float32\" if x.dtype == \"complex64\" else \"float64\"\n else:\n output_dtype = \"float32\"\n\n result_t = ivy.astype(real_result, output_dtype)\n return result_t\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rfft(x, n=None, axis=-1, norm=\"backward\", name=None):\n return ivy.dft(x, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef rfftfreq(n, d=1.0, dtype=None, name=None):\n dtype = ivy.default_dtype()\n val = 1.0 / (n * d)\n pos_max = n // 2 + 1\n indices = ivy.arange(0, pos_max, dtype=dtype)\n return indices * val\n", "path": "ivy/functional/frontends/paddle/fft.py"}]} | 3,528 | 516 |
gh_patches_debug_16303 | rasdani/github-patches | git_diff | conan-io__conan-5521 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[suggestion] Search custom sub-settings
To help us debug your issue please explain:
From slack discussion: https://cpplang.slack.com/archives/C41CWV9HA/p1560980070212900
For now `conan search` is only able to look up regular settings, such as `os`, `arch`, `build_type` and `compiler`, but it is limited when it comes to customized settings, for example:
```
$ conan search openssl/1.0.2s@ennor/stable -q "os=iOS"
Existing packages for recipe openssl/1.0.2s@ennor/stable:
Package_ID: 08859e5949c46fd78bb8b4d045f854329c141a56
[settings]
build_type: Release
compiler: apple-clang
compiler.version: 10.0
os: iOS
os.version: 10.0
Outdated from recipe: False
Package_ID: 5d2c60a906ecd568347527895bc32e4890264eea
[settings]
build_type: Debug
compiler: apple-clang
compiler.version: 10.0
os: iOS
os.version: 10.0
Outdated from recipe: False
$ conan search openssl/1.0.2s@ennor/stable -q "os.version=10.0"
Existing packages for recipe openssl/1.0.2s@ennor/stable:
There are no packages for reference 'openssl/1.0.2s@ennor/stable' matching the query 'os.version=10.0'
$ conan search openssl/1.0.2s@ennor/stable -q "os=iOS AND os.version=10.0"
Existing packages for recipe openssl/1.0.2s@ennor/stable:
There are no packages for reference 'openssl/1.0.2s@ennor/stable' matching the query 'os=iOS AND os.version=10.0'
```
The `search` command is limited because of this rule:
https://github.com/conan-io/conan/blob/df8fd41b63ee3054fe1160ded5e52e19c0a933c3/conans/search/search.py#L85
The suggestion is to support custom settings, or at least sub-settings.
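One possible shape for that generalization is sketched below — purely illustrative (the helper name is made up, and this is not the actual implementation): treat any `root.sub` query whose root is a known settings name as a settings lookup instead of letting it fall through to the options branch.

```python
# Sketch only: a more general settings check for evaluate() in conans/search/search.py,
# so that sub-settings such as "os.version" are matched against the settings block.
KNOWN_SETTINGS = ["os", "os_build", "compiler", "arch", "arch_build", "build_type"]

def is_settings_query(prop_name):
    root = prop_name.split(".", 1)[0]   # "os.version" -> "os"
    return root in KNOWN_SETTINGS

def evaluate(prop_name, prop_value, conan_vars_info):
    info_settings = conan_vars_info.get("settings", {})
    info_options = conan_vars_info.get("options", {})

    def compatible(setting_value):
        return prop_value == setting_value or (prop_value == "None" and setting_value is None)

    if is_settings_query(prop_name):
        return compatible(info_settings.get(prop_name, None))
    return compatible(info_options.get(prop_name, None))
```

With a check like this, `-q "os.version=10.0"` would be looked up in the package's settings (where the output above shows it lives) rather than in its options.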
Regards!
/cc @Minimonium @Paultergeist
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [ ] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
</issue>
<code>
[start of conans/search/search.py]
1 import os
2 import re
3 from collections import OrderedDict
4 from fnmatch import translate
5
6 from conans.errors import ConanException, RecipeNotFoundException
7 from conans.model.info import ConanInfo
8 from conans.model.ref import ConanFileReference, PackageReference
9 from conans.paths import CONANINFO
10 from conans.search.query_parse import evaluate_postfix, infix_to_postfix
11 from conans.util.files import list_folder_subdirs, load
12 from conans.util.log import logger
13
14
15 def filter_outdated(packages_infos, recipe_hash):
16 if not recipe_hash:
17 return packages_infos
18 result = OrderedDict()
19 for package_id, info in packages_infos.items():
20 try: # Existing package_info of old package might not have recipe_hash
21 if info["recipe_hash"] != recipe_hash:
22 result[package_id] = info
23 except KeyError:
24 pass
25 return result
26
27
28 def filter_by_revision(metadata, packages_infos):
29 ok = OrderedDict()
30 recipe_revision = metadata.recipe.revision
31 for package_id, info in packages_infos.items():
32 try:
33 rec_rev = metadata.packages[package_id].recipe_revision
34 if rec_rev == recipe_revision:
35 ok[package_id] = info
36 except KeyError:
37 pass
38 return ok
39
40
41 def filter_packages(query, package_infos):
42 if query is None:
43 return package_infos
44 try:
45 if "!" in query:
46 raise ConanException("'!' character is not allowed")
47 if " not " in query or query.startswith("not "):
48 raise ConanException("'not' operator is not allowed")
49 postfix = infix_to_postfix(query) if query else []
50 result = OrderedDict()
51 for package_id, info in package_infos.items():
52 if evaluate_postfix_with_info(postfix, info):
53 result[package_id] = info
54 return result
55 except Exception as exc:
56 raise ConanException("Invalid package query: %s. %s" % (query, exc))
57
58
59 def evaluate_postfix_with_info(postfix, conan_vars_info):
60
61 # Evaluate conaninfo with the expression
62
63 def evaluate_info(expression):
64 """Receives an expression like compiler.version="12"
65 Uses conan_vars_info in the closure to evaluate it"""
66 name, value = expression.split("=", 1)
67 value = value.replace("\"", "")
68 return evaluate(name, value, conan_vars_info)
69
70 return evaluate_postfix(postfix, evaluate_info)
71
72
73 def evaluate(prop_name, prop_value, conan_vars_info):
74 """
75 Evaluates a single prop_name, prop_value like "os", "Windows" against
76 conan_vars_info.serialize_min()
77 """
78
79 def compatible_prop(setting_value, prop_value):
80 return (prop_value == setting_value) or (prop_value == "None" and setting_value is None)
81
82 info_settings = conan_vars_info.get("settings", [])
83 info_options = conan_vars_info.get("options", [])
84
85 if (prop_name in ["os", "os_build", "compiler", "arch", "arch_build", "build_type"] or
86 prop_name.startswith("compiler.")):
87 return compatible_prop(info_settings.get(prop_name, None), prop_value)
88 else:
89 return compatible_prop(info_options.get(prop_name, None), prop_value)
90
91
92 def search_recipes(cache, pattern=None, ignorecase=True):
93 # Conan references in main storage
94 if pattern:
95 if isinstance(pattern, ConanFileReference):
96 pattern = str(pattern)
97 pattern = translate(pattern)
98 pattern = re.compile(pattern, re.IGNORECASE) if ignorecase else re.compile(pattern)
99
100 subdirs = list_folder_subdirs(basedir=cache.store, level=4)
101 refs = [ConanFileReference(*folder.split("/")) for folder in subdirs]
102 refs.extend(cache.editable_packages.edited_refs.keys())
103 if pattern:
104 refs = [r for r in refs if _partial_match(pattern, r)]
105 refs = sorted(refs)
106 return refs
107
108
109 def _partial_match(pattern, ref):
110 """
111 Finds if pattern matches any of partial sums of tokens of conan reference
112 """
113 tokens = str(ref).replace('/', ' / ').replace('@', ' @ ').replace('#', ' # ').split()
114
115 def partial_sums(iterable):
116 partial = ''
117 for i in iterable:
118 partial += i
119 yield partial
120
121 return any(map(pattern.match, list(partial_sums(tokens))))
122
123
124 def search_packages(package_layout, query):
125 """ Return a dict like this:
126
127 {package_ID: {name: "OpenCV",
128 version: "2.14",
129 settings: {os: Windows}}}
130 param package_layout: Layout for the given reference
131 """
132 if not os.path.exists(package_layout.base_folder()) or (
133 package_layout.ref.revision and
134 package_layout.recipe_revision() != package_layout.ref.revision):
135 raise RecipeNotFoundException(package_layout.ref, print_rev=True)
136 infos = _get_local_infos_min(package_layout)
137 return filter_packages(query, infos)
138
139
140 def _get_local_infos_min(package_layout):
141 result = OrderedDict()
142
143 packages_path = package_layout.packages()
144 subdirs = list_folder_subdirs(packages_path, level=1)
145 for package_id in subdirs:
146 # Read conaninfo
147 pref = PackageReference(package_layout.ref, package_id)
148 info_path = os.path.join(package_layout.package(pref), CONANINFO)
149 if not os.path.exists(info_path):
150 logger.error("There is no ConanInfo: %s" % str(info_path))
151 continue
152 conan_info_content = load(info_path)
153
154 info = ConanInfo.loads(conan_info_content)
155 if package_layout.ref.revision:
156 metadata = package_layout.load_metadata()
157 recipe_revision = metadata.packages[package_id].recipe_revision
158 if recipe_revision and recipe_revision != package_layout.ref.revision:
159 continue
160 conan_vars_info = info.serialize_min()
161 result[package_id] = conan_vars_info
162
163 return result
164
[end of conans/search/search.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/search/search.py b/conans/search/search.py
--- a/conans/search/search.py
+++ b/conans/search/search.py
@@ -81,9 +81,16 @@
info_settings = conan_vars_info.get("settings", [])
info_options = conan_vars_info.get("options", [])
+ properties = ["os", "os_build", "compiler", "arch", "arch_build", "build_type"]
- if (prop_name in ["os", "os_build", "compiler", "arch", "arch_build", "build_type"] or
- prop_name.startswith("compiler.")):
+ def starts_with_common_settings(prop_name):
+ for setting in properties:
+ if prop_name.startswith(setting + '.'):
+ return True
+ return False
+
+ if (prop_name in properties or
+ starts_with_common_settings(prop_name)):
return compatible_prop(info_settings.get(prop_name, None), prop_value)
else:
return compatible_prop(info_options.get(prop_name, None), prop_value)
| {"golden_diff": "diff --git a/conans/search/search.py b/conans/search/search.py\n--- a/conans/search/search.py\n+++ b/conans/search/search.py\n@@ -81,9 +81,16 @@\n \n info_settings = conan_vars_info.get(\"settings\", [])\n info_options = conan_vars_info.get(\"options\", [])\n+ properties = [\"os\", \"os_build\", \"compiler\", \"arch\", \"arch_build\", \"build_type\"]\n \n- if (prop_name in [\"os\", \"os_build\", \"compiler\", \"arch\", \"arch_build\", \"build_type\"] or\n- prop_name.startswith(\"compiler.\")):\n+ def starts_with_common_settings(prop_name):\n+ for setting in properties:\n+ if prop_name.startswith(setting + '.'):\n+ return True\n+ return False\n+\n+ if (prop_name in properties or\n+ starts_with_common_settings(prop_name)):\n return compatible_prop(info_settings.get(prop_name, None), prop_value)\n else:\n return compatible_prop(info_options.get(prop_name, None), prop_value)\n", "issue": "[suggestion] Search custom sub-settings\nTo help us debug your issue please explain:\r\n\r\nFrom slack discussion: https://cpplang.slack.com/archives/C41CWV9HA/p1560980070212900\r\n\r\nFor now `conan search` is only able to look regular settings, as `os`, `arch`, `build_type` and `compiler`, but it's limited in terms of customized settings, for example:\r\n\r\n```\r\n$ conan search openssl/1.0.2s@ennor/stable -q \"os=iOS\"\r\nExisting packages for recipe openssl/1.0.2s@ennor/stable:\r\n Package_ID: 08859e5949c46fd78bb8b4d045f854329c141a56\r\n [settings]\r\n build_type: Release\r\n compiler: apple-clang\r\n compiler.version: 10.0\r\n os: iOS\r\n os.version: 10.0\r\n Outdated from recipe: False\r\n Package_ID: 5d2c60a906ecd568347527895bc32e4890264eea\r\n [settings]\r\n build_type: Debug\r\n compiler: apple-clang\r\n compiler.version: 10.0\r\n os: iOS\r\n os.version: 10.0\r\n Outdated from recipe: False\r\n\r\n$ conan search openssl/1.0.2s@ennor/stable -q \"os.version=10.0\"\r\nExisting packages for recipe openssl/1.0.2s@ennor/stable:\r\n\r\nThere are no packages for reference 'openssl/1.0.2s@ennor/stable' matching the query 'os.version=10.0'\r\n\r\nconan search openssl/1.0.2s@ennor/stable -q \"os=iOS AND os.version=10.0\"\r\nExisting packages for recipe openssl/1.0.2s@ennor/stable:\r\n\r\nThere are no packages for reference 'openssl/1.0.2s@ennor/stable' matching the query 'os=iOS AND os.version=10.0'\r\n```\r\n\r\nThe command `search` is limited because the rule:\r\n\r\nhttps://github.com/conan-io/conan/blob/df8fd41b63ee3054fe1160ded5e52e19c0a933c3/conans/search/search.py#L85\r\n\r\nThe suggestion is supporting custom settings, or sub-settings at least.\r\n\r\nRegards!\r\n\r\n/cc @Minimonium @Paultergeist\r\n\r\n- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n- [ ] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\n\n", "before_files": [{"content": "import os\nimport re\nfrom collections import OrderedDict\nfrom fnmatch import translate\n\nfrom conans.errors import ConanException, RecipeNotFoundException\nfrom conans.model.info import ConanInfo\nfrom conans.model.ref import ConanFileReference, PackageReference\nfrom conans.paths import CONANINFO\nfrom conans.search.query_parse import evaluate_postfix, infix_to_postfix\nfrom conans.util.files import list_folder_subdirs, load\nfrom conans.util.log import logger\n\n\ndef filter_outdated(packages_infos, recipe_hash):\n 
if not recipe_hash:\n return packages_infos\n result = OrderedDict()\n for package_id, info in packages_infos.items():\n try: # Existing package_info of old package might not have recipe_hash\n if info[\"recipe_hash\"] != recipe_hash:\n result[package_id] = info\n except KeyError:\n pass\n return result\n\n\ndef filter_by_revision(metadata, packages_infos):\n ok = OrderedDict()\n recipe_revision = metadata.recipe.revision\n for package_id, info in packages_infos.items():\n try:\n rec_rev = metadata.packages[package_id].recipe_revision\n if rec_rev == recipe_revision:\n ok[package_id] = info\n except KeyError:\n pass\n return ok\n\n\ndef filter_packages(query, package_infos):\n if query is None:\n return package_infos\n try:\n if \"!\" in query:\n raise ConanException(\"'!' character is not allowed\")\n if \" not \" in query or query.startswith(\"not \"):\n raise ConanException(\"'not' operator is not allowed\")\n postfix = infix_to_postfix(query) if query else []\n result = OrderedDict()\n for package_id, info in package_infos.items():\n if evaluate_postfix_with_info(postfix, info):\n result[package_id] = info\n return result\n except Exception as exc:\n raise ConanException(\"Invalid package query: %s. %s\" % (query, exc))\n\n\ndef evaluate_postfix_with_info(postfix, conan_vars_info):\n\n # Evaluate conaninfo with the expression\n\n def evaluate_info(expression):\n \"\"\"Receives an expression like compiler.version=\"12\"\n Uses conan_vars_info in the closure to evaluate it\"\"\"\n name, value = expression.split(\"=\", 1)\n value = value.replace(\"\\\"\", \"\")\n return evaluate(name, value, conan_vars_info)\n\n return evaluate_postfix(postfix, evaluate_info)\n\n\ndef evaluate(prop_name, prop_value, conan_vars_info):\n \"\"\"\n Evaluates a single prop_name, prop_value like \"os\", \"Windows\" against\n conan_vars_info.serialize_min()\n \"\"\"\n\n def compatible_prop(setting_value, prop_value):\n return (prop_value == setting_value) or (prop_value == \"None\" and setting_value is None)\n\n info_settings = conan_vars_info.get(\"settings\", [])\n info_options = conan_vars_info.get(\"options\", [])\n\n if (prop_name in [\"os\", \"os_build\", \"compiler\", \"arch\", \"arch_build\", \"build_type\"] or\n prop_name.startswith(\"compiler.\")):\n return compatible_prop(info_settings.get(prop_name, None), prop_value)\n else:\n return compatible_prop(info_options.get(prop_name, None), prop_value)\n\n\ndef search_recipes(cache, pattern=None, ignorecase=True):\n # Conan references in main storage\n if pattern:\n if isinstance(pattern, ConanFileReference):\n pattern = str(pattern)\n pattern = translate(pattern)\n pattern = re.compile(pattern, re.IGNORECASE) if ignorecase else re.compile(pattern)\n\n subdirs = list_folder_subdirs(basedir=cache.store, level=4)\n refs = [ConanFileReference(*folder.split(\"/\")) for folder in subdirs]\n refs.extend(cache.editable_packages.edited_refs.keys())\n if pattern:\n refs = [r for r in refs if _partial_match(pattern, r)]\n refs = sorted(refs)\n return refs\n\n\ndef _partial_match(pattern, ref):\n \"\"\"\n Finds if pattern matches any of partial sums of tokens of conan reference\n \"\"\"\n tokens = str(ref).replace('/', ' / ').replace('@', ' @ ').replace('#', ' # ').split()\n\n def partial_sums(iterable):\n partial = ''\n for i in iterable:\n partial += i\n yield partial\n\n return any(map(pattern.match, list(partial_sums(tokens))))\n\n\ndef search_packages(package_layout, query):\n \"\"\" Return a dict like this:\n\n {package_ID: {name: \"OpenCV\",\n version: 
\"2.14\",\n settings: {os: Windows}}}\n param package_layout: Layout for the given reference\n \"\"\"\n if not os.path.exists(package_layout.base_folder()) or (\n package_layout.ref.revision and\n package_layout.recipe_revision() != package_layout.ref.revision):\n raise RecipeNotFoundException(package_layout.ref, print_rev=True)\n infos = _get_local_infos_min(package_layout)\n return filter_packages(query, infos)\n\n\ndef _get_local_infos_min(package_layout):\n result = OrderedDict()\n\n packages_path = package_layout.packages()\n subdirs = list_folder_subdirs(packages_path, level=1)\n for package_id in subdirs:\n # Read conaninfo\n pref = PackageReference(package_layout.ref, package_id)\n info_path = os.path.join(package_layout.package(pref), CONANINFO)\n if not os.path.exists(info_path):\n logger.error(\"There is no ConanInfo: %s\" % str(info_path))\n continue\n conan_info_content = load(info_path)\n\n info = ConanInfo.loads(conan_info_content)\n if package_layout.ref.revision:\n metadata = package_layout.load_metadata()\n recipe_revision = metadata.packages[package_id].recipe_revision\n if recipe_revision and recipe_revision != package_layout.ref.revision:\n continue\n conan_vars_info = info.serialize_min()\n result[package_id] = conan_vars_info\n\n return result\n", "path": "conans/search/search.py"}]} | 2,873 | 227 |
gh_patches_debug_2968 | rasdani/github-patches | git_diff | ibis-project__ibis-2426 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fix bigquery version
https://dev.azure.com/ibis-project/ibis/_build/results?buildId=3396&view=logs&j=8f09edc2-e3b7-52de-126a-0225c4f3efa1&t=78a72aec-b398-558e-7c0d-2d33604b9e53
I think we need to limit the upper bound of the bigquery library here.
</issue>
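In setup.py terms, the requested change amounts to adding an upper bound to the `bigquery` extra. A minimal sketch is shown below; the exact bound mirrors the fix that appears after the code listing, and the extras markers on the package are omitted here for brevity:

```python
# Sketch only: cap the BigQuery client below the 2.x series in the extra.
bigquery_requires = [
    'google-cloud-bigquery>=1.12.0,<2.0.0dev',  # upper bound keeps the 1.x API
    'pydata-google-auth',
]
```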
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 """Ibis setup module."""
3 import pathlib
4 import sys
5
6 from setuptools import find_packages, setup
7
8 import versioneer
9
10 LONG_DESCRIPTION = """
11 Ibis is a productivity-centric Python big data framework.
12
13 See http://ibis-project.org
14 """
15
16 VERSION = sys.version_info.major, sys.version_info.minor
17
18 impala_requires = ['hdfs>=2.0.16', 'sqlalchemy>=1.1,<1.3.7', 'requests']
19 impala_requires.append('impyla[kerberos]>=0.15.0')
20
21 sqlite_requires = ['sqlalchemy>=1.1,<1.3.7']
22 postgres_requires = sqlite_requires + ['psycopg2']
23 mysql_requires = sqlite_requires + ['pymysql']
24
25 omniscidb_requires = ['pymapd>=0.12.0']
26 kerberos_requires = ['requests-kerberos']
27 visualization_requires = ['graphviz']
28 clickhouse_requires = [
29 'clickhouse-driver>=0.1.3',
30 'clickhouse-cityhash',
31 ]
32 bigquery_requires = ['google-cloud-bigquery>=1.12.0', 'pydata-google-auth']
33 hdf5_requires = ['tables>=3.0.0']
34
35 parquet_requires = ['pyarrow>=0.12.0']
36 spark_requires = ['pyspark>=2.4.3']
37
38 geospatial_requires = ['geoalchemy2', 'geopandas', 'shapely']
39
40 all_requires = (
41 impala_requires
42 + postgres_requires
43 + omniscidb_requires
44 + mysql_requires
45 + kerberos_requires
46 + visualization_requires
47 + clickhouse_requires
48 + bigquery_requires
49 + hdf5_requires
50 + parquet_requires
51 + spark_requires
52 + geospatial_requires
53 )
54
55 develop_requires = all_requires + [
56 'black',
57 'click',
58 'pydocstyle==4.0.1',
59 'flake8',
60 'isort',
61 'mypy',
62 'pre-commit',
63 'pygit2',
64 'pytest>=4.5',
65 ]
66
67 install_requires = [
68 line.strip()
69 for line in pathlib.Path(__file__)
70 .parent.joinpath('requirements.txt')
71 .read_text()
72 .splitlines()
73 ]
74
75 setup(
76 name='ibis-framework',
77 url='https://github.com/ibis-project/ibis',
78 packages=find_packages(),
79 version=versioneer.get_version(),
80 cmdclass=versioneer.get_cmdclass(),
81 install_requires=install_requires,
82 python_requires='>=3.7',
83 extras_require={
84 'all': all_requires,
85 'develop': develop_requires,
86 'impala': impala_requires,
87 'kerberos': kerberos_requires,
88 'postgres': postgres_requires,
89 'omniscidb': omniscidb_requires,
90 'mysql': mysql_requires,
91 'sqlite': sqlite_requires,
92 'visualization': visualization_requires,
93 'clickhouse': clickhouse_requires,
94 'bigquery': bigquery_requires,
95 'hdf5': hdf5_requires,
96 'parquet': parquet_requires,
97 'spark': spark_requires,
98 'geospatial': geospatial_requires,
99 },
100 description="Productivity-centric Python Big Data Framework",
101 long_description=LONG_DESCRIPTION,
102 classifiers=[
103 'Development Status :: 4 - Beta',
104 'Operating System :: OS Independent',
105 'Intended Audience :: Science/Research',
106 'Programming Language :: Python',
107 'Programming Language :: Python :: 3',
108 'Topic :: Scientific/Engineering',
109 ],
110 license='Apache License, Version 2.0',
111 maintainer="Phillip Cloud",
112 maintainer_email="[email protected]",
113 )
114
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,10 @@
'clickhouse-driver>=0.1.3',
'clickhouse-cityhash',
]
-bigquery_requires = ['google-cloud-bigquery>=1.12.0', 'pydata-google-auth']
+bigquery_requires = [
+ 'google-cloud-bigquery[bqstorage,pandas]>=1.12.0,<2.0.0dev',
+ 'pydata-google-auth',
+]
hdf5_requires = ['tables>=3.0.0']
parquet_requires = ['pyarrow>=0.12.0']
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,7 +29,10 @@\n 'clickhouse-driver>=0.1.3',\n 'clickhouse-cityhash',\n ]\n-bigquery_requires = ['google-cloud-bigquery>=1.12.0', 'pydata-google-auth']\n+bigquery_requires = [\n+ 'google-cloud-bigquery[bqstorage,pandas]>=1.12.0,<2.0.0dev',\n+ 'pydata-google-auth',\n+]\n hdf5_requires = ['tables>=3.0.0']\n \n parquet_requires = ['pyarrow>=0.12.0']\n", "issue": "fix bigquery version\nhttps://dev.azure.com/ibis-project/ibis/_build/results?buildId=3396&view=logs&j=8f09edc2-e3b7-52de-126a-0225c4f3efa1&t=78a72aec-b398-558e-7c0d-2d33604b9e53\r\n\r\nI think we need to limit the upper bound of bigquery library here.\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Ibis setup module.\"\"\"\nimport pathlib\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"\nIbis is a productivity-centric Python big data framework.\n\nSee http://ibis-project.org\n\"\"\"\n\nVERSION = sys.version_info.major, sys.version_info.minor\n\nimpala_requires = ['hdfs>=2.0.16', 'sqlalchemy>=1.1,<1.3.7', 'requests']\nimpala_requires.append('impyla[kerberos]>=0.15.0')\n\nsqlite_requires = ['sqlalchemy>=1.1,<1.3.7']\npostgres_requires = sqlite_requires + ['psycopg2']\nmysql_requires = sqlite_requires + ['pymysql']\n\nomniscidb_requires = ['pymapd>=0.12.0']\nkerberos_requires = ['requests-kerberos']\nvisualization_requires = ['graphviz']\nclickhouse_requires = [\n 'clickhouse-driver>=0.1.3',\n 'clickhouse-cityhash',\n]\nbigquery_requires = ['google-cloud-bigquery>=1.12.0', 'pydata-google-auth']\nhdf5_requires = ['tables>=3.0.0']\n\nparquet_requires = ['pyarrow>=0.12.0']\nspark_requires = ['pyspark>=2.4.3']\n\ngeospatial_requires = ['geoalchemy2', 'geopandas', 'shapely']\n\nall_requires = (\n impala_requires\n + postgres_requires\n + omniscidb_requires\n + mysql_requires\n + kerberos_requires\n + visualization_requires\n + clickhouse_requires\n + bigquery_requires\n + hdf5_requires\n + parquet_requires\n + spark_requires\n + geospatial_requires\n)\n\ndevelop_requires = all_requires + [\n 'black',\n 'click',\n 'pydocstyle==4.0.1',\n 'flake8',\n 'isort',\n 'mypy',\n 'pre-commit',\n 'pygit2',\n 'pytest>=4.5',\n]\n\ninstall_requires = [\n line.strip()\n for line in pathlib.Path(__file__)\n .parent.joinpath('requirements.txt')\n .read_text()\n .splitlines()\n]\n\nsetup(\n name='ibis-framework',\n url='https://github.com/ibis-project/ibis',\n packages=find_packages(),\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n install_requires=install_requires,\n python_requires='>=3.7',\n extras_require={\n 'all': all_requires,\n 'develop': develop_requires,\n 'impala': impala_requires,\n 'kerberos': kerberos_requires,\n 'postgres': postgres_requires,\n 'omniscidb': omniscidb_requires,\n 'mysql': mysql_requires,\n 'sqlite': sqlite_requires,\n 'visualization': visualization_requires,\n 'clickhouse': clickhouse_requires,\n 'bigquery': bigquery_requires,\n 'hdf5': hdf5_requires,\n 'parquet': parquet_requires,\n 'spark': spark_requires,\n 'geospatial': geospatial_requires,\n },\n description=\"Productivity-centric Python Big Data Framework\",\n long_description=LONG_DESCRIPTION,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering',\n ],\n license='Apache License, Version 2.0',\n 
maintainer=\"Phillip Cloud\",\n maintainer_email=\"[email protected]\",\n)\n", "path": "setup.py"}]} | 1,681 | 148 |
gh_patches_debug_44034 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-341 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AWS::ECR::Repository RepositoryPolicyText doesn't need 'Resource'
*cfn-lint version: (`cfn-lint --version`)* 0.7.1
*Description of issue.*
When upgrading to 0.7.1 my template with a valid policy adapted from [step 3k of this tutorial](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-ecr.html) triggers "E2507 IAM Policy statement missing Resource or NotResource". This kind of policy apparently doesn't need a `Resource` key, so this shouldn't happen.
</issue>
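For context, the kind of repository policy the tutorial produces looks roughly like the sketch below once cfn-lint has parsed it into a Python dict. The Sid, principal, and action values are illustrative assumptions; the relevant point is that no `Resource` or `NotResource` key is present, because a repository policy is already scoped to the repository it is attached to.

```python
# Illustrative ECR RepositoryPolicyText statement with no Resource key.
repository_policy_text = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CodeBuildAccess",          # assumed name, for illustration
            "Effect": "Allow",
            "Principal": {"Service": "codebuild.amazonaws.com"},
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
            ],
            # No "Resource"/"NotResource": the policy applies to this repository.
        }
    ],
}
```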
<code>
[start of src/cfnlint/rules/resources/iam/Policy.py]
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 from datetime import date
18 from cfnlint import CloudFormationLintRule
19 from cfnlint import RuleMatch
20
21
22 class Policy(CloudFormationLintRule):
23 """Check if IAM Policy JSON is correct"""
24 id = 'E2507'
25 shortdesc = 'Check if IAM Policies are properly configured'
26 description = 'See if there elements inside an IAM policy ' + \
27 'are correct'
28 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html'
29 tags = ['properties', 'iam']
30
31 def __init__(self):
32 """Init"""
33 self.resources_and_keys = {
34 'AWS::SNS::TopicPolicy': 'PolicyDocument',
35 'AWS::S3::BucketPolicy': 'PolicyDocument',
36 'AWS::KMS::Key': 'KeyPolicy',
37 'AWS::SQS::QueuePolicy': 'PolicyDocument',
38 'AWS::ECR::Repository': 'RepositoryPolicyText',
39 'AWS::Elasticsearch::Domain': 'AccessPolicies',
40 }
41 self.idp_and_keys = {
42 'AWS::IAM::Group': 'Policies',
43 'AWS::IAM::ManagedPolicy': 'PolicyDocument',
44 'AWS::IAM::Policy': 'PolicyDocument',
45 'AWS::IAM::Role': 'Policies',
46 'AWS::IAM::User': 'Policies',
47 }
48 for resource_type in self.resources_and_keys:
49 self.resource_property_types.append(resource_type)
50 for resource_type in self.idp_and_keys:
51 self.resource_property_types.append(resource_type)
52
53 def check_policy_document(self, value, path, cfn, is_identity_policy):
54 """Check policy document"""
55 matches = []
56
57 valid_keys = [
58 'Version',
59 'Id',
60 'Statement',
61 ]
62 valid_versions = ['2012-10-17', '2008-10-17', date(2012, 10, 17), date(2008, 10, 17)]
63
64 if not isinstance(value, dict):
65 message = 'IAM Policy Documents needs to be JSON'
66 matches.append(
67 RuleMatch(path[:], message))
68 return matches
69
70 for parent_key, parent_value in value.items():
71 if parent_key not in valid_keys:
72 message = 'IAM Policy key %s doesn\'t exist.' % (parent_key)
73 matches.append(
74 RuleMatch(path[:] + [parent_key], message))
75 if parent_key == 'Version':
76 if parent_value not in valid_versions:
77 message = 'IAM Policy Version needs to be one of (%s).' % (
78 ', '.join(map(str, ['2012-10-17', '2008-10-17'])))
79 matches.append(
80 RuleMatch(path[:] + [parent_key], message))
81 if parent_key == 'Statement':
82 if isinstance(parent_value, (list)):
83 statements = cfn.get_values(value, 'Statement', path[:])
84 for statement in statements:
85 matches.extend(
86 self._check_policy_statement(
87 statement['Path'], statement['Value'], is_identity_policy
88 )
89 )
90 else:
91 message = 'IAM Policy statement should be of list.'
92 matches.append(
93 RuleMatch(path[:] + [parent_key], message))
94 return matches
95
96 def _check_policy_statement(self, branch, statement, is_identity_policy):
97 """Check statements"""
98 matches = []
99 statement_valid_keys = [
100 'Effect',
101 'Principal',
102 'NotPrincipal',
103 'Action',
104 'NotAction',
105 'Resource',
106 'NotResource',
107 'Condition',
108 'Sid',
109 ]
110
111 for key, _ in statement.items():
112 if key not in statement_valid_keys:
113 message = 'IAM Policy statement key %s isn\'t valid' % (key)
114 matches.append(
115 RuleMatch(branch[:] + [key], message))
116 if 'Effect' not in statement:
117 message = 'IAM Policy statement missing Effect'
118 matches.append(
119 RuleMatch(branch[:], message))
120 else:
121 effect = statement.get('Effect')
122 if effect not in ['Allow', 'Deny']:
123 message = 'IAM Policy Effect should be Allow or Deny'
124 matches.append(
125 RuleMatch(branch[:] + ['Effect'], message))
126 if 'Action' not in statement and 'NotAction' not in statement:
127 message = 'IAM Policy statement missing Action or NotAction'
128 matches.append(
129 RuleMatch(branch[:], message))
130 if is_identity_policy:
131 if 'Principal' in statement or 'NotPrincipal' in statement:
132 message = 'IAM Resource Policy statement shouldn\'t have Principal or NotPrincipal'
133 matches.append(
134 RuleMatch(branch[:], message))
135 else:
136 if 'Principal' not in statement and 'NotPrincipal' not in statement:
137 message = 'IAM Resource Policy statement should have Principal or NotPrincipal'
138 matches.append(
139 RuleMatch(branch[:] + ['Principal'], message))
140 if 'Resource' not in statement and 'NotResource' not in statement:
141 message = 'IAM Policy statement missing Resource or NotResource'
142 matches.append(
143 RuleMatch(branch[:], message))
144
145 return(matches)
146
147 def match_resource_properties(self, properties, resourcetype, path, cfn):
148 """Check CloudFormation Properties"""
149 matches = []
150
151 is_identity_policy = True
152 if resourcetype in self.resources_and_keys:
153 is_identity_policy = False
154
155 key = None
156 if resourcetype in self.resources_and_keys:
157 key = self.resources_and_keys.get(resourcetype)
158 else:
159 key = self.idp_and_keys.get(resourcetype)
160
161 if not key:
162 # Key isn't defined return nothing
163 return matches
164
165 if key == 'Policies':
166 for index, policy in enumerate(properties.get(key, [])):
167 matches.extend(
168 cfn.check_value(
169 obj=policy, key='PolicyDocument',
170 path=path[:] + ['Policies', index],
171 check_value=self.check_policy_document,
172 cfn=cfn,
173 is_identity_policy=is_identity_policy
174 ))
175 else:
176 matches.extend(
177 cfn.check_value(
178 obj=properties, key=key,
179 path=path[:],
180 check_value=self.check_policy_document,
181 cfn=cfn,
182 is_identity_policy=is_identity_policy
183 ))
184
185 return matches
186
[end of src/cfnlint/rules/resources/iam/Policy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/iam/Policy.py b/src/cfnlint/rules/resources/iam/Policy.py
--- a/src/cfnlint/rules/resources/iam/Policy.py
+++ b/src/cfnlint/rules/resources/iam/Policy.py
@@ -30,6 +30,9 @@
def __init__(self):
"""Init"""
+ self.resource_exceptions = {
+ 'AWS::ECR::Repository': 'RepositoryPolicyText',
+ }
self.resources_and_keys = {
'AWS::SNS::TopicPolicy': 'PolicyDocument',
'AWS::S3::BucketPolicy': 'PolicyDocument',
@@ -50,7 +53,7 @@
for resource_type in self.idp_and_keys:
self.resource_property_types.append(resource_type)
- def check_policy_document(self, value, path, cfn, is_identity_policy):
+ def check_policy_document(self, value, path, cfn, is_identity_policy, resource_exceptions):
"""Check policy document"""
matches = []
@@ -84,7 +87,7 @@
for statement in statements:
matches.extend(
self._check_policy_statement(
- statement['Path'], statement['Value'], is_identity_policy
+ statement['Path'], statement['Value'], is_identity_policy, resource_exceptions
)
)
else:
@@ -93,7 +96,7 @@
RuleMatch(path[:] + [parent_key], message))
return matches
- def _check_policy_statement(self, branch, statement, is_identity_policy):
+ def _check_policy_statement(self, branch, statement, is_identity_policy, resource_exceptions):
"""Check statements"""
matches = []
statement_valid_keys = [
@@ -137,10 +140,11 @@
message = 'IAM Resource Policy statement should have Principal or NotPrincipal'
matches.append(
RuleMatch(branch[:] + ['Principal'], message))
- if 'Resource' not in statement and 'NotResource' not in statement:
- message = 'IAM Policy statement missing Resource or NotResource'
- matches.append(
- RuleMatch(branch[:], message))
+ if not resource_exceptions:
+ if 'Resource' not in statement and 'NotResource' not in statement:
+ message = 'IAM Policy statement missing Resource or NotResource'
+ matches.append(
+ RuleMatch(branch[:], message))
return(matches)
@@ -162,6 +166,10 @@
# Key isn't defined return nothing
return matches
+ resource_exceptions = False
+ if key == self.resource_exceptions.get(resourcetype):
+ resource_exceptions = True
+
if key == 'Policies':
for index, policy in enumerate(properties.get(key, [])):
matches.extend(
@@ -170,7 +178,8 @@
path=path[:] + ['Policies', index],
check_value=self.check_policy_document,
cfn=cfn,
- is_identity_policy=is_identity_policy
+ is_identity_policy=is_identity_policy,
+ resource_exceptions=resource_exceptions,
))
else:
matches.extend(
@@ -179,7 +188,8 @@
path=path[:],
check_value=self.check_policy_document,
cfn=cfn,
- is_identity_policy=is_identity_policy
+ is_identity_policy=is_identity_policy,
+ resource_exceptions=resource_exceptions,
))
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/iam/Policy.py b/src/cfnlint/rules/resources/iam/Policy.py\n--- a/src/cfnlint/rules/resources/iam/Policy.py\n+++ b/src/cfnlint/rules/resources/iam/Policy.py\n@@ -30,6 +30,9 @@\n \n def __init__(self):\n \"\"\"Init\"\"\"\n+ self.resource_exceptions = {\n+ 'AWS::ECR::Repository': 'RepositoryPolicyText',\n+ }\n self.resources_and_keys = {\n 'AWS::SNS::TopicPolicy': 'PolicyDocument',\n 'AWS::S3::BucketPolicy': 'PolicyDocument',\n@@ -50,7 +53,7 @@\n for resource_type in self.idp_and_keys:\n self.resource_property_types.append(resource_type)\n \n- def check_policy_document(self, value, path, cfn, is_identity_policy):\n+ def check_policy_document(self, value, path, cfn, is_identity_policy, resource_exceptions):\n \"\"\"Check policy document\"\"\"\n matches = []\n \n@@ -84,7 +87,7 @@\n for statement in statements:\n matches.extend(\n self._check_policy_statement(\n- statement['Path'], statement['Value'], is_identity_policy\n+ statement['Path'], statement['Value'], is_identity_policy, resource_exceptions\n )\n )\n else:\n@@ -93,7 +96,7 @@\n RuleMatch(path[:] + [parent_key], message))\n return matches\n \n- def _check_policy_statement(self, branch, statement, is_identity_policy):\n+ def _check_policy_statement(self, branch, statement, is_identity_policy, resource_exceptions):\n \"\"\"Check statements\"\"\"\n matches = []\n statement_valid_keys = [\n@@ -137,10 +140,11 @@\n message = 'IAM Resource Policy statement should have Principal or NotPrincipal'\n matches.append(\n RuleMatch(branch[:] + ['Principal'], message))\n- if 'Resource' not in statement and 'NotResource' not in statement:\n- message = 'IAM Policy statement missing Resource or NotResource'\n- matches.append(\n- RuleMatch(branch[:], message))\n+ if not resource_exceptions:\n+ if 'Resource' not in statement and 'NotResource' not in statement:\n+ message = 'IAM Policy statement missing Resource or NotResource'\n+ matches.append(\n+ RuleMatch(branch[:], message))\n \n return(matches)\n \n@@ -162,6 +166,10 @@\n # Key isn't defined return nothing\n return matches\n \n+ resource_exceptions = False\n+ if key == self.resource_exceptions.get(resourcetype):\n+ resource_exceptions = True\n+\n if key == 'Policies':\n for index, policy in enumerate(properties.get(key, [])):\n matches.extend(\n@@ -170,7 +178,8 @@\n path=path[:] + ['Policies', index],\n check_value=self.check_policy_document,\n cfn=cfn,\n- is_identity_policy=is_identity_policy\n+ is_identity_policy=is_identity_policy,\n+ resource_exceptions=resource_exceptions,\n ))\n else:\n matches.extend(\n@@ -179,7 +188,8 @@\n path=path[:],\n check_value=self.check_policy_document,\n cfn=cfn,\n- is_identity_policy=is_identity_policy\n+ is_identity_policy=is_identity_policy,\n+ resource_exceptions=resource_exceptions,\n ))\n \n return matches\n", "issue": "AWS::ECR::Repository RepositoryPolicyText doesn't need 'Resource'\n*cfn-lint version: (`cfn-lint --version`)* 0.7.1\r\n\r\n*Description of issue.*\r\n\r\nWhen upgrading to 0.7.1 my template with a valid policy adapted from [step 3k of this tutorial](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-ecr.html) triggers \"E2507 IAM Policy statement missing Resource or NotResource\". This kind of policy apparently doesn't need a `Resource` key, so this shouldn't happen.\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom datetime import date\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass Policy(CloudFormationLintRule):\n \"\"\"Check if IAM Policy JSON is correct\"\"\"\n id = 'E2507'\n shortdesc = 'Check if IAM Policies are properly configured'\n description = 'See if there elements inside an IAM policy ' + \\\n 'are correct'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html'\n tags = ['properties', 'iam']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n self.resources_and_keys = {\n 'AWS::SNS::TopicPolicy': 'PolicyDocument',\n 'AWS::S3::BucketPolicy': 'PolicyDocument',\n 'AWS::KMS::Key': 'KeyPolicy',\n 'AWS::SQS::QueuePolicy': 'PolicyDocument',\n 'AWS::ECR::Repository': 'RepositoryPolicyText',\n 'AWS::Elasticsearch::Domain': 'AccessPolicies',\n }\n self.idp_and_keys = {\n 'AWS::IAM::Group': 'Policies',\n 'AWS::IAM::ManagedPolicy': 'PolicyDocument',\n 'AWS::IAM::Policy': 'PolicyDocument',\n 'AWS::IAM::Role': 'Policies',\n 'AWS::IAM::User': 'Policies',\n }\n for resource_type in self.resources_and_keys:\n self.resource_property_types.append(resource_type)\n for resource_type in self.idp_and_keys:\n self.resource_property_types.append(resource_type)\n\n def check_policy_document(self, value, path, cfn, is_identity_policy):\n \"\"\"Check policy document\"\"\"\n matches = []\n\n valid_keys = [\n 'Version',\n 'Id',\n 'Statement',\n ]\n valid_versions = ['2012-10-17', '2008-10-17', date(2012, 10, 17), date(2008, 10, 17)]\n\n if not isinstance(value, dict):\n message = 'IAM Policy Documents needs to be JSON'\n matches.append(\n RuleMatch(path[:], message))\n return matches\n\n for parent_key, parent_value in value.items():\n if parent_key not in valid_keys:\n message = 'IAM Policy key %s doesn\\'t exist.' % (parent_key)\n matches.append(\n RuleMatch(path[:] + [parent_key], message))\n if parent_key == 'Version':\n if parent_value not in valid_versions:\n message = 'IAM Policy Version needs to be one of (%s).' 
% (\n ', '.join(map(str, ['2012-10-17', '2008-10-17'])))\n matches.append(\n RuleMatch(path[:] + [parent_key], message))\n if parent_key == 'Statement':\n if isinstance(parent_value, (list)):\n statements = cfn.get_values(value, 'Statement', path[:])\n for statement in statements:\n matches.extend(\n self._check_policy_statement(\n statement['Path'], statement['Value'], is_identity_policy\n )\n )\n else:\n message = 'IAM Policy statement should be of list.'\n matches.append(\n RuleMatch(path[:] + [parent_key], message))\n return matches\n\n def _check_policy_statement(self, branch, statement, is_identity_policy):\n \"\"\"Check statements\"\"\"\n matches = []\n statement_valid_keys = [\n 'Effect',\n 'Principal',\n 'NotPrincipal',\n 'Action',\n 'NotAction',\n 'Resource',\n 'NotResource',\n 'Condition',\n 'Sid',\n ]\n\n for key, _ in statement.items():\n if key not in statement_valid_keys:\n message = 'IAM Policy statement key %s isn\\'t valid' % (key)\n matches.append(\n RuleMatch(branch[:] + [key], message))\n if 'Effect' not in statement:\n message = 'IAM Policy statement missing Effect'\n matches.append(\n RuleMatch(branch[:], message))\n else:\n effect = statement.get('Effect')\n if effect not in ['Allow', 'Deny']:\n message = 'IAM Policy Effect should be Allow or Deny'\n matches.append(\n RuleMatch(branch[:] + ['Effect'], message))\n if 'Action' not in statement and 'NotAction' not in statement:\n message = 'IAM Policy statement missing Action or NotAction'\n matches.append(\n RuleMatch(branch[:], message))\n if is_identity_policy:\n if 'Principal' in statement or 'NotPrincipal' in statement:\n message = 'IAM Resource Policy statement shouldn\\'t have Principal or NotPrincipal'\n matches.append(\n RuleMatch(branch[:], message))\n else:\n if 'Principal' not in statement and 'NotPrincipal' not in statement:\n message = 'IAM Resource Policy statement should have Principal or NotPrincipal'\n matches.append(\n RuleMatch(branch[:] + ['Principal'], message))\n if 'Resource' not in statement and 'NotResource' not in statement:\n message = 'IAM Policy statement missing Resource or NotResource'\n matches.append(\n RuleMatch(branch[:], message))\n\n return(matches)\n\n def match_resource_properties(self, properties, resourcetype, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n is_identity_policy = True\n if resourcetype in self.resources_and_keys:\n is_identity_policy = False\n\n key = None\n if resourcetype in self.resources_and_keys:\n key = self.resources_and_keys.get(resourcetype)\n else:\n key = self.idp_and_keys.get(resourcetype)\n\n if not key:\n # Key isn't defined return nothing\n return matches\n\n if key == 'Policies':\n for index, policy in enumerate(properties.get(key, [])):\n matches.extend(\n cfn.check_value(\n obj=policy, key='PolicyDocument',\n path=path[:] + ['Policies', index],\n check_value=self.check_policy_document,\n cfn=cfn,\n is_identity_policy=is_identity_policy\n ))\n else:\n matches.extend(\n cfn.check_value(\n obj=properties, key=key,\n path=path[:],\n check_value=self.check_policy_document,\n cfn=cfn,\n is_identity_policy=is_identity_policy\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/iam/Policy.py"}]} | 2,699 | 745 |
gh_patches_debug_32383 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-2248 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The dev branch is broken
The dev branch is broken again. All the tests pass, but after that Travis shows the following error:
``` console
back_mysql runtests: commands[1] | flake8
./zds/utils/templatetags/emarkdown.py:33:37: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:33:39: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:35:34: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:35:36: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:37:45: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:37:47: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:42:38: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:42:40: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:45:41: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:45:43: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:48:42: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:48:44: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:50:35: E251 unexpected spaces around keyword / parameter equals
./zds/utils/templatetags/emarkdown.py:50:37: E251 unexpected spaces around keyword / parameter equals
ERROR: InvocationError: '/home/travis/build/zestedesavoir/zds-site/.tox/back_mysql/bin/flake8'
___________________________________ summary ____________________________________
ERROR: back_mysql: commands failed
The command "tox $TEST_APP" exited with 1.
```
Does anyone understand the problem?
</issue>
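For reference, E251 is pycodestyle's check for spaces around `=` when passing keyword arguments. A self-contained sketch of what it flags (generic function, not zds code):

```python
# E251 fires on spaces around '=' in keyword arguments.
def render(text, safe_mode='escape', tab_length=4):
    return text, safe_mode, tab_length

render("hello", safe_mode = 'escape', tab_length = 4)   # flagged: E251 on each '='
render("hello", safe_mode='escape', tab_length=4)       # passes flake8
```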
<code>
[start of zds/utils/templatetags/emarkdown.py]
1 # coding: utf-8
2
3 import re
4
5 from django import template
6 from django.utils.safestring import mark_safe
7
8 import markdown
9 from markdown.extensions.zds import ZdsExtension
10 from zds.utils.templatetags.smileysDef import smileys
11
12 register = template.Library()
13
14 """
15 Markdown related filters.
16 """
17
18 # Constant strings
19 __MD_ERROR_PARSING = u"Une erreur est survenue dans la génération de texte Markdown. " \
20 u"Veuillez rapporter le bug."
21
22
23 def get_markdown_instance(inline=False, js_support=False):
24 """
25 Provide a pre-configured markdown parser.
26
27 :param bool inline: If `True`, configure parser to parse only inline content.
28 :return: A ZMarkdown parser.
29 """
30 zdsext = ZdsExtension({"inline": inline, "emoticons": smileys, "js_support": js_support})
31 # Generate parser
32 md = markdown.Markdown(extensions=(zdsext,),
33 safe_mode = 'escape',
34 # Protect use of html by escape it
35 inline = inline,
36 # Parse only inline content.
37 enable_attributes = False,
38 # Disable the conversion of attributes.
39 # This could potentially allow an
40 # untrusted user to inject JavaScript
41 # into documents.
42 tab_length = 4,
43 # Length of tabs in the source.
44 # This is the default value
45 output_format = 'html5',
46 # html5 output
47 # This is the default value
48 smart_emphasis = True,
49 # Enable smart emphasis for underscore syntax
50 lazy_ol = True,
51 # Enable smart ordered list start support
52 )
53 return md
54
55
56 def render_markdown(text, inline=False, js_support=False):
57 """
58 Render a markdown text to html.
59
60 :param str text: Text to render.
61 :param bool inline: If `True`, parse only inline content.
62 :return: Equivalent html string.
63 :rtype: str
64 """
65 return get_markdown_instance(inline=inline, js_support=js_support).convert(text).encode('utf-8').strip()
66
67
68 @register.filter(needs_autoescape=False)
69 def emarkdown(text, js=""):
70 """
71 Filter markdown text and render it to html.
72
73 :param str text: Text to render.
74 :return: Equivalent html string.
75 :rtype: str
76 """
77 is_js = (js == "js")
78 try:
79 return mark_safe(render_markdown(text, inline=False, js_support=is_js))
80 except:
81 return mark_safe(u'<div class="error ico-after"><p>{}</p></div>'.format(__MD_ERROR_PARSING))
82
83
84 @register.filter(needs_autoescape=False)
85 def emarkdown_inline(text):
86 """
87 Filter markdown text and render it to html. Only inline elements will be parsed.
88
89 :param str text: Text to render.
90 :return: Equivalent html string.
91 :rtype: str
92 """
93
94 try:
95 return mark_safe(render_markdown(text, inline=True))
96 except:
97 return mark_safe(u'<p>{}</p>'.format(__MD_ERROR_PARSING))
98
99
100 def sub_hd(match, count):
101 """Replace header shifted."""
102 st = match.group(1)
103 lvl = match.group('level')
104 hd = match.group('header')
105 end = match.group(4)
106
107 new_content = st + "#" * count + lvl + hd + end
108
109 return new_content
110
111
112 def decale_header(text, count):
113 """
114 Shift header in markdown document.
115
116 :param str text: Text to filter.
117 :param int count:
118 :return: Filtered text.
119 :rtype: str
120 """
121 return re.sub(
122 r'(^|\n)(?P<level>#{1,4})(?P<header>.*?)#*(\n|$)',
123 lambda t: sub_hd(t, count),
124 text.encode("utf-8"))
125
126
127 @register.filter('decale_header_1')
128 def decale_header_1(text):
129 return decale_header(text, 1)
130
131
132 @register.filter('decale_header_2')
133 def decale_header_2(text):
134 return decale_header(text, 2)
135
136
137 @register.filter('decale_header_3')
138 def decale_header_3(text):
139 return decale_header(text, 3)
140
[end of zds/utils/templatetags/emarkdown.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zds/utils/templatetags/emarkdown.py b/zds/utils/templatetags/emarkdown.py
--- a/zds/utils/templatetags/emarkdown.py
+++ b/zds/utils/templatetags/emarkdown.py
@@ -30,24 +30,24 @@
zdsext = ZdsExtension({"inline": inline, "emoticons": smileys, "js_support": js_support})
# Generate parser
md = markdown.Markdown(extensions=(zdsext,),
- safe_mode = 'escape',
+ safe_mode='escape',
# Protect use of html by escape it
- inline = inline,
+ inline=inline,
# Parse only inline content.
- enable_attributes = False,
+ enable_attributes=False,
# Disable the conversion of attributes.
# This could potentially allow an
# untrusted user to inject JavaScript
# into documents.
- tab_length = 4,
+ tab_length=4,
# Length of tabs in the source.
# This is the default value
- output_format = 'html5',
+ output_format='html5',
# html5 output
# This is the default value
- smart_emphasis = True,
+ smart_emphasis=True,
# Enable smart emphasis for underscore syntax
- lazy_ol = True,
+ lazy_ol=True,
# Enable smart ordered list start support
)
return md
| {"golden_diff": "diff --git a/zds/utils/templatetags/emarkdown.py b/zds/utils/templatetags/emarkdown.py\n--- a/zds/utils/templatetags/emarkdown.py\n+++ b/zds/utils/templatetags/emarkdown.py\n@@ -30,24 +30,24 @@\n zdsext = ZdsExtension({\"inline\": inline, \"emoticons\": smileys, \"js_support\": js_support})\n # Generate parser\n md = markdown.Markdown(extensions=(zdsext,),\n- safe_mode = 'escape',\n+ safe_mode='escape',\n # Protect use of html by escape it\n- inline = inline,\n+ inline=inline,\n # Parse only inline content.\n- enable_attributes = False,\n+ enable_attributes=False,\n # Disable the conversion of attributes.\n # This could potentially allow an\n # untrusted user to inject JavaScript\n # into documents.\n- tab_length = 4,\n+ tab_length=4,\n # Length of tabs in the source.\n # This is the default value\n- output_format = 'html5',\n+ output_format='html5',\n # html5 output\n # This is the default value\n- smart_emphasis = True,\n+ smart_emphasis=True,\n # Enable smart emphasis for underscore syntax\n- lazy_ol = True,\n+ lazy_ol=True,\n # Enable smart ordered list start support\n )\n return md\n", "issue": "La branche dev est cass\u00e9e\nNous avons la branche dev \u00e0 nouveau cass\u00e9e. Tous les tests passent mais, apr\u00e8s \u00e7a, l'erreur suivante est afficher dans Travis :\n\n``` console\nback_mysql runtests: commands[1] | flake8\n./zds/utils/templatetags/emarkdown.py:33:37: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:33:39: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:35:34: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:35:36: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:37:45: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:37:47: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:42:38: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:42:40: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:45:41: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:45:43: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:48:42: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:48:44: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:50:35: E251 unexpected spaces around keyword / parameter equals\n./zds/utils/templatetags/emarkdown.py:50:37: E251 unexpected spaces around keyword / parameter equals\nERROR: InvocationError: '/home/travis/build/zestedesavoir/zds-site/.tox/back_mysql/bin/flake8'\n___________________________________ summary ____________________________________\nERROR: back_mysql: commands failed\nThe command \"tox $TEST_APP\" exited with 1.\n```\n\nQuelqu'un comprend le probl\u00e8me ?\n\n", "before_files": [{"content": "# coding: utf-8\n\nimport re\n\nfrom django import template\nfrom django.utils.safestring import mark_safe\n\nimport markdown\nfrom markdown.extensions.zds import ZdsExtension\nfrom zds.utils.templatetags.smileysDef import smileys\n\nregister = template.Library()\n\n\"\"\"\nMarkdown related filters.\n\"\"\"\n\n# Constant strings\n__MD_ERROR_PARSING = u\"Une erreur est survenue dans la 
g\u00e9n\u00e9ration de texte Markdown. \" \\\n u\"Veuillez rapporter le bug.\"\n\n\ndef get_markdown_instance(inline=False, js_support=False):\n \"\"\"\n Provide a pre-configured markdown parser.\n\n :param bool inline: If `True`, configure parser to parse only inline content.\n :return: A ZMarkdown parser.\n \"\"\"\n zdsext = ZdsExtension({\"inline\": inline, \"emoticons\": smileys, \"js_support\": js_support})\n # Generate parser\n md = markdown.Markdown(extensions=(zdsext,),\n safe_mode = 'escape',\n # Protect use of html by escape it\n inline = inline,\n # Parse only inline content.\n enable_attributes = False,\n # Disable the conversion of attributes.\n # This could potentially allow an\n # untrusted user to inject JavaScript\n # into documents.\n tab_length = 4,\n # Length of tabs in the source.\n # This is the default value\n output_format = 'html5',\n # html5 output\n # This is the default value\n smart_emphasis = True,\n # Enable smart emphasis for underscore syntax\n lazy_ol = True,\n # Enable smart ordered list start support\n )\n return md\n\n\ndef render_markdown(text, inline=False, js_support=False):\n \"\"\"\n Render a markdown text to html.\n\n :param str text: Text to render.\n :param bool inline: If `True`, parse only inline content.\n :return: Equivalent html string.\n :rtype: str\n \"\"\"\n return get_markdown_instance(inline=inline, js_support=js_support).convert(text).encode('utf-8').strip()\n\n\[email protected](needs_autoescape=False)\ndef emarkdown(text, js=\"\"):\n \"\"\"\n Filter markdown text and render it to html.\n\n :param str text: Text to render.\n :return: Equivalent html string.\n :rtype: str\n \"\"\"\n is_js = (js == \"js\")\n try:\n return mark_safe(render_markdown(text, inline=False, js_support=is_js))\n except:\n return mark_safe(u'<div class=\"error ico-after\"><p>{}</p></div>'.format(__MD_ERROR_PARSING))\n\n\[email protected](needs_autoescape=False)\ndef emarkdown_inline(text):\n \"\"\"\n Filter markdown text and render it to html. Only inline elements will be parsed.\n\n :param str text: Text to render.\n :return: Equivalent html string.\n :rtype: str\n \"\"\"\n\n try:\n return mark_safe(render_markdown(text, inline=True))\n except:\n return mark_safe(u'<p>{}</p>'.format(__MD_ERROR_PARSING))\n\n\ndef sub_hd(match, count):\n \"\"\"Replace header shifted.\"\"\"\n st = match.group(1)\n lvl = match.group('level')\n hd = match.group('header')\n end = match.group(4)\n\n new_content = st + \"#\" * count + lvl + hd + end\n\n return new_content\n\n\ndef decale_header(text, count):\n \"\"\"\n Shift header in markdown document.\n\n :param str text: Text to filter.\n :param int count:\n :return: Filtered text.\n :rtype: str\n \"\"\"\n return re.sub(\n r'(^|\\n)(?P<level>#{1,4})(?P<header>.*?)#*(\\n|$)',\n lambda t: sub_hd(t, count),\n text.encode(\"utf-8\"))\n\n\[email protected]('decale_header_1')\ndef decale_header_1(text):\n return decale_header(text, 1)\n\n\[email protected]('decale_header_2')\ndef decale_header_2(text):\n return decale_header(text, 2)\n\n\[email protected]('decale_header_3')\ndef decale_header_3(text):\n return decale_header(text, 3)\n", "path": "zds/utils/templatetags/emarkdown.py"}]} | 2,376 | 324 |
gh_patches_debug_6975 | rasdani/github-patches | git_diff | Mailu__Mailu-934 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CLI domain creation arguments bug
When creating a new domain using cli we have optional arguments that can be passed
```bash
$ docker-compose exec admin flask mailu domain --help
Usage: flask mailu domain [OPTIONS] DOMAIN_NAME
Create a domain
Options:
-u, --max-users TEXT
-a, --max-aliases TEXT
-q, --max-quota-bytes TEXT
--help Show this message and exit.
```
It looks like we are passing only the domain name, leaving the rest of the variables at their defaults:
https://github.com/Mailu/Mailu/blob/626b8a9d05bed54777704124a77ef4a884a4d052/core/admin/mailu/manage.py#L97
</issue>
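One way the options could be forwarded is sketched below. This is only an illustration based on the issue, not the project's confirmed fix, and it assumes the imports and objects already defined in `manage.py` (`mailu`, `click`, `flask_cli`, `models`, `db`).

```python
# Sketch: pass the parsed CLI options through to the Domain constructor
# instead of dropping them when the domain does not exist yet.
@mailu.command()
@click.argument('domain_name')
@click.option('-u', '--max_users', default=-1)
@click.option('-a', '--max_aliases', default=-1)
@click.option('-q', '--max_quota_bytes', default=0)
@flask_cli.with_appcontext
def domain(domain_name, max_users, max_aliases, max_quota_bytes):
    domain = models.Domain.query.get(domain_name)
    if not domain:
        domain = models.Domain(name=domain_name, max_users=max_users,
                               max_aliases=max_aliases,
                               max_quota_bytes=max_quota_bytes)
        db.session.add(domain)
        db.session.commit()
```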
<code>
[start of core/admin/mailu/manage.py]
1 from mailu import models
2
3 from flask import current_app as app
4 from flask import cli as flask_cli
5
6 import flask
7 import os
8 import socket
9 import uuid
10 import click
11
12
13 db = models.db
14
15
16 @click.group()
17 def mailu(cls=flask_cli.FlaskGroup):
18 """ Mailu command line
19 """
20
21
22 @mailu.command()
23 @flask_cli.with_appcontext
24 def advertise():
25 """ Advertise this server against statistic services.
26 """
27 if os.path.isfile(app.config["INSTANCE_ID_PATH"]):
28 with open(app.config["INSTANCE_ID_PATH"], "r") as handle:
29 instance_id = handle.read()
30 else:
31 instance_id = str(uuid.uuid4())
32 with open(app.config["INSTANCE_ID_PATH"], "w") as handle:
33 handle.write(instance_id)
34 if not app.config["DISABLE_STATISTICS"]:
35 try:
36 socket.gethostbyname(app.config["STATS_ENDPOINT"].format(instance_id))
37 except:
38 pass
39
40
41 @mailu.command()
42 @click.argument('localpart')
43 @click.argument('domain_name')
44 @click.argument('password')
45 @flask_cli.with_appcontext
46 def admin(localpart, domain_name, password):
47 """ Create an admin user
48 """
49 domain = models.Domain.query.get(domain_name)
50 if not domain:
51 domain = models.Domain(name=domain_name)
52 db.session.add(domain)
53 user = models.User(
54 localpart=localpart,
55 domain=domain,
56 global_admin=True
57 )
58 user.set_password(password)
59 db.session.add(user)
60 db.session.commit()
61
62
63 @mailu.command()
64 @click.argument('localpart')
65 @click.argument('domain_name')
66 @click.argument('password')
67 @click.argument('hash_scheme')
68 @flask_cli.with_appcontext
69 def user(localpart, domain_name, password, hash_scheme=None):
70 """ Create a user
71 """
72 if hash_scheme is None:
73 hash_scheme = app.config['PASSWORD_SCHEME']
74 domain = models.Domain.query.get(domain_name)
75 if not domain:
76 domain = models.Domain(name=domain_name)
77 db.session.add(domain)
78 user = models.User(
79 localpart=localpart,
80 domain=domain,
81 global_admin=False
82 )
83 user.set_password(password, hash_scheme=hash_scheme)
84 db.session.add(user)
85 db.session.commit()
86
87
88 @mailu.command()
89 @click.option('-n', '--domain_name')
90 @click.option('-u', '--max_users')
91 @click.option('-a', '--max_aliases')
92 @click.option('-q', '--max_quota_bytes')
93 @flask_cli.with_appcontext
94 def domain(domain_name, max_users=-1, max_aliases=-1, max_quota_bytes=0):
95 domain = models.Domain.query.get(domain_name)
96 if not domain:
97 domain = models.Domain(name=domain_name)
98 db.session.add(domain)
99 db.session.commit()
100
101
102 @mailu.command()
103 @click.argument('localpart')
104 @click.argument('domain_name')
105 @click.argument('password_hash')
106 @click.argument('hash_scheme')
107 @flask_cli.with_appcontext
108 def user_import(localpart, domain_name, password_hash, hash_scheme = None):
109 """ Import a user along with password hash.
110 """
111 if hash_scheme is None:
112 hash_scheme = app.config['PASSWORD_SCHEME']
113 domain = models.Domain.query.get(domain_name)
114 if not domain:
115 domain = models.Domain(name=domain_name)
116 db.session.add(domain)
117 user = models.User(
118 localpart=localpart,
119 domain=domain,
120 global_admin=False
121 )
122 user.set_password(password_hash, hash_scheme=hash_scheme, raw=True)
123 db.session.add(user)
124 db.session.commit()
125
126
127 @mailu.command()
128 @click.option('-v', '--verbose')
129 @click.option('-d', '--delete_objects')
130 @flask_cli.with_appcontext
131 def config_update(verbose=False, delete_objects=False):
132 """sync configuration with data from YAML-formatted stdin"""
133 import yaml
134 import sys
135 new_config = yaml.load(sys.stdin)
136 # print new_config
137 domains = new_config.get('domains', [])
138 tracked_domains = set()
139 for domain_config in domains:
140 if verbose:
141 print(str(domain_config))
142 domain_name = domain_config['name']
143 max_users = domain_config.get('max_users', -1)
144 max_aliases = domain_config.get('max_aliases', -1)
145 max_quota_bytes = domain_config.get('max_quota_bytes', 0)
146 tracked_domains.add(domain_name)
147 domain = models.Domain.query.get(domain_name)
148 if not domain:
149 domain = models.Domain(name=domain_name,
150 max_users=max_users,
151 max_aliases=max_aliases,
152 max_quota_bytes=max_quota_bytes)
153 db.session.add(domain)
154 print("Added " + str(domain_config))
155 else:
156 domain.max_users = max_users
157 domain.max_aliases = max_aliases
158 domain.max_quota_bytes = max_quota_bytes
159 db.session.add(domain)
160 print("Updated " + str(domain_config))
161
162 users = new_config.get('users', [])
163 tracked_users = set()
164 user_optional_params = ('comment', 'quota_bytes', 'global_admin',
165 'enable_imap', 'enable_pop', 'forward_enabled',
166 'forward_destination', 'reply_enabled',
167 'reply_subject', 'reply_body', 'displayed_name',
168 'spam_enabled', 'email', 'spam_threshold')
169 for user_config in users:
170 if verbose:
171 print(str(user_config))
172 localpart = user_config['localpart']
173 domain_name = user_config['domain']
174 password_hash = user_config.get('password_hash', None)
175 hash_scheme = user_config.get('hash_scheme', None)
176 domain = models.Domain.query.get(domain_name)
177 email = '{0}@{1}'.format(localpart, domain_name)
178 optional_params = {}
179 for k in user_optional_params:
180 if k in user_config:
181 optional_params[k] = user_config[k]
182 if not domain:
183 domain = models.Domain(name=domain_name)
184 db.session.add(domain)
185 user = models.User.query.get(email)
186 tracked_users.add(email)
187 tracked_domains.add(domain_name)
188 if not user:
189 user = models.User(
190 localpart=localpart,
191 domain=domain,
192 **optional_params
193 )
194 else:
195 for k in optional_params:
196 setattr(user, k, optional_params[k])
197 user.set_password(password_hash, hash_scheme=hash_scheme, raw=True)
198 db.session.add(user)
199
200 aliases = new_config.get('aliases', [])
201 tracked_aliases = set()
202 for alias_config in aliases:
203 if verbose:
204 print(str(alias_config))
205 localpart = alias_config['localpart']
206 domain_name = alias_config['domain']
207 if type(alias_config['destination']) is str:
208 destination = alias_config['destination'].split(',')
209 else:
210 destination = alias_config['destination']
211 wildcard = alias_config.get('wildcard', False)
212 domain = models.Domain.query.get(domain_name)
213 email = '{0}@{1}'.format(localpart, domain_name)
214 if not domain:
215 domain = models.Domain(name=domain_name)
216 db.session.add(domain)
217 alias = models.Alias.query.get(email)
218 tracked_aliases.add(email)
219 tracked_domains.add(domain_name)
220 if not alias:
221 alias = models.Alias(
222 localpart=localpart,
223 domain=domain,
224 wildcard=wildcard,
225 destination=destination,
226 email=email
227 )
228 else:
229 alias.destination = destination
230 alias.wildcard = wildcard
231 db.session.add(alias)
232
233 db.session.commit()
234
235 managers = new_config.get('managers', [])
236 # tracked_managers=set()
237 for manager_config in managers:
238 if verbose:
239 print(str(manager_config))
240 domain_name = manager_config['domain']
241 user_name = manager_config['user']
242 domain = models.Domain.query.get(domain_name)
243 manageruser = models.User.query.get(user_name + '@' + domain_name)
244 if manageruser not in domain.managers:
245 domain.managers.append(manageruser)
246 db.session.add(domain)
247
248 db.session.commit()
249
250 if delete_objects:
251 for user in db.session.query(models.User).all():
252 if not (user.email in tracked_users):
253 if verbose:
254 print("Deleting user: " + str(user.email))
255 db.session.delete(user)
256 for alias in db.session.query(models.Alias).all():
257 if not (alias.email in tracked_aliases):
258 if verbose:
259 print("Deleting alias: " + str(alias.email))
260 db.session.delete(alias)
261 for domain in db.session.query(models.Domain).all():
262 if not (domain.name in tracked_domains):
263 if verbose:
264 print("Deleting domain: " + str(domain.name))
265 db.session.delete(domain)
266 db.session.commit()
267
268
269 @mailu.command()
270 @click.argument('email')
271 @flask_cli.with_appcontext
272 def user_delete(email):
273 """delete user"""
274 user = models.User.query.get(email)
275 if user:
276 db.session.delete(user)
277 db.session.commit()
278
279
280 @mailu.command()
281 @click.argument('email')
282 @flask_cli.with_appcontext
283 def alias_delete(email):
284 """delete alias"""
285 alias = models.Alias.query.get(email)
286 if alias:
287 db.session.delete(alias)
288 db.session.commit()
289
290
291 @mailu.command()
292 @click.argument('localpart')
293 @click.argument('domain_name')
294 @click.argument('destination')
295 @click.option('-w', '--wildcard', is_flag=True)
296 @flask_cli.with_appcontext
297 def alias(localpart, domain_name, destination, wildcard=False):
298 """ Create an alias
299 """
300 domain = models.Domain.query.get(domain_name)
301 if not domain:
302 domain = models.Domain(name=domain_name)
303 db.session.add(domain)
304 alias = models.Alias(
305 localpart=localpart,
306 domain=domain,
307 wildcard=wildcard,
308 destination=destination.split(','),
309 email="%s@%s" % (localpart, domain_name)
310 )
311 db.session.add(alias)
312 db.session.commit()
313
314
315 @mailu.command()
316 @click.argument('domain_name')
317 @click.argument('max_users')
318 @click.argument('max_aliases')
319 @click.argument('max_quota_bytes')
320 @flask_cli.with_appcontext
321 def setlimits(domain_name, max_users, max_aliases, max_quota_bytes):
322 """ Set domain limits
323 """
324 domain = models.Domain.query.get(domain_name)
325 domain.max_users = max_users
326 domain.max_aliases = max_aliases
327 domain.max_quota_bytes = max_quota_bytes
328 db.session.add(domain)
329 db.session.commit()
330
331
332 @mailu.command()
333 @click.argument('domain_name')
334 @click.argument('user_name')
335 @flask_cli.with_appcontext
336 def setmanager(domain_name, user_name='manager'):
337 """ Make a user manager of a domain
338 """
339 domain = models.Domain.query.get(domain_name)
340 manageruser = models.User.query.get(user_name + '@' + domain_name)
341 domain.managers.append(manageruser)
342 db.session.add(domain)
343 db.session.commit()
344
345
346 if __name__ == '__main__':
347 cli()
348
[end of core/admin/mailu/manage.py]
</code>
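A note on the listing above: `config_update` reads a YAML document from stdin and only inspects the `domains`, `users`, `aliases` and `managers` keys. The sketch below shows the shape that code expects, written as the Python dict that `yaml.load(sys.stdin)` would return; every concrete value is an illustrative placeholder, not taken from Mailu documentation.

```python
# Shape of the stdin document consumed by config_update, as the dict yaml.load() yields.
# All names and values are illustrative placeholders.
example_config = {
    "domains": [
        {"name": "example.com", "max_users": 10, "max_aliases": 10, "max_quota_bytes": 0},
    ],
    "users": [
        {
            "localpart": "alice",
            "domain": "example.com",
            "password_hash": "not-a-real-hash",  # passed to user.set_password(..., raw=True)
            "hash_scheme": "MD5-CRYPT",          # optional; scheme name is a placeholder
            "quota_bytes": 1000000000,           # any key from user_optional_params may appear
        },
    ],
    "aliases": [
        # destination may be a comma-separated string or a list (see the str check above)
        {"localpart": "postmaster", "domain": "example.com", "destination": "[email protected]"},
    ],
    "managers": [
        {"domain": "example.com", "user": "alice"},
    ],
}
```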
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/manage.py b/core/admin/mailu/manage.py
--- a/core/admin/mailu/manage.py
+++ b/core/admin/mailu/manage.py
@@ -94,7 +94,8 @@
def domain(domain_name, max_users=-1, max_aliases=-1, max_quota_bytes=0):
domain = models.Domain.query.get(domain_name)
if not domain:
- domain = models.Domain(name=domain_name)
+ domain = models.Domain(name=domain_name, max_users=max_users,
+ max_aliases=max_aliases, max_quota_bytes=max_quota_bytes)
db.session.add(domain)
db.session.commit()
| {"golden_diff": "diff --git a/core/admin/mailu/manage.py b/core/admin/mailu/manage.py\n--- a/core/admin/mailu/manage.py\n+++ b/core/admin/mailu/manage.py\n@@ -94,7 +94,8 @@\n def domain(domain_name, max_users=-1, max_aliases=-1, max_quota_bytes=0):\n domain = models.Domain.query.get(domain_name)\n if not domain:\n- domain = models.Domain(name=domain_name)\n+ domain = models.Domain(name=domain_name, max_users=max_users,\n+ max_aliases=max_aliases, max_quota_bytes=max_quota_bytes)\n db.session.add(domain)\n db.session.commit()\n", "issue": "CLI domain creation arguments bug\nWhen creating a new domain using cli we have optional arguments that can be passed\r\n\r\n```bash\r\n$ docker-compose exec admin flask mailu domain --help\r\nUsage: flask mailu domain [OPTIONS] DOMAIN_NAME\r\n\r\n Create a domain\r\n\r\nOptions:\r\n -u, --max-users TEXT\r\n -a, --max-aliases TEXT\r\n -q, --max-quota-bytes TEXT\r\n --help Show this message and exit.\r\n```\r\n\r\nLooks like we are passing only domain name, keeping the rest of variables to default\r\nhttps://github.com/Mailu/Mailu/blob/626b8a9d05bed54777704124a77ef4a884a4d052/core/admin/mailu/manage.py#L97\r\n\n", "before_files": [{"content": "from mailu import models\n\nfrom flask import current_app as app\nfrom flask import cli as flask_cli\n\nimport flask\nimport os\nimport socket\nimport uuid\nimport click\n\n\ndb = models.db\n\n\[email protected]()\ndef mailu(cls=flask_cli.FlaskGroup):\n \"\"\" Mailu command line\n \"\"\"\n\n\[email protected]()\n@flask_cli.with_appcontext\ndef advertise():\n \"\"\" Advertise this server against statistic services.\n \"\"\"\n if os.path.isfile(app.config[\"INSTANCE_ID_PATH\"]):\n with open(app.config[\"INSTANCE_ID_PATH\"], \"r\") as handle:\n instance_id = handle.read()\n else:\n instance_id = str(uuid.uuid4())\n with open(app.config[\"INSTANCE_ID_PATH\"], \"w\") as handle:\n handle.write(instance_id)\n if not app.config[\"DISABLE_STATISTICS\"]:\n try:\n socket.gethostbyname(app.config[\"STATS_ENDPOINT\"].format(instance_id))\n except:\n pass\n\n\[email protected]()\[email protected]('localpart')\[email protected]('domain_name')\[email protected]('password')\n@flask_cli.with_appcontext\ndef admin(localpart, domain_name, password):\n \"\"\" Create an admin user\n \"\"\"\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n user = models.User(\n localpart=localpart,\n domain=domain,\n global_admin=True\n )\n user.set_password(password)\n db.session.add(user)\n db.session.commit()\n\n\[email protected]()\[email protected]('localpart')\[email protected]('domain_name')\[email protected]('password')\[email protected]('hash_scheme')\n@flask_cli.with_appcontext\ndef user(localpart, domain_name, password, hash_scheme=None):\n \"\"\" Create a user\n \"\"\"\n if hash_scheme is None:\n hash_scheme = app.config['PASSWORD_SCHEME']\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n user = models.User(\n localpart=localpart,\n domain=domain,\n global_admin=False\n )\n user.set_password(password, hash_scheme=hash_scheme)\n db.session.add(user)\n db.session.commit()\n\n\[email protected]()\[email protected]('-n', '--domain_name')\[email protected]('-u', '--max_users')\[email protected]('-a', '--max_aliases')\[email protected]('-q', '--max_quota_bytes')\n@flask_cli.with_appcontext\ndef domain(domain_name, max_users=-1, max_aliases=-1, max_quota_bytes=0):\n domain = 
models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n db.session.commit()\n\n\[email protected]()\[email protected]('localpart')\[email protected]('domain_name')\[email protected]('password_hash')\[email protected]('hash_scheme')\n@flask_cli.with_appcontext\ndef user_import(localpart, domain_name, password_hash, hash_scheme = None):\n \"\"\" Import a user along with password hash.\n \"\"\"\n if hash_scheme is None:\n hash_scheme = app.config['PASSWORD_SCHEME']\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n user = models.User(\n localpart=localpart,\n domain=domain,\n global_admin=False\n )\n user.set_password(password_hash, hash_scheme=hash_scheme, raw=True)\n db.session.add(user)\n db.session.commit()\n\n\[email protected]()\[email protected]('-v', '--verbose')\[email protected]('-d', '--delete_objects')\n@flask_cli.with_appcontext\ndef config_update(verbose=False, delete_objects=False):\n \"\"\"sync configuration with data from YAML-formatted stdin\"\"\"\n import yaml\n import sys\n new_config = yaml.load(sys.stdin)\n # print new_config\n domains = new_config.get('domains', [])\n tracked_domains = set()\n for domain_config in domains:\n if verbose:\n print(str(domain_config))\n domain_name = domain_config['name']\n max_users = domain_config.get('max_users', -1)\n max_aliases = domain_config.get('max_aliases', -1)\n max_quota_bytes = domain_config.get('max_quota_bytes', 0)\n tracked_domains.add(domain_name)\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name,\n max_users=max_users,\n max_aliases=max_aliases,\n max_quota_bytes=max_quota_bytes)\n db.session.add(domain)\n print(\"Added \" + str(domain_config))\n else:\n domain.max_users = max_users\n domain.max_aliases = max_aliases\n domain.max_quota_bytes = max_quota_bytes\n db.session.add(domain)\n print(\"Updated \" + str(domain_config))\n\n users = new_config.get('users', [])\n tracked_users = set()\n user_optional_params = ('comment', 'quota_bytes', 'global_admin',\n 'enable_imap', 'enable_pop', 'forward_enabled',\n 'forward_destination', 'reply_enabled',\n 'reply_subject', 'reply_body', 'displayed_name',\n 'spam_enabled', 'email', 'spam_threshold')\n for user_config in users:\n if verbose:\n print(str(user_config))\n localpart = user_config['localpart']\n domain_name = user_config['domain']\n password_hash = user_config.get('password_hash', None)\n hash_scheme = user_config.get('hash_scheme', None)\n domain = models.Domain.query.get(domain_name)\n email = '{0}@{1}'.format(localpart, domain_name)\n optional_params = {}\n for k in user_optional_params:\n if k in user_config:\n optional_params[k] = user_config[k]\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n user = models.User.query.get(email)\n tracked_users.add(email)\n tracked_domains.add(domain_name)\n if not user:\n user = models.User(\n localpart=localpart,\n domain=domain,\n **optional_params\n )\n else:\n for k in optional_params:\n setattr(user, k, optional_params[k])\n user.set_password(password_hash, hash_scheme=hash_scheme, raw=True)\n db.session.add(user)\n\n aliases = new_config.get('aliases', [])\n tracked_aliases = set()\n for alias_config in aliases:\n if verbose:\n print(str(alias_config))\n localpart = alias_config['localpart']\n domain_name = alias_config['domain']\n if type(alias_config['destination']) is str:\n 
destination = alias_config['destination'].split(',')\n else:\n destination = alias_config['destination']\n wildcard = alias_config.get('wildcard', False)\n domain = models.Domain.query.get(domain_name)\n email = '{0}@{1}'.format(localpart, domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n alias = models.Alias.query.get(email)\n tracked_aliases.add(email)\n tracked_domains.add(domain_name)\n if not alias:\n alias = models.Alias(\n localpart=localpart,\n domain=domain,\n wildcard=wildcard,\n destination=destination,\n email=email\n )\n else:\n alias.destination = destination\n alias.wildcard = wildcard\n db.session.add(alias)\n\n db.session.commit()\n\n managers = new_config.get('managers', [])\n # tracked_managers=set()\n for manager_config in managers:\n if verbose:\n print(str(manager_config))\n domain_name = manager_config['domain']\n user_name = manager_config['user']\n domain = models.Domain.query.get(domain_name)\n manageruser = models.User.query.get(user_name + '@' + domain_name)\n if manageruser not in domain.managers:\n domain.managers.append(manageruser)\n db.session.add(domain)\n\n db.session.commit()\n\n if delete_objects:\n for user in db.session.query(models.User).all():\n if not (user.email in tracked_users):\n if verbose:\n print(\"Deleting user: \" + str(user.email))\n db.session.delete(user)\n for alias in db.session.query(models.Alias).all():\n if not (alias.email in tracked_aliases):\n if verbose:\n print(\"Deleting alias: \" + str(alias.email))\n db.session.delete(alias)\n for domain in db.session.query(models.Domain).all():\n if not (domain.name in tracked_domains):\n if verbose:\n print(\"Deleting domain: \" + str(domain.name))\n db.session.delete(domain)\n db.session.commit()\n\n\[email protected]()\[email protected]('email')\n@flask_cli.with_appcontext\ndef user_delete(email):\n \"\"\"delete user\"\"\"\n user = models.User.query.get(email)\n if user:\n db.session.delete(user)\n db.session.commit()\n\n\[email protected]()\[email protected]('email')\n@flask_cli.with_appcontext\ndef alias_delete(email):\n \"\"\"delete alias\"\"\"\n alias = models.Alias.query.get(email)\n if alias:\n db.session.delete(alias)\n db.session.commit()\n\n\[email protected]()\[email protected]('localpart')\[email protected]('domain_name')\[email protected]('destination')\[email protected]('-w', '--wildcard', is_flag=True)\n@flask_cli.with_appcontext\ndef alias(localpart, domain_name, destination, wildcard=False):\n \"\"\" Create an alias\n \"\"\"\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n alias = models.Alias(\n localpart=localpart,\n domain=domain,\n wildcard=wildcard,\n destination=destination.split(','),\n email=\"%s@%s\" % (localpart, domain_name)\n )\n db.session.add(alias)\n db.session.commit()\n\n\[email protected]()\[email protected]('domain_name')\[email protected]('max_users')\[email protected]('max_aliases')\[email protected]('max_quota_bytes')\n@flask_cli.with_appcontext\ndef setlimits(domain_name, max_users, max_aliases, max_quota_bytes):\n \"\"\" Set domain limits\n \"\"\"\n domain = models.Domain.query.get(domain_name)\n domain.max_users = max_users\n domain.max_aliases = max_aliases\n domain.max_quota_bytes = max_quota_bytes\n db.session.add(domain)\n db.session.commit()\n\n\[email protected]()\[email protected]('domain_name')\[email protected]('user_name')\n@flask_cli.with_appcontext\ndef setmanager(domain_name, user_name='manager'):\n 
\"\"\" Make a user manager of a domain\n \"\"\"\n domain = models.Domain.query.get(domain_name)\n manageruser = models.User.query.get(user_name + '@' + domain_name)\n domain.managers.append(manageruser)\n db.session.add(domain)\n db.session.commit()\n\n\nif __name__ == '__main__':\n cli()\n", "path": "core/admin/mailu/manage.py"}]} | 4,016 | 136 |
gh_patches_debug_44404 | rasdani/github-patches | git_diff | sublimelsp__LSP-2186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`force_group` argument support for `lsp_symbol_references` command
When working with multiple groups, the `lsp_symbol_references` command doesn't work across groups; instead it duplicates already-open files when one of the listed references is selected to open. It should have a flag to disable the `force_group` setting.
</issue>
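A hedged illustration of how the requested behaviour could be driven from another plugin once such arguments exist; the `force_group` and `group` argument names follow the patch further down in this record and are not part of the currently released command API.

```python
# Illustrative only: invoking the command with the proposed arguments via Sublime's API.
# "force_group" / "group" are the names introduced by the patch below, not existing options.
import sublime_plugin


class ReferencesWithoutForceGroupCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        self.view.run_command("lsp_symbol_references", {
            "force_group": False,  # do not force results into the caller's group
            "group": -1,           # -1 keeps the window's default group behaviour
        })
```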
<code>
[start of plugin/references.py]
1 from .core.protocol import Location
2 from .core.protocol import Point
3 from .core.protocol import Request
4 from .core.registry import get_position
5 from .core.registry import LspTextCommand
6 from .core.registry import windows
7 from .core.sessions import Session
8 from .core.settings import userprefs
9 from .core.types import ClientConfig
10 from .core.typing import Dict, List, Optional, Tuple
11 from .core.views import get_line
12 from .core.views import get_symbol_kind_from_scope
13 from .core.views import get_uri_and_position_from_location
14 from .core.views import text_document_position_params
15 from .locationpicker import LocationPicker
16 import functools
17 import linecache
18 import os
19 import sublime
20
21
22 class LspSymbolReferencesCommand(LspTextCommand):
23
24 capability = 'referencesProvider'
25
26 def is_enabled(
27 self,
28 event: Optional[dict] = None,
29 point: Optional[int] = None,
30 side_by_side: bool = False,
31 fallback: bool = False,
32 ) -> bool:
33 return fallback or super().is_enabled(event, point)
34
35 def is_visible(
36 self,
37 event: Optional[dict] = None,
38 point: Optional[int] = None,
39 side_by_side: bool = False,
40 fallback: bool = False,
41 ) -> bool:
42 if self.applies_to_context_menu(event):
43 return self.is_enabled(event, point, side_by_side, fallback)
44 return True
45
46 def run(
47 self,
48 _: sublime.Edit,
49 event: Optional[dict] = None,
50 point: Optional[int] = None,
51 side_by_side: bool = False,
52 fallback: bool = False,
53 ) -> None:
54 session = self.best_session(self.capability)
55 file_path = self.view.file_name()
56 pos = get_position(self.view, event, point)
57 if session and file_path and pos is not None:
58 position_params = text_document_position_params(self.view, pos)
59 params = {
60 'textDocument': position_params['textDocument'],
61 'position': position_params['position'],
62 'context': {"includeDeclaration": False},
63 }
64 request = Request("textDocument/references", params, self.view, progress=True)
65 word_range = self.view.word(pos)
66 session.send_request(
67 request,
68 functools.partial(
69 self._handle_response_async,
70 self.view.substr(word_range),
71 session,
72 side_by_side,
73 fallback,
74 word_range.begin()
75 )
76 )
77 else:
78 self._handle_no_results(fallback, side_by_side)
79
80 def _handle_response_async(
81 self,
82 word: str,
83 session: Session,
84 side_by_side: bool,
85 fallback: bool,
86 position: int,
87 response: Optional[List[Location]]
88 ) -> None:
89 sublime.set_timeout(lambda: self._handle_response(word, session, side_by_side, fallback, position, response))
90
91 def _handle_response(
92 self,
93 word: str,
94 session: Session,
95 side_by_side: bool,
96 fallback: bool,
97 position: int,
98 response: Optional[List[Location]]
99 ) -> None:
100 if response:
101 if userprefs().show_references_in_quick_panel:
102 self._show_references_in_quick_panel(word, session, response, side_by_side, position)
103 else:
104 self._show_references_in_output_panel(word, session, response)
105 else:
106 self._handle_no_results(fallback, side_by_side)
107
108 def _handle_no_results(self, fallback: bool = False, side_by_side: bool = False) -> None:
109 window = self.view.window()
110 if not window:
111 return
112 if fallback:
113 window.run_command("goto_reference", {"side_by_side": side_by_side})
114 else:
115 window.status_message("No references found")
116
117 def _show_references_in_quick_panel(
118 self, word: str, session: Session, locations: List[Location], side_by_side: bool, position: int
119 ) -> None:
120 self.view.run_command("add_jump_record", {"selection": [(r.a, r.b) for r in self.view.sel()]})
121 kind = get_symbol_kind_from_scope(self.view.scope_name(position))
122 LocationPicker(self.view, session, locations, side_by_side, placeholder="References to " + word, kind=kind)
123
124 def _show_references_in_output_panel(self, word: str, session: Session, locations: List[Location]) -> None:
125 wm = windows.lookup(session.window)
126 if not wm:
127 return
128 panel = wm.panel_manager and wm.panel_manager.ensure_references_panel()
129 if not panel:
130 return
131 base_dir = wm.get_project_path(self.view.file_name() or "")
132 to_render = [] # type: List[str]
133 references_count = 0
134 references_by_file = _group_locations_by_uri(wm.window, session.config, locations)
135 for file, references in references_by_file.items():
136 to_render.append('{}:'.format(_get_relative_path(base_dir, file)))
137 for reference in references:
138 references_count += 1
139 point, line = reference
140 to_render.append(" {:>4}:{:<4} {}".format(point.row + 1, point.col + 1, line))
141 to_render.append("") # add spacing between filenames
142 characters = "\n".join(to_render)
143 panel.settings().set("result_base_dir", base_dir)
144 panel.run_command("lsp_clear_panel")
145 wm.window.run_command("show_panel", {"panel": "output.references"})
146 panel.run_command('append', {
147 'characters': "{} references for '{}'\n\n{}".format(references_count, word, characters),
148 'force': True,
149 'scroll_to_end': False
150 })
151 # highlight all word occurrences
152 regions = panel.find_all(r"\b{}\b".format(word))
153 panel.add_regions('ReferenceHighlight', regions, 'comment', flags=sublime.DRAW_OUTLINED)
154
155
156 def _get_relative_path(base_dir: Optional[str], file_path: str) -> str:
157 if base_dir:
158 try:
159 return os.path.relpath(file_path, base_dir)
160 except ValueError:
161 # On Windows, ValueError is raised when path and start are on different drives.
162 pass
163 return file_path
164
165
166 def _group_locations_by_uri(
167 window: sublime.Window,
168 config: ClientConfig,
169 locations: List[Location]
170 ) -> Dict[str, List[Tuple[Point, str]]]:
171 """Return a dictionary that groups locations by the URI it belongs."""
172 grouped_locations = {} # type: Dict[str, List[Tuple[Point, str]]]
173 for location in locations:
174 uri, position = get_uri_and_position_from_location(location)
175 file_path = config.map_server_uri_to_client_path(uri)
176 point = Point.from_lsp(position)
177 # get line of the reference, to showcase its use
178 reference_line = get_line(window, file_path, point.row)
179 if grouped_locations.get(file_path) is None:
180 grouped_locations[file_path] = []
181 grouped_locations[file_path].append((point, reference_line))
182 # we don't want to cache the line, we always want to get fresh data
183 linecache.clearcache()
184 return grouped_locations
185
[end of plugin/references.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/references.py b/plugin/references.py
--- a/plugin/references.py
+++ b/plugin/references.py
@@ -28,7 +28,9 @@
event: Optional[dict] = None,
point: Optional[int] = None,
side_by_side: bool = False,
+ force_group: bool = True,
fallback: bool = False,
+ group: int = -1
) -> bool:
return fallback or super().is_enabled(event, point)
@@ -37,7 +39,9 @@
event: Optional[dict] = None,
point: Optional[int] = None,
side_by_side: bool = False,
+ force_group: bool = True,
fallback: bool = False,
+ group: int = -1
) -> bool:
if self.applies_to_context_menu(event):
return self.is_enabled(event, point, side_by_side, fallback)
@@ -49,7 +53,9 @@
event: Optional[dict] = None,
point: Optional[int] = None,
side_by_side: bool = False,
+ force_group: bool = True,
fallback: bool = False,
+ group: int = -1
) -> None:
session = self.best_session(self.capability)
file_path = self.view.file_name()
@@ -70,7 +76,9 @@
self.view.substr(word_range),
session,
side_by_side,
+ force_group,
fallback,
+ group,
word_range.begin()
)
)
@@ -82,24 +90,30 @@
word: str,
session: Session,
side_by_side: bool,
+ force_group: bool,
fallback: bool,
+ group: int,
position: int,
response: Optional[List[Location]]
) -> None:
- sublime.set_timeout(lambda: self._handle_response(word, session, side_by_side, fallback, position, response))
+ sublime.set_timeout(lambda: self._handle_response(
+ word, session, side_by_side, force_group, fallback, group, position, response))
def _handle_response(
self,
word: str,
session: Session,
side_by_side: bool,
+ force_group: bool,
fallback: bool,
+ group: int,
position: int,
response: Optional[List[Location]]
) -> None:
if response:
if userprefs().show_references_in_quick_panel:
- self._show_references_in_quick_panel(word, session, response, side_by_side, position)
+ self._show_references_in_quick_panel(
+ word, session, response, side_by_side, force_group, group, position)
else:
self._show_references_in_output_panel(word, session, response)
else:
@@ -115,11 +129,19 @@
window.status_message("No references found")
def _show_references_in_quick_panel(
- self, word: str, session: Session, locations: List[Location], side_by_side: bool, position: int
+ self,
+ word: str,
+ session: Session,
+ locations: List[Location],
+ side_by_side: bool,
+ force_group: bool,
+ group: int,
+ position: int
) -> None:
self.view.run_command("add_jump_record", {"selection": [(r.a, r.b) for r in self.view.sel()]})
+ placeholder = "References to " + word
kind = get_symbol_kind_from_scope(self.view.scope_name(position))
- LocationPicker(self.view, session, locations, side_by_side, placeholder="References to " + word, kind=kind)
+ LocationPicker(self.view, session, locations, side_by_side, force_group, group, placeholder, kind)
def _show_references_in_output_panel(self, word: str, session: Session, locations: List[Location]) -> None:
wm = windows.lookup(session.window)
| {"golden_diff": "diff --git a/plugin/references.py b/plugin/references.py\n--- a/plugin/references.py\n+++ b/plugin/references.py\n@@ -28,7 +28,9 @@\n event: Optional[dict] = None,\n point: Optional[int] = None,\n side_by_side: bool = False,\n+ force_group: bool = True,\n fallback: bool = False,\n+ group: int = -1\n ) -> bool:\n return fallback or super().is_enabled(event, point)\n \n@@ -37,7 +39,9 @@\n event: Optional[dict] = None,\n point: Optional[int] = None,\n side_by_side: bool = False,\n+ force_group: bool = True,\n fallback: bool = False,\n+ group: int = -1\n ) -> bool:\n if self.applies_to_context_menu(event):\n return self.is_enabled(event, point, side_by_side, fallback)\n@@ -49,7 +53,9 @@\n event: Optional[dict] = None,\n point: Optional[int] = None,\n side_by_side: bool = False,\n+ force_group: bool = True,\n fallback: bool = False,\n+ group: int = -1\n ) -> None:\n session = self.best_session(self.capability)\n file_path = self.view.file_name()\n@@ -70,7 +76,9 @@\n self.view.substr(word_range),\n session,\n side_by_side,\n+ force_group,\n fallback,\n+ group,\n word_range.begin()\n )\n )\n@@ -82,24 +90,30 @@\n word: str,\n session: Session,\n side_by_side: bool,\n+ force_group: bool,\n fallback: bool,\n+ group: int,\n position: int,\n response: Optional[List[Location]]\n ) -> None:\n- sublime.set_timeout(lambda: self._handle_response(word, session, side_by_side, fallback, position, response))\n+ sublime.set_timeout(lambda: self._handle_response(\n+ word, session, side_by_side, force_group, fallback, group, position, response))\n \n def _handle_response(\n self,\n word: str,\n session: Session,\n side_by_side: bool,\n+ force_group: bool,\n fallback: bool,\n+ group: int,\n position: int,\n response: Optional[List[Location]]\n ) -> None:\n if response:\n if userprefs().show_references_in_quick_panel:\n- self._show_references_in_quick_panel(word, session, response, side_by_side, position)\n+ self._show_references_in_quick_panel(\n+ word, session, response, side_by_side, force_group, group, position)\n else:\n self._show_references_in_output_panel(word, session, response)\n else:\n@@ -115,11 +129,19 @@\n window.status_message(\"No references found\")\n \n def _show_references_in_quick_panel(\n- self, word: str, session: Session, locations: List[Location], side_by_side: bool, position: int\n+ self,\n+ word: str,\n+ session: Session,\n+ locations: List[Location],\n+ side_by_side: bool,\n+ force_group: bool,\n+ group: int,\n+ position: int\n ) -> None:\n self.view.run_command(\"add_jump_record\", {\"selection\": [(r.a, r.b) for r in self.view.sel()]})\n+ placeholder = \"References to \" + word\n kind = get_symbol_kind_from_scope(self.view.scope_name(position))\n- LocationPicker(self.view, session, locations, side_by_side, placeholder=\"References to \" + word, kind=kind)\n+ LocationPicker(self.view, session, locations, side_by_side, force_group, group, placeholder, kind)\n \n def _show_references_in_output_panel(self, word: str, session: Session, locations: List[Location]) -> None:\n wm = windows.lookup(session.window)\n", "issue": "`force_group` argument support for `lsp_symbol_references` command\nWorking in multiple groups, `lsp_symbol_references` command doesn't work across groups but instead duplicates already open files when one of the listed references is selected to open. 
It should have a flag to disable `force_group` setting.\n", "before_files": [{"content": "from .core.protocol import Location\nfrom .core.protocol import Point\nfrom .core.protocol import Request\nfrom .core.registry import get_position\nfrom .core.registry import LspTextCommand\nfrom .core.registry import windows\nfrom .core.sessions import Session\nfrom .core.settings import userprefs\nfrom .core.types import ClientConfig\nfrom .core.typing import Dict, List, Optional, Tuple\nfrom .core.views import get_line\nfrom .core.views import get_symbol_kind_from_scope\nfrom .core.views import get_uri_and_position_from_location\nfrom .core.views import text_document_position_params\nfrom .locationpicker import LocationPicker\nimport functools\nimport linecache\nimport os\nimport sublime\n\n\nclass LspSymbolReferencesCommand(LspTextCommand):\n\n capability = 'referencesProvider'\n\n def is_enabled(\n self,\n event: Optional[dict] = None,\n point: Optional[int] = None,\n side_by_side: bool = False,\n fallback: bool = False,\n ) -> bool:\n return fallback or super().is_enabled(event, point)\n\n def is_visible(\n self,\n event: Optional[dict] = None,\n point: Optional[int] = None,\n side_by_side: bool = False,\n fallback: bool = False,\n ) -> bool:\n if self.applies_to_context_menu(event):\n return self.is_enabled(event, point, side_by_side, fallback)\n return True\n\n def run(\n self,\n _: sublime.Edit,\n event: Optional[dict] = None,\n point: Optional[int] = None,\n side_by_side: bool = False,\n fallback: bool = False,\n ) -> None:\n session = self.best_session(self.capability)\n file_path = self.view.file_name()\n pos = get_position(self.view, event, point)\n if session and file_path and pos is not None:\n position_params = text_document_position_params(self.view, pos)\n params = {\n 'textDocument': position_params['textDocument'],\n 'position': position_params['position'],\n 'context': {\"includeDeclaration\": False},\n }\n request = Request(\"textDocument/references\", params, self.view, progress=True)\n word_range = self.view.word(pos)\n session.send_request(\n request,\n functools.partial(\n self._handle_response_async,\n self.view.substr(word_range),\n session,\n side_by_side,\n fallback,\n word_range.begin()\n )\n )\n else:\n self._handle_no_results(fallback, side_by_side)\n\n def _handle_response_async(\n self,\n word: str,\n session: Session,\n side_by_side: bool,\n fallback: bool,\n position: int,\n response: Optional[List[Location]]\n ) -> None:\n sublime.set_timeout(lambda: self._handle_response(word, session, side_by_side, fallback, position, response))\n\n def _handle_response(\n self,\n word: str,\n session: Session,\n side_by_side: bool,\n fallback: bool,\n position: int,\n response: Optional[List[Location]]\n ) -> None:\n if response:\n if userprefs().show_references_in_quick_panel:\n self._show_references_in_quick_panel(word, session, response, side_by_side, position)\n else:\n self._show_references_in_output_panel(word, session, response)\n else:\n self._handle_no_results(fallback, side_by_side)\n\n def _handle_no_results(self, fallback: bool = False, side_by_side: bool = False) -> None:\n window = self.view.window()\n if not window:\n return\n if fallback:\n window.run_command(\"goto_reference\", {\"side_by_side\": side_by_side})\n else:\n window.status_message(\"No references found\")\n\n def _show_references_in_quick_panel(\n self, word: str, session: Session, locations: List[Location], side_by_side: bool, position: int\n ) -> None:\n self.view.run_command(\"add_jump_record\", 
{\"selection\": [(r.a, r.b) for r in self.view.sel()]})\n kind = get_symbol_kind_from_scope(self.view.scope_name(position))\n LocationPicker(self.view, session, locations, side_by_side, placeholder=\"References to \" + word, kind=kind)\n\n def _show_references_in_output_panel(self, word: str, session: Session, locations: List[Location]) -> None:\n wm = windows.lookup(session.window)\n if not wm:\n return\n panel = wm.panel_manager and wm.panel_manager.ensure_references_panel()\n if not panel:\n return\n base_dir = wm.get_project_path(self.view.file_name() or \"\")\n to_render = [] # type: List[str]\n references_count = 0\n references_by_file = _group_locations_by_uri(wm.window, session.config, locations)\n for file, references in references_by_file.items():\n to_render.append('{}:'.format(_get_relative_path(base_dir, file)))\n for reference in references:\n references_count += 1\n point, line = reference\n to_render.append(\" {:>4}:{:<4} {}\".format(point.row + 1, point.col + 1, line))\n to_render.append(\"\") # add spacing between filenames\n characters = \"\\n\".join(to_render)\n panel.settings().set(\"result_base_dir\", base_dir)\n panel.run_command(\"lsp_clear_panel\")\n wm.window.run_command(\"show_panel\", {\"panel\": \"output.references\"})\n panel.run_command('append', {\n 'characters': \"{} references for '{}'\\n\\n{}\".format(references_count, word, characters),\n 'force': True,\n 'scroll_to_end': False\n })\n # highlight all word occurrences\n regions = panel.find_all(r\"\\b{}\\b\".format(word))\n panel.add_regions('ReferenceHighlight', regions, 'comment', flags=sublime.DRAW_OUTLINED)\n\n\ndef _get_relative_path(base_dir: Optional[str], file_path: str) -> str:\n if base_dir:\n try:\n return os.path.relpath(file_path, base_dir)\n except ValueError:\n # On Windows, ValueError is raised when path and start are on different drives.\n pass\n return file_path\n\n\ndef _group_locations_by_uri(\n window: sublime.Window,\n config: ClientConfig,\n locations: List[Location]\n) -> Dict[str, List[Tuple[Point, str]]]:\n \"\"\"Return a dictionary that groups locations by the URI it belongs.\"\"\"\n grouped_locations = {} # type: Dict[str, List[Tuple[Point, str]]]\n for location in locations:\n uri, position = get_uri_and_position_from_location(location)\n file_path = config.map_server_uri_to_client_path(uri)\n point = Point.from_lsp(position)\n # get line of the reference, to showcase its use\n reference_line = get_line(window, file_path, point.row)\n if grouped_locations.get(file_path) is None:\n grouped_locations[file_path] = []\n grouped_locations[file_path].append((point, reference_line))\n # we don't want to cache the line, we always want to get fresh data\n linecache.clearcache()\n return grouped_locations\n", "path": "plugin/references.py"}]} | 2,580 | 875 |
gh_patches_debug_65084 | rasdani/github-patches | git_diff | cupy__cupy-1837 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bundle header files for fp16
CUDA 9.2 or later allows redistribution of `cuda_fp16.h` and `cuda_fp16.hpp`.
https://docs.nvidia.com/cuda/archive/9.2/eula/#attachment-a
Let's bundle them into the repository and use them to avoid `CUDA_PATH`-based header discovery at runtime.
</issue>
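A minimal sketch of what bundling means on the packaging side: the copied headers must be listed in `setup.py`'s `package_data` so they ship with wheels. The `_cuda/cuda-*` directory layout mirrors the patch later in this record and is otherwise an assumption.

```python
# Sketch: exposing bundled CUDA headers through package_data in setup.py.
# The directory layout (core/include/cupy/_cuda/cuda-<version>/) is an assumption.
package_data = {
    'cupy': [
        'core/include/cupy/_cuda/cuda-*/*.h',    # e.g. cuda_fp16.h per bundled CUDA version
        'core/include/cupy/_cuda/cuda-*/*.hpp',  # e.g. cuda_fp16.hpp
    ],
}
```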
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import os
4 from setuptools import setup
5 import sys
6
7 import cupy_setup_build
8
9
10 if sys.version_info[:3] == (3, 5, 0):
11 if not int(os.getenv('CUPY_PYTHON_350_FORCE', '0')):
12 msg = """
13 CuPy does not work with Python 3.5.0.
14
15 We strongly recommend to use another version of Python.
16 If you want to use CuPy with Python 3.5.0 at your own risk,
17 set 1 to CUPY_PYTHON_350_FORCE environment variable."""
18 print(msg)
19 sys.exit(1)
20
21
22 requirements = {
23 'setup': [
24 'fastrlock>=0.3',
25 ],
26 'install': [
27 'numpy>=1.9.0',
28 'six>=1.9.0',
29 'fastrlock>=0.3',
30 ],
31 'stylecheck': [
32 'autopep8==1.3.5',
33 'flake8==3.5.0',
34 'pbr==4.0.4',
35 'pycodestyle==2.3.1',
36 ],
37 'test': [
38 'pytest',
39 'mock',
40 ],
41 'doctest': [
42 'matplotlib',
43 'theano',
44 ],
45 'docs': [
46 'sphinx',
47 'sphinx_rtd_theme',
48 ],
49 'travis': [
50 '-r stylecheck',
51 '-r docs',
52 ],
53 'appveyor': [
54 '-r test',
55 ],
56 }
57
58
59 def reduce_requirements(key):
60 # Resolve recursive requirements notation (-r)
61 reqs = requirements[key]
62 resolved_reqs = []
63 for req in reqs:
64 if req.startswith('-r'):
65 depend_key = req[2:].lstrip()
66 reduce_requirements(depend_key)
67 resolved_reqs += requirements[depend_key]
68 else:
69 resolved_reqs.append(req)
70 requirements[key] = resolved_reqs
71
72
73 for k in requirements.keys():
74 reduce_requirements(k)
75
76
77 extras_require = {k: v for k, v in requirements.items() if k != 'install'}
78
79
80 setup_requires = requirements['setup']
81 install_requires = requirements['install']
82 tests_require = requirements['test']
83
84
85 package_data = {
86 'cupy': [
87 'core/include/cupy/complex/arithmetic.h',
88 'core/include/cupy/complex/catrig.h',
89 'core/include/cupy/complex/catrigf.h',
90 'core/include/cupy/complex/ccosh.h',
91 'core/include/cupy/complex/ccoshf.h',
92 'core/include/cupy/complex/cexp.h',
93 'core/include/cupy/complex/cexpf.h',
94 'core/include/cupy/complex/clog.h',
95 'core/include/cupy/complex/clogf.h',
96 'core/include/cupy/complex/complex.h',
97 'core/include/cupy/complex/complex_inl.h',
98 'core/include/cupy/complex/cpow.h',
99 'core/include/cupy/complex/cproj.h',
100 'core/include/cupy/complex/csinh.h',
101 'core/include/cupy/complex/csinhf.h',
102 'core/include/cupy/complex/csqrt.h',
103 'core/include/cupy/complex/csqrtf.h',
104 'core/include/cupy/complex/ctanh.h',
105 'core/include/cupy/complex/ctanhf.h',
106 'core/include/cupy/complex/math_private.h',
107 'core/include/cupy/carray.cuh',
108 'core/include/cupy/complex.cuh',
109 'core/include/cupy/atomics.cuh',
110 'cuda/cupy_thrust.cu',
111 ],
112 }
113
114 package_data['cupy'] += cupy_setup_build.prepare_wheel_libs()
115
116 package_name = cupy_setup_build.get_package_name()
117 long_description = cupy_setup_build.get_long_description()
118 ext_modules = cupy_setup_build.get_ext_modules()
119 build_ext = cupy_setup_build.custom_build_ext
120 sdist = cupy_setup_build.sdist_with_cython
121
122 here = os.path.abspath(os.path.dirname(__file__))
123 # Get __version__ variable
124 exec(open(os.path.join(here, 'cupy', '_version.py')).read())
125
126 setup(
127 name=package_name,
128 version=__version__, # NOQA
129 description='CuPy: NumPy-like API accelerated with CUDA',
130 long_description=long_description,
131 author='Seiya Tokui',
132 author_email='[email protected]',
133 url='https://docs-cupy.chainer.org/',
134 license='MIT License',
135 packages=[
136 'cupy',
137 'cupy.binary',
138 'cupy.core',
139 'cupy.creation',
140 'cupy.cuda',
141 'cupy.cuda.memory_hooks',
142 'cupy.ext',
143 'cupy.fft',
144 'cupy.indexing',
145 'cupy.io',
146 'cupy.linalg',
147 'cupy.logic',
148 'cupy.manipulation',
149 'cupy.math',
150 'cupy.padding',
151 'cupy.prof',
152 'cupy.random',
153 'cupy.sorting',
154 'cupy.sparse',
155 'cupy.sparse.linalg',
156 'cupy.statistics',
157 'cupy.testing',
158 'cupyx',
159 'cupyx.scipy',
160 'cupyx.scipy.ndimage',
161 'cupyx.scipy.sparse',
162 'cupyx.scipy.sparse.linalg',
163 'cupyx.scipy.special',
164 'cupyx.scipy.linalg',
165 'cupyx.linalg',
166 'cupyx.linalg.sparse'
167 ],
168 package_data=package_data,
169 zip_safe=False,
170 setup_requires=setup_requires,
171 install_requires=install_requires,
172 tests_require=tests_require,
173 extras_require=extras_require,
174 ext_modules=ext_modules,
175 cmdclass={'build_ext': build_ext,
176 'sdist': sdist},
177 )
178
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -107,6 +107,8 @@
'core/include/cupy/carray.cuh',
'core/include/cupy/complex.cuh',
'core/include/cupy/atomics.cuh',
+ 'core/include/cupy/_cuda/cuda-*/*.h',
+ 'core/include/cupy/_cuda/cuda-*/*.hpp',
'cuda/cupy_thrust.cu',
],
}
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -107,6 +107,8 @@\n 'core/include/cupy/carray.cuh',\n 'core/include/cupy/complex.cuh',\n 'core/include/cupy/atomics.cuh',\n+ 'core/include/cupy/_cuda/cuda-*/*.h',\n+ 'core/include/cupy/_cuda/cuda-*/*.hpp',\n 'cuda/cupy_thrust.cu',\n ],\n }\n", "issue": "Bundle header files for fp16\nCUDA 9.2 or later allows redistribution of `cuda_fp16.h` and `cuda_fp16.hpp`.\r\nhttps://docs.nvidia.com/cuda/archive/9.2/eula/#attachment-a\r\n\r\nLet's bundle them into the repository and use it to avoid `CUDA_PATH`-based header discovery at runtime.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nfrom setuptools import setup\nimport sys\n\nimport cupy_setup_build\n\n\nif sys.version_info[:3] == (3, 5, 0):\n if not int(os.getenv('CUPY_PYTHON_350_FORCE', '0')):\n msg = \"\"\"\nCuPy does not work with Python 3.5.0.\n\nWe strongly recommend to use another version of Python.\nIf you want to use CuPy with Python 3.5.0 at your own risk,\nset 1 to CUPY_PYTHON_350_FORCE environment variable.\"\"\"\n print(msg)\n sys.exit(1)\n\n\nrequirements = {\n 'setup': [\n 'fastrlock>=0.3',\n ],\n 'install': [\n 'numpy>=1.9.0',\n 'six>=1.9.0',\n 'fastrlock>=0.3',\n ],\n 'stylecheck': [\n 'autopep8==1.3.5',\n 'flake8==3.5.0',\n 'pbr==4.0.4',\n 'pycodestyle==2.3.1',\n ],\n 'test': [\n 'pytest',\n 'mock',\n ],\n 'doctest': [\n 'matplotlib',\n 'theano',\n ],\n 'docs': [\n 'sphinx',\n 'sphinx_rtd_theme',\n ],\n 'travis': [\n '-r stylecheck',\n '-r docs',\n ],\n 'appveyor': [\n '-r test',\n ],\n}\n\n\ndef reduce_requirements(key):\n # Resolve recursive requirements notation (-r)\n reqs = requirements[key]\n resolved_reqs = []\n for req in reqs:\n if req.startswith('-r'):\n depend_key = req[2:].lstrip()\n reduce_requirements(depend_key)\n resolved_reqs += requirements[depend_key]\n else:\n resolved_reqs.append(req)\n requirements[key] = resolved_reqs\n\n\nfor k in requirements.keys():\n reduce_requirements(k)\n\n\nextras_require = {k: v for k, v in requirements.items() if k != 'install'}\n\n\nsetup_requires = requirements['setup']\ninstall_requires = requirements['install']\ntests_require = requirements['test']\n\n\npackage_data = {\n 'cupy': [\n 'core/include/cupy/complex/arithmetic.h',\n 'core/include/cupy/complex/catrig.h',\n 'core/include/cupy/complex/catrigf.h',\n 'core/include/cupy/complex/ccosh.h',\n 'core/include/cupy/complex/ccoshf.h',\n 'core/include/cupy/complex/cexp.h',\n 'core/include/cupy/complex/cexpf.h',\n 'core/include/cupy/complex/clog.h',\n 'core/include/cupy/complex/clogf.h',\n 'core/include/cupy/complex/complex.h',\n 'core/include/cupy/complex/complex_inl.h',\n 'core/include/cupy/complex/cpow.h',\n 'core/include/cupy/complex/cproj.h',\n 'core/include/cupy/complex/csinh.h',\n 'core/include/cupy/complex/csinhf.h',\n 'core/include/cupy/complex/csqrt.h',\n 'core/include/cupy/complex/csqrtf.h',\n 'core/include/cupy/complex/ctanh.h',\n 'core/include/cupy/complex/ctanhf.h',\n 'core/include/cupy/complex/math_private.h',\n 'core/include/cupy/carray.cuh',\n 'core/include/cupy/complex.cuh',\n 'core/include/cupy/atomics.cuh',\n 'cuda/cupy_thrust.cu',\n ],\n}\n\npackage_data['cupy'] += cupy_setup_build.prepare_wheel_libs()\n\npackage_name = cupy_setup_build.get_package_name()\nlong_description = cupy_setup_build.get_long_description()\next_modules = cupy_setup_build.get_ext_modules()\nbuild_ext = cupy_setup_build.custom_build_ext\nsdist = cupy_setup_build.sdist_with_cython\n\nhere = 
os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nexec(open(os.path.join(here, 'cupy', '_version.py')).read())\n\nsetup(\n name=package_name,\n version=__version__, # NOQA\n description='CuPy: NumPy-like API accelerated with CUDA',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://docs-cupy.chainer.org/',\n license='MIT License',\n packages=[\n 'cupy',\n 'cupy.binary',\n 'cupy.core',\n 'cupy.creation',\n 'cupy.cuda',\n 'cupy.cuda.memory_hooks',\n 'cupy.ext',\n 'cupy.fft',\n 'cupy.indexing',\n 'cupy.io',\n 'cupy.linalg',\n 'cupy.logic',\n 'cupy.manipulation',\n 'cupy.math',\n 'cupy.padding',\n 'cupy.prof',\n 'cupy.random',\n 'cupy.sorting',\n 'cupy.sparse',\n 'cupy.sparse.linalg',\n 'cupy.statistics',\n 'cupy.testing',\n 'cupyx',\n 'cupyx.scipy',\n 'cupyx.scipy.ndimage',\n 'cupyx.scipy.sparse',\n 'cupyx.scipy.sparse.linalg',\n 'cupyx.scipy.special',\n 'cupyx.scipy.linalg',\n 'cupyx.linalg',\n 'cupyx.linalg.sparse'\n ],\n package_data=package_data,\n zip_safe=False,\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext,\n 'sdist': sdist},\n)\n", "path": "setup.py"}]} | 2,295 | 111 |
gh_patches_debug_28757 | rasdani/github-patches | git_diff | WordPress__openverse-api-1083 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add database connectivity to healthcheck endpoint
## Problem
The healthcheck endpoint should check that the database is accessible. If the database is inaccessible, the service is definitively not healthy.
## Description
Add another check (in addition to the ES check) for the database connectivity. Calling `django.db.connection.ensure_connection()` should be sufficient. It raises an error when the database connection is unavailable.
</issue>
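A minimal sketch of the check described above, assuming a configured Django settings module; the helper name is illustrative and the actual wiring into the view is left to the patch later in this record.

```python
# Minimal sketch of the database check described above (helper name is illustrative).
from django.db import connection


def check_db() -> None:
    # ensure_connection() opens the connection if necessary and raises
    # (e.g. OperationalError) when the database is unreachable.
    connection.ensure_connection()
```

In the view this would be called from `get()` next to the existing Elasticsearch check, so an unreachable database also fails the healthcheck.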
<code>
[start of api/catalog/api/views/health_views.py]
1 from django.conf import settings
2 from rest_framework import status
3 from rest_framework.exceptions import APIException
4 from rest_framework.request import Request
5 from rest_framework.response import Response
6 from rest_framework.views import APIView
7
8
9 class ElasticsearchHealthcheckException(APIException):
10 status_code = status.HTTP_503_SERVICE_UNAVAILABLE
11
12
13 class HealthCheck(APIView):
14 """
15 Return a "200 OK" response if the server is running normally, 503 otherwise.
16
17 This endpoint is used in production to ensure that the server should receive
18 traffic. If no response is provided, the server is deregistered from the
19 load balancer and destroyed.
20 """
21
22 swagger_schema = None
23
24 def _check_es(self) -> Response | None:
25 """Check ES cluster health and raise an exception if ES is not healthy."""
26
27 es_health = settings.ES.cluster.health(timeout="5s")
28
29 if es_health["timed_out"]:
30 raise ElasticsearchHealthcheckException("es_timed_out")
31
32 if (status := es_health["status"]) != "green":
33 raise ElasticsearchHealthcheckException(f"es_status_{status}")
34
35 def get(self, request: Request):
36 if "check_es" in request.query_params:
37 self._check_es()
38
39 return Response({"status": "200 OK"}, status=200)
40
[end of api/catalog/api/views/health_views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api/catalog/api/views/health_views.py b/api/catalog/api/views/health_views.py
--- a/api/catalog/api/views/health_views.py
+++ b/api/catalog/api/views/health_views.py
@@ -1,4 +1,5 @@
from django.conf import settings
+from django.db import connection
from rest_framework import status
from rest_framework.exceptions import APIException
from rest_framework.request import Request
@@ -21,19 +22,33 @@
swagger_schema = None
- def _check_es(self) -> Response | None:
- """Check ES cluster health and raise an exception if ES is not healthy."""
+ @staticmethod
+ def _check_db() -> None:
+ """
+ Check that the database is available.
+ Returns nothing if everything is OK, throws error otherwise.
+ """
+ connection.ensure_connection()
+
+ @staticmethod
+ def _check_es() -> None:
+ """
+ Check Elasticsearch cluster health.
+
+ Raises an exception if ES is not healthy.
+ """
es_health = settings.ES.cluster.health(timeout="5s")
if es_health["timed_out"]:
raise ElasticsearchHealthcheckException("es_timed_out")
- if (status := es_health["status"]) != "green":
- raise ElasticsearchHealthcheckException(f"es_status_{status}")
+ if (es_status := es_health["status"]) != "green":
+ raise ElasticsearchHealthcheckException(f"es_status_{es_status}")
def get(self, request: Request):
if "check_es" in request.query_params:
self._check_es()
+ self._check_db()
return Response({"status": "200 OK"}, status=200)
| {"golden_diff": "diff --git a/api/catalog/api/views/health_views.py b/api/catalog/api/views/health_views.py\n--- a/api/catalog/api/views/health_views.py\n+++ b/api/catalog/api/views/health_views.py\n@@ -1,4 +1,5 @@\n from django.conf import settings\n+from django.db import connection\n from rest_framework import status\n from rest_framework.exceptions import APIException\n from rest_framework.request import Request\n@@ -21,19 +22,33 @@\n \n swagger_schema = None\n \n- def _check_es(self) -> Response | None:\n- \"\"\"Check ES cluster health and raise an exception if ES is not healthy.\"\"\"\n+ @staticmethod\n+ def _check_db() -> None:\n+ \"\"\"\n+ Check that the database is available.\n \n+ Returns nothing if everything is OK, throws error otherwise.\n+ \"\"\"\n+ connection.ensure_connection()\n+\n+ @staticmethod\n+ def _check_es() -> None:\n+ \"\"\"\n+ Check Elasticsearch cluster health.\n+\n+ Raises an exception if ES is not healthy.\n+ \"\"\"\n es_health = settings.ES.cluster.health(timeout=\"5s\")\n \n if es_health[\"timed_out\"]:\n raise ElasticsearchHealthcheckException(\"es_timed_out\")\n \n- if (status := es_health[\"status\"]) != \"green\":\n- raise ElasticsearchHealthcheckException(f\"es_status_{status}\")\n+ if (es_status := es_health[\"status\"]) != \"green\":\n+ raise ElasticsearchHealthcheckException(f\"es_status_{es_status}\")\n \n def get(self, request: Request):\n if \"check_es\" in request.query_params:\n self._check_es()\n+ self._check_db()\n \n return Response({\"status\": \"200 OK\"}, status=200)\n", "issue": "Add database connectivity to healthcheck endpoint\n## Problem\r\n\r\n<!-- Describe a problem solved by this feature; or delete the section entirely. -->\r\nThe healtcheck endpoint should check that the database is accessible. If the db is inaccessible, the service is definitively not healthy.\r\n\r\n## Description\r\n\r\n<!-- Describe the feature and how it solves the problem. -->\r\nAdd another check (in addition to the ES check) for the database connectivity. Calling `django.db.connection.ensure_connection()` should be sufficient. It raises an error when the database connection is unavailable.\r\n\r\n## Alternatives\r\n\r\n<!-- Describe any alternative solutions or features you have considered. How is this feature better? -->\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the feature here; or delete the section entirely. -->\r\n\r\n<!-- If you would like to work on this, please comment below separately. -->\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom rest_framework import status\nfrom rest_framework.exceptions import APIException\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\n\nclass ElasticsearchHealthcheckException(APIException):\n status_code = status.HTTP_503_SERVICE_UNAVAILABLE\n\n\nclass HealthCheck(APIView):\n \"\"\"\n Return a \"200 OK\" response if the server is running normally, 503 otherwise.\n\n This endpoint is used in production to ensure that the server should receive\n traffic. 
If no response is provided, the server is deregistered from the\n load balancer and destroyed.\n \"\"\"\n\n swagger_schema = None\n\n def _check_es(self) -> Response | None:\n \"\"\"Check ES cluster health and raise an exception if ES is not healthy.\"\"\"\n\n es_health = settings.ES.cluster.health(timeout=\"5s\")\n\n if es_health[\"timed_out\"]:\n raise ElasticsearchHealthcheckException(\"es_timed_out\")\n\n if (status := es_health[\"status\"]) != \"green\":\n raise ElasticsearchHealthcheckException(f\"es_status_{status}\")\n\n def get(self, request: Request):\n if \"check_es\" in request.query_params:\n self._check_es()\n\n return Response({\"status\": \"200 OK\"}, status=200)\n", "path": "api/catalog/api/views/health_views.py"}]} | 1,070 | 380 |
gh_patches_debug_14995 | rasdani/github-patches | git_diff | getsentry__sentry-python-702 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problem with typing
The following error occurs with Sentry for Python (using Python 3.8):
```
[ERROR] AttributeError: type object 'Callable' has no attribute '_abc_registry'
Traceback (most recent call last):
File "/var/task/handler.py", line 609, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 240, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 134, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/var/lang/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/var/task/src/main.py", line 6, in <module>
import sentry_sdk
File "/var/task/sentry_sdk/__init__.py", line 1, in <module>
from sentry_sdk.hub import Hub, init
File "/var/task/sentry_sdk/hub.py", line 8, in <module>
from sentry_sdk._compat import with_metaclass
File "/var/task/sentry_sdk/_compat.py", line 3, in <module>
from sentry_sdk._types import MYPY
File "/var/task/sentry_sdk/_types.py", line 2, in <module>
from typing import TYPE_CHECKING as MYPY
File "/var/task/typing.py", line 1357, in <module>
class Callable(extra=collections_abc.Callable, metaclass=CallableMeta):
File "/var/task/typing.py", line 1005, in __new__
self._abc_registry = extra._abc_registry
```
Any hint?
</issue>
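A detail worth noting in the traceback: `typing` is being imported from `/var/task/typing.py`, i.e. a copy of the PyPI `typing` backport bundled into the deployment package rather than the Python 3.8 standard library module. That backport is a known cause of exactly this `'Callable' has no attribute '_abc_registry'` error on modern interpreters. A hypothetical quick check (not part of the report above):

```python
# Hypothetical diagnostic: confirm which typing module is actually loaded.
# If this prints something like /var/task/typing.py instead of the stdlib
# path, removing the 'typing' backport from the bundle (pip uninstall typing,
# or excluding it from requirements) is the usual remedy.
import typing

print(typing.__file__)
```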
<code>
[start of sentry_sdk/api.py]
1 import inspect
2 from contextlib import contextmanager
3
4 from sentry_sdk.hub import Hub
5 from sentry_sdk.scope import Scope
6
7 from sentry_sdk._types import MYPY
8
9 if MYPY:
10 from typing import Any
11 from typing import Dict
12 from typing import Optional
13 from typing import overload
14 from typing import Callable
15 from typing import TypeVar
16 from typing import ContextManager
17
18 from sentry_sdk._types import Event, Hint, Breadcrumb, BreadcrumbHint
19 from sentry_sdk.tracing import Span
20
21 T = TypeVar("T")
22 F = TypeVar("F", bound=Callable[..., Any])
23 else:
24
25 def overload(x):
26 # type: (T) -> T
27 return x
28
29
30 __all__ = [
31 "capture_event",
32 "capture_message",
33 "capture_exception",
34 "add_breadcrumb",
35 "configure_scope",
36 "push_scope",
37 "flush",
38 "last_event_id",
39 "start_span",
40 "set_tag",
41 "set_context",
42 "set_extra",
43 "set_user",
44 "set_level",
45 ]
46
47
48 def hubmethod(f):
49 # type: (F) -> F
50 f.__doc__ = "%s\n\n%s" % (
51 "Alias for :py:meth:`sentry_sdk.Hub.%s`" % f.__name__,
52 inspect.getdoc(getattr(Hub, f.__name__)),
53 )
54 return f
55
56
57 def scopemethod(f):
58 # type: (F) -> F
59 f.__doc__ = "%s\n\n%s" % (
60 "Alias for :py:meth:`sentry_sdk.Scope.%s`" % f.__name__,
61 inspect.getdoc(getattr(Scope, f.__name__)),
62 )
63 return f
64
65
66 @hubmethod
67 def capture_event(
68 event, # type: Event
69 hint=None, # type: Optional[Hint]
70 scope=None, # type: Optional[Any]
71 **scope_args # type: Dict[str, Any]
72 ):
73 # type: (...) -> Optional[str]
74 hub = Hub.current
75 if hub is not None:
76 return hub.capture_event(event, hint, scope=scope, **scope_args)
77 return None
78
79
80 @hubmethod
81 def capture_message(
82 message, # type: str
83 level=None, # type: Optional[str]
84 scope=None, # type: Optional[Any]
85 **scope_args # type: Dict[str, Any]
86 ):
87 # type: (...) -> Optional[str]
88 hub = Hub.current
89 if hub is not None:
90 return hub.capture_message(message, level, scope=scope, **scope_args)
91 return None
92
93
94 @hubmethod
95 def capture_exception(
96 error=None, # type: Optional[BaseException]
97 scope=None, # type: Optional[Any]
98 **scope_args # type: Dict[str, Any]
99 ):
100 # type: (...) -> Optional[str]
101 hub = Hub.current
102 if hub is not None:
103 return hub.capture_exception(error, scope=scope, **scope_args)
104 return None
105
106
107 @hubmethod
108 def add_breadcrumb(
109 crumb=None, # type: Optional[Breadcrumb]
110 hint=None, # type: Optional[BreadcrumbHint]
111 **kwargs # type: Any
112 ):
113 # type: (...) -> None
114 hub = Hub.current
115 if hub is not None:
116 return hub.add_breadcrumb(crumb, hint, **kwargs)
117
118
119 @overload # noqa
120 def configure_scope():
121 # type: () -> ContextManager[Scope]
122 pass
123
124
125 @overload # noqa
126 def configure_scope(
127 callback, # type: Callable[[Scope], None]
128 ):
129 # type: (...) -> None
130 pass
131
132
133 @hubmethod # noqa
134 def configure_scope(
135 callback=None, # type: Optional[Callable[[Scope], None]]
136 ):
137 # type: (...) -> Optional[ContextManager[Scope]]
138 hub = Hub.current
139 if hub is not None:
140 return hub.configure_scope(callback)
141 elif callback is None:
142
143 @contextmanager
144 def inner():
145 yield Scope()
146
147 return inner()
148 else:
149 # returned if user provided callback
150 return None
151
152
153 @overload # noqa
154 def push_scope():
155 # type: () -> ContextManager[Scope]
156 pass
157
158
159 @overload # noqa
160 def push_scope(
161 callback, # type: Callable[[Scope], None]
162 ):
163 # type: (...) -> None
164 pass
165
166
167 @hubmethod # noqa
168 def push_scope(
169 callback=None, # type: Optional[Callable[[Scope], None]]
170 ):
171 # type: (...) -> Optional[ContextManager[Scope]]
172 hub = Hub.current
173 if hub is not None:
174 return hub.push_scope(callback)
175 elif callback is None:
176
177 @contextmanager
178 def inner():
179 yield Scope()
180
181 return inner()
182 else:
183 # returned if user provided callback
184 return None
185
186
187 @scopemethod # noqa
188 def set_tag(key, value):
189 # type: (str, Any) -> None
190 hub = Hub.current
191 if hub is not None:
192 hub.scope.set_tag(key, value)
193
194
195 @scopemethod # noqa
196 def set_context(key, value):
197 # type: (str, Any) -> None
198 hub = Hub.current
199 if hub is not None:
200 hub.scope.set_context(key, value)
201
202
203 @scopemethod # noqa
204 def set_extra(key, value):
205 # type: (str, Any) -> None
206 hub = Hub.current
207 if hub is not None:
208 hub.scope.set_extra(key, value)
209
210
211 @scopemethod # noqa
212 def set_user(value):
213 # type: (Dict[str, Any]) -> None
214 hub = Hub.current
215 if hub is not None:
216 hub.scope.set_user(value)
217
218
219 @scopemethod # noqa
220 def set_level(value):
221 # type: (str) -> None
222 hub = Hub.current
223 if hub is not None:
224 hub.scope.set_level(value)
225
226
227 @hubmethod
228 def flush(
229 timeout=None, # type: Optional[float]
230 callback=None, # type: Optional[Callable[[int, float], None]]
231 ):
232 # type: (...) -> None
233 hub = Hub.current
234 if hub is not None:
235 return hub.flush(timeout=timeout, callback=callback)
236
237
238 @hubmethod
239 def last_event_id():
240 # type: () -> Optional[str]
241 hub = Hub.current
242 if hub is not None:
243 return hub.last_event_id()
244 return None
245
246
247 @hubmethod
248 def start_span(
249 span=None, # type: Optional[Span]
250 **kwargs # type: Any
251 ):
252 # type: (...) -> Span
253
254 # TODO: All other functions in this module check for
255 # `Hub.current is None`. That actually should never happen?
256 return Hub.current.start_span(span=span, **kwargs)
257
[end of sentry_sdk/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/api.py b/sentry_sdk/api.py
--- a/sentry_sdk/api.py
+++ b/sentry_sdk/api.py
@@ -14,8 +14,9 @@
from typing import Callable
from typing import TypeVar
from typing import ContextManager
+ from typing import Union
- from sentry_sdk._types import Event, Hint, Breadcrumb, BreadcrumbHint
+ from sentry_sdk._types import Event, Hint, Breadcrumb, BreadcrumbHint, ExcInfo
from sentry_sdk.tracing import Span
T = TypeVar("T")
@@ -93,7 +94,7 @@
@hubmethod
def capture_exception(
- error=None, # type: Optional[BaseException]
+ error=None, # type: Optional[Union[BaseException, ExcInfo]]
scope=None, # type: Optional[Any]
**scope_args # type: Dict[str, Any]
):
| {"golden_diff": "diff --git a/sentry_sdk/api.py b/sentry_sdk/api.py\n--- a/sentry_sdk/api.py\n+++ b/sentry_sdk/api.py\n@@ -14,8 +14,9 @@\n from typing import Callable\n from typing import TypeVar\n from typing import ContextManager\n+ from typing import Union\n \n- from sentry_sdk._types import Event, Hint, Breadcrumb, BreadcrumbHint\n+ from sentry_sdk._types import Event, Hint, Breadcrumb, BreadcrumbHint, ExcInfo\n from sentry_sdk.tracing import Span\n \n T = TypeVar(\"T\")\n@@ -93,7 +94,7 @@\n \n @hubmethod\n def capture_exception(\n- error=None, # type: Optional[BaseException]\n+ error=None, # type: Optional[Union[BaseException, ExcInfo]]\n scope=None, # type: Optional[Any]\n **scope_args # type: Dict[str, Any]\n ):\n", "issue": "Problem with typing\nSome error is occurring with sentry for python (using python 3.8)\r\n\r\n```\r\n[ERROR] AttributeError: type object 'Callable' has no attribute '_abc_registry'\r\nTraceback (most recent call last):\r\n File \"/var/task/handler.py\", line 609, in lambda_handler\r\n return LambdaHandler.lambda_handler(event, context)\r\n File \"/var/task/handler.py\", line 240, in lambda_handler\r\n handler = cls()\r\n File \"/var/task/handler.py\", line 134, in __init__\r\n self.app_module = importlib.import_module(self.settings.APP_MODULE)\r\n File \"/var/lang/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/var/task/src/main.py\", line 6, in <module>\r\n import sentry_sdk\r\n File \"/var/task/sentry_sdk/__init__.py\", line 1, in <module>\r\n from sentry_sdk.hub import Hub, init\r\n File \"/var/task/sentry_sdk/hub.py\", line 8, in <module>\r\n from sentry_sdk._compat import with_metaclass\r\n File \"/var/task/sentry_sdk/_compat.py\", line 3, in <module>\r\n from sentry_sdk._types import MYPY\r\n File \"/var/task/sentry_sdk/_types.py\", line 2, in <module>\r\n from typing import TYPE_CHECKING as MYPY\r\n File \"/var/task/typing.py\", line 1357, in <module>\r\n class Callable(extra=collections_abc.Callable, metaclass=CallableMeta):\r\n File \"/var/task/typing.py\", line 1005, in __new__\r\n self._abc_registry = extra._abc_registry\r\n\r\n```\r\nAny hint?\n", "before_files": [{"content": "import inspect\nfrom contextlib import contextmanager\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.scope import Scope\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Dict\n from typing import Optional\n from typing import overload\n from typing import Callable\n from typing import TypeVar\n from typing import ContextManager\n\n from sentry_sdk._types import Event, Hint, Breadcrumb, BreadcrumbHint\n from sentry_sdk.tracing import Span\n\n T = TypeVar(\"T\")\n F = TypeVar(\"F\", bound=Callable[..., Any])\nelse:\n\n def overload(x):\n # type: (T) -> T\n return x\n\n\n__all__ = [\n \"capture_event\",\n \"capture_message\",\n \"capture_exception\",\n \"add_breadcrumb\",\n \"configure_scope\",\n \"push_scope\",\n \"flush\",\n \"last_event_id\",\n \"start_span\",\n \"set_tag\",\n 
\"set_context\",\n \"set_extra\",\n \"set_user\",\n \"set_level\",\n]\n\n\ndef hubmethod(f):\n # type: (F) -> F\n f.__doc__ = \"%s\\n\\n%s\" % (\n \"Alias for :py:meth:`sentry_sdk.Hub.%s`\" % f.__name__,\n inspect.getdoc(getattr(Hub, f.__name__)),\n )\n return f\n\n\ndef scopemethod(f):\n # type: (F) -> F\n f.__doc__ = \"%s\\n\\n%s\" % (\n \"Alias for :py:meth:`sentry_sdk.Scope.%s`\" % f.__name__,\n inspect.getdoc(getattr(Scope, f.__name__)),\n )\n return f\n\n\n@hubmethod\ndef capture_event(\n event, # type: Event\n hint=None, # type: Optional[Hint]\n scope=None, # type: Optional[Any]\n **scope_args # type: Dict[str, Any]\n):\n # type: (...) -> Optional[str]\n hub = Hub.current\n if hub is not None:\n return hub.capture_event(event, hint, scope=scope, **scope_args)\n return None\n\n\n@hubmethod\ndef capture_message(\n message, # type: str\n level=None, # type: Optional[str]\n scope=None, # type: Optional[Any]\n **scope_args # type: Dict[str, Any]\n):\n # type: (...) -> Optional[str]\n hub = Hub.current\n if hub is not None:\n return hub.capture_message(message, level, scope=scope, **scope_args)\n return None\n\n\n@hubmethod\ndef capture_exception(\n error=None, # type: Optional[BaseException]\n scope=None, # type: Optional[Any]\n **scope_args # type: Dict[str, Any]\n):\n # type: (...) -> Optional[str]\n hub = Hub.current\n if hub is not None:\n return hub.capture_exception(error, scope=scope, **scope_args)\n return None\n\n\n@hubmethod\ndef add_breadcrumb(\n crumb=None, # type: Optional[Breadcrumb]\n hint=None, # type: Optional[BreadcrumbHint]\n **kwargs # type: Any\n):\n # type: (...) -> None\n hub = Hub.current\n if hub is not None:\n return hub.add_breadcrumb(crumb, hint, **kwargs)\n\n\n@overload # noqa\ndef configure_scope():\n # type: () -> ContextManager[Scope]\n pass\n\n\n@overload # noqa\ndef configure_scope(\n callback, # type: Callable[[Scope], None]\n):\n # type: (...) -> None\n pass\n\n\n@hubmethod # noqa\ndef configure_scope(\n callback=None, # type: Optional[Callable[[Scope], None]]\n):\n # type: (...) -> Optional[ContextManager[Scope]]\n hub = Hub.current\n if hub is not None:\n return hub.configure_scope(callback)\n elif callback is None:\n\n @contextmanager\n def inner():\n yield Scope()\n\n return inner()\n else:\n # returned if user provided callback\n return None\n\n\n@overload # noqa\ndef push_scope():\n # type: () -> ContextManager[Scope]\n pass\n\n\n@overload # noqa\ndef push_scope(\n callback, # type: Callable[[Scope], None]\n):\n # type: (...) -> None\n pass\n\n\n@hubmethod # noqa\ndef push_scope(\n callback=None, # type: Optional[Callable[[Scope], None]]\n):\n # type: (...) 
-> Optional[ContextManager[Scope]]\n hub = Hub.current\n if hub is not None:\n return hub.push_scope(callback)\n elif callback is None:\n\n @contextmanager\n def inner():\n yield Scope()\n\n return inner()\n else:\n # returned if user provided callback\n return None\n\n\n@scopemethod # noqa\ndef set_tag(key, value):\n # type: (str, Any) -> None\n hub = Hub.current\n if hub is not None:\n hub.scope.set_tag(key, value)\n\n\n@scopemethod # noqa\ndef set_context(key, value):\n # type: (str, Any) -> None\n hub = Hub.current\n if hub is not None:\n hub.scope.set_context(key, value)\n\n\n@scopemethod # noqa\ndef set_extra(key, value):\n # type: (str, Any) -> None\n hub = Hub.current\n if hub is not None:\n hub.scope.set_extra(key, value)\n\n\n@scopemethod # noqa\ndef set_user(value):\n # type: (Dict[str, Any]) -> None\n hub = Hub.current\n if hub is not None:\n hub.scope.set_user(value)\n\n\n@scopemethod # noqa\ndef set_level(value):\n # type: (str) -> None\n hub = Hub.current\n if hub is not None:\n hub.scope.set_level(value)\n\n\n@hubmethod\ndef flush(\n timeout=None, # type: Optional[float]\n callback=None, # type: Optional[Callable[[int, float], None]]\n):\n # type: (...) -> None\n hub = Hub.current\n if hub is not None:\n return hub.flush(timeout=timeout, callback=callback)\n\n\n@hubmethod\ndef last_event_id():\n # type: () -> Optional[str]\n hub = Hub.current\n if hub is not None:\n return hub.last_event_id()\n return None\n\n\n@hubmethod\ndef start_span(\n span=None, # type: Optional[Span]\n **kwargs # type: Any\n):\n # type: (...) -> Span\n\n # TODO: All other functions in this module check for\n # `Hub.current is None`. That actually should never happen?\n return Hub.current.start_span(span=span, **kwargs)\n", "path": "sentry_sdk/api.py"}]} | 3,276 | 211 |
gh_patches_debug_40029 | rasdani/github-patches | git_diff | watchdogpolska__small_eod-919 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incomplete list of API endpoints at /api
At `/api` (e.g. https://dev.small-eod.siecobywatelska.pl/api/ ) we do not have a complete list of API endpoints. The complete list is available through ReDoc, e.g. at https://dev.small-eod.siecobywatelska.pl/api/redoc/ .
We should fix this, because it risks giving a misleading impression of the scope of the API.
</issue>
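The gap comes from how Django REST Framework builds the `/api/` root page: `DefaultRouter` only lists viewsets registered on the router itself, so anything wired up via `path("api/", include(...))` never appears there, even though ReDoc (which reads the generated schema) shows it. The golden diff at the end of this record solves this with a router subclass; below is a simplified sketch of the core override (class and attribute names here are illustrative, not the ones used in the diff):

```python
# Simplified sketch of a DefaultRouter whose API root view also lists
# endpoints that were added through include() rather than router.register().
from rest_framework import routers


class RootListingRouter(routers.DefaultRouter):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.extra_root_entries = {}  # label -> URL name to show on the root

    def get_api_root_view(self, api_urls=None):
        list_name = self.routes[0].name
        api_root_dict = {
            prefix: list_name.format(basename=basename)
            for prefix, viewset, basename in self.registry
        }
        api_root_dict.update(self.extra_root_entries)
        return self.APIRootView.as_view(api_root_dict=api_root_dict)
```

The full version in the diff also walks each included URL module to collect view names automatically instead of filling the extra entries by hand.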
<code>
[start of backend-project/config/urls.py]
1 """small_eod URL Configuration
2
3 The `urlpatterns` list routes URLs to views. For more information please see:
4 https://docs.djangoproject.com/en/3.0/topics/http/urls/
5 Examples:
6 Function views
7 1. Add an import: from my_app import views
8 2. Add a URL to urlpatterns: path('', views.home, name='home')
9 Class-based views
10 1. Add an import: from other_app.views import Home
11 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
12 Including another URLconf
13 1. Import the include() function: from django.urls import include, path
14 2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
15 """
16 from django.conf import settings
17 from django.conf.urls.static import static
18 from django.contrib import admin
19 from django.urls import include, path, re_path
20 from drf_yasg2.views import get_schema_view
21 from rest_framework import permissions, routers
22
23 from small_eod.channels.views import ChannelViewSet
24 from small_eod.events.views import EventViewSet
25 from small_eod.institutions.views import InstitutionViewSet
26 from small_eod.notes.views import NoteViewSet
27 from small_eod.tags.views import TagViewSet
28 from small_eod.users.views import UserViewSet
29
30 from .swagger import info
31
32 router = routers.DefaultRouter()
33 router.register(r"channels", ChannelViewSet)
34 router.register(r"events", EventViewSet)
35 router.register(r"institutions", InstitutionViewSet)
36 router.register(r"notes", NoteViewSet)
37 router.register(r"tags", TagViewSet)
38 router.register(r"users", UserViewSet)
39
40 schema_view = get_schema_view(
41 info,
42 # validators=['flex', 'ssv'],
43 public=True,
44 permission_classes=(permissions.AllowAny,),
45 )
46
47 urlpatterns = [
48 path("admin/", admin.site.urls),
49 path("api/", include("small_eod.collections.urls")),
50 path("api/", include("small_eod.cases.urls")),
51 path("api/", include("small_eod.letters.urls")),
52 path("api/", include("small_eod.features.urls")),
53 path("api/", include("small_eod.administrative_units.urls")),
54 path("api/", include("small_eod.autocomplete.urls")),
55 path("api/docs/", schema_view.with_ui("swagger"), name="api_docs"),
56 path("api/redoc/", schema_view.with_ui("redoc"), name="api_redocs"),
57 re_path(
58 "^api/swagger(?P<format>.json|.yaml)$",
59 schema_view.without_ui(),
60 name="schema_swagger",
61 ),
62 path("api/", include(router.urls)),
63 ]
64
65
66 if settings.DEBUG:
67 import debug_toolbar
68
69 urlpatterns += [
70 path("__debug__/", include(debug_toolbar.urls)),
71 ]
72
73 urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
74 urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_URL)
75
[end of backend-project/config/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend-project/config/urls.py b/backend-project/config/urls.py
--- a/backend-project/config/urls.py
+++ b/backend-project/config/urls.py
@@ -13,6 +13,9 @@
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
+
+import re
+
from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
@@ -29,13 +32,56 @@
from .swagger import info
-router = routers.DefaultRouter()
+
+class BetterDefaultRouter(routers.DefaultRouter):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.include_urls = []
+ self.api_root_dict = {}
+
+ def get_urls(self):
+ urls = super().get_urls()
+ urls.extend(self.include_urls)
+ return urls
+
+ def include(self, module):
+ urlpatterns = getattr(include(module)[0], "urlpatterns")
+ viewnames = set()
+ for urlpattern in urlpatterns:
+ self.include_urls.append(urlpattern)
+ if hasattr(urlpattern, "url_patterns"):
+ viewnames.update([pattern.name for pattern in urlpattern.url_patterns])
+ elif hasattr(urlpattern, "name"):
+ viewnames.add(urlpattern.name)
+ self.api_root_dict.update(
+ {re.sub(r"-list$", "", viewname): viewname for viewname in viewnames}
+ )
+
+ def get_api_root_view(self, api_urls=None):
+ api_root_dict = {}
+ list_name = self.routes[0].name
+
+ for prefix, viewset, basename in self.registry:
+ api_root_dict[prefix] = list_name.format(basename=basename)
+ api_root_dict.update(self.api_root_dict)
+
+ return self.APIRootView.as_view(api_root_dict=api_root_dict)
+
+
+router = BetterDefaultRouter()
+
router.register(r"channels", ChannelViewSet)
router.register(r"events", EventViewSet)
router.register(r"institutions", InstitutionViewSet)
router.register(r"notes", NoteViewSet)
router.register(r"tags", TagViewSet)
router.register(r"users", UserViewSet)
+router.include("small_eod.cases.urls")
+router.include("small_eod.features.urls")
+router.include("small_eod.collections.urls")
+router.include("small_eod.letters.urls")
+router.include("small_eod.administrative_units.urls")
+router.include("small_eod.autocomplete.urls")
schema_view = get_schema_view(
info,
@@ -46,12 +92,6 @@
urlpatterns = [
path("admin/", admin.site.urls),
- path("api/", include("small_eod.collections.urls")),
- path("api/", include("small_eod.cases.urls")),
- path("api/", include("small_eod.letters.urls")),
- path("api/", include("small_eod.features.urls")),
- path("api/", include("small_eod.administrative_units.urls")),
- path("api/", include("small_eod.autocomplete.urls")),
path("api/docs/", schema_view.with_ui("swagger"), name="api_docs"),
path("api/redoc/", schema_view.with_ui("redoc"), name="api_redocs"),
re_path(
@@ -62,7 +102,6 @@
path("api/", include(router.urls)),
]
-
if settings.DEBUG:
import debug_toolbar
| {"golden_diff": "diff --git a/backend-project/config/urls.py b/backend-project/config/urls.py\n--- a/backend-project/config/urls.py\n+++ b/backend-project/config/urls.py\n@@ -13,6 +13,9 @@\n 1. Import the include() function: from django.urls import include, path\n 2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))\n \"\"\"\n+\n+import re\n+\n from django.conf import settings\n from django.conf.urls.static import static\n from django.contrib import admin\n@@ -29,13 +32,56 @@\n \n from .swagger import info\n \n-router = routers.DefaultRouter()\n+\n+class BetterDefaultRouter(routers.DefaultRouter):\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.include_urls = []\n+ self.api_root_dict = {}\n+\n+ def get_urls(self):\n+ urls = super().get_urls()\n+ urls.extend(self.include_urls)\n+ return urls\n+\n+ def include(self, module):\n+ urlpatterns = getattr(include(module)[0], \"urlpatterns\")\n+ viewnames = set()\n+ for urlpattern in urlpatterns:\n+ self.include_urls.append(urlpattern)\n+ if hasattr(urlpattern, \"url_patterns\"):\n+ viewnames.update([pattern.name for pattern in urlpattern.url_patterns])\n+ elif hasattr(urlpattern, \"name\"):\n+ viewnames.add(urlpattern.name)\n+ self.api_root_dict.update(\n+ {re.sub(r\"-list$\", \"\", viewname): viewname for viewname in viewnames}\n+ )\n+\n+ def get_api_root_view(self, api_urls=None):\n+ api_root_dict = {}\n+ list_name = self.routes[0].name\n+\n+ for prefix, viewset, basename in self.registry:\n+ api_root_dict[prefix] = list_name.format(basename=basename)\n+ api_root_dict.update(self.api_root_dict)\n+\n+ return self.APIRootView.as_view(api_root_dict=api_root_dict)\n+\n+\n+router = BetterDefaultRouter()\n+\n router.register(r\"channels\", ChannelViewSet)\n router.register(r\"events\", EventViewSet)\n router.register(r\"institutions\", InstitutionViewSet)\n router.register(r\"notes\", NoteViewSet)\n router.register(r\"tags\", TagViewSet)\n router.register(r\"users\", UserViewSet)\n+router.include(\"small_eod.cases.urls\")\n+router.include(\"small_eod.features.urls\")\n+router.include(\"small_eod.collections.urls\")\n+router.include(\"small_eod.letters.urls\")\n+router.include(\"small_eod.administrative_units.urls\")\n+router.include(\"small_eod.autocomplete.urls\")\n \n schema_view = get_schema_view(\n info,\n@@ -46,12 +92,6 @@\n \n urlpatterns = [\n path(\"admin/\", admin.site.urls),\n- path(\"api/\", include(\"small_eod.collections.urls\")),\n- path(\"api/\", include(\"small_eod.cases.urls\")),\n- path(\"api/\", include(\"small_eod.letters.urls\")),\n- path(\"api/\", include(\"small_eod.features.urls\")),\n- path(\"api/\", include(\"small_eod.administrative_units.urls\")),\n- path(\"api/\", include(\"small_eod.autocomplete.urls\")),\n path(\"api/docs/\", schema_view.with_ui(\"swagger\"), name=\"api_docs\"),\n path(\"api/redoc/\", schema_view.with_ui(\"redoc\"), name=\"api_redocs\"),\n re_path(\n@@ -62,7 +102,6 @@\n path(\"api/\", include(router.urls)),\n ]\n \n-\n if settings.DEBUG:\n import debug_toolbar\n", "issue": "Niekompletny wykaz endpoint\u00f3w API w /api\nNa `/api` (np. https://dev.small-eod.siecobywatelska.pl/api/ ) nie mamy kompletnego wykazu endpoint\u00f3w API. Kompletny jest dost\u0119pny przez ReDoc np. na https://dev.small-eod.siecobywatelska.pl/api/redoc/ .\r\n\r\nPowinni\u015bmy to naprawi\u0107, bo wprowadza ryzyko mylnego wra\u017cenia co do zakresu API.\n", "before_files": [{"content": "\"\"\"small_eod URL Configuration\n\nThe `urlpatterns` list routes URLs to views. 
For more information please see:\n https://docs.djangoproject.com/en/3.0/topics/http/urls/\nExamples:\nFunction views\n 1. Add an import: from my_app import views\n 2. Add a URL to urlpatterns: path('', views.home, name='home')\nClass-based views\n 1. Add an import: from other_app.views import Home\n 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')\nIncluding another URLconf\n 1. Import the include() function: from django.urls import include, path\n 2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))\n\"\"\"\nfrom django.conf import settings\nfrom django.conf.urls.static import static\nfrom django.contrib import admin\nfrom django.urls import include, path, re_path\nfrom drf_yasg2.views import get_schema_view\nfrom rest_framework import permissions, routers\n\nfrom small_eod.channels.views import ChannelViewSet\nfrom small_eod.events.views import EventViewSet\nfrom small_eod.institutions.views import InstitutionViewSet\nfrom small_eod.notes.views import NoteViewSet\nfrom small_eod.tags.views import TagViewSet\nfrom small_eod.users.views import UserViewSet\n\nfrom .swagger import info\n\nrouter = routers.DefaultRouter()\nrouter.register(r\"channels\", ChannelViewSet)\nrouter.register(r\"events\", EventViewSet)\nrouter.register(r\"institutions\", InstitutionViewSet)\nrouter.register(r\"notes\", NoteViewSet)\nrouter.register(r\"tags\", TagViewSet)\nrouter.register(r\"users\", UserViewSet)\n\nschema_view = get_schema_view(\n info,\n # validators=['flex', 'ssv'],\n public=True,\n permission_classes=(permissions.AllowAny,),\n)\n\nurlpatterns = [\n path(\"admin/\", admin.site.urls),\n path(\"api/\", include(\"small_eod.collections.urls\")),\n path(\"api/\", include(\"small_eod.cases.urls\")),\n path(\"api/\", include(\"small_eod.letters.urls\")),\n path(\"api/\", include(\"small_eod.features.urls\")),\n path(\"api/\", include(\"small_eod.administrative_units.urls\")),\n path(\"api/\", include(\"small_eod.autocomplete.urls\")),\n path(\"api/docs/\", schema_view.with_ui(\"swagger\"), name=\"api_docs\"),\n path(\"api/redoc/\", schema_view.with_ui(\"redoc\"), name=\"api_redocs\"),\n re_path(\n \"^api/swagger(?P<format>.json|.yaml)$\",\n schema_view.without_ui(),\n name=\"schema_swagger\",\n ),\n path(\"api/\", include(router.urls)),\n]\n\n\nif settings.DEBUG:\n import debug_toolbar\n\n urlpatterns += [\n path(\"__debug__/\", include(debug_toolbar.urls)),\n ]\n\n urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_URL)\n", "path": "backend-project/config/urls.py"}]} | 1,409 | 773 |
gh_patches_debug_12882 | rasdani/github-patches | git_diff | open-mmlab__mmpretrain-147 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KeyError: 'LinearHead is not in the head registry'
use config
```python
model = dict(
head=dict(
type='LinearHead',
num_classes=1000,
in_channels=2048,
loss=dict(
type='LabelSmoothLoss',
loss_weight=1.0,
label_smooth_val=0.1,
num_classes=1000),
))
```
got the following traceback
```python
Traceback (most recent call last):
File "/home/code/open_mmlab_codebase/huatian_bump_blur_cls/tools/train.py", line 177, in <module>
main()
File "/home/code/open_mmlab_codebase/huatian_bump_blur_cls/tools/train.py", line 151, in main
model = build_classifier(cfg.model)
File "/home/code/open_mmlab_codebase/mmclassification/mmcls/models/builder.py", line 38, in build_classifier
return build(cfg, CLASSIFIERS)
File "/home/code/open_mmlab_codebase/mmclassification/mmcls/models/builder.py", line 18, in build
return build_from_cfg(cfg, registry, default_args)
File "/opt/conda/lib/python3.7/site-packages/mmcv/utils/registry.py", line 171, in build_from_cfg
return obj_cls(**args)
File "/home/code/open_mmlab_codebase/mmclassification/mmcls/models/classifiers/image.py", line 18, in __init__
self.head = build_head(head)
File "/home/code/open_mmlab_codebase/mmclassification/mmcls/models/builder.py", line 26, in build_head
return build(cfg, HEADS)
File "/home/code/open_mmlab_codebase/mmclassification/mmcls/models/builder.py", line 18, in build
return build_from_cfg(cfg, registry, default_args)
File "/opt/conda/lib/python3.7/site-packages/mmcv/utils/registry.py", line 164, in build_from_cfg
f'{obj_type} is not in the {registry.name} registry')
KeyError: 'LinearHead is not in the head registry'
```
__Checked /mmcls/models/heads/*.py: no `LinearHead` is registered there.__
</issue>
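As the golden diff at the end of this record shows, the head is registered under the name `LinearClsHead`, so the lookup fails simply because `LinearHead` is not a key in the HEADS registry. The corrected config, taken from the diff:

```python
model = dict(
    head=dict(
        type='LinearClsHead',  # registered name; 'LinearHead' does not exist
        num_classes=1000,
        in_channels=2048,
        loss=dict(
            type='LabelSmoothLoss',
            loss_weight=1.0,
            label_smooth_val=0.1,
            num_classes=1000),
    ))
```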
<code>
[start of configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py]
1 _base_ = ['./resnet50_batch2048_warmup.py']
2 model = dict(
3 head=dict(
4 type='LinearHead',
5 num_classes=1000,
6 in_channels=2048,
7 loss=dict(
8 type='LabelSmoothLoss',
9 loss_weight=1.0,
10 label_smooth_val=0.1,
11 num_classes=1000),
12 ))
13
[end of configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py]
[start of configs/resnet/resnet50_b32x8_label_smooth_imagenet.py]
1 _base_ = ['./resnet50_imagenet_bs256.py']
2 model = dict(
3 head=dict(
4 type='LinearHead',
5 num_classes=1000,
6 in_channels=2048,
7 loss=dict(
8 type='LabelSmoothLoss',
9 loss_weight=1.0,
10 label_smooth_val=0.1,
11 num_classes=1000),
12 ))
13
[end of configs/resnet/resnet50_b32x8_label_smooth_imagenet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/configs/resnet/resnet50_b32x8_label_smooth_imagenet.py b/configs/resnet/resnet50_b32x8_label_smooth_imagenet.py
--- a/configs/resnet/resnet50_b32x8_label_smooth_imagenet.py
+++ b/configs/resnet/resnet50_b32x8_label_smooth_imagenet.py
@@ -1,7 +1,7 @@
_base_ = ['./resnet50_imagenet_bs256.py']
model = dict(
head=dict(
- type='LinearHead',
+ type='LinearClsHead',
num_classes=1000,
in_channels=2048,
loss=dict(
diff --git a/configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py b/configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py
--- a/configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py
+++ b/configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py
@@ -1,7 +1,7 @@
_base_ = ['./resnet50_batch2048_warmup.py']
model = dict(
head=dict(
- type='LinearHead',
+ type='LinearClsHead',
num_classes=1000,
in_channels=2048,
loss=dict(
| {"golden_diff": "diff --git a/configs/resnet/resnet50_b32x8_label_smooth_imagenet.py b/configs/resnet/resnet50_b32x8_label_smooth_imagenet.py\n--- a/configs/resnet/resnet50_b32x8_label_smooth_imagenet.py\n+++ b/configs/resnet/resnet50_b32x8_label_smooth_imagenet.py\n@@ -1,7 +1,7 @@\n _base_ = ['./resnet50_imagenet_bs256.py']\n model = dict(\n head=dict(\n- type='LinearHead',\n+ type='LinearClsHead',\n num_classes=1000,\n in_channels=2048,\n loss=dict(\ndiff --git a/configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py b/configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py\n--- a/configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py\n+++ b/configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py\n@@ -1,7 +1,7 @@\n _base_ = ['./resnet50_batch2048_warmup.py']\n model = dict(\n head=dict(\n- type='LinearHead',\n+ type='LinearClsHead',\n num_classes=1000,\n in_channels=2048,\n loss=dict(\n", "issue": "KeyError: 'LinearHead is not in the head registry'\nuse config\r\n```python\r\nmodel = dict(\r\n head=dict(\r\n type='LinearHead',\r\n num_classes=1000,\r\n in_channels=2048,\r\n loss=dict(\r\n type='LabelSmoothLoss',\r\n loss_weight=1.0,\r\n label_smooth_val=0.1,\r\n num_classes=1000),\r\n ))\r\n```\r\n\r\ngot trackback\r\n```python\r\nTraceback (most recent call last):\r\n File \"/home/code/open_mmlab_codebase/huatian_bump_blur_cls/tools/train.py\", line 177, in <module>\r\n main()\r\n File \"/home/code/open_mmlab_codebase/huatian_bump_blur_cls/tools/train.py\", line 151, in main\r\n model = build_classifier(cfg.model)\r\n File \"/home/code/open_mmlab_codebase/mmclassification/mmcls/models/builder.py\", line 38, in build_classifier\r\n return build(cfg, CLASSIFIERS)\r\n File \"/home/code/open_mmlab_codebase/mmclassification/mmcls/models/builder.py\", line 18, in build\r\n return build_from_cfg(cfg, registry, default_args)\r\n File \"/opt/conda/lib/python3.7/site-packages/mmcv/utils/registry.py\", line 171, in build_from_cfg\r\n return obj_cls(**args)\r\n File \"/home/code/open_mmlab_codebase/mmclassification/mmcls/models/classifiers/image.py\", line 18, in __init__\r\n self.head = build_head(head)\r\n File \"/home/code/open_mmlab_codebase/mmclassification/mmcls/models/builder.py\", line 26, in build_head\r\n return build(cfg, HEADS)\r\n File \"/home/code/open_mmlab_codebase/mmclassification/mmcls/models/builder.py\", line 18, in build\r\n return build_from_cfg(cfg, registry, default_args)\r\n File \"/opt/conda/lib/python3.7/site-packages/mmcv/utils/registry.py\", line 164, in build_from_cfg\r\n f'{obj_type} is not in the {registry.name} registry')\r\nKeyError: 'LinearHead is not in the head registry'\r\n```\r\n\r\n__check /mmcls/models/heads/*.py, not exist `LinearHead` registered__\n", "before_files": [{"content": "_base_ = ['./resnet50_batch2048_warmup.py']\nmodel = dict(\n head=dict(\n type='LinearHead',\n num_classes=1000,\n in_channels=2048,\n loss=dict(\n type='LabelSmoothLoss',\n loss_weight=1.0,\n label_smooth_val=0.1,\n num_classes=1000),\n ))\n", "path": "configs/resnet/resnet50_b64x32_warmup_label_smooth_imagenet.py"}, {"content": "_base_ = ['./resnet50_imagenet_bs256.py']\nmodel = dict(\n head=dict(\n type='LinearHead',\n num_classes=1000,\n in_channels=2048,\n loss=dict(\n type='LabelSmoothLoss',\n loss_weight=1.0,\n label_smooth_val=0.1,\n num_classes=1000),\n ))\n", "path": "configs/resnet/resnet50_b32x8_label_smooth_imagenet.py"}]} | 1,323 | 339 |
gh_patches_debug_24780 | rasdani/github-patches | git_diff | apache__airflow-15109 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make Docs builds fallback in case external docs sources are missing
Every now and then our docs builds start to fail because of an external dependency (latest example here #14985). And while we are now caching that information, it does not help when the initial retrieval fails. This information does not change often, but with the number of dependencies we have it will continue to fail regularly simply because many of those dependencies are not very reliable - they are just a web page hosted somewhere. They are nowhere near the stability of even PyPI or Apt sources and we have no mirroring in case of problems.
Maybe we could
a) see if we can use some kind of mirroring scheme (do those sites have mirrors?)
b) if not, simply write a simple script that will dump the cached content for those to S3, refresh it in the CI scheduled (nightly) master builds and have a fallback mechanism to download that from there in case of any problems in CI?
</issue>
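For what it is worth, the golden diff at the end of this record does not implement the S3 mirroring idea; it takes the narrower route of making the fetch step tolerant, catching request/urllib3 errors so a flaky upstream site degrades to a reported failure instead of crashing the build. A trimmed-down sketch of that behaviour (simplified relative to the diff's `_fetch_file`):

```python
# Sketch: a failed inventory download is reported and skipped, not fatal.
import sys
import traceback

import requests
import urllib3.exceptions


def try_fetch(session: requests.Session, url: str) -> bool:
    try:
        response = session.get(url, allow_redirects=True, stream=True)
    except (requests.RequestException, urllib3.exceptions.HTTPError):
        traceback.print_exc(file=sys.stderr)
        return False
    if not response.ok:
        print(f"Failed with status: {response.status_code}", file=sys.stderr)
        return False
    return True
```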
<code>
[start of docs/exts/docs_build/fetch_inventories.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 import concurrent
19 import concurrent.futures
20 import datetime
21 import os
22 import shutil
23 from itertools import repeat
24 from typing import Iterator, List, Tuple
25
26 import requests
27 from requests.adapters import DEFAULT_POOLSIZE
28
29 from airflow.utils.helpers import partition
30 from docs.exts.docs_build.docs_builder import ( # pylint: disable=no-name-in-module
31 get_available_providers_packages,
32 )
33 from docs.exts.docs_build.third_party_inventories import ( # pylint: disable=no-name-in-module
34 THIRD_PARTY_INDEXES,
35 )
36
37 CURRENT_DIR = os.path.dirname(__file__)
38 ROOT_DIR = os.path.abspath(os.path.join(CURRENT_DIR, os.pardir, os.pardir, os.pardir))
39 DOCS_DIR = os.path.join(ROOT_DIR, 'docs')
40 CACHE_DIR = os.path.join(DOCS_DIR, '_inventory_cache')
41 EXPIRATION_DATE_PATH = os.path.join(DOCS_DIR, '_inventory_cache', "expiration-date")
42
43 S3_DOC_URL = "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com"
44 S3_DOC_URL_VERSIONED = S3_DOC_URL + "/docs/{package_name}/latest/objects.inv"
45 S3_DOC_URL_NON_VERSIONED = S3_DOC_URL + "/docs/{package_name}/objects.inv"
46
47
48 def _fetch_file(session: requests.Session, package_name: str, url: str, path: str) -> Tuple[str, bool]:
49 """
50 Download a file and returns status information as a tuple with package
51 name and success status(bool value).
52 """
53 response = session.get(url, allow_redirects=True, stream=True)
54 if not response.ok:
55 print(f"Failed to fetch inventory: {url}")
56 return package_name, False
57
58 os.makedirs(os.path.dirname(path), exist_ok=True)
59 with open(path, 'wb') as f:
60 response.raw.decode_content = True
61 shutil.copyfileobj(response.raw, f)
62 print(f"Fetched inventory: {url}")
63 return package_name, True
64
65
66 def _is_outdated(path: str):
67 if not os.path.exists(path):
68 return True
69 delta = datetime.datetime.now() - datetime.datetime.fromtimestamp(os.path.getmtime(path))
70 return delta > datetime.timedelta(hours=12)
71
72
73 def fetch_inventories():
74 """Fetch all inventories for Airflow documentation packages and store in cache."""
75 os.makedirs(os.path.dirname(CACHE_DIR), exist_ok=True)
76 to_download: List[Tuple[str, str, str]] = []
77
78 for pkg_name in get_available_providers_packages():
79 to_download.append(
80 (
81 pkg_name,
82 S3_DOC_URL_VERSIONED.format(package_name=pkg_name),
83 f'{CACHE_DIR}/{pkg_name}/objects.inv',
84 )
85 )
86 to_download.append(
87 (
88 "apache-airflow",
89 S3_DOC_URL_VERSIONED.format(package_name='apache-airflow'),
90 f'{CACHE_DIR}/apache-airflow/objects.inv',
91 )
92 )
93 for pkg_name in ['apache-airflow-providers', 'docker-stack']:
94 to_download.append(
95 (
96 pkg_name,
97 S3_DOC_URL_NON_VERSIONED.format(package_name=pkg_name),
98 f'{CACHE_DIR}/{pkg_name}/objects.inv',
99 )
100 )
101 to_download.extend(
102 (
103 pkg_name,
104 f"{doc_url}/objects.inv",
105 f'{CACHE_DIR}/{pkg_name}/objects.inv',
106 )
107 for pkg_name, doc_url in THIRD_PARTY_INDEXES.items()
108 )
109
110 to_download = [(pkg_name, url, path) for pkg_name, url, path in to_download if _is_outdated(path)]
111 if not to_download:
112 print("Nothing to do")
113 return []
114
115 print(f"To download {len(to_download)} inventorie(s)")
116
117 with requests.Session() as session, concurrent.futures.ThreadPoolExecutor(DEFAULT_POOLSIZE) as pool:
118 download_results: Iterator[Tuple[str, bool]] = pool.map(
119 _fetch_file,
120 repeat(session, len(to_download)),
121 (pkg_name for pkg_name, _, _ in to_download),
122 (url for _, url, _ in to_download),
123 (path for _, _, path in to_download),
124 )
125 failed, success = partition(lambda d: d[1], download_results)
126 failed, success = list(failed), list(success)
127 print(f"Result: {len(success)} success, {len(failed)} failed")
128 if failed:
129 print("Failed packages:")
130 for pkg_no, (pkg_name, _) in enumerate(failed, start=1):
131 print(f"{pkg_no}. {pkg_name}")
132
133 return [pkg_name for pkg_name, status in failed]
134
[end of docs/exts/docs_build/fetch_inventories.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/exts/docs_build/fetch_inventories.py b/docs/exts/docs_build/fetch_inventories.py
--- a/docs/exts/docs_build/fetch_inventories.py
+++ b/docs/exts/docs_build/fetch_inventories.py
@@ -20,10 +20,13 @@
import datetime
import os
import shutil
+import sys
+import traceback
from itertools import repeat
from typing import Iterator, List, Tuple
import requests
+import urllib3.exceptions
from requests.adapters import DEFAULT_POOLSIZE
from airflow.utils.helpers import partition
@@ -50,9 +53,15 @@
Download a file and returns status information as a tuple with package
name and success status(bool value).
"""
- response = session.get(url, allow_redirects=True, stream=True)
+ try:
+ response = session.get(url, allow_redirects=True, stream=True)
+ except (requests.RequestException, urllib3.exceptions.HTTPError):
+ print(f"Failed to fetch inventory: {url}")
+ traceback.print_exc(file=sys.stderr)
+ return package_name, False
if not response.ok:
print(f"Failed to fetch inventory: {url}")
+ print(f"Failed with status: {response.status_code}", file=sys.stderr)
return package_name, False
os.makedirs(os.path.dirname(path), exist_ok=True)
| {"golden_diff": "diff --git a/docs/exts/docs_build/fetch_inventories.py b/docs/exts/docs_build/fetch_inventories.py\n--- a/docs/exts/docs_build/fetch_inventories.py\n+++ b/docs/exts/docs_build/fetch_inventories.py\n@@ -20,10 +20,13 @@\n import datetime\n import os\n import shutil\n+import sys\n+import traceback\n from itertools import repeat\n from typing import Iterator, List, Tuple\n \n import requests\n+import urllib3.exceptions\n from requests.adapters import DEFAULT_POOLSIZE\n \n from airflow.utils.helpers import partition\n@@ -50,9 +53,15 @@\n Download a file and returns status information as a tuple with package\n name and success status(bool value).\n \"\"\"\n- response = session.get(url, allow_redirects=True, stream=True)\n+ try:\n+ response = session.get(url, allow_redirects=True, stream=True)\n+ except (requests.RequestException, urllib3.exceptions.HTTPError):\n+ print(f\"Failed to fetch inventory: {url}\")\n+ traceback.print_exc(file=sys.stderr)\n+ return package_name, False\n if not response.ok:\n print(f\"Failed to fetch inventory: {url}\")\n+ print(f\"Failed with status: {response.status_code}\", file=sys.stderr)\n return package_name, False\n \n os.makedirs(os.path.dirname(path), exist_ok=True)\n", "issue": "Make Docs builds fallback in case external docs sources are missing\nEvery now and then our docs builds start to fail because of external dependency (latest example here #14985). And while we are doing caching now of that information, it does not help when the initial retrieval fails. This information does not change often but with the number of dependencies we have it will continue to fail regularly simply because many of those depenencies are not very reliable - they are just a web page hosted somewhere. They are nowhere near the stabilty of even PyPI or Apt sources and we have no mirroring in case of problem.\r\n\r\nMaybe we could \r\n\r\na) see if we can use some kind of mirroring scheme (do those sites have mirrrors ? )\r\nb) if not, simply write a simple script that will dump the cached content for those to S3, refresh it in the CI scheduled (nightly) master builds ad have a fallback mechanism to download that from there in case of any problems in CI?\r\n\r\n \n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nimport concurrent\nimport concurrent.futures\nimport datetime\nimport os\nimport shutil\nfrom itertools import repeat\nfrom typing import Iterator, List, Tuple\n\nimport requests\nfrom requests.adapters import DEFAULT_POOLSIZE\n\nfrom airflow.utils.helpers import partition\nfrom docs.exts.docs_build.docs_builder import ( # pylint: disable=no-name-in-module\n get_available_providers_packages,\n)\nfrom docs.exts.docs_build.third_party_inventories import ( # pylint: disable=no-name-in-module\n THIRD_PARTY_INDEXES,\n)\n\nCURRENT_DIR = os.path.dirname(__file__)\nROOT_DIR = os.path.abspath(os.path.join(CURRENT_DIR, os.pardir, os.pardir, os.pardir))\nDOCS_DIR = os.path.join(ROOT_DIR, 'docs')\nCACHE_DIR = os.path.join(DOCS_DIR, '_inventory_cache')\nEXPIRATION_DATE_PATH = os.path.join(DOCS_DIR, '_inventory_cache', \"expiration-date\")\n\nS3_DOC_URL = \"http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com\"\nS3_DOC_URL_VERSIONED = S3_DOC_URL + \"/docs/{package_name}/latest/objects.inv\"\nS3_DOC_URL_NON_VERSIONED = S3_DOC_URL + \"/docs/{package_name}/objects.inv\"\n\n\ndef _fetch_file(session: requests.Session, package_name: str, url: str, path: str) -> Tuple[str, bool]:\n \"\"\"\n Download a file and returns status information as a tuple with package\n name and success status(bool value).\n \"\"\"\n response = session.get(url, allow_redirects=True, stream=True)\n if not response.ok:\n print(f\"Failed to fetch inventory: {url}\")\n return package_name, False\n\n os.makedirs(os.path.dirname(path), exist_ok=True)\n with open(path, 'wb') as f:\n response.raw.decode_content = True\n shutil.copyfileobj(response.raw, f)\n print(f\"Fetched inventory: {url}\")\n return package_name, True\n\n\ndef _is_outdated(path: str):\n if not os.path.exists(path):\n return True\n delta = datetime.datetime.now() - datetime.datetime.fromtimestamp(os.path.getmtime(path))\n return delta > datetime.timedelta(hours=12)\n\n\ndef fetch_inventories():\n \"\"\"Fetch all inventories for Airflow documentation packages and store in cache.\"\"\"\n os.makedirs(os.path.dirname(CACHE_DIR), exist_ok=True)\n to_download: List[Tuple[str, str, str]] = []\n\n for pkg_name in get_available_providers_packages():\n to_download.append(\n (\n pkg_name,\n S3_DOC_URL_VERSIONED.format(package_name=pkg_name),\n f'{CACHE_DIR}/{pkg_name}/objects.inv',\n )\n )\n to_download.append(\n (\n \"apache-airflow\",\n S3_DOC_URL_VERSIONED.format(package_name='apache-airflow'),\n f'{CACHE_DIR}/apache-airflow/objects.inv',\n )\n )\n for pkg_name in ['apache-airflow-providers', 'docker-stack']:\n to_download.append(\n (\n pkg_name,\n S3_DOC_URL_NON_VERSIONED.format(package_name=pkg_name),\n f'{CACHE_DIR}/{pkg_name}/objects.inv',\n )\n )\n to_download.extend(\n (\n pkg_name,\n f\"{doc_url}/objects.inv\",\n f'{CACHE_DIR}/{pkg_name}/objects.inv',\n )\n for pkg_name, doc_url in THIRD_PARTY_INDEXES.items()\n )\n\n to_download = [(pkg_name, url, path) for pkg_name, url, path in to_download if _is_outdated(path)]\n if not to_download:\n print(\"Nothing to do\")\n return []\n\n print(f\"To download {len(to_download)} inventorie(s)\")\n\n with requests.Session() as session, concurrent.futures.ThreadPoolExecutor(DEFAULT_POOLSIZE) as pool:\n download_results: Iterator[Tuple[str, bool]] = pool.map(\n _fetch_file,\n repeat(session, len(to_download)),\n (pkg_name for pkg_name, _, _ in to_download),\n (url for _, url, _ in to_download),\n (path for _, _, path in 
to_download),\n )\n failed, success = partition(lambda d: d[1], download_results)\n failed, success = list(failed), list(success)\n print(f\"Result: {len(success)} success, {len(failed)} failed\")\n if failed:\n print(\"Failed packages:\")\n for pkg_no, (pkg_name, _) in enumerate(failed, start=1):\n print(f\"{pkg_no}. {pkg_name}\")\n\n return [pkg_name for pkg_name, status in failed]\n", "path": "docs/exts/docs_build/fetch_inventories.py"}]} | 2,209 | 297 |
gh_patches_debug_28554 | rasdani/github-patches | git_diff | cupy__cupy-5215 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add NVCC path and Python version to show_config
</issue>
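The actual patch is not shown in this excerpt, so the following is only a sketch of what the request implies for the `_RuntimeInfo.__str__` records list in the file below: the Python version can come straight from the standard library, and the NVCC path presumably from CuPy's environment helpers (`get_nvcc_path` is assumed here by analogy with the `get_rocm_path` call already in the file; treat it as an assumption, not a confirmed API):

```python
# Hypothetical extra entries for the show_config records list.
import platform

import cupy

extra_records = [
    ('Python Version', platform.python_version()),     # stdlib, always available
    ('NVCC Path', cupy._environment.get_nvcc_path()),  # assumed helper, see note
]
```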
<code>
[start of cupyx/_runtime.py]
1 import inspect
2 import io
3 import os
4 import platform
5
6 import numpy
7
8 import cupy
9 import cupy_backends
10
11 try:
12 import cupy.cuda.thrust as thrust
13 except ImportError:
14 thrust = None
15
16 try:
17 import cupy_backends.cuda.libs.cudnn as cudnn
18 except ImportError:
19 cudnn = None
20
21 try:
22 import cupy_backends.cuda.libs.nccl as nccl
23 except ImportError:
24 nccl = None
25
26 try:
27 import cupy.cuda.cub as cub
28 except ImportError:
29 cub = None
30
31 try:
32 import cupy.cuda.jitify as jitify
33 except ImportError:
34 jitify = None
35
36 try:
37 import cupy_backends.cuda.libs.cutensor as cutensor
38 except ImportError:
39 cutensor = None
40
41 try:
42 import cupy_backends.cuda.libs.cusparselt as cusparselt
43 except ImportError:
44 cusparselt = None
45
46 try:
47 import scipy
48 except ImportError:
49 scipy = None
50
51 try:
52 import Cython
53 except ImportError:
54 Cython = None
55
56 is_hip = cupy_backends.cuda.api.runtime.is_hip
57
58
59 def _eval_or_error(func, errors):
60 # Evaluates `func` and return the result.
61 # If an error specified by `errors` occured, it returns a string
62 # representing the error.
63 try:
64 return func()
65 except errors as e:
66 return repr(e)
67
68
69 class _InstallInfo(object):
70
71 # TODO(niboshi): Add is_binary_distribution
72
73 def __init__(self):
74 cupy_package_root = self._get_cupy_package_root()
75 if cupy_package_root is not None:
76 data_root = os.path.join(cupy_package_root, '.data')
77 data_paths = {
78 'lib': _dir_or_none(os.path.join(data_root, 'lib')),
79 'include': _dir_or_none(os.path.join(data_root, 'include')),
80 }
81 else:
82 data_paths = {
83 'lib': None,
84 'include': None,
85 }
86
87 self.cupy_package_root = cupy_package_root
88 self.data_paths = data_paths
89
90 def get_data_path(self, data_type):
91 if data_type not in self.data_paths:
92 raise ValueError('Invalid data type: {}'.format(data_type))
93 return self.data_paths[data_type]
94
95 def _get_cupy_package_root(self):
96 try:
97 cupy_path = inspect.getfile(cupy)
98 except TypeError:
99 return None
100 return os.path.dirname(cupy_path)
101
102
103 class _RuntimeInfo(object):
104
105 cupy_version = None
106 cuda_path = None
107
108 # CUDA Driver
109 cuda_build_version = None
110 cuda_driver_version = None
111
112 # CUDA Runtime
113 cuda_runtime_version = None
114
115 # CUDA Toolkit
116 cublas_version = None
117 cufft_version = None
118 curand_version = None
119 cusolver_version = None
120 cusparse_version = None
121 nvrtc_version = None
122 thrust_version = None
123
124 # Optional Libraries
125 cudnn_build_version = None
126 cudnn_version = None
127 nccl_build_version = None
128 nccl_runtime_version = None
129 cub_build_version = None
130 jitify_build_version = None
131 cutensor_version = None
132 cusparselt_version = None
133 cython_build_version = None
134 cython_version = None
135
136 numpy_version = None
137 scipy_version = None
138
139 def __init__(self):
140 self.cupy_version = cupy.__version__
141
142 if not is_hip:
143 self.cuda_path = cupy.cuda.get_cuda_path()
144 else:
145 self.cuda_path = cupy._environment.get_rocm_path()
146
147 self.cuda_build_version = cupy.cuda.driver.get_build_version()
148 self.cuda_driver_version = _eval_or_error(
149 cupy.cuda.runtime.driverGetVersion,
150 cupy.cuda.runtime.CUDARuntimeError)
151
152 self.cuda_runtime_version = _eval_or_error(
153 cupy.cuda.runtime.runtimeGetVersion,
154 cupy.cuda.runtime.CUDARuntimeError)
155
156 self.cublas_version = _eval_or_error(
157 lambda: cupy.cuda.cublas.getVersion(
158 cupy.cuda.device.get_cublas_handle()),
159 cupy.cuda.cublas.CUBLASError)
160 self.cufft_version = _eval_or_error(
161 cupy.cuda.cufft.getVersion,
162 cupy.cuda.cufft.CuFFTError)
163 self.curand_version = _eval_or_error(
164 cupy.cuda.curand.getVersion,
165 cupy.cuda.curand.CURANDError)
166 self.cusolver_version = _eval_or_error(
167 cupy.cuda.cusolver._getVersion,
168 cupy.cuda.cusolver.CUSOLVERError)
169 self.cusparse_version = _eval_or_error(
170 lambda: cupy.cuda.cusparse.getVersion(
171 cupy.cuda.device.get_cusparse_handle()),
172 cupy.cuda.cusparse.CuSparseError)
173 self.nvrtc_version = _eval_or_error(
174 cupy.cuda.nvrtc.getVersion,
175 cupy.cuda.nvrtc.NVRTCError)
176
177 if thrust is not None:
178 self.thrust_version = thrust.get_build_version()
179
180 if cudnn is not None:
181 self.cudnn_build_version = cudnn.get_build_version()
182 self.cudnn_version = _eval_or_error(
183 cudnn.getVersion, cudnn.CuDNNError)
184
185 if nccl is not None:
186 self.nccl_build_version = nccl.get_build_version()
187 nccl_runtime_version = nccl.get_version()
188 if nccl_runtime_version == 0:
189 nccl_runtime_version = '(unknown)'
190 self.nccl_runtime_version = nccl_runtime_version
191
192 if cub is not None:
193 self.cub_build_version = cub.get_build_version()
194
195 if jitify is not None:
196 self.jitify_build_version = jitify.get_build_version()
197
198 if cutensor is not None:
199 self.cutensor_version = cutensor.get_version()
200
201 if cusparselt is not None:
202 self.cusparselt_version = cusparselt.get_build_version()
203
204 self.cython_build_version = cupy._util.cython_build_ver
205 if Cython is not None:
206 self.cython_version = Cython.__version__
207
208 self.numpy_version = numpy.version.full_version
209 if scipy is not None:
210 self.scipy_version = scipy.version.full_version
211
212 def __str__(self):
213 records = [
214 ('OS', platform.platform()),
215 ('CuPy Version', self.cupy_version),
216 ('NumPy Version', self.numpy_version),
217 ('SciPy Version', self.scipy_version),
218 ('Cython Build Version', self.cython_build_version),
219 ('Cython Runtime Version', self.cython_version),
220 ('CUDA Root', self.cuda_path),
221
222 ('CUDA Build Version', self.cuda_build_version),
223 ('CUDA Driver Version', self.cuda_driver_version),
224
225 ('CUDA Runtime Version', self.cuda_runtime_version),
226 ]
227
228 records += [
229 ('cuBLAS Version', self.cublas_version),
230 ('cuFFT Version', self.cufft_version),
231 ('cuRAND Version', self.curand_version),
232 ('cuSOLVER Version', self.cusolver_version),
233 ('cuSPARSE Version', self.cusparse_version),
234 ('NVRTC Version', self.nvrtc_version),
235 ('Thrust Version', self.thrust_version),
236 ('CUB Build Version', self.cub_build_version),
237 ('Jitify Build Version', self.jitify_build_version),
238 ]
239
240 records += [
241 ('cuDNN Build Version', self.cudnn_build_version),
242 ('cuDNN Version', self.cudnn_version),
243 ('NCCL Build Version', self.nccl_build_version),
244 ('NCCL Runtime Version', self.nccl_runtime_version),
245 ('cuTENSOR Version', self.cutensor_version),
246 ('cuSPARSELt Build Version', self.cusparselt_version),
247 ]
248
249 for device_id in range(cupy.cuda.runtime.getDeviceCount()):
250 with cupy.cuda.Device(device_id) as device:
251 props = cupy.cuda.runtime.getDeviceProperties(device_id)
252 name = ('Device {} Name'.format(device_id),
253 props['name'].decode())
254 pci_bus = ('Device {} PCI Bus ID'.format(device_id),
255 device.pci_bus_id)
256 if is_hip:
257 try:
258 arch = props['gcnArchName'].decode()
259 except KeyError: # ROCm < 3.6.0
260 arch = 'gfx'+str(props['gcnArch'])
261 arch = ('Device {} Arch'.format(device_id), arch)
262 else:
263 arch = ('Device {} Compute Capability'.format(device_id),
264 device.compute_capability)
265 records += [name, arch, pci_bus]
266
267 width = max([len(r[0]) for r in records]) + 2
268 fmt = '{:' + str(width) + '}: {}\n'
269 s = io.StringIO()
270 for k, v in records:
271 s.write(fmt.format(k, v))
272
273 return s.getvalue()
274
275
276 def get_runtime_info():
277 return _RuntimeInfo()
278
279
280 def get_install_info():
281 return _InstallInfo()
282
283
284 def _dir_or_none(path):
285 """Returns None if path does not exist."""
286 if os.path.isdir(path):
287 return path
288 return None
289
[end of cupyx/_runtime.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupyx/_runtime.py b/cupyx/_runtime.py
--- a/cupyx/_runtime.py
+++ b/cupyx/_runtime.py
@@ -144,6 +144,11 @@
else:
self.cuda_path = cupy._environment.get_rocm_path()
+ if not is_hip:
+ self.nvcc_path = cupy._environment.get_nvcc_path()
+ else:
+ self.nvcc_path = cupy._environment.get_hipcc_path()
+
self.cuda_build_version = cupy.cuda.driver.get_build_version()
self.cuda_driver_version = _eval_or_error(
cupy.cuda.runtime.driverGetVersion,
@@ -212,12 +217,15 @@
def __str__(self):
records = [
('OS', platform.platform()),
+ ('Python Version', platform.python_version()),
('CuPy Version', self.cupy_version),
+ ('CuPy Platform', 'NVIDIA CUDA' if not is_hip else 'AMD ROCm'),
('NumPy Version', self.numpy_version),
('SciPy Version', self.scipy_version),
('Cython Build Version', self.cython_build_version),
('Cython Runtime Version', self.cython_version),
('CUDA Root', self.cuda_path),
+ ('hipcc PATH' if is_hip else 'nvcc PATH', self.nvcc_path),
('CUDA Build Version', self.cuda_build_version),
('CUDA Driver Version', self.cuda_driver_version),
| {"golden_diff": "diff --git a/cupyx/_runtime.py b/cupyx/_runtime.py\n--- a/cupyx/_runtime.py\n+++ b/cupyx/_runtime.py\n@@ -144,6 +144,11 @@\n else:\n self.cuda_path = cupy._environment.get_rocm_path()\n \n+ if not is_hip:\n+ self.nvcc_path = cupy._environment.get_nvcc_path()\n+ else:\n+ self.nvcc_path = cupy._environment.get_hipcc_path()\n+\n self.cuda_build_version = cupy.cuda.driver.get_build_version()\n self.cuda_driver_version = _eval_or_error(\n cupy.cuda.runtime.driverGetVersion,\n@@ -212,12 +217,15 @@\n def __str__(self):\n records = [\n ('OS', platform.platform()),\n+ ('Python Version', platform.python_version()),\n ('CuPy Version', self.cupy_version),\n+ ('CuPy Platform', 'NVIDIA CUDA' if not is_hip else 'AMD ROCm'),\n ('NumPy Version', self.numpy_version),\n ('SciPy Version', self.scipy_version),\n ('Cython Build Version', self.cython_build_version),\n ('Cython Runtime Version', self.cython_version),\n ('CUDA Root', self.cuda_path),\n+ ('hipcc PATH' if is_hip else 'nvcc PATH', self.nvcc_path),\n \n ('CUDA Build Version', self.cuda_build_version),\n ('CUDA Driver Version', self.cuda_driver_version),\n", "issue": "Add NVCC path and Python version to show_config\n\n", "before_files": [{"content": "import inspect\nimport io\nimport os\nimport platform\n\nimport numpy\n\nimport cupy\nimport cupy_backends\n\ntry:\n import cupy.cuda.thrust as thrust\nexcept ImportError:\n thrust = None\n\ntry:\n import cupy_backends.cuda.libs.cudnn as cudnn\nexcept ImportError:\n cudnn = None\n\ntry:\n import cupy_backends.cuda.libs.nccl as nccl\nexcept ImportError:\n nccl = None\n\ntry:\n import cupy.cuda.cub as cub\nexcept ImportError:\n cub = None\n\ntry:\n import cupy.cuda.jitify as jitify\nexcept ImportError:\n jitify = None\n\ntry:\n import cupy_backends.cuda.libs.cutensor as cutensor\nexcept ImportError:\n cutensor = None\n\ntry:\n import cupy_backends.cuda.libs.cusparselt as cusparselt\nexcept ImportError:\n cusparselt = None\n\ntry:\n import scipy\nexcept ImportError:\n scipy = None\n\ntry:\n import Cython\nexcept ImportError:\n Cython = None\n\nis_hip = cupy_backends.cuda.api.runtime.is_hip\n\n\ndef _eval_or_error(func, errors):\n # Evaluates `func` and return the result.\n # If an error specified by `errors` occured, it returns a string\n # representing the error.\n try:\n return func()\n except errors as e:\n return repr(e)\n\n\nclass _InstallInfo(object):\n\n # TODO(niboshi): Add is_binary_distribution\n\n def __init__(self):\n cupy_package_root = self._get_cupy_package_root()\n if cupy_package_root is not None:\n data_root = os.path.join(cupy_package_root, '.data')\n data_paths = {\n 'lib': _dir_or_none(os.path.join(data_root, 'lib')),\n 'include': _dir_or_none(os.path.join(data_root, 'include')),\n }\n else:\n data_paths = {\n 'lib': None,\n 'include': None,\n }\n\n self.cupy_package_root = cupy_package_root\n self.data_paths = data_paths\n\n def get_data_path(self, data_type):\n if data_type not in self.data_paths:\n raise ValueError('Invalid data type: {}'.format(data_type))\n return self.data_paths[data_type]\n\n def _get_cupy_package_root(self):\n try:\n cupy_path = inspect.getfile(cupy)\n except TypeError:\n return None\n return os.path.dirname(cupy_path)\n\n\nclass _RuntimeInfo(object):\n\n cupy_version = None\n cuda_path = None\n\n # CUDA Driver\n cuda_build_version = None\n cuda_driver_version = None\n\n # CUDA Runtime\n cuda_runtime_version = None\n\n # CUDA Toolkit\n cublas_version = None\n cufft_version = None\n curand_version = None\n cusolver_version = None\n 
cusparse_version = None\n nvrtc_version = None\n thrust_version = None\n\n # Optional Libraries\n cudnn_build_version = None\n cudnn_version = None\n nccl_build_version = None\n nccl_runtime_version = None\n cub_build_version = None\n jitify_build_version = None\n cutensor_version = None\n cusparselt_version = None\n cython_build_version = None\n cython_version = None\n\n numpy_version = None\n scipy_version = None\n\n def __init__(self):\n self.cupy_version = cupy.__version__\n\n if not is_hip:\n self.cuda_path = cupy.cuda.get_cuda_path()\n else:\n self.cuda_path = cupy._environment.get_rocm_path()\n\n self.cuda_build_version = cupy.cuda.driver.get_build_version()\n self.cuda_driver_version = _eval_or_error(\n cupy.cuda.runtime.driverGetVersion,\n cupy.cuda.runtime.CUDARuntimeError)\n\n self.cuda_runtime_version = _eval_or_error(\n cupy.cuda.runtime.runtimeGetVersion,\n cupy.cuda.runtime.CUDARuntimeError)\n\n self.cublas_version = _eval_or_error(\n lambda: cupy.cuda.cublas.getVersion(\n cupy.cuda.device.get_cublas_handle()),\n cupy.cuda.cublas.CUBLASError)\n self.cufft_version = _eval_or_error(\n cupy.cuda.cufft.getVersion,\n cupy.cuda.cufft.CuFFTError)\n self.curand_version = _eval_or_error(\n cupy.cuda.curand.getVersion,\n cupy.cuda.curand.CURANDError)\n self.cusolver_version = _eval_or_error(\n cupy.cuda.cusolver._getVersion,\n cupy.cuda.cusolver.CUSOLVERError)\n self.cusparse_version = _eval_or_error(\n lambda: cupy.cuda.cusparse.getVersion(\n cupy.cuda.device.get_cusparse_handle()),\n cupy.cuda.cusparse.CuSparseError)\n self.nvrtc_version = _eval_or_error(\n cupy.cuda.nvrtc.getVersion,\n cupy.cuda.nvrtc.NVRTCError)\n\n if thrust is not None:\n self.thrust_version = thrust.get_build_version()\n\n if cudnn is not None:\n self.cudnn_build_version = cudnn.get_build_version()\n self.cudnn_version = _eval_or_error(\n cudnn.getVersion, cudnn.CuDNNError)\n\n if nccl is not None:\n self.nccl_build_version = nccl.get_build_version()\n nccl_runtime_version = nccl.get_version()\n if nccl_runtime_version == 0:\n nccl_runtime_version = '(unknown)'\n self.nccl_runtime_version = nccl_runtime_version\n\n if cub is not None:\n self.cub_build_version = cub.get_build_version()\n\n if jitify is not None:\n self.jitify_build_version = jitify.get_build_version()\n\n if cutensor is not None:\n self.cutensor_version = cutensor.get_version()\n\n if cusparselt is not None:\n self.cusparselt_version = cusparselt.get_build_version()\n\n self.cython_build_version = cupy._util.cython_build_ver\n if Cython is not None:\n self.cython_version = Cython.__version__\n\n self.numpy_version = numpy.version.full_version\n if scipy is not None:\n self.scipy_version = scipy.version.full_version\n\n def __str__(self):\n records = [\n ('OS', platform.platform()),\n ('CuPy Version', self.cupy_version),\n ('NumPy Version', self.numpy_version),\n ('SciPy Version', self.scipy_version),\n ('Cython Build Version', self.cython_build_version),\n ('Cython Runtime Version', self.cython_version),\n ('CUDA Root', self.cuda_path),\n\n ('CUDA Build Version', self.cuda_build_version),\n ('CUDA Driver Version', self.cuda_driver_version),\n\n ('CUDA Runtime Version', self.cuda_runtime_version),\n ]\n\n records += [\n ('cuBLAS Version', self.cublas_version),\n ('cuFFT Version', self.cufft_version),\n ('cuRAND Version', self.curand_version),\n ('cuSOLVER Version', self.cusolver_version),\n ('cuSPARSE Version', self.cusparse_version),\n ('NVRTC Version', self.nvrtc_version),\n ('Thrust Version', self.thrust_version),\n ('CUB Build Version', 
self.cub_build_version),\n ('Jitify Build Version', self.jitify_build_version),\n ]\n\n records += [\n ('cuDNN Build Version', self.cudnn_build_version),\n ('cuDNN Version', self.cudnn_version),\n ('NCCL Build Version', self.nccl_build_version),\n ('NCCL Runtime Version', self.nccl_runtime_version),\n ('cuTENSOR Version', self.cutensor_version),\n ('cuSPARSELt Build Version', self.cusparselt_version),\n ]\n\n for device_id in range(cupy.cuda.runtime.getDeviceCount()):\n with cupy.cuda.Device(device_id) as device:\n props = cupy.cuda.runtime.getDeviceProperties(device_id)\n name = ('Device {} Name'.format(device_id),\n props['name'].decode())\n pci_bus = ('Device {} PCI Bus ID'.format(device_id),\n device.pci_bus_id)\n if is_hip:\n try:\n arch = props['gcnArchName'].decode()\n except KeyError: # ROCm < 3.6.0\n arch = 'gfx'+str(props['gcnArch'])\n arch = ('Device {} Arch'.format(device_id), arch)\n else:\n arch = ('Device {} Compute Capability'.format(device_id),\n device.compute_capability)\n records += [name, arch, pci_bus]\n\n width = max([len(r[0]) for r in records]) + 2\n fmt = '{:' + str(width) + '}: {}\\n'\n s = io.StringIO()\n for k, v in records:\n s.write(fmt.format(k, v))\n\n return s.getvalue()\n\n\ndef get_runtime_info():\n return _RuntimeInfo()\n\n\ndef get_install_info():\n return _InstallInfo()\n\n\ndef _dir_or_none(path):\n \"\"\"Returns None if path does not exist.\"\"\"\n if os.path.isdir(path):\n return path\n return None\n", "path": "cupyx/_runtime.py"}]} | 3,344 | 331 |
gh_patches_debug_12800 | rasdani/github-patches | git_diff | mindsdb__mindsdb-712 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KeyError: 'mongodb' on start
Starting Mindsdb (python -m mindsdb) version 2.8.1 throws:
```
Failed to start mongodb API with exception 'mongodb'
Traceback (most recent call last):
File "/home/zoran/MyProjects/mindsdb-examples/mdb/lib/python3.7/site-packages/mindsdb/__main__.py", line 83, in <module>
p = ctx.Process(target=start_functions[api], args=(config_path, True,))
KeyError: 'mongodb'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/zoran/MyProjects/mindsdb-examples/mdb/lib/python3.7/site-packages/mindsdb/__main__.py", line 83, in <module>
p = ctx.Process(target=start_functions[api], args=(config_path, True,))
KeyError: 'mongodb'
```
</issue>
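The traceback points at a name mismatch: the config lists the API as `mongodb`, while the `start_functions` dict in `__main__.py` registers it under `mongo`, so the lookup raises a bare `KeyError`. A minimal sketch of a more defensive lookup (the lambdas are placeholders standing in for the real `start_*` callables, not MindsDB's actual fix):

```python
# Sketch only; placeholders instead of the real start_http/start_mysql/start_mongo.
# The idea is to match the config's "mongodb" spelling and to fail with a readable
# message for any unknown API name instead of a KeyError.
start_functions = {
    "http": lambda config_path, verbose: None,
    "mysql": lambda config_path, verbose: None,
    "mongodb": lambda config_path, verbose: None,  # key spelled as in config['api']
}


def get_start_function(api):
    if api not in start_functions:
        known = ", ".join(sorted(start_functions))
        raise SystemExit(f"Unknown API '{api}'; known APIs are: {known}")
    return start_functions[api]


print(get_start_function("mongodb"))
```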
<code>
[start of mindsdb/__main__.py]
1 import atexit
2 import traceback
3 import sys
4 import os
5
6 import torch.multiprocessing as mp
7
8 from mindsdb_native.config import CONFIG
9
10 from mindsdb.utilities.config import Config
11 from mindsdb.interfaces.native.mindsdb import MindsdbNative
12 from mindsdb.interfaces.custom.custom_models import CustomModels
13 from mindsdb.api.http.start import start as start_http
14 from mindsdb.api.mysql.start import start as start_mysql
15 from mindsdb.api.mongo.start import start as start_mongo
16 from mindsdb.utilities.fs import get_or_create_dir_struct
17 from mindsdb.interfaces.database.database import DatabaseWrapper
18 from mindsdb.utilities.functions import args_parse
19
20
21 def close_api_gracefully(p_arr):
22 for p in p_arr:
23 sys.stdout.flush()
24 p.terminate()
25 p.join()
26 sys.stdout.flush()
27
28
29 if __name__ == '__main__':
30 mp.freeze_support()
31
32 args = args_parse()
33
34 config_path = args.config
35 if config_path is None:
36 config_dir, _ = get_or_create_dir_struct()
37 config_path = os.path.join(config_dir, 'config.json')
38
39 print(f'Using configuration file: {config_path}')
40 config = Config(config_path)
41
42 if args.api is None:
43 api_arr = [api for api in config['api']]
44 else:
45 api_arr = args.api.split(',')
46
47 start_functions = {
48 'http': start_http,
49 'mysql': start_mysql,
50 'mongo': start_mongo
51 }
52
53 mdb = MindsdbNative(config)
54 cst = CustomModels(config)
55 # @TODO Maybe just use `get_model_data` directly here ? Seems like a useless abstraction
56 model_data_arr = [
57 {
58 'name': x['name'],
59 'predict': x['predict'],
60 'data_analysis': mdb.get_model_data(x['name'])['data_analysis_v2']
61 } for x in mdb.get_models()
62 ]
63
64 for m in model_data_arr:
65 if 'columns_to_ignore' in m['data_analysis']:
66 del m['data_analysis']['columns_to_ignore']
67 if 'train_std_dev' in m['data_analysis']:
68 del m['data_analysis']['train_std_dev']
69
70 model_data_arr.extend(cst.get_models())
71
72 dbw = DatabaseWrapper(config)
73 dbw.register_predictors(model_data_arr)
74
75 for broken_name in [name for name, connected in dbw.check_connections().items() if connected is False]:
76 print(f'Error failed to integrate with database aliased: {broken_name}')
77
78 p_arr = []
79 ctx = mp.get_context('spawn')
80 for api in api_arr:
81 print(f'Starting Mindsdb {api} API !')
82 try:
83 p = ctx.Process(target=start_functions[api], args=(config_path, True,))
84 p.start()
85 p_arr.append(p)
86 print(f'Started Mindsdb {api} API !')
87 except Exception as e:
88 close_api_gracefully(p_arr)
89 print(f'Failed to start {api} API with exception {e}')
90 print(traceback.format_exc())
91 raise
92
93 atexit.register(close_api_gracefully, p_arr=p_arr)
94
95 for p in p_arr:
96 p.join()
97
[end of mindsdb/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py
--- a/mindsdb/__main__.py
+++ b/mindsdb/__main__.py
@@ -40,14 +40,20 @@
config = Config(config_path)
if args.api is None:
- api_arr = [api for api in config['api']]
+ api_arr = ['http', 'mysql']
else:
api_arr = args.api.split(',')
+ for api in api_arr:
+ if api not in config:
+ print(f"Trying run '{api}' API, but is no config for this api.")
+ print(f"Please, fill config['api']['{api}']")
+ sys.exit(0)
+
start_functions = {
'http': start_http,
'mysql': start_mysql,
- 'mongo': start_mongo
+ 'mongodb': start_mongo
}
mdb = MindsdbNative(config)
| {"golden_diff": "diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py\n--- a/mindsdb/__main__.py\n+++ b/mindsdb/__main__.py\n@@ -40,14 +40,20 @@\n config = Config(config_path)\n \n if args.api is None:\n- api_arr = [api for api in config['api']]\n+ api_arr = ['http', 'mysql']\n else:\n api_arr = args.api.split(',')\n \n+ for api in api_arr:\n+ if api not in config:\n+ print(f\"Trying run '{api}' API, but is no config for this api.\")\n+ print(f\"Please, fill config['api']['{api}']\")\n+ sys.exit(0)\n+\n start_functions = {\n 'http': start_http,\n 'mysql': start_mysql,\n- 'mongo': start_mongo\n+ 'mongodb': start_mongo\n }\n \n mdb = MindsdbNative(config)\n", "issue": "KeyError: 'mongodb' on start\nStarting Mindsdb(python -m mindsdb) version 2.8.1 throws:\r\n\r\n```\r\nFailed to start mongodb API with exception 'mongodb'\r\nTraceback (most recent call last):\r\n File \"/home/zoran/MyProjects/mindsdb-examples/mdb/lib/python3.7/site-packages/mindsdb/__main__.py\", line 83, in <module>\r\n p = ctx.Process(target=start_functions[api], args=(config_path, True,))\r\nKeyError: 'mongodb'\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/zoran/MyProjects/mindsdb-examples/mdb/lib/python3.7/site-packages/mindsdb/__main__.py\", line 83, in <module>\r\n p = ctx.Process(target=start_functions[api], args=(config_path, True,))\r\nKeyError: 'mongodb'\r\n```\n", "before_files": [{"content": "import atexit\nimport traceback\nimport sys\nimport os\n\nimport torch.multiprocessing as mp\n\nfrom mindsdb_native.config import CONFIG\n\nfrom mindsdb.utilities.config import Config\nfrom mindsdb.interfaces.native.mindsdb import MindsdbNative\nfrom mindsdb.interfaces.custom.custom_models import CustomModels\nfrom mindsdb.api.http.start import start as start_http\nfrom mindsdb.api.mysql.start import start as start_mysql\nfrom mindsdb.api.mongo.start import start as start_mongo\nfrom mindsdb.utilities.fs import get_or_create_dir_struct\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\nfrom mindsdb.utilities.functions import args_parse\n\n\ndef close_api_gracefully(p_arr):\n for p in p_arr:\n sys.stdout.flush()\n p.terminate()\n p.join()\n sys.stdout.flush()\n\n\nif __name__ == '__main__':\n mp.freeze_support()\n\n args = args_parse()\n\n config_path = args.config\n if config_path is None:\n config_dir, _ = get_or_create_dir_struct()\n config_path = os.path.join(config_dir, 'config.json')\n\n print(f'Using configuration file: {config_path}')\n config = Config(config_path)\n\n if args.api is None:\n api_arr = [api for api in config['api']]\n else:\n api_arr = args.api.split(',')\n\n start_functions = {\n 'http': start_http,\n 'mysql': start_mysql,\n 'mongo': start_mongo\n }\n\n mdb = MindsdbNative(config)\n cst = CustomModels(config)\n # @TODO Maybe just use `get_model_data` directly here ? 
Seems like a useless abstraction\n model_data_arr = [\n {\n 'name': x['name'],\n 'predict': x['predict'],\n 'data_analysis': mdb.get_model_data(x['name'])['data_analysis_v2']\n } for x in mdb.get_models()\n ]\n\n for m in model_data_arr:\n if 'columns_to_ignore' in m['data_analysis']:\n del m['data_analysis']['columns_to_ignore']\n if 'train_std_dev' in m['data_analysis']:\n del m['data_analysis']['train_std_dev']\n\n model_data_arr.extend(cst.get_models())\n\n dbw = DatabaseWrapper(config)\n dbw.register_predictors(model_data_arr)\n\n for broken_name in [name for name, connected in dbw.check_connections().items() if connected is False]:\n print(f'Error failed to integrate with database aliased: {broken_name}')\n\n p_arr = []\n ctx = mp.get_context('spawn')\n for api in api_arr:\n print(f'Starting Mindsdb {api} API !')\n try:\n p = ctx.Process(target=start_functions[api], args=(config_path, True,))\n p.start()\n p_arr.append(p)\n print(f'Started Mindsdb {api} API !')\n except Exception as e:\n close_api_gracefully(p_arr)\n print(f'Failed to start {api} API with exception {e}')\n print(traceback.format_exc())\n raise\n\n atexit.register(close_api_gracefully, p_arr=p_arr)\n\n for p in p_arr:\n p.join()\n", "path": "mindsdb/__main__.py"}]} | 1,661 | 214 |
gh_patches_debug_4504 | rasdani/github-patches | git_diff | saleor__saleor-2803 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GraphQL query for home page
### What I'm trying to achieve
I want to have a shop homepage which shows:
* new arrivals,
* product in a sale,
* featured products,
* featured collection,
* categories links
### Describe a proposed solution
```graphql
query HomePage {
shop {
featuredCollection {
id
name
}
}
featured: products(first: 10, collectionSlug: "featured") {
edges {
node {
id
name
thumbnailUrl
category {
id
name
}
price {
amount
currency
}
}
}
}
newArrivals: products(first: 10, sortBy: "creation_date") {
edges {
node {
id
name
thumbnailUrl
category {
id
name
}
price {
amount
currency
}
}
}
}
sales: products(first: 10, collectionSlug: "sales") {
edges {
node {
id
name
thumbnailUrl
category {
id
name
}
price {
amount
currency
}
}
}
}
categories {
edges {
node {
id
name
}
}
}
}
```
### Other solutions I've tried and won't work
I introduced:
* filter by collection slug for featured and sales. That is the simplest approach which I have in my mind.
* exposing homepage collection in the shop query,
* sorting products by creation date for new arrivals.
This is only a proposition. If you have a better approach in mind please share it.
</issue>
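One of the proposed pieces, sorting products by a date field, would largely come down to adding another entry to the ordered mapping that backs the product `sort_by` filter. A rough, self-contained illustration (the field and label names are assumptions, not Saleor's confirmed API):

```python
# Illustration only; not Saleor's actual code. Exposing an extra sort option is
# mainly a matter of extending the ordered mapping of sortable fields.
from collections import OrderedDict

SORT_BY_FIELDS = OrderedDict([
    ("name", "name"),
    ("price", "price"),
    ("updated_at", "last updated"),  # assumed field for a "new arrivals" ordering
])

# e.g. "-updated_at" would then mean newest first
print(list(SORT_BY_FIELDS.items()))
```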
<code>
[start of saleor/product/filters.py]
1 from collections import OrderedDict
2
3 from django.db.models import Q
4 from django.forms import CheckboxSelectMultiple, ValidationError
5 from django.utils.translation import pgettext_lazy
6 from django_filters import MultipleChoiceFilter, OrderingFilter, RangeFilter
7
8 from ..core.filters import SortedFilterSet
9 from .models import Product, ProductAttribute
10
11 SORT_BY_FIELDS = OrderedDict([
12 ('name', pgettext_lazy('Product list sorting option', 'name')),
13 ('price', pgettext_lazy('Product list sorting option', 'price'))])
14
15
16 class ProductFilter(SortedFilterSet):
17 sort_by = OrderingFilter(
18 label=pgettext_lazy('Product list sorting form', 'Sort by'),
19 fields=SORT_BY_FIELDS.keys(),
20 field_labels=SORT_BY_FIELDS)
21 price = RangeFilter(
22 label=pgettext_lazy('Currency amount', 'Price'))
23
24 class Meta:
25 model = Product
26 fields = []
27
28 def __init__(self, *args, **kwargs):
29 super().__init__(*args, **kwargs)
30 self.product_attributes, self.variant_attributes = (
31 self._get_attributes())
32 self.filters.update(self._get_product_attributes_filters())
33 self.filters.update(self._get_product_variants_attributes_filters())
34 self.filters = OrderedDict(sorted(self.filters.items()))
35
36 def _get_attributes(self):
37 q_product_attributes = self._get_product_attributes_lookup()
38 q_variant_attributes = self._get_variant_attributes_lookup()
39 product_attributes = (
40 ProductAttribute.objects.all()
41 .prefetch_related('translations', 'values__translations')
42 .filter(q_product_attributes)
43 .distinct())
44 variant_attributes = (
45 ProductAttribute.objects.all()
46 .prefetch_related('translations', 'values__translations')
47 .filter(q_variant_attributes)
48 .distinct())
49 return product_attributes, variant_attributes
50
51 def _get_product_attributes_lookup(self):
52 raise NotImplementedError()
53
54 def _get_variant_attributes_lookup(self):
55 raise NotImplementedError()
56
57 def _get_product_attributes_filters(self):
58 filters = {}
59 for attribute in self.product_attributes:
60 filters[attribute.slug] = MultipleChoiceFilter(
61 name='attributes__%s' % attribute.pk,
62 label=attribute.translated.name,
63 widget=CheckboxSelectMultiple,
64 choices=self._get_attribute_choices(attribute))
65 return filters
66
67 def _get_product_variants_attributes_filters(self):
68 filters = {}
69 for attribute in self.variant_attributes:
70 filters[attribute.slug] = MultipleChoiceFilter(
71 name='variants__attributes__%s' % attribute.pk,
72 label=attribute.translated.name,
73 widget=CheckboxSelectMultiple,
74 choices=self._get_attribute_choices(attribute))
75 return filters
76
77 def _get_attribute_choices(self, attribute):
78 return [
79 (choice.pk, choice.translated.name)
80 for choice in attribute.values.all()]
81
82 def validate_sort_by(self, value):
83 if value.strip('-') not in SORT_BY_FIELDS:
84 raise ValidationError(
85 pgettext_lazy(
86 'Validation error for sort_by filter',
87 '%(value)s is not a valid sorting option'),
88 params={'value': value})
89
90
91 class ProductCategoryFilter(ProductFilter):
92 def __init__(self, *args, **kwargs):
93 self.category = kwargs.pop('category')
94 super().__init__(*args, **kwargs)
95
96 def _get_product_attributes_lookup(self):
97 return Q(product_types__products__category=self.category)
98
99 def _get_variant_attributes_lookup(self):
100 return Q(product_variant_types__products__category=self.category)
101
102
103 class ProductCollectionFilter(ProductFilter):
104 def __init__(self, *args, **kwargs):
105 self.collection = kwargs.pop('collection')
106 super().__init__(*args, **kwargs)
107
108 def _get_product_attributes_lookup(self):
109 return Q(product_types__products__collections=self.collection)
110
111 def _get_variant_attributes_lookup(self):
112 return Q(product_variant_types__products__collections=self.collection)
113
[end of saleor/product/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/product/filters.py b/saleor/product/filters.py
--- a/saleor/product/filters.py
+++ b/saleor/product/filters.py
@@ -10,7 +10,9 @@
SORT_BY_FIELDS = OrderedDict([
('name', pgettext_lazy('Product list sorting option', 'name')),
- ('price', pgettext_lazy('Product list sorting option', 'price'))])
+ ('price', pgettext_lazy('Product list sorting option', 'price')),
+ ('updated_at', pgettext_lazy(
+ 'Product list sorting option', 'last updated'))])
class ProductFilter(SortedFilterSet):
| {"golden_diff": "diff --git a/saleor/product/filters.py b/saleor/product/filters.py\n--- a/saleor/product/filters.py\n+++ b/saleor/product/filters.py\n@@ -10,7 +10,9 @@\n \n SORT_BY_FIELDS = OrderedDict([\n ('name', pgettext_lazy('Product list sorting option', 'name')),\n- ('price', pgettext_lazy('Product list sorting option', 'price'))])\n+ ('price', pgettext_lazy('Product list sorting option', 'price')),\n+ ('updated_at', pgettext_lazy(\n+ 'Product list sorting option', 'last updated'))])\n \n \n class ProductFilter(SortedFilterSet):\n", "issue": "Grapql query for home page\n### What I'm trying to achieve\r\nI want to have a shop homepage which shows:\r\n* new arrivals,\r\n* product in a sale,\r\n* featured products,\r\n* featured collection,\r\n* categories links\r\n\r\n### Describe a proposed solution\r\n```graphql\r\nquery HomePage {\r\n shop {\r\n featuredCollection {\r\n id\r\n name\r\n }\r\n }\r\n featured: products(first: 10, collectionSlug: \"featured\") {\r\n edges {\r\n node {\r\n id\r\n name\r\n thumbnailUrl\r\n category {\r\n id\r\n name\r\n }\r\n price {\r\n amount\r\n currency\r\n }\r\n }\r\n }\r\n }\r\n newArrivals: products(first: 10, sortBy: \"creation_date\") {\r\n edges {\r\n node {\r\n id\r\n name\r\n thumbnailUrl\r\n category {\r\n id\r\n name\r\n }\r\n price {\r\n amount\r\n currency\r\n }\r\n }\r\n }\r\n }\r\n sales: products(first: 10, collectionSlug: \"sales\") {\r\n edges {\r\n node {\r\n id\r\n name\r\n thumbnailUrl\r\n category {\r\n id\r\n name\r\n }\r\n price {\r\n amount\r\n currency\r\n }\r\n }\r\n }\r\n }\r\n categories {\r\n edges {\r\n node {\r\n id\r\n name\r\n }\r\n }\r\n }\r\n}\r\n\r\n```\r\n\r\n### Other solutions I've tried and won't work\r\nI introduced:\r\n* filter by collection slug for featured and sales. That is the simplest approach which I have in my mind.\r\n* exposing homepage collection in the shop query,\r\n* sorting products by creation data for new arrivals.\r\n\r\nThis is only a proposition. 
If you have a better approach in mind please share it.\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom django.db.models import Q\nfrom django.forms import CheckboxSelectMultiple, ValidationError\nfrom django.utils.translation import pgettext_lazy\nfrom django_filters import MultipleChoiceFilter, OrderingFilter, RangeFilter\n\nfrom ..core.filters import SortedFilterSet\nfrom .models import Product, ProductAttribute\n\nSORT_BY_FIELDS = OrderedDict([\n ('name', pgettext_lazy('Product list sorting option', 'name')),\n ('price', pgettext_lazy('Product list sorting option', 'price'))])\n\n\nclass ProductFilter(SortedFilterSet):\n sort_by = OrderingFilter(\n label=pgettext_lazy('Product list sorting form', 'Sort by'),\n fields=SORT_BY_FIELDS.keys(),\n field_labels=SORT_BY_FIELDS)\n price = RangeFilter(\n label=pgettext_lazy('Currency amount', 'Price'))\n\n class Meta:\n model = Product\n fields = []\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.product_attributes, self.variant_attributes = (\n self._get_attributes())\n self.filters.update(self._get_product_attributes_filters())\n self.filters.update(self._get_product_variants_attributes_filters())\n self.filters = OrderedDict(sorted(self.filters.items()))\n\n def _get_attributes(self):\n q_product_attributes = self._get_product_attributes_lookup()\n q_variant_attributes = self._get_variant_attributes_lookup()\n product_attributes = (\n ProductAttribute.objects.all()\n .prefetch_related('translations', 'values__translations')\n .filter(q_product_attributes)\n .distinct())\n variant_attributes = (\n ProductAttribute.objects.all()\n .prefetch_related('translations', 'values__translations')\n .filter(q_variant_attributes)\n .distinct())\n return product_attributes, variant_attributes\n\n def _get_product_attributes_lookup(self):\n raise NotImplementedError()\n\n def _get_variant_attributes_lookup(self):\n raise NotImplementedError()\n\n def _get_product_attributes_filters(self):\n filters = {}\n for attribute in self.product_attributes:\n filters[attribute.slug] = MultipleChoiceFilter(\n name='attributes__%s' % attribute.pk,\n label=attribute.translated.name,\n widget=CheckboxSelectMultiple,\n choices=self._get_attribute_choices(attribute))\n return filters\n\n def _get_product_variants_attributes_filters(self):\n filters = {}\n for attribute in self.variant_attributes:\n filters[attribute.slug] = MultipleChoiceFilter(\n name='variants__attributes__%s' % attribute.pk,\n label=attribute.translated.name,\n widget=CheckboxSelectMultiple,\n choices=self._get_attribute_choices(attribute))\n return filters\n\n def _get_attribute_choices(self, attribute):\n return [\n (choice.pk, choice.translated.name)\n for choice in attribute.values.all()]\n\n def validate_sort_by(self, value):\n if value.strip('-') not in SORT_BY_FIELDS:\n raise ValidationError(\n pgettext_lazy(\n 'Validation error for sort_by filter',\n '%(value)s is not a valid sorting option'),\n params={'value': value})\n\n\nclass ProductCategoryFilter(ProductFilter):\n def __init__(self, *args, **kwargs):\n self.category = kwargs.pop('category')\n super().__init__(*args, **kwargs)\n\n def _get_product_attributes_lookup(self):\n return Q(product_types__products__category=self.category)\n\n def _get_variant_attributes_lookup(self):\n return Q(product_variant_types__products__category=self.category)\n\n\nclass ProductCollectionFilter(ProductFilter):\n def __init__(self, *args, **kwargs):\n self.collection = kwargs.pop('collection')\n 
super().__init__(*args, **kwargs)\n\n def _get_product_attributes_lookup(self):\n return Q(product_types__products__collections=self.collection)\n\n def _get_variant_attributes_lookup(self):\n return Q(product_variant_types__products__collections=self.collection)\n", "path": "saleor/product/filters.py"}]} | 1,936 | 142 |
gh_patches_debug_1185 | rasdani/github-patches | git_diff | learningequality__kolibri-5872 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
update perseus to use new build config scheme
### Observed behavior
Follow-up from #5864: perseus needs to be updated to use the new buildconfig scheme. It currently builds but does not run.
### Errors and logs
Currently getting:
```
ERROR Internal Server Error: /en/user/
Traceback (most recent call last):
File "/Users/d/Projects/le/kolibri/kolibri/core/webpack/hooks.py", line 111, in _stats_file_content
with io.open(self._stats_file, mode="r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/d/Projects/le/kolibri/.venv/lib/python3.7/site-packages/kolibri_exercise_perseus_plugin/build/_stats.json'
```
### Context
current 0.13.0 develop branch
</issue>
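The error itself only says that the plugin's webpack stats file was never written. A small standalone check like the one below (the package name and relative path are taken from the traceback above and are assumptions for other plugins) can confirm whether the installed plugin actually contains a `build/_stats.json` after building:

```python
# Diagnostic sketch only; verifies the presence of the stats file the webpack
# hook is looking for, without importing Kolibri itself.
import importlib.util
import os


def has_webpack_stats(package="kolibri_exercise_perseus_plugin",
                      rel_path=os.path.join("build", "_stats.json")):
    spec = importlib.util.find_spec(package)
    if spec is None or not spec.submodule_search_locations:
        return False  # package not installed
    pkg_dir = next(iter(spec.submodule_search_locations))
    return os.path.exists(os.path.join(pkg_dir, rel_path))


print(has_webpack_stats())
```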
<code>
[start of packages/kolibri-tools/lib/webpack_json.py]
1 import argparse
2 import importlib
3 import json
4 import logging
5 import os
6 import sys
7 import tempfile
8
9 from pkg_resources import DistributionNotFound
10 from pkg_resources import get_distribution
11 from pkg_resources import resource_exists
12 from pkg_resources import resource_filename
13 from pkg_resources import resource_isdir
14 from pkg_resources import resource_listdir
15
16 logger = logging.getLogger("webpack_json")
17 logger.setLevel(level=logging.INFO)
18
19 BUILD_CONFIG = "buildConfig.js"
20
21
22 def load_plugins_from_file(file_path):
23 try:
24 import requests
25 except ImportError:
26 requests = None
27 # We have been passed a URL, not a local file path
28 if file_path.startswith("http"):
29 if requests is None:
30 raise ImportError("Requests is required to import plugins from urls")
31 print(
32 "Downloading plugins manifest from {file_path}".format(file_path=file_path)
33 )
34 _, path = tempfile.mkstemp(suffix=".txt", text=True)
35 with open(path, "w") as f:
36 r = requests.get(file_path)
37 f.write(r.content)
38 file_path = path
39 with open(file_path, "r") as f:
40 return [plugin.strip() for plugin in f.readlines() if plugin.strip()]
41
42
43 def expand_glob(build_item):
44 plugins = []
45 # Do a very simple check here, only deal with a single * at the end of something!
46 if (
47 len([item for item in build_item.split(".") if item == "*"]) > 1
48 or build_item.endswith("**")
49 or build_item == "*"
50 or not build_item.endswith("*")
51 ):
52 logging.error("Too many * paths, only use one per module spec")
53 return plugins
54 parent_module_path = ".".join(
55 [item for item in build_item.split(".") if item and item != "*"]
56 )
57 try:
58 for file in resource_listdir(parent_module_path, "."):
59 if resource_isdir(parent_module_path, file):
60 try:
61 child_module_path = parent_module_path + "." + file
62 plugin = plugin_data(child_module_path)
63 if plugin is not None:
64 plugins.append(plugin)
65 except ImportError:
66 continue
67 except OSError:
68 pass
69 return plugins
70
71
72 def plugin_data(module_path):
73 try:
74 if resource_exists(module_path, BUILD_CONFIG):
75 plugin_path = os.path.dirname(resource_filename(module_path, BUILD_CONFIG))
76 try:
77 version = get_distribution(module_path).version
78 except (DistributionNotFound, AttributeError):
79 try:
80 module = importlib.import_module(module_path)
81 version = module.__version__
82 except (ImportError, AttributeError):
83 import kolibri
84
85 version = kolibri.__version__
86 if module_path.startswith("kolibri."):
87 import kolibri
88
89 locale_data_folder = os.path.join(
90 os.path.dirname(kolibri.__file__), "locale", "en", "LC_MESSAGES"
91 )
92 # Is an external plugin, do otherwise!
93 else:
94 locale_data_folder = os.path.join(
95 plugin_path, "locale", "en", "LC_MESSAGES"
96 )
97 return {
98 "locale_data_folder": locale_data_folder,
99 "plugin_path": plugin_path,
100 "version": version,
101 }
102 # Python 3.{4,5,6} raises a NotImplementedError for an empty directory
103 # Python 3.7 raises a TypeError for an empty directory
104 except (NotImplementedError, TypeError):
105 pass
106 raise ImportError("No frontend build assets")
107
108
109 def initialize_plugins(build_list):
110 plugins = []
111 for build_item in build_list:
112 if "*" in build_item:
113 plugins += expand_glob(build_item)
114 elif build_item:
115 # No '*' in the module path, so just add it naively
116 plugin = plugin_data(build_item)
117 if plugin is not None:
118 plugins.append(plugin)
119 return plugins
120
121
122 def main():
123 parser = argparse.ArgumentParser()
124
125 parser.add_argument(
126 "--plugin_file",
127 help="the filepath to which you'd like to run plugins from",
128 type=str,
129 default=None,
130 )
131 parser.add_argument(
132 "--plugins",
133 help="provide a space separated list of plugins you'd like to run",
134 type=str,
135 nargs="*",
136 default=None,
137 )
138 parser.add_argument(
139 "--plugin_path",
140 help="provide a path to add to the Python path to enable import of the plugins",
141 type=str,
142 default=os.getcwd(),
143 )
144 parser.add_argument(
145 "-o", "--output_file", type=str, default=None, dest="output_file"
146 )
147 parser.add_argument("-v", "--verbose", default=False, action="store_true")
148 args = parser.parse_args()
149 build_list = []
150
151 if args.verbose:
152 logger.setLevel(logging.DEBUG)
153
154 plugin_path = os.path.realpath(args.plugin_path)
155
156 # Add our plugin_path to the path
157 sys.path.append(plugin_path)
158
159 # Put environment variable setting first to allow customized builds within buildkite through env vars
160 if "BUILD_TIME_PLUGINS" in os.environ and os.environ["BUILD_TIME_PLUGINS"]:
161 build_list = load_plugins_from_file(os.environ["BUILD_TIME_PLUGINS"])
162 elif args.plugin_file:
163 build_list = load_plugins_from_file(args.plugin_file)
164 elif args.plugins:
165 build_list = args.plugins
166
167 logger.info("Gathering relevant modules from {}".format(build_list))
168
169 result = initialize_plugins(build_list)
170
171 if args.output_file:
172 logger.info("Writing webpack_json output to {}".format(args.output_file))
173 with open(args.output_file, "w") as f:
174 json.dump(result, f)
175 else:
176 logger.info("No output file argument; writing webpack_json output to stdout.")
177 logger.info(json.dumps(result))
178
179 # Remove the plugin_path from the path to clean up
180 sys.path.remove(plugin_path)
181
182
183 if __name__ == "__main__":
184 main()
185
[end of packages/kolibri-tools/lib/webpack_json.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/packages/kolibri-tools/lib/webpack_json.py b/packages/kolibri-tools/lib/webpack_json.py
--- a/packages/kolibri-tools/lib/webpack_json.py
+++ b/packages/kolibri-tools/lib/webpack_json.py
@@ -15,6 +15,9 @@
logger = logging.getLogger("webpack_json")
logger.setLevel(level=logging.INFO)
+handler = logging.StreamHandler()
+handler.setLevel(logging.INFO)
+logger.addHandler(handler)
BUILD_CONFIG = "buildConfig.js"
| {"golden_diff": "diff --git a/packages/kolibri-tools/lib/webpack_json.py b/packages/kolibri-tools/lib/webpack_json.py\n--- a/packages/kolibri-tools/lib/webpack_json.py\n+++ b/packages/kolibri-tools/lib/webpack_json.py\n@@ -15,6 +15,9 @@\n \n logger = logging.getLogger(\"webpack_json\")\n logger.setLevel(level=logging.INFO)\n+handler = logging.StreamHandler()\n+handler.setLevel(logging.INFO)\n+logger.addHandler(handler)\n \n BUILD_CONFIG = \"buildConfig.js\"\n", "issue": "update perseus to use new build config scheme\n\r\n### Observed behavior\r\n\r\nfollow-up from #5864, need to update perseus to use new buildconfig. Currently builds but does not run.\r\n\r\n\r\n### Errors and logs\r\n\r\nCurrently getting:\r\n\r\n```\r\nERROR Internal Server Error: /en/user/\r\nTraceback (most recent call last):\r\n File \"/Users/d/Projects/le/kolibri/kolibri/core/webpack/hooks.py\", line 111, in _stats_file_content\r\n with io.open(self._stats_file, mode=\"r\", encoding=\"utf-8\") as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: '/Users/d/Projects/le/kolibri/.venv/lib/python3.7/site-packages/kolibri_exercise_perseus_plugin/build/_stats.json'\r\n```\r\n\r\n\r\n\r\n### Context\r\n\r\ncurrent 0.13.0 develop branch\r\n\n", "before_files": [{"content": "import argparse\nimport importlib\nimport json\nimport logging\nimport os\nimport sys\nimport tempfile\n\nfrom pkg_resources import DistributionNotFound\nfrom pkg_resources import get_distribution\nfrom pkg_resources import resource_exists\nfrom pkg_resources import resource_filename\nfrom pkg_resources import resource_isdir\nfrom pkg_resources import resource_listdir\n\nlogger = logging.getLogger(\"webpack_json\")\nlogger.setLevel(level=logging.INFO)\n\nBUILD_CONFIG = \"buildConfig.js\"\n\n\ndef load_plugins_from_file(file_path):\n try:\n import requests\n except ImportError:\n requests = None\n # We have been passed a URL, not a local file path\n if file_path.startswith(\"http\"):\n if requests is None:\n raise ImportError(\"Requests is required to import plugins from urls\")\n print(\n \"Downloading plugins manifest from {file_path}\".format(file_path=file_path)\n )\n _, path = tempfile.mkstemp(suffix=\".txt\", text=True)\n with open(path, \"w\") as f:\n r = requests.get(file_path)\n f.write(r.content)\n file_path = path\n with open(file_path, \"r\") as f:\n return [plugin.strip() for plugin in f.readlines() if plugin.strip()]\n\n\ndef expand_glob(build_item):\n plugins = []\n # Do a very simple check here, only deal with a single * at the end of something!\n if (\n len([item for item in build_item.split(\".\") if item == \"*\"]) > 1\n or build_item.endswith(\"**\")\n or build_item == \"*\"\n or not build_item.endswith(\"*\")\n ):\n logging.error(\"Too many * paths, only use one per module spec\")\n return plugins\n parent_module_path = \".\".join(\n [item for item in build_item.split(\".\") if item and item != \"*\"]\n )\n try:\n for file in resource_listdir(parent_module_path, \".\"):\n if resource_isdir(parent_module_path, file):\n try:\n child_module_path = parent_module_path + \".\" + file\n plugin = plugin_data(child_module_path)\n if plugin is not None:\n plugins.append(plugin)\n except ImportError:\n continue\n except OSError:\n pass\n return plugins\n\n\ndef plugin_data(module_path):\n try:\n if resource_exists(module_path, BUILD_CONFIG):\n plugin_path = os.path.dirname(resource_filename(module_path, BUILD_CONFIG))\n try:\n version = get_distribution(module_path).version\n except (DistributionNotFound, AttributeError):\n try:\n 
module = importlib.import_module(module_path)\n version = module.__version__\n except (ImportError, AttributeError):\n import kolibri\n\n version = kolibri.__version__\n if module_path.startswith(\"kolibri.\"):\n import kolibri\n\n locale_data_folder = os.path.join(\n os.path.dirname(kolibri.__file__), \"locale\", \"en\", \"LC_MESSAGES\"\n )\n # Is an external plugin, do otherwise!\n else:\n locale_data_folder = os.path.join(\n plugin_path, \"locale\", \"en\", \"LC_MESSAGES\"\n )\n return {\n \"locale_data_folder\": locale_data_folder,\n \"plugin_path\": plugin_path,\n \"version\": version,\n }\n # Python 3.{4,5,6} raises a NotImplementedError for an empty directory\n # Python 3.7 raises a TypeError for an empty directory\n except (NotImplementedError, TypeError):\n pass\n raise ImportError(\"No frontend build assets\")\n\n\ndef initialize_plugins(build_list):\n plugins = []\n for build_item in build_list:\n if \"*\" in build_item:\n plugins += expand_glob(build_item)\n elif build_item:\n # No '*' in the module path, so just add it naively\n plugin = plugin_data(build_item)\n if plugin is not None:\n plugins.append(plugin)\n return plugins\n\n\ndef main():\n parser = argparse.ArgumentParser()\n\n parser.add_argument(\n \"--plugin_file\",\n help=\"the filepath to which you'd like to run plugins from\",\n type=str,\n default=None,\n )\n parser.add_argument(\n \"--plugins\",\n help=\"provide a space separated list of plugins you'd like to run\",\n type=str,\n nargs=\"*\",\n default=None,\n )\n parser.add_argument(\n \"--plugin_path\",\n help=\"provide a path to add to the Python path to enable import of the plugins\",\n type=str,\n default=os.getcwd(),\n )\n parser.add_argument(\n \"-o\", \"--output_file\", type=str, default=None, dest=\"output_file\"\n )\n parser.add_argument(\"-v\", \"--verbose\", default=False, action=\"store_true\")\n args = parser.parse_args()\n build_list = []\n\n if args.verbose:\n logger.setLevel(logging.DEBUG)\n\n plugin_path = os.path.realpath(args.plugin_path)\n\n # Add our plugin_path to the path\n sys.path.append(plugin_path)\n\n # Put environment variable setting first to allow customized builds within buildkite through env vars\n if \"BUILD_TIME_PLUGINS\" in os.environ and os.environ[\"BUILD_TIME_PLUGINS\"]:\n build_list = load_plugins_from_file(os.environ[\"BUILD_TIME_PLUGINS\"])\n elif args.plugin_file:\n build_list = load_plugins_from_file(args.plugin_file)\n elif args.plugins:\n build_list = args.plugins\n\n logger.info(\"Gathering relevant modules from {}\".format(build_list))\n\n result = initialize_plugins(build_list)\n\n if args.output_file:\n logger.info(\"Writing webpack_json output to {}\".format(args.output_file))\n with open(args.output_file, \"w\") as f:\n json.dump(result, f)\n else:\n logger.info(\"No output file argument; writing webpack_json output to stdout.\")\n logger.info(json.dumps(result))\n\n # Remove the plugin_path from the path to clean up\n sys.path.remove(plugin_path)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "packages/kolibri-tools/lib/webpack_json.py"}]} | 2,436 | 106 |
gh_patches_debug_43571 | rasdani/github-patches | git_diff | pytorch__tnt-66 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PEP8 compliance?
I notice that sometimes snake_case and sometimes mixedCase are used for class methods:
```python
def on_end_epoch(state):
mlog.printMeter(mode="Train", iepoch=state['epoch'])
mlog.resetMeter(mode="Train", iepoch=state['epoch'])
```
Will the tnt codebase 100% comply with PEP8?
</issue>
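If the answer is to converge on snake_case, a common low-risk pattern is to rename the methods and keep the old mixedCase spellings as thin deprecated aliases. A minimal sketch of that pattern (not torchnet's actual implementation):

```python
# Sketch only; demonstrates the alias pattern, not the real MeterLogger.
import warnings


class MeterLogger:
    def print_meter(self, mode, iepoch):
        print(f"[{mode}] epoch {iepoch}")

    def printMeter(self, *args, **kwargs):  # legacy mixedCase spelling
        warnings.warn("printMeter() is deprecated; use print_meter()",
                      DeprecationWarning, stacklevel=2)
        return self.print_meter(*args, **kwargs)


MeterLogger().printMeter(mode="Train", iepoch=1)
```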
<code>
[start of example/mnist_with_meterlogger.py]
1 """ Run MNIST example and log to visdom
2 Notes:
3 - Visdom must be installed (pip works)
4 - the Visdom server must be running at start!
5
6 Example:
7 $ python -m visdom.server -port 8097 &
8 $ python mnist_with_visdom.py
9 """
10 from tqdm import tqdm
11 import torch
12 import torch.optim
13 import torchnet as tnt
14 from torch.autograd import Variable
15 import torch.nn.functional as F
16 from torch.nn.init import kaiming_normal
17 from torchnet.engine import Engine
18 from torchnet.logger import MeterLogger
19 from torchvision.datasets.mnist import MNIST
20
21
22 def get_iterator(mode):
23 ds = MNIST(root='./', download=True, train=mode)
24 data = getattr(ds, 'train_data' if mode else 'test_data')
25 labels = getattr(ds, 'train_labels' if mode else 'test_labels')
26 tds = tnt.dataset.TensorDataset([data, labels])
27 return tds.parallel(batch_size=128, num_workers=4, shuffle=mode)
28
29
30 def conv_init(ni, no, k):
31 return kaiming_normal(torch.Tensor(no, ni, k, k))
32
33
34 def linear_init(ni, no):
35 return kaiming_normal(torch.Tensor(no, ni))
36
37
38 def f(params, inputs, mode):
39 o = inputs.view(inputs.size(0), 1, 28, 28)
40 o = F.conv2d(o, params['conv0.weight'], params['conv0.bias'], stride=2)
41 o = F.relu(o)
42 o = F.conv2d(o, params['conv1.weight'], params['conv1.bias'], stride=2)
43 o = F.relu(o)
44 o = o.view(o.size(0), -1)
45 o = F.linear(o, params['linear2.weight'], params['linear2.bias'])
46 o = F.relu(o)
47 o = F.linear(o, params['linear3.weight'], params['linear3.bias'])
48 return o
49
50
51 def main():
52 params = {
53 'conv0.weight': conv_init(1, 50, 5), 'conv0.bias': torch.zeros(50),
54 'conv1.weight': conv_init(50, 50, 5), 'conv1.bias': torch.zeros(50),
55 'linear2.weight': linear_init(800, 512), 'linear2.bias': torch.zeros(512),
56 'linear3.weight': linear_init(512, 10), 'linear3.bias': torch.zeros(10),
57 }
58 params = {k: Variable(v, requires_grad=True) for k, v in params.items()}
59
60 optimizer = torch.optim.SGD(
61 params.values(), lr=0.01, momentum=0.9, weight_decay=0.0005)
62
63 engine = Engine()
64
65 mlog = MeterLogger(server='10.10.30.91', port=9917, nclass=10, title="mnist_meterlogger")
66
67 def h(sample):
68 inputs = Variable(sample[0].float() / 255.0)
69 targets = Variable(torch.LongTensor(sample[1]))
70 o = f(params, inputs, sample[2])
71 return F.cross_entropy(o, targets), o
72
73 def on_sample(state):
74 state['sample'].append(state['train'])
75
76 def on_forward(state):
77 loss = state['loss']
78 output = state['output']
79 target = state['sample'][1]
80 # online ploter
81 mlog.updateLoss(loss, meter='loss')
82 mlog.updateMeter(output, target, meters={'accuracy', 'map', 'confusion'})
83
84 def on_start_epoch(state):
85 mlog.timer.reset()
86 state['iterator'] = tqdm(state['iterator'])
87
88 def on_end_epoch(state):
89 mlog.printMeter(mode="Train", iepoch=state['epoch'])
90 mlog.resetMeter(mode="Train", iepoch=state['epoch'])
91
92 # do validation at the end of each epoch
93 engine.test(h, get_iterator(False))
94 mlog.printMeter(mode="Test", iepoch=state['epoch'])
95 mlog.resetMeter(mode="Test", iepoch=state['epoch'])
96
97 engine.hooks['on_sample'] = on_sample
98 engine.hooks['on_forward'] = on_forward
99 engine.hooks['on_start_epoch'] = on_start_epoch
100 engine.hooks['on_end_epoch'] = on_end_epoch
101 engine.train(h, get_iterator(True), maxepoch=10, optimizer=optimizer)
102
103
104 if __name__ == '__main__':
105 main()
106
[end of example/mnist_with_meterlogger.py]
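The example above is driven by torchnet's `Engine` hooks (`on_sample`, `on_forward`, `on_start_epoch`, `on_end_epoch`). The toy engine below is a simplified stand-in, not torchnet's real `Engine`, included only to show how registered hooks are fired with a shared state dict.

```python
class TinyEngine:
    """Minimal stand-in illustrating the hook pattern used above (not torchnet's Engine)."""

    def __init__(self):
        self.hooks = {}

    def _fire(self, name, state):
        # call the registered hook, if any, with the shared state dict
        hook = self.hooks.get(name)
        if hook is not None:
            hook(state)

    def train(self, maxepoch):
        state = {"epoch": 0, "train": True}
        for epoch in range(maxepoch):
            state["epoch"] = epoch
            self._fire("on_start_epoch", state)
            # ... batch loop with on_sample / on_forward would go here ...
            self._fire("on_end_epoch", state)


if __name__ == "__main__":
    engine = TinyEngine()
    engine.hooks["on_end_epoch"] = lambda state: print("finished epoch", state["epoch"])
    engine.train(maxepoch=2)
```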
[start of torchnet/logger/meterlogger.py]
1 # Copyright (c) 2017, Kui.
2 #
3 # [email protected]
4 # Tsinghua Univ.
5 # Modified at Dec 12 2017
6 #
7 import torch
8 import torchnet as tnt
9 from torchnet.logger import VisdomPlotLogger, VisdomLogger
10
11
12 class MeterLogger(object):
13
14 def __init__(self, server="http://localhost", port=8097, nclass=21, title="DNN"):
15 self.nclass = nclass
16 self.meter = {}
17 self.server = server
18 self.port = port
19 self.nclass = nclass
20 self.topk = 5 if nclass > 5 else nclass
21 self.title = title
22 self.logger = {'Train': {}, 'Test': {}}
23 self.timer = tnt.meter.TimeMeter(None)
24
25 def __ver2Tensor(self, target):
26 target_mat = torch.zeros(target.shape[0], self.nclass)
27 for i, j in enumerate(target):
28 target_mat[i][j] = 1
29 return target_mat
30
31 def __toTensor(self, var):
32 if isinstance(var, torch.autograd.Variable):
33 var = var.data
34 if not torch.is_tensor(var):
35 var = torch.from_numpy(var)
36 return var
37
38 def __addlogger(self, meter, ptype):
39 if ptype == 'line':
40 opts = {'title': self.title + ' Train ' + meter}
41 self.logger['Train'][meter] = VisdomPlotLogger(ptype, server=self.server, port=self.port, opts=opts)
42 opts = {'title': self.title + ' Test ' + meter}
43 self.logger['Test'][meter] = VisdomPlotLogger(ptype, server=self.server, port=self.port, opts=opts)
44 elif ptype == 'heatmap':
45 names = list(range(self.nclass))
46 opts = {'title': self.title + ' Train ' + meter, 'columnnames': names, 'rownames': names}
47 self.logger['Train'][meter] = VisdomLogger('heatmap', server=self.server, port=self.port, opts=opts)
48 opts = {'title': self.title + ' Test ' + meter, 'columnnames': names, 'rownames': names}
49 self.logger['Test'][meter] = VisdomLogger('heatmap', server=self.server, port=self.port, opts=opts)
50
51 def __addloss(self, meter):
52 self.meter[meter] = tnt.meter.AverageValueMeter()
53 self.__addlogger(meter, 'line')
54
55 def __addmeter(self, meter):
56 if meter == 'accuracy':
57 self.meter[meter] = tnt.meter.ClassErrorMeter(topk=(1, self.topk), accuracy=True)
58 self.__addlogger(meter, 'line')
59 elif meter == 'map':
60 self.meter[meter] = tnt.meter.mAPMeter()
61 self.__addlogger(meter, 'line')
62 elif meter == 'auc':
63 self.meter[meter] = tnt.meter.AUCMeter()
64 self.__addlogger(meter, 'line')
65 elif meter == 'confusion':
66 self.meter[meter] = tnt.meter.ConfusionMeter(self.nclass, normalized=True)
67 self.__addlogger(meter, 'heatmap')
68
69 def updateMeter(self, output, target, meters={'accuracy'}):
70 output = self.__toTensor(output)
71 target = self.__toTensor(target)
72 for meter in meters:
73 if meter not in self.meter.keys():
74 self.__addmeter(meter)
75 if meter in ['ap', 'map', 'confusion']:
76 target_th = self.__ver2Tensor(target)
77 self.meter[meter].add(output, target_th)
78 else:
79 self.meter[meter].add(output, target)
80
81 def updateLoss(self, loss, meter='loss'):
82 loss = self.__toTensor(loss)
83 if meter not in self.meter.keys():
84 self.__addloss(meter)
85 self.meter[meter].add(loss[0])
86
87 def resetMeter(self, iepoch, mode='Train'):
88 self.timer.reset()
89 for key in self.meter.keys():
90 val = self.meter[key].value()
91 val = val[0] if isinstance(val, (list, tuple)) else val
92 if key in ['confusion', 'histogram', 'image']:
93 self.logger[mode][key].log(val)
94 else:
95 self.logger[mode][key].log(iepoch, val)
96 self.meter[key].reset()
97
98 def printMeter(self, mode, iepoch, ibatch=1, totalbatch=1, meterlist=None):
99 pstr = "%s:\t[%d][%d/%d] \t"
100 tval = []
101 tval.extend([mode, iepoch, ibatch, totalbatch])
102 if meterlist is None:
103 meterlist = self.meter.keys()
104 for meter in meterlist:
105 if meter in ['confusion', 'histogram', 'image']:
106 continue
107 if meter == 'accuracy':
108 pstr += "Acc@1 %.2f%% \t Acc@" + str(self.topk) + " %.2f%% \t"
109 tval.extend([self.meter[meter].value()[0], self.meter[meter].value()[1]])
110 elif meter == 'map':
111 pstr += "mAP %.3f \t"
112 tval.extend([self.meter[meter].value()])
113 elif meter == 'auc':
114 pstr += "AUC %.3f \t"
115 tval.extend([self.meter[meter].value()])
116 else:
117 pstr += meter + " %.3f (%.3f)\t"
118 tval.extend([self.meter[meter].val, self.meter[meter].mean])
119 pstr += " %.2fs/its\t"
120 tval.extend([self.timer.value()])
121 print(pstr % tuple(tval))
122
[end of torchnet/logger/meterlogger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/example/mnist_with_meterlogger.py b/example/mnist_with_meterlogger.py
--- a/example/mnist_with_meterlogger.py
+++ b/example/mnist_with_meterlogger.py
@@ -78,21 +78,21 @@
output = state['output']
target = state['sample'][1]
# online ploter
- mlog.updateLoss(loss, meter='loss')
- mlog.updateMeter(output, target, meters={'accuracy', 'map', 'confusion'})
+ mlog.update_loss(loss, meter='loss')
+ mlog.update_meter(output, target, meters={'accuracy', 'map', 'confusion'})
def on_start_epoch(state):
mlog.timer.reset()
state['iterator'] = tqdm(state['iterator'])
def on_end_epoch(state):
- mlog.printMeter(mode="Train", iepoch=state['epoch'])
- mlog.resetMeter(mode="Train", iepoch=state['epoch'])
+ mlog.print_meter(mode="Train", iepoch=state['epoch'])
+ mlog.reset_meter(mode="Train", iepoch=state['epoch'])
# do validation at the end of each epoch
engine.test(h, get_iterator(False))
- mlog.printMeter(mode="Test", iepoch=state['epoch'])
- mlog.resetMeter(mode="Test", iepoch=state['epoch'])
+ mlog.print_meter(mode="Test", iepoch=state['epoch'])
+ mlog.reset_meter(mode="Test", iepoch=state['epoch'])
engine.hooks['on_sample'] = on_sample
engine.hooks['on_forward'] = on_forward
diff --git a/torchnet/logger/meterlogger.py b/torchnet/logger/meterlogger.py
--- a/torchnet/logger/meterlogger.py
+++ b/torchnet/logger/meterlogger.py
@@ -22,13 +22,13 @@
self.logger = {'Train': {}, 'Test': {}}
self.timer = tnt.meter.TimeMeter(None)
- def __ver2Tensor(self, target):
+ def _ver2tensor(self, target):
target_mat = torch.zeros(target.shape[0], self.nclass)
for i, j in enumerate(target):
target_mat[i][j] = 1
return target_mat
- def __toTensor(self, var):
+ def __to_tensor(self, var):
if isinstance(var, torch.autograd.Variable):
var = var.data
if not torch.is_tensor(var):
@@ -66,25 +66,25 @@
self.meter[meter] = tnt.meter.ConfusionMeter(self.nclass, normalized=True)
self.__addlogger(meter, 'heatmap')
- def updateMeter(self, output, target, meters={'accuracy'}):
- output = self.__toTensor(output)
- target = self.__toTensor(target)
+ def update_meter(self, output, target, meters={'accuracy'}):
+ output = self.__to_tensor(output)
+ target = self.__to_tensor(target)
for meter in meters:
if meter not in self.meter.keys():
self.__addmeter(meter)
if meter in ['ap', 'map', 'confusion']:
- target_th = self.__ver2Tensor(target)
+ target_th = self._ver2tensor(target)
self.meter[meter].add(output, target_th)
else:
self.meter[meter].add(output, target)
- def updateLoss(self, loss, meter='loss'):
- loss = self.__toTensor(loss)
+ def update_loss(self, loss, meter='loss'):
+ loss = self.__to_tensor(loss)
if meter not in self.meter.keys():
self.__addloss(meter)
self.meter[meter].add(loss[0])
- def resetMeter(self, iepoch, mode='Train'):
+ def reset_meter(self, iepoch, mode='Train'):
self.timer.reset()
for key in self.meter.keys():
val = self.meter[key].value()
@@ -95,7 +95,7 @@
self.logger[mode][key].log(iepoch, val)
self.meter[key].reset()
- def printMeter(self, mode, iepoch, ibatch=1, totalbatch=1, meterlist=None):
+ def print_meter(self, mode, iepoch, ibatch=1, totalbatch=1, meterlist=None):
pstr = "%s:\t[%d][%d/%d] \t"
tval = []
tval.extend([mode, iepoch, ibatch, totalbatch])
| {"golden_diff": "diff --git a/example/mnist_with_meterlogger.py b/example/mnist_with_meterlogger.py\n--- a/example/mnist_with_meterlogger.py\n+++ b/example/mnist_with_meterlogger.py\n@@ -78,21 +78,21 @@\n output = state['output']\n target = state['sample'][1]\n # online ploter\n- mlog.updateLoss(loss, meter='loss')\n- mlog.updateMeter(output, target, meters={'accuracy', 'map', 'confusion'})\n+ mlog.update_loss(loss, meter='loss')\n+ mlog.update_meter(output, target, meters={'accuracy', 'map', 'confusion'})\n \n def on_start_epoch(state):\n mlog.timer.reset()\n state['iterator'] = tqdm(state['iterator'])\n \n def on_end_epoch(state):\n- mlog.printMeter(mode=\"Train\", iepoch=state['epoch'])\n- mlog.resetMeter(mode=\"Train\", iepoch=state['epoch'])\n+ mlog.print_meter(mode=\"Train\", iepoch=state['epoch'])\n+ mlog.reset_meter(mode=\"Train\", iepoch=state['epoch'])\n \n # do validation at the end of each epoch\n engine.test(h, get_iterator(False))\n- mlog.printMeter(mode=\"Test\", iepoch=state['epoch'])\n- mlog.resetMeter(mode=\"Test\", iepoch=state['epoch'])\n+ mlog.print_meter(mode=\"Test\", iepoch=state['epoch'])\n+ mlog.reset_meter(mode=\"Test\", iepoch=state['epoch'])\n \n engine.hooks['on_sample'] = on_sample\n engine.hooks['on_forward'] = on_forward\ndiff --git a/torchnet/logger/meterlogger.py b/torchnet/logger/meterlogger.py\n--- a/torchnet/logger/meterlogger.py\n+++ b/torchnet/logger/meterlogger.py\n@@ -22,13 +22,13 @@\n self.logger = {'Train': {}, 'Test': {}}\n self.timer = tnt.meter.TimeMeter(None)\n \n- def __ver2Tensor(self, target):\n+ def _ver2tensor(self, target):\n target_mat = torch.zeros(target.shape[0], self.nclass)\n for i, j in enumerate(target):\n target_mat[i][j] = 1\n return target_mat\n \n- def __toTensor(self, var):\n+ def __to_tensor(self, var):\n if isinstance(var, torch.autograd.Variable):\n var = var.data\n if not torch.is_tensor(var):\n@@ -66,25 +66,25 @@\n self.meter[meter] = tnt.meter.ConfusionMeter(self.nclass, normalized=True)\n self.__addlogger(meter, 'heatmap')\n \n- def updateMeter(self, output, target, meters={'accuracy'}):\n- output = self.__toTensor(output)\n- target = self.__toTensor(target)\n+ def update_meter(self, output, target, meters={'accuracy'}):\n+ output = self.__to_tensor(output)\n+ target = self.__to_tensor(target)\n for meter in meters:\n if meter not in self.meter.keys():\n self.__addmeter(meter)\n if meter in ['ap', 'map', 'confusion']:\n- target_th = self.__ver2Tensor(target)\n+ target_th = self._ver2tensor(target)\n self.meter[meter].add(output, target_th)\n else:\n self.meter[meter].add(output, target)\n \n- def updateLoss(self, loss, meter='loss'):\n- loss = self.__toTensor(loss)\n+ def update_loss(self, loss, meter='loss'):\n+ loss = self.__to_tensor(loss)\n if meter not in self.meter.keys():\n self.__addloss(meter)\n self.meter[meter].add(loss[0])\n \n- def resetMeter(self, iepoch, mode='Train'):\n+ def reset_meter(self, iepoch, mode='Train'):\n self.timer.reset()\n for key in self.meter.keys():\n val = self.meter[key].value()\n@@ -95,7 +95,7 @@\n self.logger[mode][key].log(iepoch, val)\n self.meter[key].reset()\n \n- def printMeter(self, mode, iepoch, ibatch=1, totalbatch=1, meterlist=None):\n+ def print_meter(self, mode, iepoch, ibatch=1, totalbatch=1, meterlist=None):\n pstr = \"%s:\\t[%d][%d/%d] \\t\"\n tval = []\n tval.extend([mode, iepoch, ibatch, totalbatch])\n", "issue": "PEP8 compliance?\nI notice sometimes snake_case, sometimes mixedCase are used for class methods\r\n```python\r\ndef on_end_epoch(state):\r\n 
mlog.printMeter(mode=\"Train\", iepoch=state['epoch'])\r\n mlog.resetMeter(mode=\"Train\", iepoch=state['epoch'])\r\n```\r\nWIll tnt codebase 100% comply with PEP8?\n", "before_files": [{"content": "\"\"\" Run MNIST example and log to visdom\n Notes:\n - Visdom must be installed (pip works)\n - the Visdom server must be running at start!\n\n Example:\n $ python -m visdom.server -port 8097 &\n $ python mnist_with_visdom.py\n\"\"\"\nfrom tqdm import tqdm\nimport torch\nimport torch.optim\nimport torchnet as tnt\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\nfrom torch.nn.init import kaiming_normal\nfrom torchnet.engine import Engine\nfrom torchnet.logger import MeterLogger\nfrom torchvision.datasets.mnist import MNIST\n\n\ndef get_iterator(mode):\n ds = MNIST(root='./', download=True, train=mode)\n data = getattr(ds, 'train_data' if mode else 'test_data')\n labels = getattr(ds, 'train_labels' if mode else 'test_labels')\n tds = tnt.dataset.TensorDataset([data, labels])\n return tds.parallel(batch_size=128, num_workers=4, shuffle=mode)\n\n\ndef conv_init(ni, no, k):\n return kaiming_normal(torch.Tensor(no, ni, k, k))\n\n\ndef linear_init(ni, no):\n return kaiming_normal(torch.Tensor(no, ni))\n\n\ndef f(params, inputs, mode):\n o = inputs.view(inputs.size(0), 1, 28, 28)\n o = F.conv2d(o, params['conv0.weight'], params['conv0.bias'], stride=2)\n o = F.relu(o)\n o = F.conv2d(o, params['conv1.weight'], params['conv1.bias'], stride=2)\n o = F.relu(o)\n o = o.view(o.size(0), -1)\n o = F.linear(o, params['linear2.weight'], params['linear2.bias'])\n o = F.relu(o)\n o = F.linear(o, params['linear3.weight'], params['linear3.bias'])\n return o\n\n\ndef main():\n params = {\n 'conv0.weight': conv_init(1, 50, 5), 'conv0.bias': torch.zeros(50),\n 'conv1.weight': conv_init(50, 50, 5), 'conv1.bias': torch.zeros(50),\n 'linear2.weight': linear_init(800, 512), 'linear2.bias': torch.zeros(512),\n 'linear3.weight': linear_init(512, 10), 'linear3.bias': torch.zeros(10),\n }\n params = {k: Variable(v, requires_grad=True) for k, v in params.items()}\n\n optimizer = torch.optim.SGD(\n params.values(), lr=0.01, momentum=0.9, weight_decay=0.0005)\n\n engine = Engine()\n\n mlog = MeterLogger(server='10.10.30.91', port=9917, nclass=10, title=\"mnist_meterlogger\")\n\n def h(sample):\n inputs = Variable(sample[0].float() / 255.0)\n targets = Variable(torch.LongTensor(sample[1]))\n o = f(params, inputs, sample[2])\n return F.cross_entropy(o, targets), o\n\n def on_sample(state):\n state['sample'].append(state['train'])\n\n def on_forward(state):\n loss = state['loss']\n output = state['output']\n target = state['sample'][1]\n # online ploter\n mlog.updateLoss(loss, meter='loss')\n mlog.updateMeter(output, target, meters={'accuracy', 'map', 'confusion'})\n\n def on_start_epoch(state):\n mlog.timer.reset()\n state['iterator'] = tqdm(state['iterator'])\n\n def on_end_epoch(state):\n mlog.printMeter(mode=\"Train\", iepoch=state['epoch'])\n mlog.resetMeter(mode=\"Train\", iepoch=state['epoch'])\n\n # do validation at the end of each epoch\n engine.test(h, get_iterator(False))\n mlog.printMeter(mode=\"Test\", iepoch=state['epoch'])\n mlog.resetMeter(mode=\"Test\", iepoch=state['epoch'])\n\n engine.hooks['on_sample'] = on_sample\n engine.hooks['on_forward'] = on_forward\n engine.hooks['on_start_epoch'] = on_start_epoch\n engine.hooks['on_end_epoch'] = on_end_epoch\n engine.train(h, get_iterator(True), maxepoch=10, optimizer=optimizer)\n\n\nif __name__ == '__main__':\n main()\n", "path": 
"example/mnist_with_meterlogger.py"}, {"content": "# Copyright (c) 2017, Kui.\n#\n# [email protected]\n# Tsinghua Univ.\n# Modified at Dec 12 2017\n#\nimport torch\nimport torchnet as tnt\nfrom torchnet.logger import VisdomPlotLogger, VisdomLogger\n\n\nclass MeterLogger(object):\n\n def __init__(self, server=\"http://localhost\", port=8097, nclass=21, title=\"DNN\"):\n self.nclass = nclass\n self.meter = {}\n self.server = server\n self.port = port\n self.nclass = nclass\n self.topk = 5 if nclass > 5 else nclass\n self.title = title\n self.logger = {'Train': {}, 'Test': {}}\n self.timer = tnt.meter.TimeMeter(None)\n\n def __ver2Tensor(self, target):\n target_mat = torch.zeros(target.shape[0], self.nclass)\n for i, j in enumerate(target):\n target_mat[i][j] = 1\n return target_mat\n\n def __toTensor(self, var):\n if isinstance(var, torch.autograd.Variable):\n var = var.data\n if not torch.is_tensor(var):\n var = torch.from_numpy(var)\n return var\n\n def __addlogger(self, meter, ptype):\n if ptype == 'line':\n opts = {'title': self.title + ' Train ' + meter}\n self.logger['Train'][meter] = VisdomPlotLogger(ptype, server=self.server, port=self.port, opts=opts)\n opts = {'title': self.title + ' Test ' + meter}\n self.logger['Test'][meter] = VisdomPlotLogger(ptype, server=self.server, port=self.port, opts=opts)\n elif ptype == 'heatmap':\n names = list(range(self.nclass))\n opts = {'title': self.title + ' Train ' + meter, 'columnnames': names, 'rownames': names}\n self.logger['Train'][meter] = VisdomLogger('heatmap', server=self.server, port=self.port, opts=opts)\n opts = {'title': self.title + ' Test ' + meter, 'columnnames': names, 'rownames': names}\n self.logger['Test'][meter] = VisdomLogger('heatmap', server=self.server, port=self.port, opts=opts)\n\n def __addloss(self, meter):\n self.meter[meter] = tnt.meter.AverageValueMeter()\n self.__addlogger(meter, 'line')\n\n def __addmeter(self, meter):\n if meter == 'accuracy':\n self.meter[meter] = tnt.meter.ClassErrorMeter(topk=(1, self.topk), accuracy=True)\n self.__addlogger(meter, 'line')\n elif meter == 'map':\n self.meter[meter] = tnt.meter.mAPMeter()\n self.__addlogger(meter, 'line')\n elif meter == 'auc':\n self.meter[meter] = tnt.meter.AUCMeter()\n self.__addlogger(meter, 'line')\n elif meter == 'confusion':\n self.meter[meter] = tnt.meter.ConfusionMeter(self.nclass, normalized=True)\n self.__addlogger(meter, 'heatmap')\n\n def updateMeter(self, output, target, meters={'accuracy'}):\n output = self.__toTensor(output)\n target = self.__toTensor(target)\n for meter in meters:\n if meter not in self.meter.keys():\n self.__addmeter(meter)\n if meter in ['ap', 'map', 'confusion']:\n target_th = self.__ver2Tensor(target)\n self.meter[meter].add(output, target_th)\n else:\n self.meter[meter].add(output, target)\n\n def updateLoss(self, loss, meter='loss'):\n loss = self.__toTensor(loss)\n if meter not in self.meter.keys():\n self.__addloss(meter)\n self.meter[meter].add(loss[0])\n\n def resetMeter(self, iepoch, mode='Train'):\n self.timer.reset()\n for key in self.meter.keys():\n val = self.meter[key].value()\n val = val[0] if isinstance(val, (list, tuple)) else val\n if key in ['confusion', 'histogram', 'image']:\n self.logger[mode][key].log(val)\n else:\n self.logger[mode][key].log(iepoch, val)\n self.meter[key].reset()\n\n def printMeter(self, mode, iepoch, ibatch=1, totalbatch=1, meterlist=None):\n pstr = \"%s:\\t[%d][%d/%d] \\t\"\n tval = []\n tval.extend([mode, iepoch, ibatch, totalbatch])\n if meterlist is None:\n meterlist = 
self.meter.keys()\n for meter in meterlist:\n if meter in ['confusion', 'histogram', 'image']:\n continue\n if meter == 'accuracy':\n pstr += \"Acc@1 %.2f%% \\t Acc@\" + str(self.topk) + \" %.2f%% \\t\"\n tval.extend([self.meter[meter].value()[0], self.meter[meter].value()[1]])\n elif meter == 'map':\n pstr += \"mAP %.3f \\t\"\n tval.extend([self.meter[meter].value()])\n elif meter == 'auc':\n pstr += \"AUC %.3f \\t\"\n tval.extend([self.meter[meter].value()])\n else:\n pstr += meter + \" %.3f (%.3f)\\t\"\n tval.extend([self.meter[meter].val, self.meter[meter].mean])\n pstr += \" %.2fs/its\\t\"\n tval.extend([self.timer.value()])\n print(pstr % tuple(tval))\n", "path": "torchnet/logger/meterlogger.py"}]} | 3,423 | 1,020 |
gh_patches_debug_17472 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-3230 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Raise exception with helpful error on Client pickle.
Based on the discussion in #3191, raise an exception with a helpful error stating that `Client` classes are not pickleable.
Note to self: We probably want this in GAX also.
</issue>
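A standalone sketch of the requested behaviour, using the same `__getstate__` guard that the accepted diff below applies to `Client`; the class name here is a placeholder and not the real `google.cloud.client.Client`.

```python
import pickle
from pickle import PicklingError


class UnpicklableClient:
    """Placeholder class demonstrating the pickle guard, not the real Client."""

    def __getstate__(self):
        # Called by pickle during dumps(); raising here gives users an actionable error.
        raise PicklingError(
            "Pickling client objects is explicitly not supported. "
            "Clients have non-trivial state that is local and unpickleable."
        )


if __name__ == "__main__":
    try:
        pickle.dumps(UnpicklableClient())
    except PicklingError as exc:
        print("pickling rejected:", exc)
```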
<code>
[start of core/google/cloud/client.py]
1 # Copyright 2015 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Base classes for client used to interact with Google Cloud APIs."""
16
17 import google.auth.credentials
18 from google.oauth2 import service_account
19 import google_auth_httplib2
20 import six
21
22 from google.cloud._helpers import _determine_default_project
23 from google.cloud.credentials import get_credentials
24
25
26 _GOOGLE_AUTH_CREDENTIALS_HELP = (
27 'This library only supports credentials from google-auth-library-python. '
28 'See https://google-cloud-python.readthedocs.io/en/latest/'
29 'google-cloud-auth.html for help on authentication with this library.'
30 )
31
32
33 class _ClientFactoryMixin(object):
34 """Mixin to allow factories that create credentials.
35
36 .. note::
37
38 This class is virtual.
39 """
40
41 @classmethod
42 def from_service_account_json(cls, json_credentials_path, *args, **kwargs):
43 """Factory to retrieve JSON credentials while creating client.
44
45 :type json_credentials_path: str
46 :param json_credentials_path: The path to a private key file (this file
47 was given to you when you created the
48 service account). This file must contain
49 a JSON object with a private key and
50 other credentials information (downloaded
51 from the Google APIs console).
52
53 :type args: tuple
54 :param args: Remaining positional arguments to pass to constructor.
55
56 :type kwargs: dict
57 :param kwargs: Remaining keyword arguments to pass to constructor.
58
59 :rtype: :class:`google.cloud.pubsub.client.Client`
60 :returns: The client created with the retrieved JSON credentials.
61 :raises: :class:`TypeError` if there is a conflict with the kwargs
62 and the credentials created by the factory.
63 """
64 if 'credentials' in kwargs:
65 raise TypeError('credentials must not be in keyword arguments')
66 credentials = service_account.Credentials.from_service_account_file(
67 json_credentials_path)
68 kwargs['credentials'] = credentials
69 return cls(*args, **kwargs)
70
71
72 class Client(_ClientFactoryMixin):
73 """Client to bundle configuration needed for API requests.
74
75 Stores ``credentials`` and ``http`` object so that subclasses
76 can pass them along to a connection class.
77
78 If no value is passed in for ``http``, a :class:`httplib2.Http` object
79 will be created and authorized with the ``credentials``. If not, the
80 ``credentials`` and ``http`` need not be related.
81
82 Callers and subclasses may seek to use the private key from
83 ``credentials`` to sign data.
84
85 A custom (non-``httplib2``) HTTP object must have a ``request`` method
86 which accepts the following arguments:
87
88 * ``uri``
89 * ``method``
90 * ``body``
91 * ``headers``
92
93 In addition, ``redirections`` and ``connection_type`` may be used.
94
95 A custom ``http`` object will also need to be able to add a bearer token
96 to API requests and handle token refresh on 401 errors.
97
98 :type credentials: :class:`~google.auth.credentials.Credentials`
99 :param credentials: (Optional) The OAuth2 Credentials to use for this
100 client. If not passed (and if no ``http`` object is
101 passed), falls back to the default inferred from the
102 environment.
103
104 :type http: :class:`~httplib2.Http`
105 :param http: (Optional) HTTP object to make requests. Can be any object
106 that defines ``request()`` with the same interface as
107 :meth:`~httplib2.Http.request`. If not passed, an
108 ``http`` object is created that is bound to the
109 ``credentials`` for the current object.
110 """
111
112 SCOPE = None
113 """The scopes required for authenticating with a service.
114
115 Needs to be set by subclasses.
116 """
117
118 def __init__(self, credentials=None, http=None):
119 if (credentials is not None and
120 not isinstance(
121 credentials, google.auth.credentials.Credentials)):
122 raise ValueError(_GOOGLE_AUTH_CREDENTIALS_HELP)
123 if credentials is None and http is None:
124 credentials = get_credentials()
125 self._credentials = google.auth.credentials.with_scopes_if_required(
126 credentials, self.SCOPE)
127 self._http_internal = http
128
129 @property
130 def _http(self):
131 """Getter for object used for HTTP transport.
132
133 :rtype: :class:`~httplib2.Http`
134 :returns: An HTTP object.
135 """
136 if self._http_internal is None:
137 self._http_internal = google_auth_httplib2.AuthorizedHttp(
138 self._credentials)
139 return self._http_internal
140
141
142 class _ClientProjectMixin(object):
143 """Mixin to allow setting the project on the client.
144
145 :type project: str
146 :param project: the project which the client acts on behalf of. If not
147 passed falls back to the default inferred from the
148 environment.
149
150 :raises: :class:`EnvironmentError` if the project is neither passed in nor
151 set in the environment. :class:`ValueError` if the project value
152 is invalid.
153 """
154
155 def __init__(self, project=None):
156 project = self._determine_default(project)
157 if project is None:
158 raise EnvironmentError('Project was not passed and could not be '
159 'determined from the environment.')
160 if isinstance(project, six.binary_type):
161 project = project.decode('utf-8')
162 if not isinstance(project, six.string_types):
163 raise ValueError('Project must be a string.')
164 self.project = project
165
166 @staticmethod
167 def _determine_default(project):
168 """Helper: use default project detection."""
169 return _determine_default_project(project)
170
171
172 class ClientWithProject(Client, _ClientProjectMixin):
173 """Client that also stores a project.
174
175 :type project: str
176 :param project: the project which the client acts on behalf of. If not
177 passed falls back to the default inferred from the
178 environment.
179
180 :type credentials: :class:`~google.auth.credentials.Credentials`
181 :param credentials: (Optional) The OAuth2 Credentials to use for this
182 client. If not passed (and if no ``http`` object is
183 passed), falls back to the default inferred from the
184 environment.
185
186 :type http: :class:`~httplib2.Http`
187 :param http: (Optional) HTTP object to make requests. Can be any object
188 that defines ``request()`` with the same interface as
189 :meth:`~httplib2.Http.request`. If not passed, an
190 ``http`` object is created that is bound to the
191 ``credentials`` for the current object.
192
193 :raises: :class:`ValueError` if the project is neither passed in nor
194 set in the environment.
195 """
196
197 def __init__(self, project=None, credentials=None, http=None):
198 _ClientProjectMixin.__init__(self, project=project)
199 Client.__init__(self, credentials=credentials, http=http)
200
[end of core/google/cloud/client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/google/cloud/client.py b/core/google/cloud/client.py
--- a/core/google/cloud/client.py
+++ b/core/google/cloud/client.py
@@ -14,6 +14,8 @@
"""Base classes for client used to interact with Google Cloud APIs."""
+from pickle import PicklingError
+
import google.auth.credentials
from google.oauth2 import service_account
import google_auth_httplib2
@@ -126,6 +128,13 @@
credentials, self.SCOPE)
self._http_internal = http
+ def __getstate__(self):
+ """Explicitly state that clients are not pickleable."""
+ raise PicklingError('\n'.join([
+ 'Pickling client objects is explicitly not supported.',
+ 'Clients have non-trivial state that is local and unpickleable.',
+ ]))
+
@property
def _http(self):
"""Getter for object used for HTTP transport.
| {"golden_diff": "diff --git a/core/google/cloud/client.py b/core/google/cloud/client.py\n--- a/core/google/cloud/client.py\n+++ b/core/google/cloud/client.py\n@@ -14,6 +14,8 @@\n \n \"\"\"Base classes for client used to interact with Google Cloud APIs.\"\"\"\n \n+from pickle import PicklingError\n+\n import google.auth.credentials\n from google.oauth2 import service_account\n import google_auth_httplib2\n@@ -126,6 +128,13 @@\n credentials, self.SCOPE)\n self._http_internal = http\n \n+ def __getstate__(self):\n+ \"\"\"Explicitly state that clients are not pickleable.\"\"\"\n+ raise PicklingError('\\n'.join([\n+ 'Pickling client objects is explicitly not supported.',\n+ 'Clients have non-trivial state that is local and unpickleable.',\n+ ]))\n+\n @property\n def _http(self):\n \"\"\"Getter for object used for HTTP transport.\n", "issue": "Raise exception with helpful error on Client pickle.\nBased on the discussion in #3191, raise an exception with a helpful error stating that `Client` classes are not pickleable.\r\n\r\nNote to self: We probably want this in GAX also.\n", "before_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Base classes for client used to interact with Google Cloud APIs.\"\"\"\n\nimport google.auth.credentials\nfrom google.oauth2 import service_account\nimport google_auth_httplib2\nimport six\n\nfrom google.cloud._helpers import _determine_default_project\nfrom google.cloud.credentials import get_credentials\n\n\n_GOOGLE_AUTH_CREDENTIALS_HELP = (\n 'This library only supports credentials from google-auth-library-python. '\n 'See https://google-cloud-python.readthedocs.io/en/latest/'\n 'google-cloud-auth.html for help on authentication with this library.'\n)\n\n\nclass _ClientFactoryMixin(object):\n \"\"\"Mixin to allow factories that create credentials.\n\n .. note::\n\n This class is virtual.\n \"\"\"\n\n @classmethod\n def from_service_account_json(cls, json_credentials_path, *args, **kwargs):\n \"\"\"Factory to retrieve JSON credentials while creating client.\n\n :type json_credentials_path: str\n :param json_credentials_path: The path to a private key file (this file\n was given to you when you created the\n service account). 
This file must contain\n a JSON object with a private key and\n other credentials information (downloaded\n from the Google APIs console).\n\n :type args: tuple\n :param args: Remaining positional arguments to pass to constructor.\n\n :type kwargs: dict\n :param kwargs: Remaining keyword arguments to pass to constructor.\n\n :rtype: :class:`google.cloud.pubsub.client.Client`\n :returns: The client created with the retrieved JSON credentials.\n :raises: :class:`TypeError` if there is a conflict with the kwargs\n and the credentials created by the factory.\n \"\"\"\n if 'credentials' in kwargs:\n raise TypeError('credentials must not be in keyword arguments')\n credentials = service_account.Credentials.from_service_account_file(\n json_credentials_path)\n kwargs['credentials'] = credentials\n return cls(*args, **kwargs)\n\n\nclass Client(_ClientFactoryMixin):\n \"\"\"Client to bundle configuration needed for API requests.\n\n Stores ``credentials`` and ``http`` object so that subclasses\n can pass them along to a connection class.\n\n If no value is passed in for ``http``, a :class:`httplib2.Http` object\n will be created and authorized with the ``credentials``. If not, the\n ``credentials`` and ``http`` need not be related.\n\n Callers and subclasses may seek to use the private key from\n ``credentials`` to sign data.\n\n A custom (non-``httplib2``) HTTP object must have a ``request`` method\n which accepts the following arguments:\n\n * ``uri``\n * ``method``\n * ``body``\n * ``headers``\n\n In addition, ``redirections`` and ``connection_type`` may be used.\n\n A custom ``http`` object will also need to be able to add a bearer token\n to API requests and handle token refresh on 401 errors.\n\n :type credentials: :class:`~google.auth.credentials.Credentials`\n :param credentials: (Optional) The OAuth2 Credentials to use for this\n client. If not passed (and if no ``http`` object is\n passed), falls back to the default inferred from the\n environment.\n\n :type http: :class:`~httplib2.Http`\n :param http: (Optional) HTTP object to make requests. Can be any object\n that defines ``request()`` with the same interface as\n :meth:`~httplib2.Http.request`. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n \"\"\"\n\n SCOPE = None\n \"\"\"The scopes required for authenticating with a service.\n\n Needs to be set by subclasses.\n \"\"\"\n\n def __init__(self, credentials=None, http=None):\n if (credentials is not None and\n not isinstance(\n credentials, google.auth.credentials.Credentials)):\n raise ValueError(_GOOGLE_AUTH_CREDENTIALS_HELP)\n if credentials is None and http is None:\n credentials = get_credentials()\n self._credentials = google.auth.credentials.with_scopes_if_required(\n credentials, self.SCOPE)\n self._http_internal = http\n\n @property\n def _http(self):\n \"\"\"Getter for object used for HTTP transport.\n\n :rtype: :class:`~httplib2.Http`\n :returns: An HTTP object.\n \"\"\"\n if self._http_internal is None:\n self._http_internal = google_auth_httplib2.AuthorizedHttp(\n self._credentials)\n return self._http_internal\n\n\nclass _ClientProjectMixin(object):\n \"\"\"Mixin to allow setting the project on the client.\n\n :type project: str\n :param project: the project which the client acts on behalf of. If not\n passed falls back to the default inferred from the\n environment.\n\n :raises: :class:`EnvironmentError` if the project is neither passed in nor\n set in the environment. 
:class:`ValueError` if the project value\n is invalid.\n \"\"\"\n\n def __init__(self, project=None):\n project = self._determine_default(project)\n if project is None:\n raise EnvironmentError('Project was not passed and could not be '\n 'determined from the environment.')\n if isinstance(project, six.binary_type):\n project = project.decode('utf-8')\n if not isinstance(project, six.string_types):\n raise ValueError('Project must be a string.')\n self.project = project\n\n @staticmethod\n def _determine_default(project):\n \"\"\"Helper: use default project detection.\"\"\"\n return _determine_default_project(project)\n\n\nclass ClientWithProject(Client, _ClientProjectMixin):\n \"\"\"Client that also stores a project.\n\n :type project: str\n :param project: the project which the client acts on behalf of. If not\n passed falls back to the default inferred from the\n environment.\n\n :type credentials: :class:`~google.auth.credentials.Credentials`\n :param credentials: (Optional) The OAuth2 Credentials to use for this\n client. If not passed (and if no ``http`` object is\n passed), falls back to the default inferred from the\n environment.\n\n :type http: :class:`~httplib2.Http`\n :param http: (Optional) HTTP object to make requests. Can be any object\n that defines ``request()`` with the same interface as\n :meth:`~httplib2.Http.request`. If not passed, an\n ``http`` object is created that is bound to the\n ``credentials`` for the current object.\n\n :raises: :class:`ValueError` if the project is neither passed in nor\n set in the environment.\n \"\"\"\n\n def __init__(self, project=None, credentials=None, http=None):\n _ClientProjectMixin.__init__(self, project=project)\n Client.__init__(self, credentials=credentials, http=http)\n", "path": "core/google/cloud/client.py"}]} | 2,705 | 205 |
gh_patches_debug_13947 | rasdani/github-patches | git_diff | uccser__cs-unplugged-1466 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HTML can be inserted into a user's Plugging it in challenge template
### The steps are:
- [Open any programming challenge on PII](https://www.csunplugged.org/en/plugging-it-in/binary-numbers/how-binary-digits-work/binary-numbers-no-calculations/)
- Submit this code in the answer box:
- `</script><script type="text/javascript">console.log("hi")</script>`
- It must be all on one line
- Adapt to your desired level of maliciousness
- Refresh the page
When the user re-requests the page, the submitted 'code' is loaded [in the HTML template](https://github.com/uccser/cs-unplugged/blob/develop/csunplugged/templates/plugging_it_in/programming-challenge.html#L191). The first `</script>` closes the script that assigns the variables, and what follows is loaded as a new HTML script tag. The resulting HTML is sent to the user and executes 'as normal'.
I haven't been able to submit code that executes without also breaking every PII challenge page. When the page does break, deleting your cookies for CSU allows you to get it working again.
### Three potential fixes
- [Escape the submitted code.](https://docs.djangoproject.com/en/3.1/ref/utils/#module-django.utils.html)
- It was already escaped, the problem is that [it gets un-escaped](https://docs.djangoproject.com/en/2.2/ref/templates/language/#automatic-html-escaping) in order to load as a variable.
- So we would need to un-escape it in JS rather than HTML, and doing so will be tricky
(`'}}<` etc is loaded into the answer box)
- Don't load the user submitted code into the HTML template, and instead request it in JS
- Will be slower, another request is needed
- Load the submitted code in the HTML, not as a variable, but straight into the <textarea> answer box.
- This is how it's done in CodeWOF
  - Since it's in a <textarea>, everything is rendered as plain text (a minimal sketch of this approach follows the issue text)
</issue>
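For illustration only, the standard-library sketch below shows why the raw interpolation breaks and how the escaped <textarea> route avoids it. It is not the project's template code, and note that the accepted change shown further down only adds a `previous_submission` context variable in the view; the corresponding template change is not included here.

```python
import html

submitted = '</script><script type="text/javascript">console.log("hi")</script>'

# Unsafe pattern: dropping raw user input into an inline <script> block.
# The first </script> inside `submitted` terminates the block, so the rest is
# parsed as a brand-new script tag (the injection described above).
unsafe_page = '<script>var saved = "%s";</script>' % submitted

# Safer pattern (the third option above): render the escaped submission
# directly inside a <textarea>, where the browser treats it as plain text.
safe_page = "<textarea>%s</textarea>" % html.escape(submitted)

print(unsafe_page)
print(safe_page)
```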
<code>
[start of csunplugged/config/__init__.py]
1 """Module for Django system configuration."""
2
3 __version__ = "6.0.0"
4
[end of csunplugged/config/__init__.py]
[start of csunplugged/plugging_it_in/views.py]
1 """Views for the plugging_it_in application."""
2
3 from django.http import HttpResponse
4 from django.http import Http404
5
6 import json
7 import requests
8
9 from django.shortcuts import get_object_or_404
10 from django.views import generic
11 from django.views import View
12 from django.urls import reverse
13 from django.conf import settings
14 from django.core.exceptions import ObjectDoesNotExist
15 from utils.translated_first import translated_first
16 from utils.group_lessons_by_age import group_lessons_by_age
17
18 from topics.models import (
19 Topic,
20 ProgrammingChallenge,
21 Lesson
22 )
23
24
25 class IndexView(generic.ListView):
26 """View for the topics application homepage."""
27
28 template_name = "plugging_it_in/index.html"
29 context_object_name = "programming_topics"
30
31 def get_queryset(self):
32 """Get queryset of all topics.
33
34 Returns:
35 Queryset of Topic objects ordered by name.
36 """
37 programming_topics = Topic.objects.order_by(
38 "name"
39 ).exclude(
40 programming_challenges__isnull=True
41 ).prefetch_related(
42 "programming_challenges",
43 "lessons",
44 )
45 return translated_first(programming_topics)
46
47 def get_context_data(self, **kwargs):
48 """Provide the context data for the index view.
49
50 Returns:
51 Dictionary of context data.
52 """
53 # Call the base implementation first to get a context
54 context = super(IndexView, self).get_context_data(**kwargs)
55 for topic in self.object_list:
56 topic.grouped_lessons = group_lessons_by_age(
57 topic.lessons.all(),
58 only_programming_exercises=True
59 )
60 return context
61
62
63 class AboutView(generic.TemplateView):
64 """View for the about page that renders from a template."""
65
66 template_name = "plugging_it_in/about.html"
67
68
69 class ProgrammingChallengeListView(generic.DetailView):
70 """View showing all the programming exercises for a specific lesson."""
71
72 model = Lesson
73 template_name = "plugging_it_in/lesson.html"
74
75 def get_object(self, **kwargs):
76 """Retrieve object for the lesson view.
77
78 Returns:
79 Lesson object, or raises 404 error if not found.
80 """
81 return get_object_or_404(
82 self.model.objects.select_related(),
83 topic__slug=self.kwargs.get("topic_slug", None),
84 slug=self.kwargs.get("lesson_slug", None),
85 )
86
87 def get_context_data(self, **kwargs):
88 """Provide the context data for the programming challenge list view.
89
90 Returns:
91 Dictionary of context data.
92 """
93 # Call the base implementation first to get a context
94 context = super(ProgrammingChallengeListView, self).get_context_data(**kwargs)
95
96 context["topic"] = self.object.topic
97
98 context["lesson"] = self.object
99
100 # Add in a QuerySet of all the connected programming exercises for this topic
101 context["programming_challenges"] = self.object.retrieve_related_programming_challenges(
102 ).prefetch_related('implementations')
103 return context
104
105
106 class ProgrammingChallengeView(generic.DetailView):
107 """View for a specific programming challenge."""
108
109 model = ProgrammingChallenge
110 template_name = "plugging_it_in/programming-challenge.html"
111 context_object_name = "programming_challenge"
112
113 def get_object(self, **kwargs):
114 """Retrieve object for the programming challenge view.
115
116 Returns:
117 ProgrammingChallenge object, or raises 404 error if not found.
118 """
119 return get_object_or_404(
120 self.model.objects.select_related(),
121 topic__slug=self.kwargs.get("topic_slug", None),
122 slug=self.kwargs.get("programming_challenge_slug", None)
123 )
124
125 def get_context_data(self, **kwargs):
126 """Provide the context data for the programming challenge view.
127
128 Returns:
129 Dictionary of context data.
130 """
131 # Call the base implementation first to get a context
132 context = super(ProgrammingChallengeView, self).get_context_data(**kwargs)
133
134 context["topic"] = self.object.topic
135
136 try:
137 lesson_slug = self.kwargs.get("lesson_slug", None)
138 lesson = Lesson.objects.get(slug=lesson_slug)
139 context["lesson"] = lesson
140 challlenges = lesson.retrieve_related_programming_challenges("Python")
141 context["programming_challenges"] = challlenges
142 context["programming_exercises_json"] = json.dumps(list(challlenges.values()))
143 except ObjectDoesNotExist:
144 raise Http404("Lesson does not exist")
145
146 context["implementations"] = self.object.ordered_implementations()
147
148 related_test_cases = self.object.related_test_cases()
149 context["test_cases_json"] = json.dumps(list(related_test_cases.values()))
150 context["test_cases"] = related_test_cases
151 context["jobe_proxy_url"] = reverse('plugging_it_in:jobe_proxy')
152 context["saved_attempts"] = self.request.session.get('saved_attempts', {})
153
154 return context
155
156
157 class JobeProxyView(View):
158 """Proxy for Jobe Server."""
159
160 def post(self, request, *args, **kwargs):
161 """Forward on request to Jobe from the frontend and adds API key if this is needed.
162
163 Returns:
164 The response from the Jobe server.
165 """
166 # Extracting data from the request body
167 body_unicode = request.body.decode('utf-8')
168 body = json.dumps(json.loads(body_unicode))
169
170 headers = {"Content-type": "application/json; charset=utf-8",
171 "Accept": "application/json"}
172
173 # Set API key for production
174 if hasattr(settings, 'JOBE_API_KEY'):
175 headers["X-API-KEY"] = settings.JOBE_API_KEY
176
177 response = requests.post(settings.JOBE_SERVER_URL + "/jobe/index.php/restapi/runs/",
178 data=body, headers=headers)
179 return HttpResponse(response.text)
180
181
182 class SaveAttemptView(View):
183 """View to save the users challenge attempt."""
184
185 def post(self, request):
186 """Save the users attempt to a Django session."""
187 body_unicode = request.body.decode('utf-8')
188 body = json.loads(body_unicode)
189
190 request.session['saved_attempts'] = request.session.get('saved_attempts', {})
191
192 # To stop a "passed" or "failed" status being overridden by "started"
193 if (not (body["status"] == "started"
194 and request.session.get('saved_attempts', {}).get(body["challenge"], {}).get("status", "")
195 in {'passed', 'failed'})
196 and body["attempt"] != ""):
197 request.session['saved_attempts'][body["challenge"]] = {
198 "status": body["status"],
199 "code": body["attempt"]
200 }
201 return HttpResponse("Saved the attempt.")
202 else:
203 return HttpResponse("Response does not need to be saved.")
204
[end of csunplugged/plugging_it_in/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/csunplugged/config/__init__.py b/csunplugged/config/__init__.py
--- a/csunplugged/config/__init__.py
+++ b/csunplugged/config/__init__.py
@@ -1,3 +1,3 @@
"""Module for Django system configuration."""
-__version__ = "6.0.0"
+__version__ = "6.0.1"
diff --git a/csunplugged/plugging_it_in/views.py b/csunplugged/plugging_it_in/views.py
--- a/csunplugged/plugging_it_in/views.py
+++ b/csunplugged/plugging_it_in/views.py
@@ -150,6 +150,10 @@
context["test_cases"] = related_test_cases
context["jobe_proxy_url"] = reverse('plugging_it_in:jobe_proxy')
context["saved_attempts"] = self.request.session.get('saved_attempts', {})
+ try:
+ context["previous_submission"] = context["saved_attempts"][self.object.slug]['code']
+ except KeyError:
+ context["previous_submission"] = ''
return context
| {"golden_diff": "diff --git a/csunplugged/config/__init__.py b/csunplugged/config/__init__.py\n--- a/csunplugged/config/__init__.py\n+++ b/csunplugged/config/__init__.py\n@@ -1,3 +1,3 @@\n \"\"\"Module for Django system configuration.\"\"\"\n \n-__version__ = \"6.0.0\"\n+__version__ = \"6.0.1\"\ndiff --git a/csunplugged/plugging_it_in/views.py b/csunplugged/plugging_it_in/views.py\n--- a/csunplugged/plugging_it_in/views.py\n+++ b/csunplugged/plugging_it_in/views.py\n@@ -150,6 +150,10 @@\n context[\"test_cases\"] = related_test_cases\n context[\"jobe_proxy_url\"] = reverse('plugging_it_in:jobe_proxy')\n context[\"saved_attempts\"] = self.request.session.get('saved_attempts', {})\n+ try:\n+ context[\"previous_submission\"] = context[\"saved_attempts\"][self.object.slug]['code']\n+ except KeyError:\n+ context[\"previous_submission\"] = ''\n \n return context\n", "issue": "HTML can be inserted into a user's Plugging it in challenge templates\n### The steps are:\r\n- [Open any programming challenge on PII](https://www.csunplugged.org/en/plugging-it-in/binary-numbers/how-binary-digits-work/binary-numbers-no-calculations/)\r\n- Submit this code in the answer box:\r\n - `</script><script type=\"text/javascript\">console.log(\"hi\")</script>`\r\n - It must be all on one line\r\n - Adapt to your desired level of maliciousness\r\n- Refresh the page\r\n\r\nWhen the user re-requests the page, the submitted 'code' is loaded [in the HTML template](https://github.com/uccser/cs-unplugged/blob/develop/csunplugged/templates/plugging_it_in/programming-challenge.html#L191). The first `</script>` closes the script that assigns the variables, and what follows is loaded as a new HTML script tag. The resulting HTML is sent to the user and executes 'as normal'.\r\n\r\nI haven't been able to submit code that executes while also not breaking every PII challenge page. 
When the page does break, deleting your cookies for CSU allows you to get it working again\r\n\r\n### Three potential fixes\r\n- [Escape the submitted code.](https://docs.djangoproject.com/en/3.1/ref/utils/#module-django.utils.html)\r\n - It was already escaped, the problem is that [it gets un-escaped](https://docs.djangoproject.com/en/2.2/ref/templates/language/#automatic-html-escaping) in order to load as a variable.\r\n - So we would need to un-escape it in JS rather than HTML, and doing so will be tricky\r\n(`'}}<` etc is loaded into the answer box)\r\n- Don't load the user submitted code into the HTML template, and instead request it in JS\r\n - Will be slower, another request is needed\r\n- Load the submitted code in the HTML, not as a variable, but straight into the <textarea> answer box.\r\n - This is how it's done in CodeWOF\r\n - Since it's in a <textarea> everything is rendered as plaintext\n", "before_files": [{"content": "\"\"\"Module for Django system configuration.\"\"\"\n\n__version__ = \"6.0.0\"\n", "path": "csunplugged/config/__init__.py"}, {"content": "\"\"\"Views for the plugging_it_in application.\"\"\"\n\nfrom django.http import HttpResponse\nfrom django.http import Http404\n\nimport json\nimport requests\n\nfrom django.shortcuts import get_object_or_404\nfrom django.views import generic\nfrom django.views import View\nfrom django.urls import reverse\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom utils.translated_first import translated_first\nfrom utils.group_lessons_by_age import group_lessons_by_age\n\nfrom topics.models import (\n Topic,\n ProgrammingChallenge,\n Lesson\n)\n\n\nclass IndexView(generic.ListView):\n \"\"\"View for the topics application homepage.\"\"\"\n\n template_name = \"plugging_it_in/index.html\"\n context_object_name = \"programming_topics\"\n\n def get_queryset(self):\n \"\"\"Get queryset of all topics.\n\n Returns:\n Queryset of Topic objects ordered by name.\n \"\"\"\n programming_topics = Topic.objects.order_by(\n \"name\"\n ).exclude(\n programming_challenges__isnull=True\n ).prefetch_related(\n \"programming_challenges\",\n \"lessons\",\n )\n return translated_first(programming_topics)\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the index view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(IndexView, self).get_context_data(**kwargs)\n for topic in self.object_list:\n topic.grouped_lessons = group_lessons_by_age(\n topic.lessons.all(),\n only_programming_exercises=True\n )\n return context\n\n\nclass AboutView(generic.TemplateView):\n \"\"\"View for the about page that renders from a template.\"\"\"\n\n template_name = \"plugging_it_in/about.html\"\n\n\nclass ProgrammingChallengeListView(generic.DetailView):\n \"\"\"View showing all the programming exercises for a specific lesson.\"\"\"\n\n model = Lesson\n template_name = \"plugging_it_in/lesson.html\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the lesson view.\n\n Returns:\n Lesson object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n slug=self.kwargs.get(\"lesson_slug\", None),\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the programming challenge list view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to 
get a context\n context = super(ProgrammingChallengeListView, self).get_context_data(**kwargs)\n\n context[\"topic\"] = self.object.topic\n\n context[\"lesson\"] = self.object\n\n # Add in a QuerySet of all the connected programming exercises for this topic\n context[\"programming_challenges\"] = self.object.retrieve_related_programming_challenges(\n ).prefetch_related('implementations')\n return context\n\n\nclass ProgrammingChallengeView(generic.DetailView):\n \"\"\"View for a specific programming challenge.\"\"\"\n\n model = ProgrammingChallenge\n template_name = \"plugging_it_in/programming-challenge.html\"\n context_object_name = \"programming_challenge\"\n\n def get_object(self, **kwargs):\n \"\"\"Retrieve object for the programming challenge view.\n\n Returns:\n ProgrammingChallenge object, or raises 404 error if not found.\n \"\"\"\n return get_object_or_404(\n self.model.objects.select_related(),\n topic__slug=self.kwargs.get(\"topic_slug\", None),\n slug=self.kwargs.get(\"programming_challenge_slug\", None)\n )\n\n def get_context_data(self, **kwargs):\n \"\"\"Provide the context data for the programming challenge view.\n\n Returns:\n Dictionary of context data.\n \"\"\"\n # Call the base implementation first to get a context\n context = super(ProgrammingChallengeView, self).get_context_data(**kwargs)\n\n context[\"topic\"] = self.object.topic\n\n try:\n lesson_slug = self.kwargs.get(\"lesson_slug\", None)\n lesson = Lesson.objects.get(slug=lesson_slug)\n context[\"lesson\"] = lesson\n challlenges = lesson.retrieve_related_programming_challenges(\"Python\")\n context[\"programming_challenges\"] = challlenges\n context[\"programming_exercises_json\"] = json.dumps(list(challlenges.values()))\n except ObjectDoesNotExist:\n raise Http404(\"Lesson does not exist\")\n\n context[\"implementations\"] = self.object.ordered_implementations()\n\n related_test_cases = self.object.related_test_cases()\n context[\"test_cases_json\"] = json.dumps(list(related_test_cases.values()))\n context[\"test_cases\"] = related_test_cases\n context[\"jobe_proxy_url\"] = reverse('plugging_it_in:jobe_proxy')\n context[\"saved_attempts\"] = self.request.session.get('saved_attempts', {})\n\n return context\n\n\nclass JobeProxyView(View):\n \"\"\"Proxy for Jobe Server.\"\"\"\n\n def post(self, request, *args, **kwargs):\n \"\"\"Forward on request to Jobe from the frontend and adds API key if this is needed.\n\n Returns:\n The response from the Jobe server.\n \"\"\"\n # Extracting data from the request body\n body_unicode = request.body.decode('utf-8')\n body = json.dumps(json.loads(body_unicode))\n\n headers = {\"Content-type\": \"application/json; charset=utf-8\",\n \"Accept\": \"application/json\"}\n\n # Set API key for production\n if hasattr(settings, 'JOBE_API_KEY'):\n headers[\"X-API-KEY\"] = settings.JOBE_API_KEY\n\n response = requests.post(settings.JOBE_SERVER_URL + \"/jobe/index.php/restapi/runs/\",\n data=body, headers=headers)\n return HttpResponse(response.text)\n\n\nclass SaveAttemptView(View):\n \"\"\"View to save the users challenge attempt.\"\"\"\n\n def post(self, request):\n \"\"\"Save the users attempt to a Django session.\"\"\"\n body_unicode = request.body.decode('utf-8')\n body = json.loads(body_unicode)\n\n request.session['saved_attempts'] = request.session.get('saved_attempts', {})\n\n # To stop a \"passed\" or \"failed\" status being overridden by \"started\"\n if (not (body[\"status\"] == \"started\"\n and request.session.get('saved_attempts', {}).get(body[\"challenge\"], 
{}).get(\"status\", \"\")\n in {'passed', 'failed'})\n and body[\"attempt\"] != \"\"):\n request.session['saved_attempts'][body[\"challenge\"]] = {\n \"status\": body[\"status\"],\n \"code\": body[\"attempt\"]\n }\n return HttpResponse(\"Saved the attempt.\")\n else:\n return HttpResponse(\"Response does not need to be saved.\")\n", "path": "csunplugged/plugging_it_in/views.py"}]} | 3,001 | 246 |
gh_patches_debug_23066 | rasdani/github-patches | git_diff | mlflow__mlflow-3204 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[WIP] Remove double model directory
## What changes are proposed in this pull request?
When a customer uses an FTP artifact store, the artifact uri is incorrect.
Correct: `ftp://0.0.0.0:9821/0/543736e8ba4a4b93a84662cf96b043b4/artifacts/model/data`
Incorrect: `ftp://0.0.0.0:9821/0/543736e8ba4a4b93a84662cf96b043b4/artifacts/model/model/data`
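For illustration only, a minimal sketch (with made-up values and a placeholder run id) of where the duplicated segment comes from: `log_artifacts` joins the local directory's basename onto a destination path that already ends with the same name.
```python
# Illustrative sketch only -- paths and run id are made up.
import os
import posixpath

artifacts_root = "/0/<run_id>/artifacts"   # self.path of the FTP artifact repository
artifact_path = "model"                    # requested artifact subdirectory
local_dir = "/tmp/tmp1234/model"           # hypothetical local directory being logged

dest_path = posixpath.join(artifacts_root, artifact_path)
# log_artifacts() then appends the basename of local_dir again:
dest_path = posixpath.join(dest_path, os.path.split(local_dir)[1])
print(dest_path)  # -> /0/<run_id>/artifacts/model/model  (doubled segment)
```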
## How is this patch tested?
Manual Testing
1. Set up an FTP server and ran a Keras model using FTP as the artifact store
<img width="1843" alt="Screen Shot 2020-04-09 at 3 37 20 PM" src="https://user-images.githubusercontent.com/58712524/78946676-3873d180-7a78-11ea-936d-e10c42df90b1.png">
Automated
## Release Notes
### Is this a user-facing change?
- [x] No. You can skip the rest of this section.
- [ ] Yes. Give a description of this change to be included in the release notes for MLflow users.
(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
### What component(s) does this PR affect?
- [ ] UI
- [ ] CLI
- [ ] API
- [ ] REST-API
- [ ] Examples
- [ ] Docs
- [x] Tracking
- [ ] Projects
- [x] Artifacts
- [ ] Models
- [ ] Scoring
- [ ] Serving
- [ ] R
- [ ] Java
- [ ] Python
### How should the PR be classified in the release notes? Choose one:
- [ ] `rn/breaking-change` - The PR will be mentioned in the "Breaking Changes" section
- [ ] `rn/none` - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- [ ] `rn/feature` - A new user-facing feature worth mentioning in the release notes
- [x] `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes
- [ ] `rn/documentation` - A user-facing documentation change worth mentioning in the release notes
</issue>
<code>
[start of mlflow/store/artifact/ftp_artifact_repo.py]
1 import os
2 import ftplib
3 from ftplib import FTP
4 from contextlib import contextmanager
5
6 import posixpath
7 from six.moves import urllib
8
9 from mlflow.entities.file_info import FileInfo
10 from mlflow.store.artifact.artifact_repo import ArtifactRepository
11 from mlflow.utils.file_utils import relative_path_to_artifact_path
12 from mlflow.exceptions import MlflowException
13
14
15 class FTPArtifactRepository(ArtifactRepository):
16 """Stores artifacts as files in a remote directory, via ftp."""
17
18 def __init__(self, artifact_uri):
19 self.uri = artifact_uri
20 parsed = urllib.parse.urlparse(artifact_uri)
21 self.config = {
22 'host': parsed.hostname,
23 'port': 21 if parsed.port is None else parsed.port,
24 'username': parsed.username,
25 'password': parsed.password
26 }
27 self.path = parsed.path
28
29 if self.config['host'] is None:
30 self.config['host'] = 'localhost'
31
32 super(FTPArtifactRepository, self).__init__(artifact_uri)
33
34 @contextmanager
35 def get_ftp_client(self):
36 ftp = FTP()
37 ftp.connect(self.config['host'], self.config['port'])
38 ftp.login(self.config['username'], self.config['password'])
39 yield ftp
40 ftp.close()
41
42 @staticmethod
43 def _is_dir(ftp, full_file_path):
44 try:
45 ftp.cwd(full_file_path)
46 return True
47 except ftplib.error_perm:
48 return False
49
50 @staticmethod
51 def _mkdir(ftp, artifact_dir):
52 try:
53 if not FTPArtifactRepository._is_dir(ftp, artifact_dir):
54 ftp.mkd(artifact_dir)
55 except ftplib.error_perm:
56 head, _ = posixpath.split(artifact_dir)
57 FTPArtifactRepository._mkdir(ftp, head)
58 FTPArtifactRepository._mkdir(ftp, artifact_dir)
59
60 @staticmethod
61 def _size(ftp, full_file_path):
62 ftp.voidcmd('TYPE I')
63 size = ftp.size(full_file_path)
64 ftp.voidcmd('TYPE A')
65 return size
66
67 def log_artifact(self, local_file, artifact_path=None):
68 with self.get_ftp_client() as ftp:
69 artifact_dir = posixpath.join(self.path, artifact_path) \
70 if artifact_path else self.path
71 self._mkdir(ftp, artifact_dir)
72 with open(local_file, 'rb') as f:
73 ftp.cwd(artifact_dir)
74 ftp.storbinary('STOR ' + os.path.basename(local_file), f)
75
76 def log_artifacts(self, local_dir, artifact_path=None):
77 dest_path = posixpath.join(self.path, artifact_path) \
78 if artifact_path else self.path
79
80 dest_path = posixpath.join(
81 dest_path, os.path.split(local_dir)[1])
82 dest_path_re = os.path.split(local_dir)[1]
83 if artifact_path:
84 dest_path_re = posixpath.join(
85 artifact_path, os.path.split(local_dir)[1])
86
87 local_dir = os.path.abspath(local_dir)
88 for (root, _, filenames) in os.walk(local_dir):
89 upload_path = dest_path
90 if root != local_dir:
91 rel_path = os.path.relpath(root, local_dir)
92 rel_path = relative_path_to_artifact_path(rel_path)
93 upload_path = posixpath.join(dest_path_re, rel_path)
94 if not filenames:
95 with self.get_ftp_client() as ftp:
96 self._mkdir(ftp, posixpath.join(self.path, upload_path))
97 for f in filenames:
98 if os.path.isfile(os.path.join(root, f)):
99 self.log_artifact(os.path.join(root, f), upload_path)
100
101 def _is_directory(self, artifact_path):
102 artifact_dir = self.path
103 list_dir = posixpath.join(artifact_dir, artifact_path) if artifact_path else artifact_dir
104 with self.get_ftp_client() as ftp:
105 return self._is_dir(ftp, list_dir)
106
107 def list_artifacts(self, path=None):
108 with self.get_ftp_client() as ftp:
109 artifact_dir = self.path
110 list_dir = posixpath.join(artifact_dir, path) if path else artifact_dir
111 if not self._is_dir(ftp, list_dir):
112 return []
113 artifact_files = ftp.nlst(list_dir)
114 artifact_files = list(filter(lambda x: x != "." and x != "..", artifact_files))
115 infos = []
116 for file_name in artifact_files:
117 file_path = (file_name if path is None
118 else posixpath.join(path, file_name))
119 full_file_path = posixpath.join(list_dir, file_name)
120 if self._is_dir(ftp, full_file_path):
121 infos.append(FileInfo(file_path, True, None))
122 else:
123 size = self._size(ftp, full_file_path)
124 infos.append(FileInfo(file_path, False, size))
125 return infos
126
127 def _download_file(self, remote_file_path, local_path):
128 remote_full_path = posixpath.join(self.path, remote_file_path) \
129 if remote_file_path else self.path
130 with self.get_ftp_client() as ftp:
131 with open(local_path, 'wb') as f:
132 ftp.retrbinary('RETR ' + remote_full_path, f.write)
133
134 def delete_artifacts(self, artifact_path=None):
135 raise MlflowException('Not implemented yet')
136
[end of mlflow/store/artifact/ftp_artifact_repo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/store/artifact/ftp_artifact_repo.py b/mlflow/store/artifact/ftp_artifact_repo.py
--- a/mlflow/store/artifact/ftp_artifact_repo.py
+++ b/mlflow/store/artifact/ftp_artifact_repo.py
@@ -77,20 +77,12 @@
dest_path = posixpath.join(self.path, artifact_path) \
if artifact_path else self.path
- dest_path = posixpath.join(
- dest_path, os.path.split(local_dir)[1])
- dest_path_re = os.path.split(local_dir)[1]
- if artifact_path:
- dest_path_re = posixpath.join(
- artifact_path, os.path.split(local_dir)[1])
-
local_dir = os.path.abspath(local_dir)
for (root, _, filenames) in os.walk(local_dir):
upload_path = dest_path
if root != local_dir:
rel_path = os.path.relpath(root, local_dir)
- rel_path = relative_path_to_artifact_path(rel_path)
- upload_path = posixpath.join(dest_path_re, rel_path)
+ upload_path = relative_path_to_artifact_path(rel_path)
if not filenames:
with self.get_ftp_client() as ftp:
self._mkdir(ftp, posixpath.join(self.path, upload_path))
| {"golden_diff": "diff --git a/mlflow/store/artifact/ftp_artifact_repo.py b/mlflow/store/artifact/ftp_artifact_repo.py\n--- a/mlflow/store/artifact/ftp_artifact_repo.py\n+++ b/mlflow/store/artifact/ftp_artifact_repo.py\n@@ -77,20 +77,12 @@\n dest_path = posixpath.join(self.path, artifact_path) \\\n if artifact_path else self.path\n \n- dest_path = posixpath.join(\n- dest_path, os.path.split(local_dir)[1])\n- dest_path_re = os.path.split(local_dir)[1]\n- if artifact_path:\n- dest_path_re = posixpath.join(\n- artifact_path, os.path.split(local_dir)[1])\n-\n local_dir = os.path.abspath(local_dir)\n for (root, _, filenames) in os.walk(local_dir):\n upload_path = dest_path\n if root != local_dir:\n rel_path = os.path.relpath(root, local_dir)\n- rel_path = relative_path_to_artifact_path(rel_path)\n- upload_path = posixpath.join(dest_path_re, rel_path)\n+ upload_path = relative_path_to_artifact_path(rel_path)\n if not filenames:\n with self.get_ftp_client() as ftp:\n self._mkdir(ftp, posixpath.join(self.path, upload_path))\n", "issue": "[WIP] Remove double model directory\n## What changes are proposed in this pull request?\r\n\r\nWhen a customer uses an FTP artifact store, the artifact uri is incorrect. \r\n\r\nCorrect: `ftp://0.0.0.0:9821/0/543736e8ba4a4b93a84662cf96b043b4/artifacts/model/data`\r\nIncorrect: `ftp://0.0.0.0:9821/0/543736e8ba4a4b93a84662cf96b043b4/artifacts/model/model/data`\r\n\r\n## How is this patch tested?\r\n\r\nManual Testing\r\n\r\n1. Setup an FTP server and ran a keras model using the FTP as the artifact store\r\n\r\n<img width=\"1843\" alt=\"Screen Shot 2020-04-09 at 3 37 20 PM\" src=\"https://user-images.githubusercontent.com/58712524/78946676-3873d180-7a78-11ea-936d-e10c42df90b1.png\">\r\n\r\n\r\nAutomated\r\n\r\n## Release Notes\r\n\r\n### Is this a user-facing change?\r\n\r\n- [x] No. You can skip the rest of this section.\r\n- [ ] Yes. Give a description of this change to be included in the release notes for MLflow users.\r\n\r\n(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)\r\n\r\n### What component(s) does this PR affect?\r\n\r\n- [ ] UI\r\n- [ ] CLI\r\n- [ ] API\r\n- [ ] REST-API\r\n- [ ] Examples\r\n- [ ] Docs\r\n- [x] Tracking\r\n- [ ] Projects\r\n- [x] Artifacts\r\n- [ ] Models\r\n- [ ] Scoring\r\n- [ ] Serving\r\n- [ ] R\r\n- [ ] Java\r\n- [ ] Python\r\n\r\n### How should the PR be classified in the release notes? Choose one:\r\n\r\n- [ ] `rn/breaking-change` - The PR will be mentioned in the \"Breaking Changes\" section\r\n- [ ] `rn/none` - No description will be included. 
The PR will be mentioned only by the PR number in the \"Small Bugfixes and Documentation Updates\" section\r\n- [ ] `rn/feature` - A new user-facing feature worth mentioning in the release notes\r\n- [x] `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes\r\n- [ ] `rn/documentation` - A user-facing documentation change worth mentioning in the release notes\r\n\n", "before_files": [{"content": "import os\nimport ftplib\nfrom ftplib import FTP\nfrom contextlib import contextmanager\n\nimport posixpath\nfrom six.moves import urllib\n\nfrom mlflow.entities.file_info import FileInfo\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository\nfrom mlflow.utils.file_utils import relative_path_to_artifact_path\nfrom mlflow.exceptions import MlflowException\n\n\nclass FTPArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts as files in a remote directory, via ftp.\"\"\"\n\n def __init__(self, artifact_uri):\n self.uri = artifact_uri\n parsed = urllib.parse.urlparse(artifact_uri)\n self.config = {\n 'host': parsed.hostname,\n 'port': 21 if parsed.port is None else parsed.port,\n 'username': parsed.username,\n 'password': parsed.password\n }\n self.path = parsed.path\n\n if self.config['host'] is None:\n self.config['host'] = 'localhost'\n\n super(FTPArtifactRepository, self).__init__(artifact_uri)\n\n @contextmanager\n def get_ftp_client(self):\n ftp = FTP()\n ftp.connect(self.config['host'], self.config['port'])\n ftp.login(self.config['username'], self.config['password'])\n yield ftp\n ftp.close()\n\n @staticmethod\n def _is_dir(ftp, full_file_path):\n try:\n ftp.cwd(full_file_path)\n return True\n except ftplib.error_perm:\n return False\n\n @staticmethod\n def _mkdir(ftp, artifact_dir):\n try:\n if not FTPArtifactRepository._is_dir(ftp, artifact_dir):\n ftp.mkd(artifact_dir)\n except ftplib.error_perm:\n head, _ = posixpath.split(artifact_dir)\n FTPArtifactRepository._mkdir(ftp, head)\n FTPArtifactRepository._mkdir(ftp, artifact_dir)\n\n @staticmethod\n def _size(ftp, full_file_path):\n ftp.voidcmd('TYPE I')\n size = ftp.size(full_file_path)\n ftp.voidcmd('TYPE A')\n return size\n\n def log_artifact(self, local_file, artifact_path=None):\n with self.get_ftp_client() as ftp:\n artifact_dir = posixpath.join(self.path, artifact_path) \\\n if artifact_path else self.path\n self._mkdir(ftp, artifact_dir)\n with open(local_file, 'rb') as f:\n ftp.cwd(artifact_dir)\n ftp.storbinary('STOR ' + os.path.basename(local_file), f)\n\n def log_artifacts(self, local_dir, artifact_path=None):\n dest_path = posixpath.join(self.path, artifact_path) \\\n if artifact_path else self.path\n\n dest_path = posixpath.join(\n dest_path, os.path.split(local_dir)[1])\n dest_path_re = os.path.split(local_dir)[1]\n if artifact_path:\n dest_path_re = posixpath.join(\n artifact_path, os.path.split(local_dir)[1])\n\n local_dir = os.path.abspath(local_dir)\n for (root, _, filenames) in os.walk(local_dir):\n upload_path = dest_path\n if root != local_dir:\n rel_path = os.path.relpath(root, local_dir)\n rel_path = relative_path_to_artifact_path(rel_path)\n upload_path = posixpath.join(dest_path_re, rel_path)\n if not filenames:\n with self.get_ftp_client() as ftp:\n self._mkdir(ftp, posixpath.join(self.path, upload_path))\n for f in filenames:\n if os.path.isfile(os.path.join(root, f)):\n self.log_artifact(os.path.join(root, f), upload_path)\n\n def _is_directory(self, artifact_path):\n artifact_dir = self.path\n list_dir = posixpath.join(artifact_dir, artifact_path) if artifact_path else 
artifact_dir\n with self.get_ftp_client() as ftp:\n return self._is_dir(ftp, list_dir)\n\n def list_artifacts(self, path=None):\n with self.get_ftp_client() as ftp:\n artifact_dir = self.path\n list_dir = posixpath.join(artifact_dir, path) if path else artifact_dir\n if not self._is_dir(ftp, list_dir):\n return []\n artifact_files = ftp.nlst(list_dir)\n artifact_files = list(filter(lambda x: x != \".\" and x != \"..\", artifact_files))\n infos = []\n for file_name in artifact_files:\n file_path = (file_name if path is None\n else posixpath.join(path, file_name))\n full_file_path = posixpath.join(list_dir, file_name)\n if self._is_dir(ftp, full_file_path):\n infos.append(FileInfo(file_path, True, None))\n else:\n size = self._size(ftp, full_file_path)\n infos.append(FileInfo(file_path, False, size))\n return infos\n\n def _download_file(self, remote_file_path, local_path):\n remote_full_path = posixpath.join(self.path, remote_file_path) \\\n if remote_file_path else self.path\n with self.get_ftp_client() as ftp:\n with open(local_path, 'wb') as f:\n ftp.retrbinary('RETR ' + remote_full_path, f.write)\n\n def delete_artifacts(self, artifact_path=None):\n raise MlflowException('Not implemented yet')\n", "path": "mlflow/store/artifact/ftp_artifact_repo.py"}]} | 2,583 | 284 |
gh_patches_debug_25279 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-2204 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error message broken on Zope-Root
http://localhost:8080/foo when `foo` does not exist (i.e. is not a Plone instance) results in a traceback.
In Plone 5.1 we used to get:
```
<h2>Site Error</h2> <p>An error was encountered while publishing this resource. </p> <p><strong>Resource not found</strong></p> Sorry, the requested resource does not exist.<p>Check the URL and try again.</p><p><b>Resource:</b> foo GET</p> <hr noshade="noshade"/> <p>Troubleshooting Suggestions</p> <ul> <li>The URL may be incorrect.</li> <li>The parameters passed to this resource may be incorrect.</li> <li>A resource that this resource relies on may be encountering an error.</li> </ul> <p>For more detailed information about the error, please refer to the error log. </p> <p>If the error persists please contact the site maintainer. Thank you for your patience. </p>
```
That was ugly (escaped html) but the text was correct.
Plone 5.2, on the other hand, tries to render the `ExceptionView` and fails since the app has no method `Language()`.
```
Traceback (innermost last):
Module ZServer.ZPublisher.Publish, line 261, in publish_module_standard
Module Products.PDBDebugMode.runcall, line 83, in pdb_publish
Module ZServer.ZPublisher.Publish, line 182, in publish
Module ZServer.ZPublisher.exceptionhook, line 117, in __call__
Module Products.CMFPlone.browser.exceptions, line 49, in __call__
Module Products.Five.browser.pagetemplatefile, line 125, in __call__
Module Products.Five.browser.pagetemplatefile, line 60, in __call__
Module zope.pagetemplate.pagetemplate, line 134, in pt_render
Module Products.PageTemplates.engine, line 85, in __call__
Module z3c.pt.pagetemplate, line 163, in render
Module chameleon.zpt.template, line 261, in render
Module chameleon.template, line 191, in render
Module chameleon.template, line 171, in render
Module 4195113b17720aee65cd4ca2a7e7ba2d.py, line 1095, in render
Module 9fafff3b78c7ea63dcd15308ddf75fb8.py, line 652, in render_master
Module Products.PageTemplates.expression, line 105, in __call__
Module plone.app.layout.globals.portal, line 80, in language
AttributeError: 'RequestContainer' object has no attribute 'Language'
- Expression: "portal_state/language"
- Filename: ... one/Products/CMFPlone/browser/templates/main_template.pt
- Location: (line 12: col 11)
- Source: lang portal_state/language;
^^^^^^^^^^^^^^^^^^^^^
- Arguments:
repeat: {...} (0)
template: <ViewPageTemplateFile - at 0x10da74950>
views: <ViewMapper - at 0x10e47d9d0>
modules: <_SecureModuleImporter - at 0x10867bed0>
args: <tuple - at 0x1066a0050>
here: <ImplicitAcquisitionWrapper at 0x10d34b960>
user: <SpecialUser - at 0x108457290>
nothing: <NoneType - at 0x1065eeeb8>
container: <ImplicitAcquisitionWrapper at 0x10d34b960>
request: <HTTPRequest - at 0x10e5f1c90>
wrapped_repeat: <SafeMapping - at 0x10c50a5f0>
traverse_subpath: <list - at 0x10e88d488>
default: <object - at 0x1066f6ba0>
loop: {...} (0) context: <ImplicitAcquisitionWrapper at 0x10d34b960>
view: <SimpleViewClass from /Users/pbauer/workspace/coredev/src/Products.CMFPlone/Products/CMFPlone/browser/templates/error_message.pt index.html at 0x10c45f9d0>
translate: <function translate at 0x10e59f050>
root: <ImplicitAcquisitionWrapper at 0x10d34b960>
options: {...} (2)
target_language: <NoneType - at 0x1065eeeb8>
```
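For illustration, a minimal sketch (not necessarily the final fix) of guarding the error view so it falls back to the simplified template whenever no Plone site is available, e.g. for errors raised directly on the Zope application root:
```python
# Sketch only: main_template/portal_state need a portal, so use the basic
# template when getSite() finds no Plone site in the current context.
from zope.component.hooks import getSite

def pick_template(view):  # helper name is made up; "view" is the ExceptionView
    if getSite() is None:
        return view.basic_template
    return view.index
```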
</issue>
<code>
[start of Products/CMFPlone/browser/exceptions.py]
1 from AccessControl import getSecurityManager
2 from Products.Five import BrowserView
3 from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
4 from zExceptions.ExceptionFormatter import format_exception
5 import json
6 import sys
7
8
9 class ExceptionView(BrowserView):
10 basic_template = ViewPageTemplateFile('templates/basic_error_message.pt')
11
12 def is_manager(self):
13 return getSecurityManager().checkPermission(
14 'Manage portal', self.context)
15
16 def __call__(self):
17 exception = self.context
18 self.context = self.__parent__
19 request = self.request
20
21 error_type = exception.__class__.__name__
22 exc_type, value, traceback = sys.exc_info()
23 error_tb = ''.join(
24 format_exception(exc_type, value, traceback, as_html=True))
25 request.response.setStatus(exc_type)
26
27 # Indicate exception as JSON
28 if "text/html" not in request.getHeader('Accept', ''):
29 request.response.setHeader("Content-Type", "application/json")
30 return json.dumps({
31 'error_type': error_type,
32 })
33
34 # Use a simplified template if main_template is not available
35 try:
36 self.context.unrestrictedTraverse('main_template')
37 except:
38 template = self.basic_template
39 else:
40 template = self.index
41
42 # Render page with user-facing error notice
43 request.set('disable_border', True)
44 request.set('disable_plone.leftcolumn', True)
45 request.set('disable_plone.rightcolumn', True)
46
47 return template(
48 error_type=error_type,
49 error_tb=error_tb,
50 )
51
[end of Products/CMFPlone/browser/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/browser/exceptions.py b/Products/CMFPlone/browser/exceptions.py
--- a/Products/CMFPlone/browser/exceptions.py
+++ b/Products/CMFPlone/browser/exceptions.py
@@ -1,7 +1,10 @@
+# -*- coding: utf-8 -*-
from AccessControl import getSecurityManager
from Products.Five import BrowserView
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from zExceptions.ExceptionFormatter import format_exception
+from zope.component.hooks import getSite
+
import json
import sys
@@ -31,13 +34,17 @@
'error_type': error_type,
})
- # Use a simplified template if main_template is not available
- try:
- self.context.unrestrictedTraverse('main_template')
- except:
+ if getSite() is None:
+ # We cannot get the site, so we cannot render our nice template
template = self.basic_template
else:
- template = self.index
+ # Use a simplified template if main_template is not available
+ try:
+ self.context.unrestrictedTraverse('main_template')
+ except:
+ template = self.basic_template
+ else:
+ template = self.index
# Render page with user-facing error notice
request.set('disable_border', True)
| {"golden_diff": "diff --git a/Products/CMFPlone/browser/exceptions.py b/Products/CMFPlone/browser/exceptions.py\n--- a/Products/CMFPlone/browser/exceptions.py\n+++ b/Products/CMFPlone/browser/exceptions.py\n@@ -1,7 +1,10 @@\n+# -*- coding: utf-8 -*-\n from AccessControl import getSecurityManager\n from Products.Five import BrowserView\n from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\n from zExceptions.ExceptionFormatter import format_exception\n+from zope.component.hooks import getSite\n+\n import json\n import sys\n \n@@ -31,13 +34,17 @@\n 'error_type': error_type,\n })\n \n- # Use a simplified template if main_template is not available\n- try:\n- self.context.unrestrictedTraverse('main_template')\n- except:\n+ if getSite() is None:\n+ # We cannot get the site, so we cannot render our nice template\n template = self.basic_template\n else:\n- template = self.index\n+ # Use a simplified template if main_template is not available\n+ try:\n+ self.context.unrestrictedTraverse('main_template')\n+ except:\n+ template = self.basic_template\n+ else:\n+ template = self.index\n \n # Render page with user-facing error notice\n request.set('disable_border', True)\n", "issue": "Error message broken on Zope-Root\nhttp://localhost:8080/foo when `foo` does not exists (i.e. is not a Plone instance) results in a traceback.\r\n\r\nIn Plone 5.1 we used to get:\r\n```\r\n<h2>Site Error</h2> <p>An error was encountered while publishing this resource. </p> <p><strong>Resource not found</strong></p> Sorry, the requested resource does not exist.<p>Check the URL and try again.</p><p><b>Resource:</b> foo GET</p> <hr noshade=\"noshade\"/> <p>Troubleshooting Suggestions</p> <ul> <li>The URL may be incorrect.</li> <li>The parameters passed to this resource may be incorrect.</li> <li>A resource that this resource relies on may be encountering an error.</li> </ul> <p>For more detailed information about the error, please refer to the error log. </p> <p>If the error persists please contact the site maintainer. Thank you for your patience. </p> \r\n```\r\nThat was ugly (escaped html) but the text was correct.\r\n\r\nPlone 5.2 on the other hand tries to render the `ExceptionView` and fails sind the app has no method `Language()`. \r\n\r\n```\r\nTraceback (innermost last):\r\n\r\n Module ZServer.ZPublisher.Publish, line 261, in publish_module_standard\r\n Module Products.PDBDebugMode.runcall, line 83, in pdb_publish\r\n Module ZServer.ZPublisher.Publish, line 182, in publish\r\n Module ZServer.ZPublisher.exceptionhook, line 117, in __call__\r\n Module Products.CMFPlone.browser.exceptions, line 49, in __call__\r\n Module Products.Five.browser.pagetemplatefile, line 125, in __call__\r\n Module Products.Five.browser.pagetemplatefile, line 60, in __call__\r\n Module zope.pagetemplate.pagetemplate, line 134, in pt_render\r\n Module Products.PageTemplates.engine, line 85, in __call__\r\n Module z3c.pt.pagetemplate, line 163, in render\r\n Module chameleon.zpt.template, line 261, in render\r\n Module chameleon.template, line 191, in render\r\n Module chameleon.template, line 171, in render\r\n Module 4195113b17720aee65cd4ca2a7e7ba2d.py, line 1095, in render\r\n Module 9fafff3b78c7ea63dcd15308ddf75fb8.py, line 652, in render_master\r\n Module Products.PageTemplates.expression, line 105, in __call__\r\n Module plone.app.layout.globals.portal, line 80, in language\r\n\r\nAttributeError: 'RequestContainer' object has no attribute 'Language'\r\n\r\n - Expression: \"portal_state/language\"\r\n - Filename: ... 
one/Products/CMFPlone/browser/templates/main_template.pt\r\n - Location: (line 12: col 11)\r\n - Source: lang portal_state/language; \r\n\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n\r\n - Arguments: \r\n repeat: {...} (0) \r\n template: <ViewPageTemplateFile - at 0x10da74950> \r\n views: <ViewMapper - at 0x10e47d9d0> \r\n modules: <_SecureModuleImporter - at 0x10867bed0> \r\n args: <tuple - at 0x1066a0050> \r\n here: <ImplicitAcquisitionWrapper at 0x10d34b960> \r\n user: <SpecialUser - at 0x108457290> \r\n nothing: <NoneType - at 0x1065eeeb8> \r\n container: <ImplicitAcquisitionWrapper at 0x10d34b960> \r\n request: <HTTPRequest - at 0x10e5f1c90> \r\n wrapped_repeat: <SafeMapping - at 0x10c50a5f0> \r\n traverse_subpath: <list - at 0x10e88d488> \r\n default: <object - at 0x1066f6ba0> \r\n loop: {...} (0) context: <ImplicitAcquisitionWrapper at 0x10d34b960> \r\n view: <SimpleViewClass from /Users/pbauer/workspace/coredev/src/Products.CMFPlone/Products/CMFPlone/browser/templates/error_message.pt index.html at 0x10c45f9d0> \r\n translate: <function translate at 0x10e59f050> \r\n root: <ImplicitAcquisitionWrapper at 0x10d34b960> \r\n options: {...} (2) \r\n target_language: <NoneType - at 0x1065eeeb8>\r\n```\n", "before_files": [{"content": "from AccessControl import getSecurityManager\nfrom Products.Five import BrowserView\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom zExceptions.ExceptionFormatter import format_exception\nimport json\nimport sys\n\n\nclass ExceptionView(BrowserView):\n basic_template = ViewPageTemplateFile('templates/basic_error_message.pt')\n\n def is_manager(self):\n return getSecurityManager().checkPermission(\n 'Manage portal', self.context)\n\n def __call__(self):\n exception = self.context\n self.context = self.__parent__\n request = self.request\n\n error_type = exception.__class__.__name__\n exc_type, value, traceback = sys.exc_info()\n error_tb = ''.join(\n format_exception(exc_type, value, traceback, as_html=True))\n request.response.setStatus(exc_type)\n\n # Indicate exception as JSON\n if \"text/html\" not in request.getHeader('Accept', ''):\n request.response.setHeader(\"Content-Type\", \"application/json\")\n return json.dumps({\n 'error_type': error_type,\n })\n\n # Use a simplified template if main_template is not available\n try:\n self.context.unrestrictedTraverse('main_template')\n except:\n template = self.basic_template\n else:\n template = self.index\n\n # Render page with user-facing error notice\n request.set('disable_border', True)\n request.set('disable_plone.leftcolumn', True)\n request.set('disable_plone.rightcolumn', True)\n\n return template(\n error_type=error_type,\n error_tb=error_tb,\n )\n", "path": "Products/CMFPlone/browser/exceptions.py"}]} | 2,123 | 307 |
gh_patches_debug_14969 | rasdani/github-patches | git_diff | Parsl__parsl-1447 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect pgid identification in LocalProvider kills too many processes
This code in LocalProvider probably should have a space added to give `-o pgid= {}`. This should be two parameters: first `-o pgid=`, meaning output only pgids and give that single column an empty heading; and second, the process ID to inspect.
```
cmd = "kill -- -$(ps -o pgid={} | grep -o '[0-9]*')".format(self.resources[job]['remote_pid'])
```
The code as it is probably doesn't shut down the expected process - it might sometimes; it might kill other processes or process groups, as the resulting ps output contains a list of many PGIDs, some of which might be valid PIDs to kill; or it might not kill anything. This might be related to bad shutdown behaviour described in #674
I have not tested this but I encountered similar behaviour reviewing #1297
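For illustration (the PID value is made up), the difference the missing space makes:
```python
# Sketch only: how the two format strings expand for a hypothetical PID.
remote_pid = "12345"

broken = "kill -- -$(ps -o pgid={} | grep -o '[0-9]*')".format(remote_pid)
# -> ps -o pgid=12345 : "12345" is parsed as the column heading, not as a PID
#    to select, so ps falls back to its default selection and prints many PGIDs.

fixed = "kill -- -$(ps -o pgid= {} | grep -o '[0-9]*')".format(remote_pid)
# -> ps -o pgid= 12345 : prints only the PGID of process 12345.
```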
</issue>
<code>
[start of parsl/providers/local/local.py]
1 import logging
2 import os
3 import signal
4 import time
5
6 from parsl.channels import LocalChannel
7 from parsl.launchers import SingleNodeLauncher
8 from parsl.providers.provider_base import ExecutionProvider
9 from parsl.providers.error import SchedulerMissingArgs, ScriptPathError
10 from parsl.utils import RepresentationMixin
11
12 logger = logging.getLogger(__name__)
13
14 translate_table = {
15 'PD': 'PENDING',
16 'R': 'RUNNING',
17 'CA': 'CANCELLED',
18 'CF': 'PENDING', # (configuring),
19 'CG': 'RUNNING', # (completing),
20 'CD': 'COMPLETED',
21 'F': 'FAILED',
22 'TO': 'TIMEOUT',
23 'NF': 'FAILED', # (node failure),
24 'RV': 'FAILED', # (revoked) and
25 'SE': 'FAILED'
26 } # (special exit state
27
28
29 class LocalProvider(ExecutionProvider, RepresentationMixin):
30 """ Local Execution Provider
31
32 This provider is used to provide execution resources from the localhost.
33
34 Parameters
35 ----------
36
37 min_blocks : int
38 Minimum number of blocks to maintain.
39 max_blocks : int
40 Maximum number of blocks to maintain.
41 parallelism : float
42 Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive
43 scaling where as many resources as possible are used; parallelism close to 0 represents
44 the opposite situation in which as few resources as possible (i.e., min_blocks) are used.
45 move_files : Optional[Bool]: should files be moved? by default, Parsl will try to figure
46 this out itself (= None). If True, then will always move. If False, will never move.
47 worker_init : str
48 Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.
49 """
50
51 def __init__(self,
52 channel=LocalChannel(),
53 nodes_per_block=1,
54 launcher=SingleNodeLauncher(),
55 init_blocks=4,
56 min_blocks=0,
57 max_blocks=10,
58 walltime="00:15:00",
59 worker_init='',
60 cmd_timeout=30,
61 parallelism=1,
62 move_files=None):
63 self.channel = channel
64 self._label = 'local'
65 self.provisioned_blocks = 0
66 self.nodes_per_block = nodes_per_block
67 self.launcher = launcher
68 self.worker_init = worker_init
69 self.init_blocks = init_blocks
70 self.min_blocks = min_blocks
71 self.max_blocks = max_blocks
72 self.parallelism = parallelism
73 self.walltime = walltime
74 self.script_dir = None
75 self.cmd_timeout = cmd_timeout
76 self.move_files = move_files
77
78 # Dictionary that keeps track of jobs, keyed on job_id
79 self.resources = {}
80
81 def status(self, job_ids):
82 ''' Get the status of a list of jobs identified by their ids.
83
84 Args:
85 - job_ids (List of ids) : List of identifiers for the jobs
86
87 Returns:
88 - List of status codes.
89
90 '''
91
92 logger.debug("Checking status of: {0}".format(job_ids))
93 for job_id in self.resources:
94
95 if self.resources[job_id]['proc']:
96
97 poll_code = self.resources[job_id]['proc'].poll()
98 if self.resources[job_id]['status'] in ['COMPLETED', 'FAILED']:
99 continue
100
101 if poll_code is None:
102 self.resources[job_id]['status'] = 'RUNNING'
103 elif poll_code == 0:
104 self.resources[job_id]['status'] = 'COMPLETED'
105 elif poll_code != 0:
106 self.resources[job_id]['status'] = 'FAILED'
107 else:
108 logger.error("Internal consistency error: unexpected case in local provider state machine")
109
110 elif self.resources[job_id]['remote_pid']:
111
112 retcode, stdout, stderr = self.channel.execute_wait('ps -p {} &> /dev/null; echo "STATUS:$?" '.format(self.resources[job_id]['remote_pid']),
113 self.cmd_timeout)
114 for line in stdout.split('\n'):
115 if line.startswith("STATUS:"):
116 status = line.split("STATUS:")[1].strip()
117 if status == "0":
118 self.resources[job_id]['status'] = 'RUNNING'
119 else:
120 self.resources[job_id]['status'] = 'FAILED'
121
122 return [self.resources[jid]['status'] for jid in job_ids]
123
124 def _write_submit_script(self, script_string, script_filename):
125 '''
126 Load the template string with config values and write the generated submit script to
127 a submit script file.
128
129 Args:
130 - template_string (string) : The template string to be used for the writing submit script
131 - script_filename (string) : Name of the submit script
132
133 Returns:
134 - True: on success
135
136 Raises:
137 SchedulerMissingArgs : If template is missing args
138 ScriptPathError : Unable to write submit script out
139 '''
140
141 try:
142 with open(script_filename, 'w') as f:
143 f.write(script_string)
144
145 except KeyError as e:
146 logger.error("Missing keys for submit script: %s", e)
147 raise (SchedulerMissingArgs(e.args, self.label))
148
149 except IOError as e:
150 logger.error("Failed writing to submit script: %s", script_filename)
151 raise (ScriptPathError(script_filename, e))
152
153 return True
154
155 def submit(self, command, tasks_per_node, job_name="parsl.auto"):
156 ''' Submits the command onto an Local Resource Manager job.
157 Submit returns an ID that corresponds to the task that was just submitted.
158
159 If tasks_per_node < 1:
160 1/tasks_per_node is provisioned
161
162 If tasks_per_node == 1:
163 A single node is provisioned
164
165 If tasks_per_node > 1 :
166 tasks_per_node nodes are provisioned.
167
168 Args:
169 - command :(String) Commandline invocation to be made on the remote side.
170 - tasks_per_node (int) : command invocations to be launched per node
171
172 Kwargs:
173 - job_name (String): Name for job, must be unique
174
175 Returns:
176 - None: At capacity, cannot provision more
177 - job_id: (string) Identifier for the job
178
179 '''
180
181 job_name = "{0}.{1}".format(job_name, time.time())
182
183 # Set script path
184 script_path = "{0}/{1}.sh".format(self.script_dir, job_name)
185 script_path = os.path.abspath(script_path)
186
187 wrap_command = self.worker_init + '\n' + self.launcher(command, tasks_per_node, self.nodes_per_block)
188
189 self._write_submit_script(wrap_command, script_path)
190
191 job_id = None
192 proc = None
193 remote_pid = None
194 if (self.move_files is None and not isinstance(self.channel, LocalChannel)) or (self.move_files):
195 logger.debug("Pushing start script")
196 script_path = self.channel.push_file(script_path, self.channel.script_dir)
197
198 if not isinstance(self.channel, LocalChannel):
199 logger.debug("Launching in remote mode")
200 # Bash would return until the streams are closed. So we redirect to a outs file
201 cmd = 'bash {0} &> {0}.out & \n echo "PID:$!" '.format(script_path)
202 retcode, stdout, stderr = self.channel.execute_wait(cmd, self.cmd_timeout)
203 for line in stdout.split('\n'):
204 if line.startswith("PID:"):
205 remote_pid = line.split("PID:")[1].strip()
206 job_id = remote_pid
207 if job_id is None:
208 logger.warning("Channel failed to start remote command/retrieve PID")
209 else:
210
211 try:
212 job_id, proc = self.channel.execute_no_wait('bash {0}'.format(script_path), self.cmd_timeout)
213 except Exception as e:
214 logger.debug("Channel execute failed for: {}, {}".format(self.channel, e))
215 raise
216
217 self.resources[job_id] = {'job_id': job_id, 'status': 'RUNNING',
218 'remote_pid': remote_pid,
219 'proc': proc}
220
221 return job_id
222
223 def cancel(self, job_ids):
224 ''' Cancels the jobs specified by a list of job ids
225
226 Args:
227 job_ids : [<job_id> ...]
228
229 Returns :
230 [True/False...] : If the cancel operation fails the entire list will be False.
231 '''
232 for job in job_ids:
233 logger.debug("Terminating job/proc_id: {0}".format(job))
234 # Here we are assuming that for local, the job_ids are the process id's
235 if self.resources[job]['proc']:
236 proc = self.resources[job]['proc']
237 os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
238 self.resources[job]['status'] = 'CANCELLED'
239
240 elif self.resources[job]['remote_pid']:
241 cmd = "kill -- -$(ps -o pgid={} | grep -o '[0-9]*')".format(self.resources[job]['remote_pid'])
242 retcode, stdout, stderr = self.channel.execute_wait(cmd, self.cmd_timeout)
243 if retcode != 0:
244 logger.warning("Failed to kill PID: {} and child processes on {}".format(self.resources[job]['remote_pid'],
245 self.label))
246
247 rets = [True for i in job_ids]
248 return rets
249
250 @property
251 def current_capacity(self):
252 return len(self.resources)
253
254 @property
255 def label(self):
256 return self._label
257
258
259 if __name__ == "__main__":
260
261 print("Nothing here")
262
[end of parsl/providers/local/local.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsl/providers/local/local.py b/parsl/providers/local/local.py
--- a/parsl/providers/local/local.py
+++ b/parsl/providers/local/local.py
@@ -238,7 +238,7 @@
self.resources[job]['status'] = 'CANCELLED'
elif self.resources[job]['remote_pid']:
- cmd = "kill -- -$(ps -o pgid={} | grep -o '[0-9]*')".format(self.resources[job]['remote_pid'])
+ cmd = "kill -- -$(ps -o pgid= {} | grep -o '[0-9]*')".format(self.resources[job]['remote_pid'])
retcode, stdout, stderr = self.channel.execute_wait(cmd, self.cmd_timeout)
if retcode != 0:
logger.warning("Failed to kill PID: {} and child processes on {}".format(self.resources[job]['remote_pid'],
| {"golden_diff": "diff --git a/parsl/providers/local/local.py b/parsl/providers/local/local.py\n--- a/parsl/providers/local/local.py\n+++ b/parsl/providers/local/local.py\n@@ -238,7 +238,7 @@\n self.resources[job]['status'] = 'CANCELLED'\n \n elif self.resources[job]['remote_pid']:\n- cmd = \"kill -- -$(ps -o pgid={} | grep -o '[0-9]*')\".format(self.resources[job]['remote_pid'])\n+ cmd = \"kill -- -$(ps -o pgid= {} | grep -o '[0-9]*')\".format(self.resources[job]['remote_pid'])\n retcode, stdout, stderr = self.channel.execute_wait(cmd, self.cmd_timeout)\n if retcode != 0:\n logger.warning(\"Failed to kill PID: {} and child processes on {}\".format(self.resources[job]['remote_pid'],\n", "issue": "Incorrect pgid identification in LocalProvider kills too many processes\nThis code in LocalProvider probably should have a space added to give `-o pgid= {}`. This should be two parameters, one '-o pgid=` meaning output only pgids, and give that single column an empty heading; and secondly the process ID to inspect.\r\n\r\n```\r\n cmd = \"kill -- -$(ps -o pgid={} | grep -o '[0-9]*')\".format(self.resources[job]['remote_pid'])\r\n```\r\n\r\nThe code as it is probably doesn't shut down the expected process - it might sometimes; it might kill other processes or process groups, as the resulting ps output contains a list of many PGIDs, some of which might be valid PIDs to kill; or it might not kill anything. This might be related to bad shutdown behaviour described in #674 \r\n\r\nI have not tested this but I encountered similar behaviour reviewing #1297 \n", "before_files": [{"content": "import logging\nimport os\nimport signal\nimport time\n\nfrom parsl.channels import LocalChannel\nfrom parsl.launchers import SingleNodeLauncher\nfrom parsl.providers.provider_base import ExecutionProvider\nfrom parsl.providers.error import SchedulerMissingArgs, ScriptPathError\nfrom parsl.utils import RepresentationMixin\n\nlogger = logging.getLogger(__name__)\n\ntranslate_table = {\n 'PD': 'PENDING',\n 'R': 'RUNNING',\n 'CA': 'CANCELLED',\n 'CF': 'PENDING', # (configuring),\n 'CG': 'RUNNING', # (completing),\n 'CD': 'COMPLETED',\n 'F': 'FAILED',\n 'TO': 'TIMEOUT',\n 'NF': 'FAILED', # (node failure),\n 'RV': 'FAILED', # (revoked) and\n 'SE': 'FAILED'\n} # (special exit state\n\n\nclass LocalProvider(ExecutionProvider, RepresentationMixin):\n \"\"\" Local Execution Provider\n\n This provider is used to provide execution resources from the localhost.\n\n Parameters\n ----------\n\n min_blocks : int\n Minimum number of blocks to maintain.\n max_blocks : int\n Maximum number of blocks to maintain.\n parallelism : float\n Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive\n scaling where as many resources as possible are used; parallelism close to 0 represents\n the opposite situation in which as few resources as possible (i.e., min_blocks) are used.\n move_files : Optional[Bool]: should files be moved? by default, Parsl will try to figure\n this out itself (= None). If True, then will always move. 
If False, will never move.\n worker_init : str\n Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.\n \"\"\"\n\n def __init__(self,\n channel=LocalChannel(),\n nodes_per_block=1,\n launcher=SingleNodeLauncher(),\n init_blocks=4,\n min_blocks=0,\n max_blocks=10,\n walltime=\"00:15:00\",\n worker_init='',\n cmd_timeout=30,\n parallelism=1,\n move_files=None):\n self.channel = channel\n self._label = 'local'\n self.provisioned_blocks = 0\n self.nodes_per_block = nodes_per_block\n self.launcher = launcher\n self.worker_init = worker_init\n self.init_blocks = init_blocks\n self.min_blocks = min_blocks\n self.max_blocks = max_blocks\n self.parallelism = parallelism\n self.walltime = walltime\n self.script_dir = None\n self.cmd_timeout = cmd_timeout\n self.move_files = move_files\n\n # Dictionary that keeps track of jobs, keyed on job_id\n self.resources = {}\n\n def status(self, job_ids):\n ''' Get the status of a list of jobs identified by their ids.\n\n Args:\n - job_ids (List of ids) : List of identifiers for the jobs\n\n Returns:\n - List of status codes.\n\n '''\n\n logger.debug(\"Checking status of: {0}\".format(job_ids))\n for job_id in self.resources:\n\n if self.resources[job_id]['proc']:\n\n poll_code = self.resources[job_id]['proc'].poll()\n if self.resources[job_id]['status'] in ['COMPLETED', 'FAILED']:\n continue\n\n if poll_code is None:\n self.resources[job_id]['status'] = 'RUNNING'\n elif poll_code == 0:\n self.resources[job_id]['status'] = 'COMPLETED'\n elif poll_code != 0:\n self.resources[job_id]['status'] = 'FAILED'\n else:\n logger.error(\"Internal consistency error: unexpected case in local provider state machine\")\n\n elif self.resources[job_id]['remote_pid']:\n\n retcode, stdout, stderr = self.channel.execute_wait('ps -p {} &> /dev/null; echo \"STATUS:$?\" '.format(self.resources[job_id]['remote_pid']),\n self.cmd_timeout)\n for line in stdout.split('\\n'):\n if line.startswith(\"STATUS:\"):\n status = line.split(\"STATUS:\")[1].strip()\n if status == \"0\":\n self.resources[job_id]['status'] = 'RUNNING'\n else:\n self.resources[job_id]['status'] = 'FAILED'\n\n return [self.resources[jid]['status'] for jid in job_ids]\n\n def _write_submit_script(self, script_string, script_filename):\n '''\n Load the template string with config values and write the generated submit script to\n a submit script file.\n\n Args:\n - template_string (string) : The template string to be used for the writing submit script\n - script_filename (string) : Name of the submit script\n\n Returns:\n - True: on success\n\n Raises:\n SchedulerMissingArgs : If template is missing args\n ScriptPathError : Unable to write submit script out\n '''\n\n try:\n with open(script_filename, 'w') as f:\n f.write(script_string)\n\n except KeyError as e:\n logger.error(\"Missing keys for submit script: %s\", e)\n raise (SchedulerMissingArgs(e.args, self.label))\n\n except IOError as e:\n logger.error(\"Failed writing to submit script: %s\", script_filename)\n raise (ScriptPathError(script_filename, e))\n\n return True\n\n def submit(self, command, tasks_per_node, job_name=\"parsl.auto\"):\n ''' Submits the command onto an Local Resource Manager job.\n Submit returns an ID that corresponds to the task that was just submitted.\n\n If tasks_per_node < 1:\n 1/tasks_per_node is provisioned\n\n If tasks_per_node == 1:\n A single node is provisioned\n\n If tasks_per_node > 1 :\n tasks_per_node nodes are provisioned.\n\n Args:\n - command :(String) Commandline invocation to be 
made on the remote side.\n - tasks_per_node (int) : command invocations to be launched per node\n\n Kwargs:\n - job_name (String): Name for job, must be unique\n\n Returns:\n - None: At capacity, cannot provision more\n - job_id: (string) Identifier for the job\n\n '''\n\n job_name = \"{0}.{1}\".format(job_name, time.time())\n\n # Set script path\n script_path = \"{0}/{1}.sh\".format(self.script_dir, job_name)\n script_path = os.path.abspath(script_path)\n\n wrap_command = self.worker_init + '\\n' + self.launcher(command, tasks_per_node, self.nodes_per_block)\n\n self._write_submit_script(wrap_command, script_path)\n\n job_id = None\n proc = None\n remote_pid = None\n if (self.move_files is None and not isinstance(self.channel, LocalChannel)) or (self.move_files):\n logger.debug(\"Pushing start script\")\n script_path = self.channel.push_file(script_path, self.channel.script_dir)\n\n if not isinstance(self.channel, LocalChannel):\n logger.debug(\"Launching in remote mode\")\n # Bash would return until the streams are closed. So we redirect to a outs file\n cmd = 'bash {0} &> {0}.out & \\n echo \"PID:$!\" '.format(script_path)\n retcode, stdout, stderr = self.channel.execute_wait(cmd, self.cmd_timeout)\n for line in stdout.split('\\n'):\n if line.startswith(\"PID:\"):\n remote_pid = line.split(\"PID:\")[1].strip()\n job_id = remote_pid\n if job_id is None:\n logger.warning(\"Channel failed to start remote command/retrieve PID\")\n else:\n\n try:\n job_id, proc = self.channel.execute_no_wait('bash {0}'.format(script_path), self.cmd_timeout)\n except Exception as e:\n logger.debug(\"Channel execute failed for: {}, {}\".format(self.channel, e))\n raise\n\n self.resources[job_id] = {'job_id': job_id, 'status': 'RUNNING',\n 'remote_pid': remote_pid,\n 'proc': proc}\n\n return job_id\n\n def cancel(self, job_ids):\n ''' Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [<job_id> ...]\n\n Returns :\n [True/False...] : If the cancel operation fails the entire list will be False.\n '''\n for job in job_ids:\n logger.debug(\"Terminating job/proc_id: {0}\".format(job))\n # Here we are assuming that for local, the job_ids are the process id's\n if self.resources[job]['proc']:\n proc = self.resources[job]['proc']\n os.killpg(os.getpgid(proc.pid), signal.SIGTERM)\n self.resources[job]['status'] = 'CANCELLED'\n\n elif self.resources[job]['remote_pid']:\n cmd = \"kill -- -$(ps -o pgid={} | grep -o '[0-9]*')\".format(self.resources[job]['remote_pid'])\n retcode, stdout, stderr = self.channel.execute_wait(cmd, self.cmd_timeout)\n if retcode != 0:\n logger.warning(\"Failed to kill PID: {} and child processes on {}\".format(self.resources[job]['remote_pid'],\n self.label))\n\n rets = [True for i in job_ids]\n return rets\n\n @property\n def current_capacity(self):\n return len(self.resources)\n\n @property\n def label(self):\n return self._label\n\n\nif __name__ == \"__main__\":\n\n print(\"Nothing here\")\n", "path": "parsl/providers/local/local.py"}]} | 3,518 | 199 |
gh_patches_debug_29292 | rasdani/github-patches | git_diff | e-valuation__EvaP-721 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Only internal redirects
The platform should only redirect to internal pages after logging in.
(handled in `evaluation/views.py index`)
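A minimal sketch of one way to check this (using Django's URL resolver; not necessarily the final implementation):
```python
# Sketch only: accept a "next" target only if it resolves inside this project.
from django.core.urlresolvers import resolve, Resolver404  # Django 1.x import path

def is_internal(redirect_to):
    try:
        resolve(redirect_to)  # Resolver404 means the URL is not one of ours
    except Resolver404:
        return False
    return True
```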
</issue>
<code>
[start of evap/evaluation/views.py]
1 from django.contrib import messages
2 from django.contrib.auth import login as auth_login
3 from django.shortcuts import redirect, render
4 from django.utils.translation import ugettext as _
5
6 from evap.evaluation.forms import NewKeyForm, LoginKeyForm, LoginUsernameForm
7 from evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester
8
9
10 def index(request):
11 """Main entry page into EvaP providing all the login options available. THe username/password
12 login is thought to be used for internal users, e.g. by connecting to a LDAP directory.
13 The login key mechanism is meant to be used to include external participants, e.g. visiting
14 students or visiting contributors.
15 """
16
17 # parse the form data into the respective form
18 submit_type = request.POST.get("submit_type", "no_submit")
19 new_key_form = NewKeyForm(request.POST if submit_type == "new_key" else None)
20 login_key_form = LoginKeyForm(request.POST if submit_type == "login_key" else None)
21 login_username_form = LoginUsernameForm(request, request.POST if submit_type == "login_username" else None)
22
23 # process form data
24 if request.method == 'POST':
25 if new_key_form.is_valid():
26 # user wants a new login key
27 profile = new_key_form.get_user()
28 profile.generate_login_key()
29 profile.save()
30
31 EmailTemplate.send_login_key_to_user(new_key_form.get_user())
32
33 messages.success(request, _("Successfully sent email with new login key."))
34 elif login_key_form.is_valid():
35 # user would like to login with a login key and passed key test
36 auth_login(request, login_key_form.get_user())
37 elif login_username_form.is_valid():
38 # user would like to login with username and password and passed password test
39 auth_login(request, login_username_form.get_user())
40
41 # clean up our test cookie
42 if request.session.test_cookie_worked():
43 request.session.delete_test_cookie()
44
45 # if not logged in by now, render form
46 if not request.user.is_authenticated():
47 # set test cookie to verify whether they work in the next step
48 request.session.set_test_cookie()
49
50 template_data = dict(new_key_form=new_key_form, login_key_form=login_key_form, login_username_form=login_username_form)
51 return render(request, "index.html", template_data)
52 else:
53 user, created = UserProfile.objects.get_or_create(username=request.user.username)
54
55 # check for redirect variable
56 redirect_to = request.GET.get("next", None)
57 if redirect_to is not None:
58 if redirect_to.startswith("/staff/"):
59 if request.user.is_staff:
60 return redirect(redirect_to)
61 elif redirect_to.startswith("/grades/"):
62 if request.user.is_grade_publisher:
63 return redirect(redirect_to)
64 elif redirect_to.startswith("/contributor/"):
65 if user.is_contributor:
66 return redirect(redirect_to)
67 elif redirect_to.startswith("/student/"):
68 if user.is_participant:
69 return redirect(redirect_to)
70 else:
71 return redirect(redirect_to)
72
73 # redirect user to appropriate start page
74 if request.user.is_staff:
75 return redirect('staff:index')
76 elif request.user.is_grade_publisher:
77 return redirect('grades:semester_view', Semester.active_semester().id)
78 elif user.is_contributor_or_delegate:
79 return redirect('contributor:index')
80 elif user.is_participant:
81 return redirect('student:index')
82 else:
83 return redirect('results:index')
84
85
86 def faq(request):
87 return render(request, "faq.html", dict(sections=FaqSection.objects.all()))
88
89 def legal_notice(request):
90 return render(request, "legal_notice.html", dict())
91
[end of evap/evaluation/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/evaluation/views.py b/evap/evaluation/views.py
--- a/evap/evaluation/views.py
+++ b/evap/evaluation/views.py
@@ -2,13 +2,14 @@
from django.contrib.auth import login as auth_login
from django.shortcuts import redirect, render
from django.utils.translation import ugettext as _
+from django.core.urlresolvers import resolve, Resolver404
from evap.evaluation.forms import NewKeyForm, LoginKeyForm, LoginUsernameForm
from evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester
def index(request):
- """Main entry page into EvaP providing all the login options available. THe username/password
+ """Main entry page into EvaP providing all the login options available. The username/password
login is thought to be used for internal users, e.g. by connecting to a LDAP directory.
The login key mechanism is meant to be used to include external participants, e.g. visiting
students or visiting contributors.
@@ -68,7 +69,12 @@
if user.is_participant:
return redirect(redirect_to)
else:
- return redirect(redirect_to)
+ try:
+ resolve(redirect_to)
+ except Resolver404:
+ pass
+ else:
+ return redirect(redirect_to)
# redirect user to appropriate start page
if request.user.is_staff:
| {"golden_diff": "diff --git a/evap/evaluation/views.py b/evap/evaluation/views.py\n--- a/evap/evaluation/views.py\n+++ b/evap/evaluation/views.py\n@@ -2,13 +2,14 @@\n from django.contrib.auth import login as auth_login\n from django.shortcuts import redirect, render\n from django.utils.translation import ugettext as _\n+from django.core.urlresolvers import resolve, Resolver404\n \n from evap.evaluation.forms import NewKeyForm, LoginKeyForm, LoginUsernameForm\n from evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester\n \n \n def index(request):\n- \"\"\"Main entry page into EvaP providing all the login options available. THe username/password\n+ \"\"\"Main entry page into EvaP providing all the login options available. The username/password\n login is thought to be used for internal users, e.g. by connecting to a LDAP directory.\n The login key mechanism is meant to be used to include external participants, e.g. visiting\n students or visiting contributors.\n@@ -68,7 +69,12 @@\n if user.is_participant:\n return redirect(redirect_to)\n else:\n- return redirect(redirect_to)\n+ try:\n+ resolve(redirect_to)\n+ except Resolver404:\n+ pass\n+ else:\n+ return redirect(redirect_to)\n \n # redirect user to appropriate start page\n if request.user.is_staff:\n", "issue": "Only internal redirects\nThe platform should only redirect to internal pages after logging in.\n\n(handled in `evaluation/views.py index`)\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.contrib.auth import login as auth_login\nfrom django.shortcuts import redirect, render\nfrom django.utils.translation import ugettext as _\n\nfrom evap.evaluation.forms import NewKeyForm, LoginKeyForm, LoginUsernameForm\nfrom evap.evaluation.models import UserProfile, FaqSection, EmailTemplate, Semester\n\n\ndef index(request):\n \"\"\"Main entry page into EvaP providing all the login options available. THe username/password\n login is thought to be used for internal users, e.g. by connecting to a LDAP directory.\n The login key mechanism is meant to be used to include external participants, e.g. 
visiting\n students or visiting contributors.\n \"\"\"\n\n # parse the form data into the respective form\n submit_type = request.POST.get(\"submit_type\", \"no_submit\")\n new_key_form = NewKeyForm(request.POST if submit_type == \"new_key\" else None)\n login_key_form = LoginKeyForm(request.POST if submit_type == \"login_key\" else None)\n login_username_form = LoginUsernameForm(request, request.POST if submit_type == \"login_username\" else None)\n\n # process form data\n if request.method == 'POST':\n if new_key_form.is_valid():\n # user wants a new login key\n profile = new_key_form.get_user()\n profile.generate_login_key()\n profile.save()\n\n EmailTemplate.send_login_key_to_user(new_key_form.get_user())\n\n messages.success(request, _(\"Successfully sent email with new login key.\"))\n elif login_key_form.is_valid():\n # user would like to login with a login key and passed key test\n auth_login(request, login_key_form.get_user())\n elif login_username_form.is_valid():\n # user would like to login with username and password and passed password test\n auth_login(request, login_username_form.get_user())\n\n # clean up our test cookie\n if request.session.test_cookie_worked():\n request.session.delete_test_cookie()\n\n # if not logged in by now, render form\n if not request.user.is_authenticated():\n # set test cookie to verify whether they work in the next step\n request.session.set_test_cookie()\n\n template_data = dict(new_key_form=new_key_form, login_key_form=login_key_form, login_username_form=login_username_form)\n return render(request, \"index.html\", template_data)\n else:\n user, created = UserProfile.objects.get_or_create(username=request.user.username)\n\n # check for redirect variable\n redirect_to = request.GET.get(\"next\", None)\n if redirect_to is not None:\n if redirect_to.startswith(\"/staff/\"):\n if request.user.is_staff:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/grades/\"):\n if request.user.is_grade_publisher:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/contributor/\"):\n if user.is_contributor:\n return redirect(redirect_to)\n elif redirect_to.startswith(\"/student/\"):\n if user.is_participant:\n return redirect(redirect_to)\n else:\n return redirect(redirect_to)\n\n # redirect user to appropriate start page\n if request.user.is_staff:\n return redirect('staff:index')\n elif request.user.is_grade_publisher:\n return redirect('grades:semester_view', Semester.active_semester().id)\n elif user.is_contributor_or_delegate:\n return redirect('contributor:index')\n elif user.is_participant:\n return redirect('student:index')\n else:\n return redirect('results:index')\n\n\ndef faq(request):\n return render(request, \"faq.html\", dict(sections=FaqSection.objects.all()))\n\ndef legal_notice(request):\n return render(request, \"legal_notice.html\", dict())\n", "path": "evap/evaluation/views.py"}]} | 1,517 | 314 |
gh_patches_debug_27800 | rasdani/github-patches | git_diff | facebookresearch__hydra-404 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] issues with named arguments in hydra.utils.instantiate
`hydra.utils.instantiate` converts named arguments to OmegaConf, making it impossible to use the named-argument syntax for arguments which cannot be represented in OmegaConf.
## To reproduce:
`conf.yaml`:
```yaml
class: torch.optim.Adam
params:
lr: 1E-4
```
`test.py`:
```python
import hydra
import torch
import torchvision
@hydra.main(config_path="conf.yaml")
def my_app(cfg):
print(cfg.pretty())
model = torchvision.models.alexnet()
hydra.utils.instantiate(cfg, model.parameters()) # OK
hydra.utils.instantiate(cfg, params=model.parameters()) # FAILS
if __name__ == "__main__":
my_app()
```
** Stack trace/error message **
```
ValueError: key params: generator is not a primitive type
```
As a side note, the fact that `hydra.utils.instantiate` itself takes an argument named `config` makes it impossible to pass a named argument with that same name to the object being instantiated, which seems an unnecessary restriction.
## The following solves both issues:
```python
def instantiate_f(config):
assert config is not None, "Input config is None"
class_name = config["class"]
clazz = get_class(class_name)
params = config.params if "params" in config else OmegaConf.create()
assert isinstance(
params, DictConfig
), "Input config params are expected to be a mapping, found {}".format(
type(config.params)
)
params = OmegaConf.to_container(params, resolve=True)
def f(*args, **kwargs):
try:
kwargs = {**params, **kwargs}
return clazz(*args, **kwargs)
except Exception as e:
log.error("Error instantiating {} : {}".format(class_name, e))
raise e
return f
@hydra.main(config_path="conf.yaml")
def my_app(cfg):
print(cfg.pretty())
model = torchvision.models.alexnet()
instantiate_f(cfg)(params=model.parameters()) # OK
instantiate_f(cfg)(model.parameters()) # OK
if __name__ == "__main__":
my_app()
```
A note in the documentation should be added explaining the behavior wrt `config`, as it gets resolved immediately.
</issue>
<code>
[start of hydra/conf/__init__.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, List, Optional
4
5 from omegaconf import MISSING
6
7 from hydra.core.config_store import ConfigStore
8
9 hydra_defaults = [
10 # Hydra's logging config
11 {"hydra/hydra_logging": "default"},
12 # Job's logging config
13 {"hydra/job_logging": "default"},
14 # Launcher config
15 {"hydra/launcher": "basic"},
16 # Sweeper config
17 {"hydra/sweeper": "basic"},
18 # Output directory
19 {"hydra/output": "default"},
20 # --help template
21 {"hydra/help": "default"},
22 # --hydra-help template
23 {"hydra/hydra_help": "default"},
24 ]
25
26
27 @dataclass
28 class PluginConf(Dict[str, Any]):
29 # class name for plugin
30 cls: str = MISSING
31 params: Dict[str, Any] = field(default_factory=dict)
32
33
34 @dataclass
35 class HelpConf:
36 app_name: str = MISSING
37 header: str = MISSING
38 footer: str = MISSING
39 template: str = MISSING
40
41
42 @dataclass
43 class HydraHelpConf:
44 hydra_help: str = MISSING
45 template: str = MISSING
46
47
48 @dataclass
49 class RunDir:
50 dir: str = MISSING
51
52
53 @dataclass
54 class SweepDir:
55 dir: str = MISSING
56 subdir: str = MISSING
57
58
59 @dataclass
60 class OverridesConf:
61 # Overrides for the hydra configuration
62 hydra: List[str] = field(default_factory=lambda: [])
63 # Overrides for the task configuration
64 task: List[str] = field(default_factory=lambda: [])
65
66
67 @dataclass
68 # job runtime information will be populated here
69 class JobConf:
70 # Job name, can be specified by the user (in config or cli) or populated automatically
71 name: str = MISSING
72
73 # Concatenation of job overrides that can be used as a part
74 # of the directory name.
75 # This can be configured in hydra.job.config.override_dirname
76 override_dirname: str = MISSING
77
78 # Job ID in underlying scheduling system
79 id: str = MISSING
80
81 # Job number if job is a part of a sweep
82 num: str = MISSING
83
84 # The config name used by the job
85 config_name: Optional[str] = MISSING
86
87 @dataclass
88 # Job config
89 class JobConfig:
90 @dataclass
91 # configuration for the ${hydra.job.override_dirname} runtime variable
92 class OverrideDirname:
93 kv_sep: str = "="
94 item_sep: str = ","
95 exclude_keys: List[str] = field(default_factory=lambda: [])
96
97 override_dirname: OverrideDirname = OverrideDirname()
98
99 config: JobConfig = JobConfig()
100
101
102 @dataclass
103 class RuntimeConf:
104 version: str = MISSING
105 cwd: str = MISSING
106
107
108 @dataclass
109 class HydraConf:
110 # Normal run output configuration
111 run: RunDir = RunDir()
112 # Multi-run output configuration
113 sweep: SweepDir = SweepDir()
114 # Logging configuration for Hydra
115 hydra_logging: Any = MISSING
116 # Logging configuration for the job
117 job_logging: Any = MISSING
118
119 # Sweeper configuration
120 sweeper: PluginConf = field(default_factory=PluginConf)
121 # Launcher configuration
122 launcher: PluginConf = field(default_factory=PluginConf)
123
124 # Program Help template
125 help: HelpConf = HelpConf()
126 # Hydra's Help template
127 hydra_help: HydraHelpConf = HydraHelpConf()
128
129 # Output directory for produced configuration files and overrides.
130 # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging
131 # and extra context when looking at past runs.
132 output_subdir: str = ".hydra"
133
134 # Those lists will contain runtime overrides
135 overrides: OverridesConf = OverridesConf()
136
137 job: JobConf = JobConf()
138
139 # populated at runtime
140 runtime: RuntimeConf = RuntimeConf()
141
142 # Can be a boolean, string or a list of strings
143 # If a boolean, setting to true will set the log level for the root logger to debug
144 # If a string, it's interpreted as a the list [string]
145 # If a list, each element is interpreted as a logger to have logging level set to debug.
146 # Typical command lines to manipulate hydra.verbose:
147 # hydra.verbose=true
148 # hydra.verbose=[hydra,__main__]
149 # TODO: good use case for Union support in OmegaConf
150 verbose: Any = False
151
152
153 ConfigStore.instance().store(
154 name="hydra_config",
155 node={
156 # Hydra composition defaults
157 "defaults": hydra_defaults,
158 # Hydra config
159 "hydra": HydraConf,
160 },
161 provider="hydra",
162 )
163
[end of hydra/conf/__init__.py]
[start of hydra/utils.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import logging.config
3 import warnings
4 from pathlib import Path
5 from typing import Any
6
7 from omegaconf import DictConfig, OmegaConf
8
9 from hydra.conf import PluginConf
10 from hydra.core.hydra_config import HydraConfig
11
12 log = logging.getLogger(__name__)
13
14
15 def get_method(path: str) -> type:
16 return get_class(path)
17
18
19 def get_class(path: str) -> type:
20 try:
21 from importlib import import_module
22
23 module_path, _, class_name = path.rpartition(".")
24 mod = import_module(module_path)
25 try:
26 klass: type = getattr(mod, class_name)
27 except AttributeError:
28 raise ImportError(
29 "Class {} is not in module {}".format(class_name, module_path)
30 )
31 return klass
32 except ValueError as e:
33 log.error("Error initializing class " + path)
34 raise e
35
36
37 def get_static_method(full_method_name: str) -> type:
38 try:
39 spl = full_method_name.split(".")
40 method_name = spl.pop()
41 class_name = ".".join(spl)
42 clz = get_class(class_name)
43 ret: type = getattr(clz, method_name)
44 return ret
45 except Exception as e:
46 log.error("Error getting static method {} : {}".format(full_method_name, e))
47 raise e
48
49
50 def instantiate(config: PluginConf, *args: Any, **kwargs: Any) -> Any:
51 import copy
52
53 # copy config to avoid mutating it when merging with kwargs
54
55 config_copy = copy.deepcopy(config)
56
57 # Manually set parent as deepcopy does not currently handles it (https://github.com/omry/omegaconf/issues/130)
58 # noinspection PyProtectedMember
59 config_copy._set_parent(config._get_parent()) # type: ignore
60 config = config_copy
61 classname = _get_class_name(config)
62 try:
63 clazz = get_class(classname)
64 params = config.params if "params" in config else OmegaConf.create()
65 assert isinstance(
66 params, DictConfig
67 ), "Input config params are expected to be a mapping, found {}".format(
68 type(config.params)
69 )
70 params.merge_with(OmegaConf.create(kwargs))
71
72 return clazz(*args, **params)
73 except Exception as e:
74 log.error(f"Error instantiating '{classname}' : {e}")
75 raise e
76
77
78 def get_original_cwd() -> str:
79 ret = HydraConfig.instance().hydra.runtime.cwd
80 assert ret is not None and isinstance(ret, str)
81 return ret
82
83
84 def to_absolute_path(path: str) -> str:
85 """
86 converts the specified path to be absolute path.
87 if the input path is relative, it's interpreted as relative to the original working directory
88 if it's absolute, it's returned as is
89 :param path:
90 :return:
91 """
92 p = Path(path)
93 if p.is_absolute():
94 ret = p
95 else:
96 ret = Path(get_original_cwd()) / p
97 return str(ret)
98
99
100 def _get_class_name(config: PluginConf) -> str:
101 if "class" in config:
102 warnings.warn(
103 "\n"
104 "PluginConf field 'class' is deprecated since Hydra 1.0.0 and will be removed in a future Hydra version.\n"
105 "Offending config class:\n"
106 f"\tclass={config['class']}\n"
107 "Change your config to use 'cls' instead of 'class'.\n",
108 category=UserWarning,
109 )
110 classname = config["class"]
111 assert isinstance(classname, str)
112 return classname
113 else:
114 if "cls" in config:
115 return config.cls
116 else:
117 raise ValueError("Input config does not have a cls field")
118
[end of hydra/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py
--- a/hydra/conf/__init__.py
+++ b/hydra/conf/__init__.py
@@ -25,6 +25,8 @@
@dataclass
+# This extends Dict[str, Any] to allow for the deprecated "class" field.
+# Once support for class field removed this can stop extending Dict.
class PluginConf(Dict[str, Any]):
# class name for plugin
cls: str = MISSING
diff --git a/hydra/utils.py b/hydra/utils.py
--- a/hydra/utils.py
+++ b/hydra/utils.py
@@ -4,7 +4,7 @@
from pathlib import Path
from typing import Any
-from omegaconf import DictConfig, OmegaConf
+from omegaconf import DictConfig, OmegaConf, _utils
from hydra.conf import PluginConf
from hydra.core.hydra_config import HydraConfig
@@ -67,9 +67,22 @@
), "Input config params are expected to be a mapping, found {}".format(
type(config.params)
)
- params.merge_with(OmegaConf.create(kwargs))
-
- return clazz(*args, **params)
+ primitives = {}
+ rest = {}
+ for k, v in kwargs.items():
+ if _utils._is_primitive_type(v) or isinstance(v, (dict, list)):
+ primitives[k] = v
+ else:
+ rest[k] = v
+ final_kwargs = {}
+ params.merge_with(OmegaConf.create(primitives))
+ for k, v in params.items():
+ final_kwargs[k] = v
+
+ for k, v in rest.items():
+ final_kwargs[k] = v
+
+ return clazz(*args, **final_kwargs)
except Exception as e:
log.error(f"Error instantiating '{classname}' : {e}")
raise e
| {"golden_diff": "diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py\n--- a/hydra/conf/__init__.py\n+++ b/hydra/conf/__init__.py\n@@ -25,6 +25,8 @@\n \n \n @dataclass\n+# This extends Dict[str, Any] to allow for the deprecated \"class\" field.\n+# Once support for class field removed this can stop extending Dict.\n class PluginConf(Dict[str, Any]):\n # class name for plugin\n cls: str = MISSING\ndiff --git a/hydra/utils.py b/hydra/utils.py\n--- a/hydra/utils.py\n+++ b/hydra/utils.py\n@@ -4,7 +4,7 @@\n from pathlib import Path\n from typing import Any\n \n-from omegaconf import DictConfig, OmegaConf\n+from omegaconf import DictConfig, OmegaConf, _utils\n \n from hydra.conf import PluginConf\n from hydra.core.hydra_config import HydraConfig\n@@ -67,9 +67,22 @@\n ), \"Input config params are expected to be a mapping, found {}\".format(\n type(config.params)\n )\n- params.merge_with(OmegaConf.create(kwargs))\n-\n- return clazz(*args, **params)\n+ primitives = {}\n+ rest = {}\n+ for k, v in kwargs.items():\n+ if _utils._is_primitive_type(v) or isinstance(v, (dict, list)):\n+ primitives[k] = v\n+ else:\n+ rest[k] = v\n+ final_kwargs = {}\n+ params.merge_with(OmegaConf.create(primitives))\n+ for k, v in params.items():\n+ final_kwargs[k] = v\n+\n+ for k, v in rest.items():\n+ final_kwargs[k] = v\n+\n+ return clazz(*args, **final_kwargs)\n except Exception as e:\n log.error(f\"Error instantiating '{classname}' : {e}\")\n raise e\n", "issue": "[Bug] issues with named arguments in hydra.utils.instantiate\n`hydra.utils.instantiate` converts to OmegaConf named arguments, making it impossible to use the named arguments syntax for arguments which cannot be represented in OmegaConf.\r\n\r\n## To reproduce:\r\n\r\n`conf.yaml`:\r\n\r\n```yaml\r\nclass: torch.optim.Adam\r\nparams:\r\n lr: 1E-4\r\n```\r\n\r\n`test.py`:\r\n\r\n```python\r\nimport hydra\r\nimport torch\r\nimport torchvision\r\n\r\[email protected](config_path=\"conf.yaml\")\r\ndef my_app(cfg):\r\n print(cfg.pretty())\r\n model = torchvision.models.alexnet()\r\n hydra.utils.instantiate(cfg, model.parameters()) # OK\r\n hydra.utils.instantiate(cfg, params=model.parameters()) # FAILS\r\n\r\nif __name__ == \"__main__\":\r\n my_app()\r\n```\r\n\r\n** Stack trace/error message **\r\n```\r\nValueError: key params: generator is not a primitive type\r\n```\r\n\r\nAs a side note, the fact that `hydra.utils.instantiate` takes as input an argument named `config` makes it impossible to instantiate an object which expects the same argument using named arguments, which seems an unnecessary restriction.\r\n\r\n## The following solves both issues:\r\n\r\n```python\r\ndef instantiate_f(config):\r\n assert config is not None, \"Input config is None\"\r\n class_name = config[\"class\"]\r\n clazz = get_class(class_name)\r\n params = config.params if \"params\" in config else OmegaConf.create()\r\n assert isinstance(\r\n params, DictConfig\r\n ), \"Input config params are expected to be a mapping, found {}\".format(\r\n type(config.params)\r\n )\r\n params = OmegaConf.to_container(params, resolve=True)\r\n\r\n def f(*args, **kwargs):\r\n try:\r\n kwargs = {**params, **kwargs}\r\n return clazz(*args, **kwargs)\r\n except Exception as e:\r\n log.error(\"Error instantiating {} : {}\".format(class_name, e))\r\n raise e\r\n\r\n return f\r\n\r\[email protected](config_path=\"conf.yaml\")\r\ndef my_app(cfg):\r\n print(cfg.pretty())\r\n model = torchvision.models.alexnet()\r\n instantiate_f(cfg)(params=model.parameters()) # OK\r\n 
instantiate_f(cfg)(model.parameters()) # OK\r\n\r\nif __name__ == \"__main__\":\r\n my_app()\r\n```\r\n\r\nA note in the documentation should be added explaining the behavior wrt `config`, as it gets resolved immediately.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\nhydra_defaults = [\n # Hydra's logging config\n {\"hydra/hydra_logging\": \"default\"},\n # Job's logging config\n {\"hydra/job_logging\": \"default\"},\n # Launcher config\n {\"hydra/launcher\": \"basic\"},\n # Sweeper config\n {\"hydra/sweeper\": \"basic\"},\n # Output directory\n {\"hydra/output\": \"default\"},\n # --help template\n {\"hydra/help\": \"default\"},\n # --hydra-help template\n {\"hydra/hydra_help\": \"default\"},\n]\n\n\n@dataclass\nclass PluginConf(Dict[str, Any]):\n # class name for plugin\n cls: str = MISSING\n params: Dict[str, Any] = field(default_factory=dict)\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n@dataclass\n# job runtime information will be populated here\nclass JobConf:\n # Job name, can be specified by the user (in config or cli) or populated automatically\n name: str = MISSING\n\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured in hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: str = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n @dataclass\n # Job config\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=lambda: [])\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass RuntimeConf:\n version: str = MISSING\n cwd: str = MISSING\n\n\n@dataclass\nclass HydraConf:\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: PluginConf = field(default_factory=PluginConf)\n # Launcher configuration\n launcher: PluginConf = field(default_factory=PluginConf)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go here. 
Useful for debugging\n # and extra context when looking at past runs.\n output_subdir: str = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n\nConfigStore.instance().store(\n name=\"hydra_config\",\n node={\n # Hydra composition defaults\n \"defaults\": hydra_defaults,\n # Hydra config\n \"hydra\": HydraConf,\n },\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport logging.config\nimport warnings\nfrom pathlib import Path\nfrom typing import Any\n\nfrom omegaconf import DictConfig, OmegaConf\n\nfrom hydra.conf import PluginConf\nfrom hydra.core.hydra_config import HydraConfig\n\nlog = logging.getLogger(__name__)\n\n\ndef get_method(path: str) -> type:\n return get_class(path)\n\n\ndef get_class(path: str) -> type:\n try:\n from importlib import import_module\n\n module_path, _, class_name = path.rpartition(\".\")\n mod = import_module(module_path)\n try:\n klass: type = getattr(mod, class_name)\n except AttributeError:\n raise ImportError(\n \"Class {} is not in module {}\".format(class_name, module_path)\n )\n return klass\n except ValueError as e:\n log.error(\"Error initializing class \" + path)\n raise e\n\n\ndef get_static_method(full_method_name: str) -> type:\n try:\n spl = full_method_name.split(\".\")\n method_name = spl.pop()\n class_name = \".\".join(spl)\n clz = get_class(class_name)\n ret: type = getattr(clz, method_name)\n return ret\n except Exception as e:\n log.error(\"Error getting static method {} : {}\".format(full_method_name, e))\n raise e\n\n\ndef instantiate(config: PluginConf, *args: Any, **kwargs: Any) -> Any:\n import copy\n\n # copy config to avoid mutating it when merging with kwargs\n\n config_copy = copy.deepcopy(config)\n\n # Manually set parent as deepcopy does not currently handles it (https://github.com/omry/omegaconf/issues/130)\n # noinspection PyProtectedMember\n config_copy._set_parent(config._get_parent()) # type: ignore\n config = config_copy\n classname = _get_class_name(config)\n try:\n clazz = get_class(classname)\n params = config.params if \"params\" in config else OmegaConf.create()\n assert isinstance(\n params, DictConfig\n ), \"Input config params are expected to be a mapping, found {}\".format(\n type(config.params)\n )\n params.merge_with(OmegaConf.create(kwargs))\n\n return clazz(*args, **params)\n except Exception as e:\n log.error(f\"Error instantiating '{classname}' : {e}\")\n raise e\n\n\ndef get_original_cwd() -> str:\n ret = HydraConfig.instance().hydra.runtime.cwd\n assert ret is not None and isinstance(ret, str)\n return ret\n\n\ndef to_absolute_path(path: str) -> str:\n \"\"\"\n converts the specified path to be absolute path.\n if the input path is relative, it's interpreted as relative to the original working directory\n if it's absolute, it's returned as is\n :param path:\n :return:\n \"\"\"\n p = 
Path(path)\n if p.is_absolute():\n ret = p\n else:\n ret = Path(get_original_cwd()) / p\n return str(ret)\n\n\ndef _get_class_name(config: PluginConf) -> str:\n if \"class\" in config:\n warnings.warn(\n \"\\n\"\n \"PluginConf field 'class' is deprecated since Hydra 1.0.0 and will be removed in a future Hydra version.\\n\"\n \"Offending config class:\\n\"\n f\"\\tclass={config['class']}\\n\"\n \"Change your config to use 'cls' instead of 'class'.\\n\",\n category=UserWarning,\n )\n classname = config[\"class\"]\n assert isinstance(classname, str)\n return classname\n else:\n if \"cls\" in config:\n return config.cls\n else:\n raise ValueError(\"Input config does not have a cls field\")\n", "path": "hydra/utils.py"}]} | 3,589 | 423 |
gh_patches_debug_1011 | rasdani/github-patches | git_diff | pymedusa__Medusa-3547 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
codec can't encode characters in position 29-36
### Before submitting your issue:
Enable debug logging in Medusa settings, reproduce the error (be sure to disable after the bug is fixed)
**Branch/Commit:** feature/add-indexerids-to-db/4bdbd81
**OS:** windows
**What you did:** Started up medusa while having the series `Tokyo Goul` added. With scene exceptions added from xem.
**What happened:** The error below showed.
**What you expected:** no error.
**Logs:**
```
2017-12-27 21:29:34 ERROR MAIN :: [4bdbd81] BraceMessage string formatting failed. Using representation instead.
File "D:\JetBrains\PyCharm 2017.2.4\helpers\pydev\pydevd.py", line 1599, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "D:\JetBrains\PyCharm 2017.2.4\helpers\pydev\pydevd.py", line 1026, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "D:/Development/Medusa5/start.py", line 7, in <module>
main()
File "D:/Development/Medusa5\medusa\__main__.py", line 2109, in main
application.start(sys.argv[1:])
File "D:/Development/Medusa5\medusa\__main__.py", line 354, in start
name_cache.build_name_cache()
File "D:/Development/Medusa5\medusa\name_cache.py", line 128, in build_name_cache
_cache_name(show)
File "D:/Development/Medusa5\medusa\name_cache.py", line 116, in _cache_name
'names': ', '.join(names.keys())
File "D:/Development/Medusa5\medusa\logger\adapters\style.py", line 89, in log
self.logger.log(level, brace_msg, **kwargs)
File "D:\Python27\lib\logging\__init__.py", line 1489, in log
self.logger.log(level, msg, *args, **kwargs)
File "D:\Python27\lib\logging\__init__.py", line 1231, in log
self._log(level, msg, args, **kwargs)
File "D:\Python27\lib\logging\__init__.py", line 1286, in _log
self.handle(record)
File "D:\Python27\lib\logging\__init__.py", line 1296, in handle
self.callHandlers(record)
File "D:\Python27\lib\logging\__init__.py", line 1336, in callHandlers
hdlr.handle(record)
File "D:\Python27\lib\logging\__init__.py", line 759, in handle
self.emit(record)
File "D:\Python27\lib\logging\handlers.py", line 78, in emit
logging.FileHandler.emit(self, record)
File "D:\Python27\lib\logging\__init__.py", line 957, in emit
StreamHandler.emit(self, record)
File "D:\Python27\lib\logging\__init__.py", line 861, in emit
msg = self.format(record)
File "D:\Python27\lib\logging\__init__.py", line 734, in format
return fmt.format(record)
File "D:/Development/Medusa5\medusa\logger\__init__.py", line 546, in format
msg = super(CensoredFormatter, self).format(record)
File "D:\Python27\lib\logging\__init__.py", line 465, in format
record.message = record.getMessage()
File "D:\Python27\lib\logging\__init__.py", line 325, in getMessage
msg = str(self.msg)
File "D:/Development/Medusa5\medusa\init\logconfig.py", line 80, in __str__
result = text_type(self.fmt)
File "D:/Development/Medusa5\medusa\logger\adapters\style.py", line 49, in __str__
''.join(traceback.format_stack()),
Traceback (most recent call last):
File "D:/Development/Medusa5\medusa\logger\adapters\style.py", line 39, in __str__
return msg.format(*args, **kwargs)
File "D:\Python27\lib\encodings\cp1252.py", line 12, in encode
return codecs.charmap_encode(input,errors,encoding_table)
UnicodeEncodeError: 'charmap' codec can't encode characters in position 29-36: character maps to <undefined>
```
I got it in my branch, but I doubt it was because of any of the changes I made.
</issue>
<code>
[start of medusa/logger/adapters/style.py]
1 # coding=utf-8
2
3 """Style Adapters for Python logging."""
4
5 from __future__ import unicode_literals
6
7 import collections
8 import functools
9 import logging
10 import traceback
11
12 from six import text_type
13
14 log = logging.getLogger(__name__)
15 log.addHandler(logging.NullHandler())
16
17
18 class BraceMessage(object):
19 """Lazily convert a Brace-formatted message."""
20
21 def __init__(self, msg, *args, **kwargs):
22 """Initialize a lazy-formatted message."""
23 self.msg = msg
24 self.args = args
25 self.kwargs = kwargs
26
27 def __str__(self):
28 """Convert to string."""
29 args = self.args
30 kwargs = self.kwargs
31 if args and len(args) == 1:
32 if args[0] and isinstance(args[0], collections.Mapping):
33 args = []
34 kwargs = self.args[0]
35
36 msg = str(self.msg)
37
38 try:
39 return msg.format(*args, **kwargs)
40 except IndexError:
41 try:
42 return msg.format(kwargs)
43 except IndexError:
44 return msg
45 except Exception:
46 log.error(
47 'BraceMessage string formatting failed. '
48 'Using representation instead.\n{0}'.format(
49 ''.join(traceback.format_stack()),
50 )
51 )
52 return repr(self)
53
54 def __repr__(self):
55 """Convert to class representation."""
56 sep = ', '
57 kw_repr = '{key}={value!r}'
58 name = self.__class__.__name__
59 args = sep.join(map(text_type, self.args))
60 kwargs = sep.join(kw_repr.format(key=k, value=v)
61 for k, v in self.kwargs.items())
62 return '{cls}({args})'.format(
63 cls=name,
64 args=sep.join([repr(self.msg), args, kwargs])
65 )
66
67 def format(self, *args, **kwargs):
68 """Format a BraceMessage string."""
69 return str(self).format(*args, **kwargs)
70
71
72 class BraceAdapter(logging.LoggerAdapter):
73 """Adapt logger to use Brace-formatted messages."""
74
75 def __init__(self, logger, extra=None):
76 """Initialize the Brace adapter with a logger."""
77 super(BraceAdapter, self).__init__(logger, extra)
78 self.debug = functools.partial(self.log, logging.DEBUG)
79 self.info = functools.partial(self.log, logging.INFO)
80 self.warning = functools.partial(self.log, logging.WARNING)
81 self.error = functools.partial(self.log, logging.ERROR)
82 self.critical = functools.partial(self.log, logging.CRITICAL)
83
84 def log(self, level, msg, *args, **kwargs):
85 """Log a message at the specified level using Brace-formatting."""
86 if self.isEnabledFor(level):
87 msg, kwargs = self.process(msg, kwargs)
88 brace_msg = BraceMessage(msg, *args, **kwargs)
89 self.logger.log(level, brace_msg, **kwargs)
90
91 def exception(self, msg, *args, **kwargs):
92 """Add exception information before delegating to self.log."""
93 kwargs['exc_info'] = 1
94 self.log(logging.ERROR, msg, *args, **kwargs)
95
[end of medusa/logger/adapters/style.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/medusa/logger/adapters/style.py b/medusa/logger/adapters/style.py
--- a/medusa/logger/adapters/style.py
+++ b/medusa/logger/adapters/style.py
@@ -33,7 +33,7 @@
args = []
kwargs = self.args[0]
- msg = str(self.msg)
+ msg = text_type(self.msg)
try:
return msg.format(*args, **kwargs)
| {"golden_diff": "diff --git a/medusa/logger/adapters/style.py b/medusa/logger/adapters/style.py\n--- a/medusa/logger/adapters/style.py\n+++ b/medusa/logger/adapters/style.py\n@@ -33,7 +33,7 @@\n args = []\n kwargs = self.args[0]\n \n- msg = str(self.msg)\n+ msg = text_type(self.msg)\n \n try:\n return msg.format(*args, **kwargs)\n", "issue": "codec can't encode characters in position 29-36\n### Before submitting your issue:\r\n\r\nEnable debug logging in Medusa settings, reproduce the error (be sure to disable after the bug is fixed)\r\n\r\n**Branch/Commit:** feature/add-indexerids-to-db/4bdbd81\r\n**OS:** windows\r\n**What you did:** Started up medusa while having the series `Tokyo Goul` added. With scene exceptions added from xem.\r\n**What happened:** The error below showed.\r\n**What you expected:** no error.\r\n**Logs:**\r\n```\r\n2017-12-27 21:29:34 ERROR MAIN :: [4bdbd81] BraceMessage string formatting failed. Using representation instead.\r\n File \"D:\\JetBrains\\PyCharm 2017.2.4\\helpers\\pydev\\pydevd.py\", line 1599, in <module>\r\n globals = debugger.run(setup['file'], None, None, is_module)\r\n File \"D:\\JetBrains\\PyCharm 2017.2.4\\helpers\\pydev\\pydevd.py\", line 1026, in run\r\n pydev_imports.execfile(file, globals, locals) # execute the script\r\n File \"D:/Development/Medusa5/start.py\", line 7, in <module>\r\n main()\r\n File \"D:/Development/Medusa5\\medusa\\__main__.py\", line 2109, in main\r\n application.start(sys.argv[1:])\r\n File \"D:/Development/Medusa5\\medusa\\__main__.py\", line 354, in start\r\n name_cache.build_name_cache()\r\n File \"D:/Development/Medusa5\\medusa\\name_cache.py\", line 128, in build_name_cache\r\n _cache_name(show)\r\n File \"D:/Development/Medusa5\\medusa\\name_cache.py\", line 116, in _cache_name\r\n 'names': ', '.join(names.keys())\r\n File \"D:/Development/Medusa5\\medusa\\logger\\adapters\\style.py\", line 89, in log\r\n self.logger.log(level, brace_msg, **kwargs)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 1489, in log\r\n self.logger.log(level, msg, *args, **kwargs)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 1231, in log\r\n self._log(level, msg, args, **kwargs)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 1286, in _log\r\n self.handle(record)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 1296, in handle\r\n self.callHandlers(record)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 1336, in callHandlers\r\n hdlr.handle(record)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 759, in handle\r\n self.emit(record)\r\n File \"D:\\Python27\\lib\\logging\\handlers.py\", line 78, in emit\r\n logging.FileHandler.emit(self, record)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 957, in emit\r\n StreamHandler.emit(self, record)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 861, in emit\r\n msg = self.format(record)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 734, in format\r\n return fmt.format(record)\r\n File \"D:/Development/Medusa5\\medusa\\logger\\__init__.py\", line 546, in format\r\n msg = super(CensoredFormatter, self).format(record)\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 465, in format\r\n record.message = record.getMessage()\r\n File \"D:\\Python27\\lib\\logging\\__init__.py\", line 325, in getMessage\r\n msg = str(self.msg)\r\n File \"D:/Development/Medusa5\\medusa\\init\\logconfig.py\", line 80, in __str__\r\n result = text_type(self.fmt)\r\n File 
\"D:/Development/Medusa5\\medusa\\logger\\adapters\\style.py\", line 49, in __str__\r\n ''.join(traceback.format_stack()),\r\nTraceback (most recent call last):\r\n File \"D:/Development/Medusa5\\medusa\\logger\\adapters\\style.py\", line 39, in __str__\r\n return msg.format(*args, **kwargs)\r\n File \"D:\\Python27\\lib\\encodings\\cp1252.py\", line 12, in encode\r\n return codecs.charmap_encode(input,errors,encoding_table)\r\nUnicodeEncodeError: 'charmap' codec can't encode characters in position 29-36: character maps to <undefined>\r\n```\r\n\r\nI got it in my branch, but I doubt it was because of any of the changed I did.\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Style Adapters for Python logging.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport collections\nimport functools\nimport logging\nimport traceback\n\nfrom six import text_type\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\nclass BraceMessage(object):\n \"\"\"Lazily convert a Brace-formatted message.\"\"\"\n\n def __init__(self, msg, *args, **kwargs):\n \"\"\"Initialize a lazy-formatted message.\"\"\"\n self.msg = msg\n self.args = args\n self.kwargs = kwargs\n\n def __str__(self):\n \"\"\"Convert to string.\"\"\"\n args = self.args\n kwargs = self.kwargs\n if args and len(args) == 1:\n if args[0] and isinstance(args[0], collections.Mapping):\n args = []\n kwargs = self.args[0]\n\n msg = str(self.msg)\n\n try:\n return msg.format(*args, **kwargs)\n except IndexError:\n try:\n return msg.format(kwargs)\n except IndexError:\n return msg\n except Exception:\n log.error(\n 'BraceMessage string formatting failed. '\n 'Using representation instead.\\n{0}'.format(\n ''.join(traceback.format_stack()),\n )\n )\n return repr(self)\n\n def __repr__(self):\n \"\"\"Convert to class representation.\"\"\"\n sep = ', '\n kw_repr = '{key}={value!r}'\n name = self.__class__.__name__\n args = sep.join(map(text_type, self.args))\n kwargs = sep.join(kw_repr.format(key=k, value=v)\n for k, v in self.kwargs.items())\n return '{cls}({args})'.format(\n cls=name,\n args=sep.join([repr(self.msg), args, kwargs])\n )\n\n def format(self, *args, **kwargs):\n \"\"\"Format a BraceMessage string.\"\"\"\n return str(self).format(*args, **kwargs)\n\n\nclass BraceAdapter(logging.LoggerAdapter):\n \"\"\"Adapt logger to use Brace-formatted messages.\"\"\"\n\n def __init__(self, logger, extra=None):\n \"\"\"Initialize the Brace adapter with a logger.\"\"\"\n super(BraceAdapter, self).__init__(logger, extra)\n self.debug = functools.partial(self.log, logging.DEBUG)\n self.info = functools.partial(self.log, logging.INFO)\n self.warning = functools.partial(self.log, logging.WARNING)\n self.error = functools.partial(self.log, logging.ERROR)\n self.critical = functools.partial(self.log, logging.CRITICAL)\n\n def log(self, level, msg, *args, **kwargs):\n \"\"\"Log a message at the specified level using Brace-formatting.\"\"\"\n if self.isEnabledFor(level):\n msg, kwargs = self.process(msg, kwargs)\n brace_msg = BraceMessage(msg, *args, **kwargs)\n self.logger.log(level, brace_msg, **kwargs)\n\n def exception(self, msg, *args, **kwargs):\n \"\"\"Add exception information before delegating to self.log.\"\"\"\n kwargs['exc_info'] = 1\n self.log(logging.ERROR, msg, *args, **kwargs)\n", "path": "medusa/logger/adapters/style.py"}]} | 2,552 | 100 |
gh_patches_debug_36298 | rasdani/github-patches | git_diff | elastic__apm-agent-python-957 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consider using route information instead of URL for Starlette transactions
We've slowly started using Starlette for some smaller applications due to its simplicity and async capabilities. We also happen to use Elastic APM for our monitoring, but we've found that the agent works quite differently from other agents, such as Django or Flask.
The default for Django is "HTTP Method + path.to.view". This can be changed for Django 2.2+ using the `django-transaction-name-from-route` setting to instead use the route that was matched against, for example `GET /users/<int:user_id>/`.
For Flask, the default seems to be that of method + route, e.g. `GET /users/<int:user_id>/`.
For Starlette, however, information about the matched route is not actually available on the request, and for that reason (I assume) the full URL is instead used as the transaction name.
In the example above, it would mean that hitting the endpoint with two different user URLs would generate two separate transaction entries. For some of our views, that would mean 1+ million transaction names for the same view.
However, it is somewhat possible to extract the route by mimicking what Starlette itself does. On the request, the Application itself is available, and from that, the router and its routes. This means it's possible to loop over them and match their regular expressions against the current request.
1) I realise this isn't pretty in any way, shape or form - but it would make instrumentation of Starlette applications a lot more useable
2) I also realise it's backwards incompatible - but perhaps this could be solved in the same way that it's done with Django?
What would your thought be on such a change?
For us at our workplace, we actually have one of our major applications ported to Starlette, but once we saw what was happening to the transaction names, we had to revert the deployment. We'd love to have it running, so we'd probably be interested in providing a patch for it as well.
**EDIT**
Just wanted to add this here, since it's a discussion of making this type of information available in the middleware system: https://github.com/encode/starlette/issues/685
It also links another repo for timing of ASGI that basically does what I describe here, by looping over all routes and comparing them to the current URL.
</issue>
<code>
[start of elasticapm/contrib/starlette/__init__.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30
31
32 from __future__ import absolute_import
33
34 import starlette
35 from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
36 from starlette.requests import Request
37 from starlette.responses import Response
38 from starlette.types import ASGIApp
39
40 import elasticapm
41 import elasticapm.instrumentation.control
42 from elasticapm.base import Client
43 from elasticapm.conf import constants
44 from elasticapm.contrib.asyncio.traces import set_context
45 from elasticapm.contrib.starlette.utils import get_data_from_request, get_data_from_response
46 from elasticapm.utils.disttracing import TraceParent
47 from elasticapm.utils.logging import get_logger
48
49 logger = get_logger("elasticapm.errors.client")
50
51
52 def make_apm_client(config: dict, client_cls=Client, **defaults) -> Client:
53 """Builds ElasticAPM client.
54
55 Args:
56 config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.
57 client_cls (Client): Must be Client or its child.
58 **defaults: Additional parameters for Client. See `elasticapm.base.Client`
59
60 Returns:
61 Client
62 """
63 if "framework_name" not in defaults:
64 defaults["framework_name"] = "starlette"
65 defaults["framework_version"] = starlette.__version__
66
67 return client_cls(config, **defaults)
68
69
70 class ElasticAPM(BaseHTTPMiddleware):
71 """
72 Starlette / FastAPI middleware for Elastic APM capturing.
73
74 >>> elasticapm = make_apm_client({
75 >>> 'SERVICE_NAME': 'myapp',
76 >>> 'DEBUG': True,
77 >>> 'SERVER_URL': 'http://localhost:8200',
78 >>> 'CAPTURE_HEADERS': True,
79 >>> 'CAPTURE_BODY': 'all'
80 >>> })
81
82 >>> app.add_middleware(ElasticAPM, client=elasticapm)
83
84 Pass an arbitrary APP_NAME and SECRET_TOKEN::
85
86 >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')
87
88 Pass an explicit client::
89
90 >>> elasticapm = ElasticAPM(app, client=client)
91
92 Automatically configure logging::
93
94 >>> elasticapm = ElasticAPM(app, logging=True)
95
96 Capture an exception::
97
98 >>> try:
99 >>> 1 / 0
100 >>> except ZeroDivisionError:
101 >>> elasticapm.capture_exception()
102
103 Capture a message::
104
105 >>> elasticapm.capture_message('hello, world!')
106 """
107
108 def __init__(self, app: ASGIApp, client: Client):
109 """
110
111 Args:
112 app (ASGIApp): Starlette app
113 client (Client): ElasticAPM Client
114 """
115 self.client = client
116
117 if self.client.config.instrument:
118 elasticapm.instrumentation.control.instrument()
119
120 super().__init__(app)
121
122 async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
123 """Processes the whole request APM capturing.
124
125 Args:
126 request (Request)
127 call_next (RequestResponseEndpoint): Next request process in Starlette.
128
129 Returns:
130 Response
131 """
132 await self._request_started(request)
133
134 try:
135 response = await call_next(request)
136 elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)
137 except Exception:
138 await self.capture_exception(
139 context={"request": await get_data_from_request(request, self.client.config, constants.ERROR)}
140 )
141 elasticapm.set_transaction_result("HTTP 5xx", override=False)
142 elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)
143 elasticapm.set_context({"status_code": 500}, "response")
144
145 raise
146 else:
147 await self._request_finished(response)
148 finally:
149 self.client.end_transaction()
150
151 return response
152
153 async def capture_exception(self, *args, **kwargs):
154 """Captures your exception.
155
156 Args:
157 *args:
158 **kwargs:
159 """
160 self.client.capture_exception(*args, **kwargs)
161
162 async def capture_message(self, *args, **kwargs):
163 """Captures your message.
164
165 Args:
166 *args: Whatever
167 **kwargs: Whatever
168 """
169 self.client.capture_message(*args, **kwargs)
170
171 async def _request_started(self, request: Request):
172 """Captures the begin of the request processing to APM.
173
174 Args:
175 request (Request)
176 """
177 if not self.client.should_ignore_url(request.url.path):
178 trace_parent = TraceParent.from_headers(dict(request.headers))
179 self.client.begin_transaction("request", trace_parent=trace_parent)
180
181 await set_context(
182 lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), "request"
183 )
184 elasticapm.set_transaction_name("{} {}".format(request.method, request.url.path), override=False)
185
186 async def _request_finished(self, response: Response):
187 """Captures the end of the request processing to APM.
188
189 Args:
190 response (Response)
191 """
192 await set_context(
193 lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), "response"
194 )
195
196 result = "HTTP {}xx".format(response.status_code // 100)
197 elasticapm.set_transaction_result(result, override=False)
198
[end of elasticapm/contrib/starlette/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/contrib/starlette/__init__.py b/elasticapm/contrib/starlette/__init__.py
--- a/elasticapm/contrib/starlette/__init__.py
+++ b/elasticapm/contrib/starlette/__init__.py
@@ -35,6 +35,7 @@
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.requests import Request
from starlette.responses import Response
+from starlette.routing import Match
from starlette.types import ASGIApp
import elasticapm
@@ -181,7 +182,8 @@
await set_context(
lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), "request"
)
- elasticapm.set_transaction_name("{} {}".format(request.method, request.url.path), override=False)
+ transaction_name = self.get_route_name(request) or request.url.path
+ elasticapm.set_transaction_name("{} {}".format(request.method, transaction_name), override=False)
async def _request_finished(self, response: Response):
"""Captures the end of the request processing to APM.
@@ -195,3 +197,34 @@
result = "HTTP {}xx".format(response.status_code // 100)
elasticapm.set_transaction_result(result, override=False)
+
+ def get_route_name(self, request: Request) -> str:
+ route_name = None
+ app = request.app
+ scope = request.scope
+ routes = app.routes
+
+ for route in routes:
+ match, _ = route.matches(scope)
+ if match == Match.FULL:
+ route_name = route.path
+ break
+ elif match == Match.PARTIAL and route_name is None:
+ route_name = route.path
+ # Starlette magically redirects requests if the path matches a route name with a trailing slash
+ # appended or removed. To not spam the transaction names list, we do the same here and put these
+ # redirects all in the same "redirect trailing slashes" transaction name
+ if not route_name and app.router.redirect_slashes and scope["path"] != "/":
+ redirect_scope = dict(scope)
+ if scope["path"].endswith("/"):
+ redirect_scope["path"] = scope["path"][:-1]
+ trim = True
+ else:
+ redirect_scope["path"] = scope["path"] + "/"
+ trim = False
+ for route in routes:
+ match, _ = route.matches(redirect_scope)
+ if match != Match.NONE:
+ route_name = route.path + "/" if trim else route.path[:-1]
+ break
+ return route_name
| {"golden_diff": "diff --git a/elasticapm/contrib/starlette/__init__.py b/elasticapm/contrib/starlette/__init__.py\n--- a/elasticapm/contrib/starlette/__init__.py\n+++ b/elasticapm/contrib/starlette/__init__.py\n@@ -35,6 +35,7 @@\n from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\n from starlette.requests import Request\n from starlette.responses import Response\n+from starlette.routing import Match\n from starlette.types import ASGIApp\n \n import elasticapm\n@@ -181,7 +182,8 @@\n await set_context(\n lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), \"request\"\n )\n- elasticapm.set_transaction_name(\"{} {}\".format(request.method, request.url.path), override=False)\n+ transaction_name = self.get_route_name(request) or request.url.path\n+ elasticapm.set_transaction_name(\"{} {}\".format(request.method, transaction_name), override=False)\n \n async def _request_finished(self, response: Response):\n \"\"\"Captures the end of the request processing to APM.\n@@ -195,3 +197,34 @@\n \n result = \"HTTP {}xx\".format(response.status_code // 100)\n elasticapm.set_transaction_result(result, override=False)\n+\n+ def get_route_name(self, request: Request) -> str:\n+ route_name = None\n+ app = request.app\n+ scope = request.scope\n+ routes = app.routes\n+\n+ for route in routes:\n+ match, _ = route.matches(scope)\n+ if match == Match.FULL:\n+ route_name = route.path\n+ break\n+ elif match == Match.PARTIAL and route_name is None:\n+ route_name = route.path\n+ # Starlette magically redirects requests if the path matches a route name with a trailing slash\n+ # appended or removed. To not spam the transaction names list, we do the same here and put these\n+ # redirects all in the same \"redirect trailing slashes\" transaction name\n+ if not route_name and app.router.redirect_slashes and scope[\"path\"] != \"/\":\n+ redirect_scope = dict(scope)\n+ if scope[\"path\"].endswith(\"/\"):\n+ redirect_scope[\"path\"] = scope[\"path\"][:-1]\n+ trim = True\n+ else:\n+ redirect_scope[\"path\"] = scope[\"path\"] + \"/\"\n+ trim = False\n+ for route in routes:\n+ match, _ = route.matches(redirect_scope)\n+ if match != Match.NONE:\n+ route_name = route.path + \"/\" if trim else route.path[:-1]\n+ break\n+ return route_name\n", "issue": "Consider using route information instead of URL for Starlette transactions\nWe're slowly started using Starlette for some smaller applications due to its simplicity and async capabilities. We also happen to use Elastic APM for our monitoring, but we've found that the agent works quite different from other agents, such as Django or Flask.\r\n\r\nThe default for Django, is \"HTTP Method + path.to.view\". This can be changed for Django 2.2+ using the `django-transaction-name-from-route` setting, to instead use the route that was matched against. For example `GET /users/<int:user_id>/`.\r\n\r\nFor Flask, the default seems to be the that of method + route, e.g.: `GET /users/<int:user_id/`.\r\n\r\nFor Starlette however, information about the matched route is not actually available on the request, and for that reason (I assume), the full URL is instead used as the transaction name.\r\nIn the example above, it would mean that every time a user with a different URL was hit, it would generate two separate transaction entries. For some of our views, that would mean 1+ milllion transaction names for the same view.\r\n\r\nHowever, it is somewhat possible to extract the route by mimicking what Starlette itself does. 
On the request, the Application itself is available, and from that route router + routes. This means it's possible to loop over them, and match their regular expression against the current request.\r\n\r\n1) I realise this isn't pretty in any way, shape or form - but it would make instrumentation of Starlette applications a lot more useable\r\n2) I also realise it's backwards incompatible - but perhaps this could be solved in the same way that it's done with Django?\r\n\r\nWhat would your thought be on such a change?\r\n\r\nFor us at our workplace, we've actually have one major of our main applications ported to Starlette, but once we saw what was happening to the transaction names, we had to revert the deployment. We'd love to have it running, so we'd probably be interested in providing a patch for it as well.\r\n\r\n**EDIT**\r\nJust wanted to add this here, since it's a discussion of making this type of information available in the middleware system: https://github.com/encode/starlette/issues/685\r\nIt also links another repo for timing of ASGI that basically does what I describe here, by looping over all routes and comparing them to the current URL.\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom __future__ import absolute_import\n\nimport starlette\nfrom starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\nfrom starlette.requests import Request\nfrom starlette.responses import Response\nfrom starlette.types import ASGIApp\n\nimport elasticapm\nimport elasticapm.instrumentation.control\nfrom elasticapm.base import Client\nfrom elasticapm.conf import constants\nfrom elasticapm.contrib.asyncio.traces import set_context\nfrom elasticapm.contrib.starlette.utils import get_data_from_request, get_data_from_response\nfrom elasticapm.utils.disttracing import TraceParent\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.errors.client\")\n\n\ndef make_apm_client(config: dict, client_cls=Client, **defaults) -> Client:\n \"\"\"Builds ElasticAPM client.\n\n Args:\n config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.\n client_cls (Client): Must be Client or its child.\n **defaults: Additional parameters for Client. See `elasticapm.base.Client`\n\n Returns:\n Client\n \"\"\"\n if \"framework_name\" not in defaults:\n defaults[\"framework_name\"] = \"starlette\"\n defaults[\"framework_version\"] = starlette.__version__\n\n return client_cls(config, **defaults)\n\n\nclass ElasticAPM(BaseHTTPMiddleware):\n \"\"\"\n Starlette / FastAPI middleware for Elastic APM capturing.\n\n >>> elasticapm = make_apm_client({\n >>> 'SERVICE_NAME': 'myapp',\n >>> 'DEBUG': True,\n >>> 'SERVER_URL': 'http://localhost:8200',\n >>> 'CAPTURE_HEADERS': True,\n >>> 'CAPTURE_BODY': 'all'\n >>> })\n\n >>> app.add_middleware(ElasticAPM, client=elasticapm)\n\n Pass an arbitrary APP_NAME and SECRET_TOKEN::\n\n >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')\n\n Pass an explicit client::\n\n >>> elasticapm = ElasticAPM(app, client=client)\n\n Automatically configure logging::\n\n >>> elasticapm = ElasticAPM(app, logging=True)\n\n Capture an exception::\n\n >>> try:\n >>> 1 / 0\n >>> except ZeroDivisionError:\n >>> elasticapm.capture_exception()\n\n Capture a message::\n\n >>> elasticapm.capture_message('hello, world!')\n \"\"\"\n\n def __init__(self, app: ASGIApp, client: Client):\n \"\"\"\n\n Args:\n app (ASGIApp): Starlette app\n client (Client): ElasticAPM Client\n \"\"\"\n self.client = client\n\n if self.client.config.instrument:\n elasticapm.instrumentation.control.instrument()\n\n super().__init__(app)\n\n async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:\n \"\"\"Processes the whole request APM capturing.\n\n Args:\n request (Request)\n call_next (RequestResponseEndpoint): Next request process in Starlette.\n\n Returns:\n Response\n \"\"\"\n await self._request_started(request)\n\n try:\n response = await call_next(request)\n elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)\n except Exception:\n await self.capture_exception(\n context={\"request\": await get_data_from_request(request, self.client.config, constants.ERROR)}\n )\n 
elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n raise\n else:\n await self._request_finished(response)\n finally:\n self.client.end_transaction()\n\n return response\n\n async def capture_exception(self, *args, **kwargs):\n \"\"\"Captures your exception.\n\n Args:\n *args:\n **kwargs:\n \"\"\"\n self.client.capture_exception(*args, **kwargs)\n\n async def capture_message(self, *args, **kwargs):\n \"\"\"Captures your message.\n\n Args:\n *args: Whatever\n **kwargs: Whatever\n \"\"\"\n self.client.capture_message(*args, **kwargs)\n\n async def _request_started(self, request: Request):\n \"\"\"Captures the begin of the request processing to APM.\n\n Args:\n request (Request)\n \"\"\"\n if not self.client.should_ignore_url(request.url.path):\n trace_parent = TraceParent.from_headers(dict(request.headers))\n self.client.begin_transaction(\"request\", trace_parent=trace_parent)\n\n await set_context(\n lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), \"request\"\n )\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, request.url.path), override=False)\n\n async def _request_finished(self, response: Response):\n \"\"\"Captures the end of the request processing to APM.\n\n Args:\n response (Response)\n \"\"\"\n await set_context(\n lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), \"response\"\n )\n\n result = \"HTTP {}xx\".format(response.status_code // 100)\n elasticapm.set_transaction_result(result, override=False)\n", "path": "elasticapm/contrib/starlette/__init__.py"}]} | 3,021 | 597 |
gh_patches_debug_52205 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-1323 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PrefixRedirectUrl redirect `/hub` to `/hub/hub`
It might be an edge case, which is not really important,
but I expected `/hub` -> `/hub/` and not `/hub/hub`. This is due to `uri.startswith(self.base_url)`, and `base_url` is guaranteed to end with a `/`. Now of course we can't just strip the trailing slash from `base_url` or things like `/hubot` will not be redirected to `/hub/hubot`, and doing nothing may be the right answer.
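
As a purely hypothetical sketch of the `startswith` check described above (not the actual JupyterHub handler code):

```python
# Hypothetical sketch, not JupyterHub code: why "/hub" ends up as "/hub/hub".
base_url = "/hub/"  # guaranteed to end with a slash

def redirect_target(uri):
    # anything that does not carry the prefix gets the prefix prepended
    if not uri.startswith(base_url):
        return base_url + uri.lstrip("/")
    return uri

print(redirect_target("/hub"))    # -> /hub/hub   (surprising)
print(redirect_target("/hubot"))  # -> /hub/hubot (desired)
print(redirect_target("/hub/"))   # -> /hub/      (unchanged)
```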
</issue>
<code>
[start of jupyterhub/handlers/pages.py]
1 """Basic html-rendering handlers."""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from http.client import responses
7
8 from jinja2 import TemplateNotFound
9 from tornado import web, gen
10 from tornado.httputil import url_concat
11
12 from .. import orm
13 from ..utils import admin_only, url_path_join
14 from .base import BaseHandler
15
16
17 class RootHandler(BaseHandler):
18 """Render the Hub root page.
19
20 If next argument is passed by single-user server,
21 redirect to base_url + single-user page.
22
23 If logged in, redirects to:
24
25 - single-user server if running
26 - hub home, otherwise
27
28 Otherwise, renders login page.
29 """
30 def get(self):
31 next_url = self.get_argument('next', '')
32 if next_url and not next_url.startswith('/'):
33 self.log.warning("Disallowing redirect outside JupyterHub: %r", next_url)
34 next_url = ''
35 if next_url and next_url.startswith(url_path_join(self.base_url, 'user/')):
36 # add /hub/ prefix, to ensure we redirect to the right user's server.
37 # The next request will be handled by UserSpawnHandler,
38 # ultimately redirecting to the logged-in user's server.
39 without_prefix = next_url[len(self.base_url):]
40 next_url = url_path_join(self.hub.base_url, without_prefix)
41 self.log.warning("Redirecting %s to %s. For sharing public links, use /user-redirect/",
42 self.request.uri, next_url,
43 )
44 self.redirect(next_url)
45 return
46 user = self.get_current_user()
47 if user:
48 if user.running:
49 url = user.url
50 self.log.debug("User is running: %s", url)
51 self.set_login_cookie(user) # set cookie
52 else:
53 url = url_path_join(self.hub.base_url, 'home')
54 self.log.debug("User is not running: %s", url)
55 else:
56 url = self.settings['login_url']
57 self.redirect(url)
58
59
60 class HomeHandler(BaseHandler):
61 """Render the user's home page."""
62
63 @web.authenticated
64 @gen.coroutine
65 def get(self):
66 user = self.get_current_user()
67 if user.running:
68 # trigger poll_and_notify event in case of a server that died
69 yield user.spawner.poll_and_notify()
70 html = self.render_template('home.html',
71 user=user,
72 url=user.url,
73 )
74 self.finish(html)
75
76
77 class SpawnHandler(BaseHandler):
78 """Handle spawning of single-user servers via form.
79
80 GET renders the form, POST handles form submission.
81
82 Only enabled when Spawner.options_form is defined.
83 """
84 def _render_form(self, message=''):
85 user = self.get_current_user()
86 return self.render_template('spawn.html',
87 user=user,
88 spawner_options_form=user.spawner.options_form,
89 error_message=message,
90 url=self.request.uri,
91 )
92
93 @web.authenticated
94 def get(self):
95 """GET renders form for spawning with user-specified options"""
96 user = self.get_current_user()
97 if not self.allow_named_servers and user.running:
98 url = user.url
99 self.log.debug("User is running: %s", url)
100 self.redirect(url)
101 return
102 if user.spawner.options_form:
103 self.finish(self._render_form())
104 else:
105 # not running, no form. Trigger spawn.
106 self.redirect(user.url)
107
108 @web.authenticated
109 @gen.coroutine
110 def post(self):
111 """POST spawns with user-specified options"""
112 user = self.get_current_user()
113 if not self.allow_named_servers and user.running:
114 url = user.url
115 self.log.warning("User is already running: %s", url)
116 self.redirect(url)
117 return
118 form_options = {}
119 for key, byte_list in self.request.body_arguments.items():
120 form_options[key] = [ bs.decode('utf8') for bs in byte_list ]
121 for key, byte_list in self.request.files.items():
122 form_options["%s_file"%key] = byte_list
123 try:
124 options = user.spawner.options_from_form(form_options)
125 yield self.spawn_single_user(user, options=options)
126 except Exception as e:
127 self.log.error("Failed to spawn single-user server with form", exc_info=True)
128 self.finish(self._render_form(str(e)))
129 return
130 self.set_login_cookie(user)
131 url = user.url
132
133 next_url = self.get_argument('next', '')
134 if next_url and not next_url.startswith('/'):
135 self.log.warning("Disallowing redirect outside JupyterHub: %r", next_url)
136 elif next_url:
137 url = next_url
138
139 self.redirect(url)
140
141 class AdminHandler(BaseHandler):
142 """Render the admin page."""
143
144 @admin_only
145 def get(self):
146 available = {'name', 'admin', 'running', 'last_activity'}
147 default_sort = ['admin', 'name']
148 mapping = {
149 'running': '_server_id'
150 }
151 default_order = {
152 'name': 'asc',
153 'last_activity': 'desc',
154 'admin': 'desc',
155 'running': 'desc',
156 }
157 sorts = self.get_arguments('sort') or default_sort
158 orders = self.get_arguments('order')
159
160 for bad in set(sorts).difference(available):
161 self.log.warning("ignoring invalid sort: %r", bad)
162 sorts.remove(bad)
163 for bad in set(orders).difference({'asc', 'desc'}):
164 self.log.warning("ignoring invalid order: %r", bad)
165 orders.remove(bad)
166
167 # add default sort as secondary
168 for s in default_sort:
169 if s not in sorts:
170 sorts.append(s)
171 if len(orders) < len(sorts):
172 for col in sorts[len(orders):]:
173 orders.append(default_order[col])
174 else:
175 orders = orders[:len(sorts)]
176
177 # this could be one incomprehensible nested list comprehension
178 # get User columns
179 cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]
180 # get User.col.desc() order objects
181 ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]
182
183 users = self.db.query(orm.User).order_by(*ordered)
184 users = [ self._user_from_orm(u) for u in users ]
185 running = [ u for u in users if u.running ]
186
187 html = self.render_template('admin.html',
188 user=self.get_current_user(),
189 admin_access=self.settings.get('admin_access', False),
190 users=users,
191 running=running,
192 sort={s:o for s,o in zip(sorts, orders)},
193 )
194 self.finish(html)
195
196
197 class TokenPageHandler(BaseHandler):
198 """Handler for page requesting new API tokens"""
199
200 @web.authenticated
201 def get(self):
202 html = self.render_template('token.html')
203 self.finish(html)
204
205
206 class ProxyErrorHandler(BaseHandler):
207 """Handler for rendering proxy error pages"""
208
209 def get(self, status_code_s):
210 status_code = int(status_code_s)
211 status_message = responses.get(status_code, 'Unknown HTTP Error')
212 # build template namespace
213
214 hub_home = url_path_join(self.hub.base_url, 'home')
215 message_html = ''
216 if status_code == 503:
217 message_html = ' '.join([
218 "Your server appears to be down.",
219 "Try restarting it <a href='%s'>from the hub</a>" % hub_home
220 ])
221 ns = dict(
222 status_code=status_code,
223 status_message=status_message,
224 message_html=message_html,
225 logo_url=hub_home,
226 )
227
228 self.set_header('Content-Type', 'text/html')
229 # render the template
230 try:
231 html = self.render_template('%s.html' % status_code, **ns)
232 except TemplateNotFound:
233 self.log.debug("No template for %d", status_code)
234 html = self.render_template('error.html', **ns)
235
236 self.write(html)
237
238
239 default_handlers = [
240 (r'/', RootHandler),
241 (r'/home', HomeHandler),
242 (r'/admin', AdminHandler),
243 (r'/spawn', SpawnHandler),
244 (r'/token', TokenPageHandler),
245 (r'/error/(\d+)', ProxyErrorHandler),
246 ]
247
[end of jupyterhub/handlers/pages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py
--- a/jupyterhub/handlers/pages.py
+++ b/jupyterhub/handlers/pages.py
@@ -237,7 +237,7 @@
default_handlers = [
- (r'/', RootHandler),
+ (r'/?', RootHandler),
(r'/home', HomeHandler),
(r'/admin', AdminHandler),
(r'/spawn', SpawnHandler),
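
The patch above just makes the trailing slash optional on the root route, so that (presumably once the handlers are mounted under the hub prefix) both `/hub` and `/hub/` are handled by `RootHandler` instead of falling through to the prefix redirect. A purely illustrative regex check, not JupyterHub code:

```python
import re

# Illustrative only: what the patched pattern effectively looks like
# under a /hub prefix.
root = re.compile(r"^/hub/?$")
print(bool(root.match("/hub")))   # True
print(bool(root.match("/hub/")))  # True
```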
| {"golden_diff": "diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py\n--- a/jupyterhub/handlers/pages.py\n+++ b/jupyterhub/handlers/pages.py\n@@ -237,7 +237,7 @@\n \n \n default_handlers = [\n- (r'/', RootHandler),\n+ (r'/?', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n", "issue": "PrefixRedirectUrl redirect `/hub` to `/hub/hub`\nIt might be an edge case, which is not really important, \r\nbut I expected `/hub` -> `/hub/` and not `/hub/hub`. This is to to `uri.startswith(self.base_url)`, and `base_url` is guarantied to end with a `/`. Now of course we can't just strip the trailing slash from `base_url` or things like `/hubot` will not be redirected to `/hub/hubot`, and doing nothing may be the right answer. \n", "before_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom http.client import responses\n\nfrom jinja2 import TemplateNotFound\nfrom tornado import web, gen\nfrom tornado.httputil import url_concat\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n\n If next argument is passed by single-user server,\n redirect to base_url + single-user page.\n\n If logged in, redirects to:\n\n - single-user server if running\n - hub home, otherwise\n\n Otherwise, renders login page.\n \"\"\"\n def get(self):\n next_url = self.get_argument('next', '')\n if next_url and not next_url.startswith('/'):\n self.log.warning(\"Disallowing redirect outside JupyterHub: %r\", next_url)\n next_url = ''\n if next_url and next_url.startswith(url_path_join(self.base_url, 'user/')):\n # add /hub/ prefix, to ensure we redirect to the right user's server.\n # The next request will be handled by UserSpawnHandler,\n # ultimately redirecting to the logged-in user's server.\n without_prefix = next_url[len(self.base_url):]\n next_url = url_path_join(self.hub.base_url, without_prefix)\n self.log.warning(\"Redirecting %s to %s. 
For sharing public links, use /user-redirect/\",\n self.request.uri, next_url,\n )\n self.redirect(next_url)\n return\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.url\n self.log.debug(\"User is running: %s\", url)\n self.set_login_cookie(user) # set cookie\n else:\n url = url_path_join(self.hub.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n else:\n url = self.settings['login_url']\n self.redirect(url)\n\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n @gen.coroutine\n def get(self):\n user = self.get_current_user()\n if user.running:\n # trigger poll_and_notify event in case of a server that died\n yield user.spawner.poll_and_notify()\n html = self.render_template('home.html',\n user=user,\n url=user.url,\n )\n self.finish(html)\n\n\nclass SpawnHandler(BaseHandler):\n \"\"\"Handle spawning of single-user servers via form.\n\n GET renders the form, POST handles form submission.\n\n Only enabled when Spawner.options_form is defined.\n \"\"\"\n def _render_form(self, message=''):\n user = self.get_current_user()\n return self.render_template('spawn.html',\n user=user,\n spawner_options_form=user.spawner.options_form,\n error_message=message,\n url=self.request.uri,\n )\n\n @web.authenticated\n def get(self):\n \"\"\"GET renders form for spawning with user-specified options\"\"\"\n user = self.get_current_user()\n if not self.allow_named_servers and user.running:\n url = user.url\n self.log.debug(\"User is running: %s\", url)\n self.redirect(url)\n return\n if user.spawner.options_form:\n self.finish(self._render_form())\n else:\n # not running, no form. Trigger spawn.\n self.redirect(user.url)\n\n @web.authenticated\n @gen.coroutine\n def post(self):\n \"\"\"POST spawns with user-specified options\"\"\"\n user = self.get_current_user()\n if not self.allow_named_servers and user.running:\n url = user.url\n self.log.warning(\"User is already running: %s\", url)\n self.redirect(url)\n return\n form_options = {}\n for key, byte_list in self.request.body_arguments.items():\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n try:\n options = user.spawner.options_from_form(form_options)\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n self.finish(self._render_form(str(e)))\n return\n self.set_login_cookie(user)\n url = user.url\n\n next_url = self.get_argument('next', '')\n if next_url and not next_url.startswith('/'):\n self.log.warning(\"Disallowing redirect outside JupyterHub: %r\", next_url)\n elif next_url:\n url = next_url\n\n self.redirect(url)\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n\n for bad in set(sorts).difference(available):\n self.log.warning(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warning(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n\n # add default sort as 
secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n\n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n\n users = self.db.query(orm.User).order_by(*ordered)\n users = [ self._user_from_orm(u) for u in users ]\n running = [ u for u in users if u.running ]\n\n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\nclass TokenPageHandler(BaseHandler):\n \"\"\"Handler for page requesting new API tokens\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('token.html')\n self.finish(html)\n\n\nclass ProxyErrorHandler(BaseHandler):\n \"\"\"Handler for rendering proxy error pages\"\"\"\n \n def get(self, status_code_s):\n status_code = int(status_code_s)\n status_message = responses.get(status_code, 'Unknown HTTP Error')\n # build template namespace\n \n hub_home = url_path_join(self.hub.base_url, 'home')\n message_html = ''\n if status_code == 503:\n message_html = ' '.join([\n \"Your server appears to be down.\",\n \"Try restarting it <a href='%s'>from the hub</a>\" % hub_home\n ])\n ns = dict(\n status_code=status_code,\n status_message=status_message,\n message_html=message_html,\n logo_url=hub_home,\n )\n\n self.set_header('Content-Type', 'text/html')\n # render the template\n try:\n html = self.render_template('%s.html' % status_code, **ns)\n except TemplateNotFound:\n self.log.debug(\"No template for %d\", status_code)\n html = self.render_template('error.html', **ns)\n\n self.write(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n (r'/token', TokenPageHandler),\n (r'/error/(\\d+)', ProxyErrorHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}]} | 3,101 | 109 |
gh_patches_debug_4673 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-2112 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Problem with tesseract after Bugfix: Some tesseract languages aren't detected as installed. @stumpylog (#2057)
### Description
Hi,
after the fix for [2044](https://github.com/paperless-ngx/paperless-ngx/issues/2044) I have a problem with OCR in paperless-ngx.
Before this commit I used the following ENV:
>
- PAPERLESS_OCR_LANGUAGE=srp_latn+srp
- PAPERLESS_OCR_LANGUAGES=srp-latn srp script-latn
and everything worked.
After this commit, if I don't make any changes to the ENV, the error is:
?: The selected ocr language srp_latn is not installed. Paperless cannot OCR your documents without it. Please fix PAPERLESS_OCR_LANGUAGE.
If I make changes in the ENV, replacing _ with -:
>
- PAPERLESS_OCR_LANGUAGE=srp-latn+srp
- PAPERLESS_OCR_LANGUAGES=srp-latn srp script-latn
After this change the system installs the language and starts paperless, but if I upload any document, OCR doesn't work; the error is:
[2022-12-04 13:05:46,369] [ERROR] [paperless.consumer] Error while consuming document apr 2022.pdf: MissingDependencyError: OCR engine does not have language data for the following requested languages:
srp-latn

**Paperless-ngx 1.10.0 WORKS**
**Paperless-ngx 1.10.1 DOES NOT WORK**
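
A minimal, hypothetical sketch of the mismatch (assuming `tesseract --list-langs` reports `srp_latn` as installed; this is not code from paperless or OCRmyPDF):

```python
# Hypothetical sketch of the language-code mismatch described above.
installed = ["eng", "osd", "srp", "srp_latn"]       # as reported by `tesseract --list-langs`

# The 1.10.1 startup check rewrites installed codes to use dashes:
normalised = [x.replace("_", "-") for x in installed]

print("srp_latn" in normalised)  # False -> check rejects PAPERLESS_OCR_LANGUAGE=srp_latn+srp
print("srp-latn" in normalised)  # True  -> check passes, but "srp-latn" is then handed to
                                 #          OCRmyPDF/tesseract, which only knows "srp_latn"
```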
### Steps to reproduce
1. Add this ENV
- PAPERLESS_OCR_LANGUAGE=srp-latn+srp
- PAPERLESS_OCR_LANGUAGES=srp-latn srp script-latn
2. Upload any document
### Webserver logs
```bash
[2022-12-04 13:05:46,369] [ERROR] [paperless.consumer] Error while consuming document apr 2022.pdf: MissingDependencyError: OCR engine does not have language data for the following requested languages:
srp-latn
Note: most languages are identified by a 3-digit ISO 639-2 Code
Traceback (most recent call last):
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 292, in parse
ocrmypdf.ocr(**args)
File "/usr/local/lib/python3.9/site-packages/ocrmypdf/api.py", line 331, in ocr
check_options(options, plugin_manager)
File "/usr/local/lib/python3.9/site-packages/ocrmypdf/_validation.py", line 246, in check_options
_check_plugin_options(options, plugin_manager)
File "/usr/local/lib/python3.9/site-packages/ocrmypdf/_validation.py", line 241, in _check_plugin_options
check_options_languages(options, ocr_engine_languages)
File "/usr/local/lib/python3.9/site-packages/ocrmypdf/_validation.py", line 70, in check_options_languages
raise MissingDependencyError(msg)
ocrmypdf.exceptions.MissingDependencyError: OCR engine does not have language data for the following requested languages:
srp-latn
Note: most languages are identified by a 3-digit ISO 639-2 Code
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/consumer.py", line 337, in try_consume_file
document_parser.parse(self.path, mime_type, self.filename)
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 346, in parse
raise ParseError(f"{e.__class__.__name__}: {str(e)}") from e
documents.parsers.ParseError: MissingDependencyError: OCR engine does not have language data for the following requested languages:
srp-latn
Note: most languages are identified by a 3-digit ISO 639-2 Code
```
### Browser logs
_No response_
### Paperless-ngx version
1.10.1
### Host OS
Docker
### Installation method
Docker - official image
### Browser
_No response_
### Configuration changes
_No response_
### Other
_No response_
</issue>
<code>
[start of src/paperless_tesseract/checks.py]
1 import shutil
2 import subprocess
3
4 from django.conf import settings
5 from django.core.checks import Error
6 from django.core.checks import register
7 from django.core.checks import Warning
8
9
10 def get_tesseract_langs():
11 proc = subprocess.run(
12 [shutil.which("tesseract"), "--list-langs"],
13 capture_output=True,
14 )
15
16 # Decode bytes to string, split on newlines, trim out the header
17 proc_lines = proc.stdout.decode("utf8", errors="ignore").strip().split("\n")[1:]
18
19 # Replace _ with - to convert two part languages to the expected code
20 return [x.replace("_", "-") for x in proc_lines]
21
22
23 @register()
24 def check_default_language_available(app_configs, **kwargs):
25 installed_langs = get_tesseract_langs()
26
27 if not settings.OCR_LANGUAGE:
28 return [
29 Warning(
30 "No OCR language has been specified with PAPERLESS_OCR_LANGUAGE. "
31 "This means that tesseract will fallback to english.",
32 ),
33 ]
34
35 specified_langs = settings.OCR_LANGUAGE.split("+")
36
37 for lang in specified_langs:
38 if lang not in installed_langs:
39 return [
40 Error(
41 f"The selected ocr language {lang} is "
42 f"not installed. Paperless cannot OCR your documents "
43 f"without it. Please fix PAPERLESS_OCR_LANGUAGE.",
44 ),
45 ]
46
47 return []
48
[end of src/paperless_tesseract/checks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/paperless_tesseract/checks.py b/src/paperless_tesseract/checks.py
--- a/src/paperless_tesseract/checks.py
+++ b/src/paperless_tesseract/checks.py
@@ -16,8 +16,7 @@
# Decode bytes to string, split on newlines, trim out the header
proc_lines = proc.stdout.decode("utf8", errors="ignore").strip().split("\n")[1:]
- # Replace _ with - to convert two part languages to the expected code
- return [x.replace("_", "-") for x in proc_lines]
+ return [x.strip() for x in proc_lines]
@register()
| {"golden_diff": "diff --git a/src/paperless_tesseract/checks.py b/src/paperless_tesseract/checks.py\n--- a/src/paperless_tesseract/checks.py\n+++ b/src/paperless_tesseract/checks.py\n@@ -16,8 +16,7 @@\n # Decode bytes to string, split on newlines, trim out the header\n proc_lines = proc.stdout.decode(\"utf8\", errors=\"ignore\").strip().split(\"\\n\")[1:]\n \n- # Replace _ with - to convert two part languages to the expected code\n- return [x.replace(\"_\", \"-\") for x in proc_lines]\n+ return [x.strip() for x in proc_lines]\n \n \n @register()\n", "issue": "[BUG] Problem with tesseract after Bugfix: Some tesseract languages aren't detected as installed. @stumpylog (#2057)\n### Description\r\n\r\nHi,\r\nafter Fixes [2044 ](https://github.com/paperless-ngx/paperless-ngx/issues/2044)I have problem with OCR and paperless-ngx.\r\n\r\nBefore this commit I use next ENV :\r\n\r\n> \r\n\r\n - PAPERLESS_OCR_LANGUAGE=srp_latn+srp\r\n - PAPERLESS_OCR_LANGUAGES=srp-latn srp script-latn \r\n\r\nand everything work.\r\n\r\nAfter this commit if dont make any changes in ENV error is:\r\n?: The selected ocr language srp_latn is not installed. Paperless cannot OCR your documents without it. Please fix PAPERLESS_OCR_LANGUAGE.\r\n\r\nIf i make changes in ENV, replace _ with -:\r\n\r\n> \r\n\r\n - PAPERLESS_OCR_LANGUAGE=srp-latn+srp\r\n - PAPERLESS_OCR_LANGUAGES=srp-latn srp script-latn\r\nAfter this change system install lang and start paperless, but if I upload any document, OCR dont work, error is:\r\n\r\n`[2022-12-04 13:05:46,369] [ERROR] [paperless.consumer] Error while consuming document apr 2022.pdf: MissingDependencyError: OCR engine does not have language data for the following requested languages:\r\n\r\nsrp-latn\r\n\r\n**\r\nPaperless-ngx 1.10.0 WORK\r\nPaperless-ngx 1.10.1 DONT WORK\r\n**\r\n### Steps to reproduce\r\n\r\n1. Add this ENV\r\n - PAPERLESS_OCR_LANGUAGE=srp-latn+srp\r\n - PAPERLESS_OCR_LANGUAGES=srp-latn srp script-latn\r\n\r\n2. 
Upload any document\r\n\r\n### Webserver logs\r\n\r\n```bash\r\n[2022-12-04 13:05:46,369] [ERROR] [paperless.consumer] Error while consuming document apr 2022.pdf: MissingDependencyError: OCR engine does not have language data for the following requested languages:\r\n\r\nsrp-latn\r\n\r\nNote: most languages are identified by a 3-digit ISO 639-2 Code\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/usr/src/paperless/src/paperless_tesseract/parsers.py\", line 292, in parse\r\n\r\n ocrmypdf.ocr(**args)\r\n\r\n File \"/usr/local/lib/python3.9/site-packages/ocrmypdf/api.py\", line 331, in ocr\r\n\r\n check_options(options, plugin_manager)\r\n\r\n File \"/usr/local/lib/python3.9/site-packages/ocrmypdf/_validation.py\", line 246, in check_options\r\n\r\n _check_plugin_options(options, plugin_manager)\r\n\r\n File \"/usr/local/lib/python3.9/site-packages/ocrmypdf/_validation.py\", line 241, in _check_plugin_options\r\n\r\n check_options_languages(options, ocr_engine_languages)\r\n\r\n File \"/usr/local/lib/python3.9/site-packages/ocrmypdf/_validation.py\", line 70, in check_options_languages\r\n\r\n raise MissingDependencyError(msg)\r\n\r\nocrmypdf.exceptions.MissingDependencyError: OCR engine does not have language data for the following requested languages:\r\n\r\nsrp-latn\r\n\r\nNote: most languages are identified by a 3-digit ISO 639-2 Code\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/usr/src/paperless/src/documents/consumer.py\", line 337, in try_consume_file\r\n\r\n document_parser.parse(self.path, mime_type, self.filename)\r\n\r\n File \"/usr/src/paperless/src/paperless_tesseract/parsers.py\", line 346, in parse\r\n\r\n raise ParseError(f\"{e.__class__.__name__}: {str(e)}\") from e\r\n\r\ndocuments.parsers.ParseError: MissingDependencyError: OCR engine does not have language data for the following requested languages:\r\n\r\nsrp-latn\r\n\r\nNote: most languages are identified by a 3-digit ISO 639-2 Code\r\n```\r\n\r\n\r\n### Browser logs\r\n\r\n_No response_\r\n\r\n### Paperless-ngx version\r\n\r\n1.10.1\r\n\r\n### Host OS\r\n\r\nDocker\r\n\r\n### Installation method\r\n\r\nDocker - official image\r\n\r\n### Browser\r\n\r\n_No response_\r\n\r\n### Configuration changes\r\n\r\n_No response_\r\n\r\n### Other\r\n\r\n_No response_\n", "before_files": [{"content": "import shutil\nimport subprocess\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\n\ndef get_tesseract_langs():\n proc = subprocess.run(\n [shutil.which(\"tesseract\"), \"--list-langs\"],\n capture_output=True,\n )\n\n # Decode bytes to string, split on newlines, trim out the header\n proc_lines = proc.stdout.decode(\"utf8\", errors=\"ignore\").strip().split(\"\\n\")[1:]\n\n # Replace _ with - to convert two part languages to the expected code\n return [x.replace(\"_\", \"-\") for x in proc_lines]\n\n\n@register()\ndef check_default_language_available(app_configs, **kwargs):\n installed_langs = get_tesseract_langs()\n\n if not settings.OCR_LANGUAGE:\n return [\n Warning(\n \"No OCR language has been specified with PAPERLESS_OCR_LANGUAGE. \"\n \"This means that tesseract will fallback to english.\",\n ),\n ]\n\n specified_langs = settings.OCR_LANGUAGE.split(\"+\")\n\n for lang in specified_langs:\n if lang not in installed_langs:\n return [\n Error(\n f\"The selected ocr language {lang} is \"\n f\"not installed. 
Paperless cannot OCR your documents \"\n f\"without it. Please fix PAPERLESS_OCR_LANGUAGE.\",\n ),\n ]\n\n return []\n", "path": "src/paperless_tesseract/checks.py"}]} | 1,893 | 151 |
gh_patches_debug_30845 | rasdani/github-patches | git_diff | rasterio__rasterio-1841 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Conflicting rio bounds optional command line arguments for GeoJSON output
## Expected behavior and actual behavior.
Initially discussed: https://rasterio.groups.io/g/main/topic/35000585#344
When using the `rio bounds` command, there is conflicting behaviour between the optional command line arguments `--sequence/--collection` and the GeoJSON output type flag `--collection`.
## Steps to reproduce the problem.
```
$ rio bounds --collection /path/to/a.tif /path/to/b.tif
{"bbox": [-117.43992919012538, 32.44224727533216, -115.91213159002874, 33.755623708770855], "features": [{"bbox": [-117.43992919012538, 32.44224727533216, -115.91213159002874, 33.755623708770855], "geometry": {"coordinates": [[[-117.43992919012538, 32.44224727533216], [-115.91213159002874, 32.44224727533216], [-115.91213159002874, 33.755623708770855], [-117.43992919012538, 33.755623708770855], [-117.43992919012538, 32.44224727533216]]], "type": "Polygon"}, "properties": {"filename": "imagery_HH.tif", "id": "0", "title": "imagery_HH.tif"}, "type": "Feature"}], "type": "FeatureCollection"}
{"bbox": [-117.43992919012538, 32.44224727533216, -115.91213159002874, 33.755623708770855], "features": [{"bbox": [-117.43992919012538, 32.44224727533216, -115.91213159002874, 33.755623708770855], "geometry": {"coordinates": [[[-117.43992919012538, 32.44224727533216], [-115.91213159002874, 32.44224727533216], [-115.91213159002874, 33.755623708770855], [-117.43992919012538, 33.755623708770855], [-117.43992919012538, 32.44224727533216]]], "type": "Polygon"}, "properties": {"filename": "imagery_HH.tif", "id": "1", "title": "imagery_HH.tif"}, "type": "Feature"}], "type": "FeatureCollection"}
```
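
The two line-delimited FeatureCollections above (one per input file) come from the sequence branch of `write_features()` in `rasterio/rio/helpers.py` (shown further down), where `--collection` selects the GeoJSON output type. A rough, simplified sketch of that branch, not the actual CLI wiring:

```python
import json

# Simplified sketch of write_features() in sequence mode with
# geojson_type="collection": one single-feature FeatureCollection per input.
def write_sequence(features, geojson_type="collection"):
    for feat in features:
        if geojson_type == "feature":
            line = json.dumps(feat)
        elif geojson_type == "bbox":
            line = json.dumps(feat["bbox"])
        else:  # "collection"
            line = json.dumps({"type": "FeatureCollection",
                               "bbox": feat["bbox"],
                               "features": [feat]})
        print(line)
```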
## Operating system
Tested on Windows 10 1709 and CentOS Linux release 7.7.1908
## Rasterio version and provenance
rasterio==1.1.0
tested with both gdal==3.0.1 and gdal==2.2.4
rasterio and gdal both installed from conda
</issue>
<code>
[start of rasterio/rio/helpers.py]
1 """
2 Helper objects used by multiple CLI commands.
3 """
4
5 import json
6 import os
7
8 from rasterio.errors import FileOverwriteError
9
10
11 def coords(obj):
12 """Yield all coordinate coordinate tuples from a geometry or feature.
13 From python-geojson package."""
14 if isinstance(obj, (tuple, list)):
15 coordinates = obj
16 elif 'geometry' in obj:
17 coordinates = obj['geometry']['coordinates']
18 else:
19 coordinates = obj.get('coordinates', obj)
20 for e in coordinates:
21 if isinstance(e, (float, int)):
22 yield tuple(coordinates)
23 break
24 else:
25 for f in coords(e):
26 yield f
27
28
29 def write_features(
30 fobj, collection, sequence=False, geojson_type='feature', use_rs=False,
31 **dump_kwds):
32 """Read an iterator of (feat, bbox) pairs and write to file using
33 the selected modes."""
34 # Sequence of features expressed as bbox, feature, or collection.
35 if sequence:
36 for feat in collection():
37 xs, ys = zip(*coords(feat))
38 bbox = (min(xs), min(ys), max(xs), max(ys))
39 if use_rs:
40 fobj.write(u'\u001e')
41 if geojson_type == 'feature':
42 fobj.write(json.dumps(feat, **dump_kwds))
43 elif geojson_type == 'bbox':
44 fobj.write(json.dumps(bbox, **dump_kwds))
45 else:
46 fobj.write(
47 json.dumps({
48 'type': 'FeatureCollection',
49 'bbox': bbox,
50 'features': [feat]}, **dump_kwds))
51 fobj.write('\n')
52 # Aggregate all features into a single object expressed as
53 # bbox or collection.
54 else:
55 features = list(collection())
56 if geojson_type == 'bbox':
57 fobj.write(json.dumps(collection.bbox, **dump_kwds))
58 elif geojson_type == 'feature':
59 fobj.write(json.dumps(features[0], **dump_kwds))
60 else:
61 fobj.write(json.dumps({
62 'bbox': collection.bbox,
63 'type': 'FeatureCollection',
64 'features': features},
65 **dump_kwds))
66 fobj.write('\n')
67
68
69 def resolve_inout(input=None, output=None, files=None, overwrite=False):
70 """Resolves inputs and outputs from standard args and options.
71
72 :param input: a single input filename, optional.
73 :param output: a single output filename, optional.
74 :param files: a sequence of filenames in which the last is the
75 output filename.
76 :param overwrite: whether to force overwriting the output
77 file, bool.
78 :return: the resolved output filename and input filenames as a
79 tuple of length 2.
80
81 If provided, the :param:`output` file may be overwritten. An output
82 file extracted from :param:`files` will not be overwritten unless
83 :param:`overwrite` is `True`.
84 """
85 resolved_output = output or (files[-1] if files else None)
86 if not overwrite and resolved_output and os.path.exists(
87 resolved_output):
88 raise FileOverwriteError(
89 "file exists and won't be overwritten without use of the "
90 "`--overwrite` option.")
91 resolved_inputs = (
92 [input] if input else [] +
93 list(files[:-1 if not output else None]) if files else [])
94 return resolved_output, resolved_inputs
95
96
97 def to_lower(ctx, param, value):
98 """Click callback, converts values to lowercase."""
99 return value.lower()
100
[end of rasterio/rio/helpers.py]
[start of rasterio/rio/bounds.py]
1 import logging
2 import os
3
4 import click
5 from cligj import (
6 precision_opt, indent_opt, compact_opt, projection_geographic_opt,
7 projection_mercator_opt, projection_projected_opt,
8 use_rs_opt, geojson_type_feature_opt, geojson_type_bbox_opt,
9 geojson_type_collection_opt)
10
11 from .helpers import write_features, to_lower
12 import rasterio
13 from rasterio.rio import options
14 from rasterio.warp import transform_bounds
15
16
17 logger = logging.getLogger(__name__)
18
19
20 # Bounds command.
21 @click.command(short_help="Write bounding boxes to stdout as GeoJSON.")
22 # One or more files, the bounds of each are a feature in the collection
23 # object or feature sequence.
24 @click.argument('INPUT', nargs=-1, type=click.Path(), required=True)
25 @precision_opt
26 @indent_opt
27 @compact_opt
28 @projection_geographic_opt
29 @projection_projected_opt
30 @projection_mercator_opt
31 @click.option(
32 '--dst-crs', default='', metavar="EPSG:NNNN", callback=to_lower,
33 help="Output in specified coordinates.")
34 @options.sequence_opt
35 @use_rs_opt
36 @geojson_type_collection_opt(True)
37 @geojson_type_feature_opt(False)
38 @geojson_type_bbox_opt(False)
39 @click.pass_context
40 def bounds(ctx, input, precision, indent, compact, projection, dst_crs,
41 sequence, use_rs, geojson_type):
42 """Write bounding boxes to stdout as GeoJSON for use with, e.g.,
43 geojsonio
44
45 $ rio bounds *.tif | geojsonio
46
47 If a destination crs is passed via dst_crs, it takes precedence over
48 the projection parameter.
49 """
50 import rasterio.warp
51 dump_kwds = {'sort_keys': True}
52 if indent:
53 dump_kwds['indent'] = indent
54 if compact:
55 dump_kwds['separators'] = (',', ':')
56 stdout = click.get_text_stream('stdout')
57
58 # This is the generator for (feature, bbox) pairs.
59 class Collection(object):
60
61 def __init__(self, env):
62 self._xs = []
63 self._ys = []
64 self.env = env
65
66 @property
67 def bbox(self):
68 return min(self._xs), min(self._ys), max(self._xs), max(self._ys)
69
70 def __call__(self):
71 for i, path in enumerate(input):
72 with rasterio.open(path) as src:
73 bounds = src.bounds
74 if dst_crs:
75 bbox = transform_bounds(src.crs,
76 dst_crs, *bounds)
77 elif projection == 'mercator':
78 bbox = transform_bounds(src.crs,
79 {'init': 'epsg:3857'}, *bounds)
80 elif projection == 'geographic':
81 bbox = transform_bounds(src.crs,
82 {'init': 'epsg:4326'}, *bounds)
83 else:
84 bbox = bounds
85
86 if precision >= 0:
87 bbox = [round(b, precision) for b in bbox]
88
89 yield {
90 'type': 'Feature',
91 'bbox': bbox,
92 'geometry': {
93 'type': 'Polygon',
94 'coordinates': [[
95 [bbox[0], bbox[1]],
96 [bbox[2], bbox[1]],
97 [bbox[2], bbox[3]],
98 [bbox[0], bbox[3]],
99 [bbox[0], bbox[1]]]]},
100 'properties': {
101 'id': str(i),
102 'title': path,
103 'filename': os.path.basename(path)}}
104
105 self._xs.extend(bbox[::2])
106 self._ys.extend(bbox[1::2])
107
108 try:
109 with ctx.obj['env'] as env:
110 write_features(
111 stdout, Collection(env), sequence=sequence,
112 geojson_type=geojson_type, use_rs=use_rs,
113 **dump_kwds)
114
115 except Exception:
116 logger.exception("Exception caught during processing")
117 raise click.Abort()
118
[end of rasterio/rio/bounds.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/rio/bounds.py b/rasterio/rio/bounds.py
--- a/rasterio/rio/bounds.py
+++ b/rasterio/rio/bounds.py
@@ -33,8 +33,7 @@
help="Output in specified coordinates.")
@options.sequence_opt
@use_rs_opt
-@geojson_type_collection_opt(True)
-@geojson_type_feature_opt(False)
+@geojson_type_feature_opt(True)
@geojson_type_bbox_opt(False)
@click.pass_context
def bounds(ctx, input, precision, indent, compact, projection, dst_crs,
diff --git a/rasterio/rio/helpers.py b/rasterio/rio/helpers.py
--- a/rasterio/rio/helpers.py
+++ b/rasterio/rio/helpers.py
@@ -38,25 +38,18 @@
bbox = (min(xs), min(ys), max(xs), max(ys))
if use_rs:
fobj.write(u'\u001e')
- if geojson_type == 'feature':
- fobj.write(json.dumps(feat, **dump_kwds))
- elif geojson_type == 'bbox':
+ if geojson_type == 'bbox':
fobj.write(json.dumps(bbox, **dump_kwds))
else:
- fobj.write(
- json.dumps({
- 'type': 'FeatureCollection',
- 'bbox': bbox,
- 'features': [feat]}, **dump_kwds))
+ fobj.write(json.dumps(feat, **dump_kwds))
fobj.write('\n')
+
# Aggregate all features into a single object expressed as
# bbox or collection.
else:
features = list(collection())
if geojson_type == 'bbox':
fobj.write(json.dumps(collection.bbox, **dump_kwds))
- elif geojson_type == 'feature':
- fobj.write(json.dumps(features[0], **dump_kwds))
else:
fobj.write(json.dumps({
'bbox': collection.bbox,
| {"golden_diff": "diff --git a/rasterio/rio/bounds.py b/rasterio/rio/bounds.py\n--- a/rasterio/rio/bounds.py\n+++ b/rasterio/rio/bounds.py\n@@ -33,8 +33,7 @@\n help=\"Output in specified coordinates.\")\n @options.sequence_opt\n @use_rs_opt\n-@geojson_type_collection_opt(True)\n-@geojson_type_feature_opt(False)\n+@geojson_type_feature_opt(True)\n @geojson_type_bbox_opt(False)\n @click.pass_context\n def bounds(ctx, input, precision, indent, compact, projection, dst_crs,\ndiff --git a/rasterio/rio/helpers.py b/rasterio/rio/helpers.py\n--- a/rasterio/rio/helpers.py\n+++ b/rasterio/rio/helpers.py\n@@ -38,25 +38,18 @@\n bbox = (min(xs), min(ys), max(xs), max(ys))\n if use_rs:\n fobj.write(u'\\u001e')\n- if geojson_type == 'feature':\n- fobj.write(json.dumps(feat, **dump_kwds))\n- elif geojson_type == 'bbox':\n+ if geojson_type == 'bbox':\n fobj.write(json.dumps(bbox, **dump_kwds))\n else:\n- fobj.write(\n- json.dumps({\n- 'type': 'FeatureCollection',\n- 'bbox': bbox,\n- 'features': [feat]}, **dump_kwds))\n+ fobj.write(json.dumps(feat, **dump_kwds))\n fobj.write('\\n')\n+\n # Aggregate all features into a single object expressed as\n # bbox or collection.\n else:\n features = list(collection())\n if geojson_type == 'bbox':\n fobj.write(json.dumps(collection.bbox, **dump_kwds))\n- elif geojson_type == 'feature':\n- fobj.write(json.dumps(features[0], **dump_kwds))\n else:\n fobj.write(json.dumps({\n 'bbox': collection.bbox,\n", "issue": "Conflicting rio bounds optional command line arguments for GeoJSON output\n<!--\r\n\r\nWELCOME ABOARD!\r\n\r\nHi and welcome to the Rasterio project. We appreciate bug reports, questions\r\nabout documentation, and suggestions for new features. This issue template\r\nisn't intended to ward you off; only to intercept and redirect some particular\r\ncategories of reports, and to collect a few important facts that issue reporters\r\noften omit.\r\n\r\nThe primary forum for questions about installation and usage of Rasterio is \r\nhttps://rasterio.groups.io/g/main. The authors and other users will answer \r\nquestions when they have expertise to share and time to explain. Please take the\r\ntime to craft a clear question and be patient about responses. Please do not\r\nbring these questions to Rasterio's issue tracker, which we want to reserve for\r\nbug reports and other actionable issues.\r\n\r\nQuestions about development of Rasterio, brainstorming, requests for comment,\r\nand not-yet-actionable proposals are welcome in the project's developers \r\ndiscussion group https://rasterio.groups.io/g/dev. Issues opened in Rasterio's\r\nGitHub repo which haven't been socialized there may be perfunctorily closed.\r\n\r\nPlease note: Rasterio contains extension modules and is thus susceptible to\r\nC library compatibility issues. If you are reporting an installation or module\r\nimport issue, please note that this project only accepts reports about problems\r\nwith packages downloaded from the Python Package Index. Conda users should take\r\nissues to one of the following trackers:\r\n\r\n- https://github.com/ContinuumIO/anaconda-issues/issues\r\n- https://github.com/conda-forge/rasterio-feedstock\r\n\r\nAlso please note: we are currently working on 1.0 and pre-releases. Bugs found\r\nin version 0.36 will not be fixed except in a 1.0 alpha or beta release. In \r\nsome cases, 0.36 bugs have already been fixed in recent pre-releases.\r\n\r\nYou think you've found a bug? 
We believe you!\r\n-->\r\n\r\n## Expected behavior and actual behavior.\r\n\r\nInitially discussed: https://rasterio.groups.io/g/main/topic/35000585#344\r\n\r\nWhen using the command `rio bounds` there is conflicting behaviour of optional command line arguments `--sequence/--collection` and `--collection`. \r\n\r\n## Steps to reproduce the problem.\r\n\r\n```\r\n$ rio bounds --collection /path/to/a.tif /path/to/b.tif\r\n{\"bbox\": [-117.43992919012538, 32.44224727533216, -115.91213159002874, 33.755623708770855], \"features\": [{\"bbox\": [-117.43992919012538, 32.44224727533216, -115.91213159002874, 33.755623708770855], \"geometry\": {\"coordinates\": [[[-117.43992919012538, 32.44224727533216], [-115.91213159002874, 32.44224727533216], [-115.91213159002874, 33.755623708770855], [-117.43992919012538, 33.755623708770855], [-117.43992919012538, 32.44224727533216]]], \"type\": \"Polygon\"}, \"properties\": {\"filename\": \"imagery_HH.tif\", \"id\": \"0\", \"title\": \"imagery_HH.tif\"}, \"type\": \"Feature\"}], \"type\": \"FeatureCollection\"}\r\n{\"bbox\": [-117.43992919012538, 32.44224727533216, -115.91213159002874, 33.755623708770855], \"features\": [{\"bbox\": [-117.43992919012538, 32.44224727533216, -115.91213159002874, 33.755623708770855], \"geometry\": {\"coordinates\": [[[-117.43992919012538, 32.44224727533216], [-115.91213159002874, 32.44224727533216], [-115.91213159002874, 33.755623708770855], [-117.43992919012538, 33.755623708770855], [-117.43992919012538, 32.44224727533216]]], \"type\": \"Polygon\"}, \"properties\": {\"filename\": \"imagery_HH.tif\", \"id\": \"1\", \"title\": \"imagery_HH.tif\"}, \"type\": \"Feature\"}], \"type\": \"FeatureCollection\"}\r\n```\r\n\r\n## Operating system\r\n\r\nTested on Windows 10 1709 and CentOS Linux release 7.7.1908\r\n\r\n## Rasterio version and provenance\r\nrasterio==1.1.0\r\ntested with both gdal==3.0.1 and gdal==2.2.4\r\nrasterio and gdal both installed from conda\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nHelper objects used by multiple CLI commands.\n\"\"\"\n\nimport json\nimport os\n\nfrom rasterio.errors import FileOverwriteError\n\n\ndef coords(obj):\n \"\"\"Yield all coordinate coordinate tuples from a geometry or feature.\n From python-geojson package.\"\"\"\n if isinstance(obj, (tuple, list)):\n coordinates = obj\n elif 'geometry' in obj:\n coordinates = obj['geometry']['coordinates']\n else:\n coordinates = obj.get('coordinates', obj)\n for e in coordinates:\n if isinstance(e, (float, int)):\n yield tuple(coordinates)\n break\n else:\n for f in coords(e):\n yield f\n\n\ndef write_features(\n fobj, collection, sequence=False, geojson_type='feature', use_rs=False,\n **dump_kwds):\n \"\"\"Read an iterator of (feat, bbox) pairs and write to file using\n the selected modes.\"\"\"\n # Sequence of features expressed as bbox, feature, or collection.\n if sequence:\n for feat in collection():\n xs, ys = zip(*coords(feat))\n bbox = (min(xs), min(ys), max(xs), max(ys))\n if use_rs:\n fobj.write(u'\\u001e')\n if geojson_type == 'feature':\n fobj.write(json.dumps(feat, **dump_kwds))\n elif geojson_type == 'bbox':\n fobj.write(json.dumps(bbox, **dump_kwds))\n else:\n fobj.write(\n json.dumps({\n 'type': 'FeatureCollection',\n 'bbox': bbox,\n 'features': [feat]}, **dump_kwds))\n fobj.write('\\n')\n # Aggregate all features into a single object expressed as\n # bbox or collection.\n else:\n features = list(collection())\n if geojson_type == 'bbox':\n fobj.write(json.dumps(collection.bbox, **dump_kwds))\n elif geojson_type == 'feature':\n 
fobj.write(json.dumps(features[0], **dump_kwds))\n else:\n fobj.write(json.dumps({\n 'bbox': collection.bbox,\n 'type': 'FeatureCollection',\n 'features': features},\n **dump_kwds))\n fobj.write('\\n')\n\n\ndef resolve_inout(input=None, output=None, files=None, overwrite=False):\n \"\"\"Resolves inputs and outputs from standard args and options.\n\n :param input: a single input filename, optional.\n :param output: a single output filename, optional.\n :param files: a sequence of filenames in which the last is the\n output filename.\n :param overwrite: whether to force overwriting the output\n file, bool.\n :return: the resolved output filename and input filenames as a\n tuple of length 2.\n\n If provided, the :param:`output` file may be overwritten. An output\n file extracted from :param:`files` will not be overwritten unless\n :param:`overwrite` is `True`.\n \"\"\"\n resolved_output = output or (files[-1] if files else None)\n if not overwrite and resolved_output and os.path.exists(\n resolved_output):\n raise FileOverwriteError(\n \"file exists and won't be overwritten without use of the \"\n \"`--overwrite` option.\")\n resolved_inputs = (\n [input] if input else [] +\n list(files[:-1 if not output else None]) if files else [])\n return resolved_output, resolved_inputs\n\n\ndef to_lower(ctx, param, value):\n \"\"\"Click callback, converts values to lowercase.\"\"\"\n return value.lower()\n", "path": "rasterio/rio/helpers.py"}, {"content": "import logging\nimport os\n\nimport click\nfrom cligj import (\n precision_opt, indent_opt, compact_opt, projection_geographic_opt,\n projection_mercator_opt, projection_projected_opt,\n use_rs_opt, geojson_type_feature_opt, geojson_type_bbox_opt,\n geojson_type_collection_opt)\n\nfrom .helpers import write_features, to_lower\nimport rasterio\nfrom rasterio.rio import options\nfrom rasterio.warp import transform_bounds\n\n\nlogger = logging.getLogger(__name__)\n\n\n# Bounds command.\[email protected](short_help=\"Write bounding boxes to stdout as GeoJSON.\")\n# One or more files, the bounds of each are a feature in the collection\n# object or feature sequence.\[email protected]('INPUT', nargs=-1, type=click.Path(), required=True)\n@precision_opt\n@indent_opt\n@compact_opt\n@projection_geographic_opt\n@projection_projected_opt\n@projection_mercator_opt\[email protected](\n '--dst-crs', default='', metavar=\"EPSG:NNNN\", callback=to_lower,\n help=\"Output in specified coordinates.\")\[email protected]_opt\n@use_rs_opt\n@geojson_type_collection_opt(True)\n@geojson_type_feature_opt(False)\n@geojson_type_bbox_opt(False)\[email protected]_context\ndef bounds(ctx, input, precision, indent, compact, projection, dst_crs,\n sequence, use_rs, geojson_type):\n \"\"\"Write bounding boxes to stdout as GeoJSON for use with, e.g.,\n geojsonio\n\n $ rio bounds *.tif | geojsonio\n\n If a destination crs is passed via dst_crs, it takes precedence over\n the projection parameter.\n \"\"\"\n import rasterio.warp\n dump_kwds = {'sort_keys': True}\n if indent:\n dump_kwds['indent'] = indent\n if compact:\n dump_kwds['separators'] = (',', ':')\n stdout = click.get_text_stream('stdout')\n\n # This is the generator for (feature, bbox) pairs.\n class Collection(object):\n\n def __init__(self, env):\n self._xs = []\n self._ys = []\n self.env = env\n\n @property\n def bbox(self):\n return min(self._xs), min(self._ys), max(self._xs), max(self._ys)\n\n def __call__(self):\n for i, path in enumerate(input):\n with rasterio.open(path) as src:\n bounds = src.bounds\n if dst_crs:\n 
bbox = transform_bounds(src.crs,\n dst_crs, *bounds)\n elif projection == 'mercator':\n bbox = transform_bounds(src.crs,\n {'init': 'epsg:3857'}, *bounds)\n elif projection == 'geographic':\n bbox = transform_bounds(src.crs,\n {'init': 'epsg:4326'}, *bounds)\n else:\n bbox = bounds\n\n if precision >= 0:\n bbox = [round(b, precision) for b in bbox]\n\n yield {\n 'type': 'Feature',\n 'bbox': bbox,\n 'geometry': {\n 'type': 'Polygon',\n 'coordinates': [[\n [bbox[0], bbox[1]],\n [bbox[2], bbox[1]],\n [bbox[2], bbox[3]],\n [bbox[0], bbox[3]],\n [bbox[0], bbox[1]]]]},\n 'properties': {\n 'id': str(i),\n 'title': path,\n 'filename': os.path.basename(path)}}\n\n self._xs.extend(bbox[::2])\n self._ys.extend(bbox[1::2])\n\n try:\n with ctx.obj['env'] as env:\n write_features(\n stdout, Collection(env), sequence=sequence,\n geojson_type=geojson_type, use_rs=use_rs,\n **dump_kwds)\n\n except Exception:\n logger.exception(\"Exception caught during processing\")\n raise click.Abort()\n", "path": "rasterio/rio/bounds.py"}]} | 4,050 | 440 |
gh_patches_debug_25108 | rasdani/github-patches | git_diff | getsentry__sentry-8160 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
build: Fix eslint results in travis logs
eslint currently outputs XML for zeus to ingest, but prints the XML to the console as well. Ideally we would have the checkstyle formatter write to an XML file while the standard formatter writes to stdout.
Reference: https://github.com/eslint/eslint/issues/8185
</issue>
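The linked eslint issue boils down to eslint applying only one formatter per invocation. One possible workaround (a hypothetical sketch, not necessarily the change this repository made) is to send the checkstyle XML to a file via eslint's `-o/--output-file` option and keep the default human-readable formatter on stdout with a second pass:

```python
# Hypothetical wrapper sketch: the eslint flags used here (--format, --output-file)
# are standard CLI options, but this helper itself is illustrative only.
from subprocess import Popen


def run_eslint(eslint_path, file_list, report_path="eslint.checkstyle.xml"):
    # Machine-readable report for CI, written to a file instead of the console.
    status = Popen(
        [eslint_path, "--ext", ".js,.jsx", "--format=checkstyle",
         "--output-file", report_path] + file_list
    ).wait()
    # Readable console output requires a second run, since eslint applies
    # only a single formatter per invocation.
    Popen([eslint_path, "--ext", ".js,.jsx"] + file_list).wait()
    return status != 0
```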
<code>
[start of src/sentry/lint/engine.py]
1 """
2 Our linter engine needs to run in 3 different scenarios:
3 * Linting all files (python, js, less)
4 * Linting only python files (--python)
5 * Linting only js files (--js)
6
7 For the js only path, we should not depend on any packages outside the
8 python stdlib to prevent the need to install the world just to run eslint.
9
10 This also means imports should be done lazily/inside of function calls for
11 dependencies such as flake8/pep8.
12 """
13 from __future__ import absolute_import
14
15
16 import os
17 import sys
18 import subprocess
19 import json
20
21 from subprocess import check_output, Popen
22 from click import echo, secho, style
23
24 os.environ['PYFLAKES_NODOCTEST'] = '1'
25
26
27 def register_checks():
28 import pycodestyle
29
30 from sentry.lint.sentry_check import SentryCheck
31
32 pycodestyle.register_check(SentryCheck)
33
34
35 register_checks()
36
37
38 def get_project_root():
39 return os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir)
40
41
42 def get_node_modules_bin(name):
43 return os.path.join(
44 get_project_root(), 'node_modules', '.bin', name)
45
46
47 def get_prettier_path():
48 return get_node_modules_bin('prettier')
49
50
51 def get_files(path):
52 results = []
53 for root, _, files in os.walk(path):
54 for name in files:
55 results.append(os.path.join(root, name))
56 return results
57
58
59 def get_modified_files(path):
60 return [
61 s
62 for s in check_output(['git', 'diff-index', '--cached', '--name-only', 'HEAD']).split('\n')
63 if s
64 ]
65
66
67 def get_files_for_list(file_list):
68 if file_list is None:
69 files_to_check = get_files('.')
70
71 else:
72 files_to_check = []
73 for path in file_list:
74 if os.path.isdir(path):
75 files_to_check.extend(get_files(path))
76 else:
77 files_to_check.append(os.path.abspath(path))
78 return sorted(set(files_to_check))
79
80
81 def get_js_files(file_list=None, snapshots=False):
82 if snapshots:
83 extensions = ('.js', '.jsx', '.jsx.snap', '.js.snap')
84 else:
85 extensions = ('.js', '.jsx')
86
87 if file_list is None:
88 file_list = ['tests/js', 'src/sentry/static/sentry/app']
89 return [
90 x for x in get_files_for_list(file_list)
91 if x.endswith(extensions)
92 ]
93
94
95 def get_less_files(file_list=None):
96 if file_list is None:
97 file_list = ['src/sentry/static/sentry/less', 'src/sentry/static/sentry/app']
98 return [x for x in get_files_for_list(file_list) if x.endswith(('.less'))]
99 return file_list
100
101
102 def get_python_files(file_list=None):
103 if file_list is None:
104 file_list = ['src', 'tests']
105 return [
106 x for x in get_files_for_list(file_list)
107 if x.endswith('.py')
108 ]
109
110
111 # parseable is a no-op
112 def py_lint(file_list, parseable=False):
113 from flake8.engine import get_style_guide
114
115 file_list = get_python_files(file_list)
116 flake8_style = get_style_guide(parse_argv=True)
117 report = flake8_style.check_files(file_list)
118
119 return report.total_errors != 0
120
121
122 def js_lint(file_list=None, parseable=False, format=False):
123
124 eslint_path = get_node_modules_bin('eslint')
125
126 if not os.path.exists(eslint_path):
127 from click import echo
128 echo('!! Skipping JavaScript linting because eslint is not installed.')
129 return False
130
131 js_file_list = get_js_files(file_list, snapshots=True)
132
133 has_errors = False
134 if js_file_list:
135 cmd = [eslint_path, '--ext', '.js,.jsx']
136 if format:
137 cmd.append('--fix')
138 if parseable:
139 cmd.append('--format=checkstyle')
140 status = Popen(cmd + js_file_list).wait()
141 has_errors = status != 0
142
143 return has_errors
144
145
146 def yarn_check(file_list):
147 """
148 Checks if package.json was modified WITHOUT a corresponding change in the Yarn
149 lockfile. This can happen if a user manually edited package.json without running Yarn.
150
151 This is a user prompt right now because there ARE cases where you can touch package.json
152 without a Yarn lockfile change, e.g. Jest config changes, license changes, etc.
153 """
154 if file_list is None or os.environ.get('SKIP_YARN_CHECK'):
155 return False
156
157 if 'package.json' in file_list and 'yarn.lock' not in file_list:
158 echo(style("""
159 Warning: package.json modified without accompanying yarn.lock modifications.
160
161 If you updated a dependency/devDependency in package.json, you must run `yarn install` to update the lockfile.
162
163 To skip this check, run:
164
165 $ SKIP_YARN_CHECK=1 git commit [options]""", fg='yellow'))
166 return True
167
168 return False
169
170
171 def is_prettier_valid(project_root, prettier_path):
172 if not os.path.exists(prettier_path):
173 echo('[sentry.lint] Skipping JavaScript formatting because prettier is not installed.', err=True)
174 return False
175
176 # Get Prettier version from package.json
177 package_version = None
178 package_json_path = os.path.join(project_root, 'package.json')
179 with open(package_json_path) as package_json:
180 try:
181 package_version = json.load(package_json)[
182 'devDependencies']['prettier']
183 except KeyError:
184 echo('!! Prettier missing from package.json', err=True)
185 return False
186
187 prettier_version = subprocess.check_output(
188 [prettier_path, '--version']).rstrip()
189 if prettier_version != package_version:
190 echo(
191 '[sentry.lint] Prettier is out of date: {} (expected {}). Please run `yarn install`.'.format(
192 prettier_version,
193 package_version),
194 err=True)
195 return False
196
197 return True
198
199
200 def js_format(file_list=None):
201 """
202 We only format JavaScript code as part of this pre-commit hook. It is not part
203 of the lint engine.
204 """
205 project_root = get_project_root()
206 prettier_path = get_prettier_path()
207
208 if not is_prettier_valid(project_root, prettier_path):
209 return False
210
211 js_file_list = get_js_files(file_list)
212
213 # manually exclude some bad files
214 js_file_list = [x for x in js_file_list if '/javascript/example-project/' not in x]
215
216 return run_formatter([prettier_path,
217 '--write',
218 ],
219 js_file_list)
220
221
222 def js_test(file_list=None):
223 """
224 Run JavaScript unit tests on relevant files ONLY as part of pre-commit hook
225 """
226 jest_path = get_node_modules_bin('jest')
227
228 if not os.path.exists(jest_path):
229 from click import echo
230 echo('[sentry.test] Skipping JavaScript testing because jest is not installed.')
231 return False
232
233 js_file_list = get_js_files(file_list)
234
235 has_errors = False
236 if js_file_list:
237 status = Popen([jest_path, '--bail', '--findRelatedTests'] + js_file_list).wait()
238 has_errors = status != 0
239
240 return has_errors
241
242
243 def less_format(file_list=None):
244 """
245 We only format less code as part of this pre-commit hook. It is not part
246 of the lint engine.
247 """
248 project_root = get_project_root()
249 prettier_path = get_prettier_path()
250
251 if not is_prettier_valid(project_root, prettier_path):
252 return False
253
254 less_file_list = get_less_files(file_list)
255 return run_formatter(
256 [
257 prettier_path,
258 '--write',
259 ], less_file_list
260 )
261
262
263 def py_format(file_list=None):
264 try:
265 __import__('autopep8')
266 except ImportError:
267 echo('[sentry.lint] Skipping Python autoformat because autopep8 is not installed.', err=True)
268 return False
269
270 py_file_list = get_python_files(file_list)
271
272 return run_formatter(['autopep8', '--in-place', '-j0'], py_file_list)
273
274
275 def run_formatter(cmd, file_list, prompt_on_changes=True):
276 if not file_list:
277 return False
278
279 has_errors = False
280
281 status = subprocess.Popen(cmd + file_list).wait()
282 has_errors = status != 0
283 if has_errors:
284 return False
285
286 # this is not quite correct, but it at least represents what would be staged
287 output = subprocess.check_output(['git', 'diff'] + file_list)
288 if output:
289 echo('[sentry.lint] applied changes from autoformatting')
290 for line in output.splitlines():
291 if line.startswith('-'):
292 secho(line, fg='red')
293 elif line.startswith('+'):
294 secho(line, fg='green')
295 else:
296 echo(line)
297 if prompt_on_changes:
298 with open('/dev/tty') as fp:
299 secho('Stage this patch and continue? [Y/n] ', bold=True)
300 if fp.readline().strip().lower() != 'y':
301 echo(
302 '[sentry.lint] Aborted! Changes have been applied but not staged.', err=True)
303 if not os.environ.get('SENTRY_SKIP_FORCE_PATCH'):
304 sys.exit(1)
305 else:
306 status = subprocess.Popen(
307 ['git', 'update-index', '--add'] + file_list).wait()
308 has_errors = status != 0
309 return has_errors
310
311
312 def run(file_list=None, format=True, lint=True, js=True, py=True,
313 less=True, yarn=True, test=False, parseable=False):
314 # pep8.py uses sys.argv to find setup.cfg
315 old_sysargv = sys.argv
316
317 try:
318 sys.argv = [
319 os.path.join(os.path.dirname(__file__),
320 os.pardir, os.pardir, os.pardir)
321 ]
322 results = []
323
324 # packages
325 if yarn:
326 results.append(yarn_check(file_list))
327
328 # bail early if a deps failed
329 if any(results):
330 return 1
331
332 if format:
333 if py:
334 results.append(py_format(file_list))
335 if js:
336 results.append(js_format(file_list))
337 if less:
338 results.append(less_format(file_list))
339
340 # bail early if a formatter failed
341 if any(results):
342 return 1
343
344 if lint:
345 if py:
346 results.append(py_lint(file_list, parseable=parseable))
347 if js:
348 results.append(js_lint(file_list, parseable=parseable, format=format))
349
350 if test:
351 if js:
352 results.append(js_test(file_list))
353
354 if any(results):
355 return 1
356 return 0
357 finally:
358 sys.argv = old_sysargv
359
[end of src/sentry/lint/engine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/lint/engine.py b/src/sentry/lint/engine.py
--- a/src/sentry/lint/engine.py
+++ b/src/sentry/lint/engine.py
@@ -39,6 +39,10 @@
return os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir)
+def get_sentry_bin(name):
+ return os.path.join(get_project_root(), 'bin', name)
+
+
def get_node_modules_bin(name):
return os.path.join(
get_project_root(), 'node_modules', '.bin', name)
@@ -121,7 +125,9 @@
def js_lint(file_list=None, parseable=False, format=False):
+ # We require eslint in path but we actually call an eslint wrapper
eslint_path = get_node_modules_bin('eslint')
+ eslint_wrapper_path = get_sentry_bin('eslint-travis-wrapper')
if not os.path.exists(eslint_path):
from click import echo
@@ -132,7 +138,11 @@
has_errors = False
if js_file_list:
- cmd = [eslint_path, '--ext', '.js,.jsx']
+ if os.environ.get('CI'):
+ cmd = [eslint_wrapper_path, '--ext', '.js,.jsx']
+ else:
+ cmd = [eslint_path, '--ext', '.js,.jsx']
+
if format:
cmd.append('--fix')
if parseable:
| {"golden_diff": "diff --git a/src/sentry/lint/engine.py b/src/sentry/lint/engine.py\n--- a/src/sentry/lint/engine.py\n+++ b/src/sentry/lint/engine.py\n@@ -39,6 +39,10 @@\n return os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir)\n \n \n+def get_sentry_bin(name):\n+ return os.path.join(get_project_root(), 'bin', name)\n+\n+\n def get_node_modules_bin(name):\n return os.path.join(\n get_project_root(), 'node_modules', '.bin', name)\n@@ -121,7 +125,9 @@\n \n def js_lint(file_list=None, parseable=False, format=False):\n \n+ # We require eslint in path but we actually call an eslint wrapper\n eslint_path = get_node_modules_bin('eslint')\n+ eslint_wrapper_path = get_sentry_bin('eslint-travis-wrapper')\n \n if not os.path.exists(eslint_path):\n from click import echo\n@@ -132,7 +138,11 @@\n \n has_errors = False\n if js_file_list:\n- cmd = [eslint_path, '--ext', '.js,.jsx']\n+ if os.environ.get('CI'):\n+ cmd = [eslint_wrapper_path, '--ext', '.js,.jsx']\n+ else:\n+ cmd = [eslint_path, '--ext', '.js,.jsx']\n+\n if format:\n cmd.append('--fix')\n if parseable:\n", "issue": "build: Fix eslint results in travis logs\neslint currently outputs XML for zeus to ingest, but prints the XML to console as well. Ideally we would have checkstyle formatter write to XML file but standard formatter write to stdout\r\n\r\nReference: https://github.com/eslint/eslint/issues/8185\n", "before_files": [{"content": "\"\"\"\nOur linter engine needs to run in 3 different scenarios:\n * Linting all files (python, js, less)\n * Linting only python files (--python)\n * Linting only js files (--js)\n\nFor the js only path, we should not depend on any packages outside the\npython stdlib to prevent the need to install the world just to run eslint.\n\nThis also means imports should be done lazily/inside of function calls for\ndependencies such as flake8/pep8.\n\"\"\"\nfrom __future__ import absolute_import\n\n\nimport os\nimport sys\nimport subprocess\nimport json\n\nfrom subprocess import check_output, Popen\nfrom click import echo, secho, style\n\nos.environ['PYFLAKES_NODOCTEST'] = '1'\n\n\ndef register_checks():\n import pycodestyle\n\n from sentry.lint.sentry_check import SentryCheck\n\n pycodestyle.register_check(SentryCheck)\n\n\nregister_checks()\n\n\ndef get_project_root():\n return os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir)\n\n\ndef get_node_modules_bin(name):\n return os.path.join(\n get_project_root(), 'node_modules', '.bin', name)\n\n\ndef get_prettier_path():\n return get_node_modules_bin('prettier')\n\n\ndef get_files(path):\n results = []\n for root, _, files in os.walk(path):\n for name in files:\n results.append(os.path.join(root, name))\n return results\n\n\ndef get_modified_files(path):\n return [\n s\n for s in check_output(['git', 'diff-index', '--cached', '--name-only', 'HEAD']).split('\\n')\n if s\n ]\n\n\ndef get_files_for_list(file_list):\n if file_list is None:\n files_to_check = get_files('.')\n\n else:\n files_to_check = []\n for path in file_list:\n if os.path.isdir(path):\n files_to_check.extend(get_files(path))\n else:\n files_to_check.append(os.path.abspath(path))\n return sorted(set(files_to_check))\n\n\ndef get_js_files(file_list=None, snapshots=False):\n if snapshots:\n extensions = ('.js', '.jsx', '.jsx.snap', '.js.snap')\n else:\n extensions = ('.js', '.jsx')\n\n if file_list is None:\n file_list = ['tests/js', 'src/sentry/static/sentry/app']\n return [\n x for x in get_files_for_list(file_list)\n if x.endswith(extensions)\n ]\n\n\ndef 
get_less_files(file_list=None):\n if file_list is None:\n file_list = ['src/sentry/static/sentry/less', 'src/sentry/static/sentry/app']\n return [x for x in get_files_for_list(file_list) if x.endswith(('.less'))]\n return file_list\n\n\ndef get_python_files(file_list=None):\n if file_list is None:\n file_list = ['src', 'tests']\n return [\n x for x in get_files_for_list(file_list)\n if x.endswith('.py')\n ]\n\n\n# parseable is a no-op\ndef py_lint(file_list, parseable=False):\n from flake8.engine import get_style_guide\n\n file_list = get_python_files(file_list)\n flake8_style = get_style_guide(parse_argv=True)\n report = flake8_style.check_files(file_list)\n\n return report.total_errors != 0\n\n\ndef js_lint(file_list=None, parseable=False, format=False):\n\n eslint_path = get_node_modules_bin('eslint')\n\n if not os.path.exists(eslint_path):\n from click import echo\n echo('!! Skipping JavaScript linting because eslint is not installed.')\n return False\n\n js_file_list = get_js_files(file_list, snapshots=True)\n\n has_errors = False\n if js_file_list:\n cmd = [eslint_path, '--ext', '.js,.jsx']\n if format:\n cmd.append('--fix')\n if parseable:\n cmd.append('--format=checkstyle')\n status = Popen(cmd + js_file_list).wait()\n has_errors = status != 0\n\n return has_errors\n\n\ndef yarn_check(file_list):\n \"\"\"\n Checks if package.json was modified WITHOUT a corresponding change in the Yarn\n lockfile. This can happen if a user manually edited package.json without running Yarn.\n\n This is a user prompt right now because there ARE cases where you can touch package.json\n without a Yarn lockfile change, e.g. Jest config changes, license changes, etc.\n \"\"\"\n if file_list is None or os.environ.get('SKIP_YARN_CHECK'):\n return False\n\n if 'package.json' in file_list and 'yarn.lock' not in file_list:\n echo(style(\"\"\"\nWarning: package.json modified without accompanying yarn.lock modifications.\n\nIf you updated a dependency/devDependency in package.json, you must run `yarn install` to update the lockfile.\n\nTo skip this check, run:\n\n$ SKIP_YARN_CHECK=1 git commit [options]\"\"\", fg='yellow'))\n return True\n\n return False\n\n\ndef is_prettier_valid(project_root, prettier_path):\n if not os.path.exists(prettier_path):\n echo('[sentry.lint] Skipping JavaScript formatting because prettier is not installed.', err=True)\n return False\n\n # Get Prettier version from package.json\n package_version = None\n package_json_path = os.path.join(project_root, 'package.json')\n with open(package_json_path) as package_json:\n try:\n package_version = json.load(package_json)[\n 'devDependencies']['prettier']\n except KeyError:\n echo('!! Prettier missing from package.json', err=True)\n return False\n\n prettier_version = subprocess.check_output(\n [prettier_path, '--version']).rstrip()\n if prettier_version != package_version:\n echo(\n '[sentry.lint] Prettier is out of date: {} (expected {}). Please run `yarn install`.'.format(\n prettier_version,\n package_version),\n err=True)\n return False\n\n return True\n\n\ndef js_format(file_list=None):\n \"\"\"\n We only format JavaScript code as part of this pre-commit hook. 
It is not part\n of the lint engine.\n \"\"\"\n project_root = get_project_root()\n prettier_path = get_prettier_path()\n\n if not is_prettier_valid(project_root, prettier_path):\n return False\n\n js_file_list = get_js_files(file_list)\n\n # manually exclude some bad files\n js_file_list = [x for x in js_file_list if '/javascript/example-project/' not in x]\n\n return run_formatter([prettier_path,\n '--write',\n ],\n js_file_list)\n\n\ndef js_test(file_list=None):\n \"\"\"\n Run JavaScript unit tests on relevant files ONLY as part of pre-commit hook\n \"\"\"\n jest_path = get_node_modules_bin('jest')\n\n if not os.path.exists(jest_path):\n from click import echo\n echo('[sentry.test] Skipping JavaScript testing because jest is not installed.')\n return False\n\n js_file_list = get_js_files(file_list)\n\n has_errors = False\n if js_file_list:\n status = Popen([jest_path, '--bail', '--findRelatedTests'] + js_file_list).wait()\n has_errors = status != 0\n\n return has_errors\n\n\ndef less_format(file_list=None):\n \"\"\"\n We only format less code as part of this pre-commit hook. It is not part\n of the lint engine.\n \"\"\"\n project_root = get_project_root()\n prettier_path = get_prettier_path()\n\n if not is_prettier_valid(project_root, prettier_path):\n return False\n\n less_file_list = get_less_files(file_list)\n return run_formatter(\n [\n prettier_path,\n '--write',\n ], less_file_list\n )\n\n\ndef py_format(file_list=None):\n try:\n __import__('autopep8')\n except ImportError:\n echo('[sentry.lint] Skipping Python autoformat because autopep8 is not installed.', err=True)\n return False\n\n py_file_list = get_python_files(file_list)\n\n return run_formatter(['autopep8', '--in-place', '-j0'], py_file_list)\n\n\ndef run_formatter(cmd, file_list, prompt_on_changes=True):\n if not file_list:\n return False\n\n has_errors = False\n\n status = subprocess.Popen(cmd + file_list).wait()\n has_errors = status != 0\n if has_errors:\n return False\n\n # this is not quite correct, but it at least represents what would be staged\n output = subprocess.check_output(['git', 'diff'] + file_list)\n if output:\n echo('[sentry.lint] applied changes from autoformatting')\n for line in output.splitlines():\n if line.startswith('-'):\n secho(line, fg='red')\n elif line.startswith('+'):\n secho(line, fg='green')\n else:\n echo(line)\n if prompt_on_changes:\n with open('/dev/tty') as fp:\n secho('Stage this patch and continue? [Y/n] ', bold=True)\n if fp.readline().strip().lower() != 'y':\n echo(\n '[sentry.lint] Aborted! 
Changes have been applied but not staged.', err=True)\n if not os.environ.get('SENTRY_SKIP_FORCE_PATCH'):\n sys.exit(1)\n else:\n status = subprocess.Popen(\n ['git', 'update-index', '--add'] + file_list).wait()\n has_errors = status != 0\n return has_errors\n\n\ndef run(file_list=None, format=True, lint=True, js=True, py=True,\n less=True, yarn=True, test=False, parseable=False):\n # pep8.py uses sys.argv to find setup.cfg\n old_sysargv = sys.argv\n\n try:\n sys.argv = [\n os.path.join(os.path.dirname(__file__),\n os.pardir, os.pardir, os.pardir)\n ]\n results = []\n\n # packages\n if yarn:\n results.append(yarn_check(file_list))\n\n # bail early if a deps failed\n if any(results):\n return 1\n\n if format:\n if py:\n results.append(py_format(file_list))\n if js:\n results.append(js_format(file_list))\n if less:\n results.append(less_format(file_list))\n\n # bail early if a formatter failed\n if any(results):\n return 1\n\n if lint:\n if py:\n results.append(py_lint(file_list, parseable=parseable))\n if js:\n results.append(js_lint(file_list, parseable=parseable, format=format))\n\n if test:\n if js:\n results.append(js_test(file_list))\n\n if any(results):\n return 1\n return 0\n finally:\n sys.argv = old_sysargv\n", "path": "src/sentry/lint/engine.py"}]} | 3,985 | 329 |
gh_patches_debug_22822 | rasdani/github-patches | git_diff | iterative__dvc-4044 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plots modify minor issues
```
$ dvc -V
1.0.0a6+4cfc2d
```
1. ~~Misleading message~~ (#3978)
```
$ dvc plots modify plots.csv --unset y title
Adding stage 'trainme' to 'dvc.yaml'
```
~~It should be `Modifying stage 'trainme' to 'dvc.yaml'`~~
2. ~~`--unset` should handle errors properly~~
```
$ dvc plots modify plots.csv --unset y_fake_param
Adding stage 'trainme' to 'dvc.yaml'
$ echo $?
0
```
~~It is supposed to fail with a proper message.~~ https://github.com/iterative/dvc/pull/4044
3. ~~No template file check before setting~~ (#3995)
```
$ dvc plots modify -t confusion1 plots.csv
Adding stage 'trainme' to 'dvc.yaml'
```
There is no template `confusion1`, so it should fail with a proper error message.
4. ~~not easy to understand `--unset` syntax
It is supposed to go at the end, right? Initially I tried `dvc plots modify --unset y plots.csv`, which didn't work.~~ Same as in https://github.com/iterative/dvc/issues/3962
5. ~~message~~
```
$ dvc plots -h
modify Modify plot props associated with a target file.
```
~~`target file ` info is not needed. It can be `Modify plot properties`.~~
6. ~~message~~ (#3983)
```
$ dvc plots modify -h
```
~~The `props` term is often used. It was not clear what it means. We should probably clarify.~~
</issue>
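For item 2 above, the requested behaviour is an explicit check that every property passed to `--unset` exists on the plot before anything is removed, with a non-zero exit otherwise. A minimal illustrative sketch of that kind of validation (names here are hypothetical, not DVC's internal API):

```python
def unset_plot_props(plot_props: dict, to_unset: list) -> None:
    """Remove display properties, failing loudly on unknown names."""
    missing = [prop for prop in to_unset if prop not in plot_props]
    if missing:
        # Surfacing this as an error is what gives the CLI a non-zero exit code.
        raise ValueError(f"display properties {missing} not found in plot definition")
    for prop in to_unset:
        plot_props.pop(prop)
```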
<code>
[start of dvc/repo/plots/__init__.py]
1 import logging
2
3 from funcy import first, project
4
5 from dvc.exceptions import DvcException, NoPlotsError, OutputNotFoundError
6 from dvc.repo.tree import RepoTree
7 from dvc.schema import PLOT_PROPS
8 from dvc.utils import relpath
9
10 logger = logging.getLogger(__name__)
11
12
13 class NotAPlotError(DvcException):
14 def __init__(self, out):
15 super().__init__(
16 f"'{out}' is not a plot. Use `dvc plots modify` to change that."
17 )
18
19
20 class Plots:
21 def __init__(self, repo):
22 self.repo = repo
23
24 def collect(self, targets=None, revs=None):
25 """Collects all props and data for plots.
26
27 Returns a structure like:
28 {rev: {plots.csv: {
29 props: {x: ..., "header": ..., ...},
30 data: "...data as a string...",
31 }}}
32 Data parsing is postponed, since it's affected by props.
33 """
34 targets = [targets] if isinstance(targets, str) else targets or []
35 data = {}
36 for rev in self.repo.brancher(revs=revs):
37 # .brancher() adds unwanted workspace
38 if revs is not None and rev not in revs:
39 continue
40 rev = rev or "workspace"
41
42 tree = RepoTree(self.repo)
43 plots = _collect_plots(self.repo, targets, rev)
44 for path_info, props in plots.items():
45 datafile = relpath(path_info, self.repo.root_dir)
46 if rev not in data:
47 data[rev] = {}
48 data[rev].update({datafile: {"props": props}})
49
50 # Load data from git or dvc cache
51 try:
52 with tree.open(path_info) as fd:
53 data[rev][datafile]["data"] = fd.read()
54 except FileNotFoundError as e:
55 # This might happen simply because cache is absent
56 print(e)
57 pass
58
59 return data
60
61 @staticmethod
62 def render(data, revs=None, props=None, templates=None):
63 """Renders plots"""
64 props = props or {}
65
66 # Merge data by plot file and apply overriding props
67 plots = _prepare_plots(data, revs, props)
68
69 return {
70 datafile: _render(datafile, desc["data"], desc["props"], templates)
71 for datafile, desc in plots.items()
72 }
73
74 def show(self, targets=None, revs=None, props=None):
75 from .data import NoMetricInHistoryError
76
77 data = self.collect(targets, revs)
78
79 # If any mentioned plot doesn't have any data then that's an error
80 targets = [targets] if isinstance(targets, str) else targets or []
81 for target in targets:
82 if not any("data" in d[target] for d in data.values()):
83 raise NoMetricInHistoryError(target)
84
85 # No data at all is a special error with a special message
86 if not data:
87 raise NoPlotsError()
88
89 return self.render(data, revs, props, self.repo.plot_templates)
90
91 def diff(self, *args, **kwargs):
92 from .diff import diff
93
94 return diff(self.repo, *args, **kwargs)
95
96 def modify(self, path, props=None, unset=None):
97 from dvc.dvcfile import Dvcfile
98
99 props = props or {}
100 template = props.get("template")
101 if template:
102 self.repo.plot_templates.get_template(template)
103
104 (out,) = self.repo.find_outs_by_path(path)
105 if not out.plot and unset is not None:
106 raise NotAPlotError(out)
107
108 # This out will become a plot unless it is one already
109 if not isinstance(out.plot, dict):
110 out.plot = {}
111
112 for field in unset or ():
113 out.plot.pop(field, None)
114 out.plot.update(props)
115
116 # Empty dict will move it to non-plots
117 if not out.plot:
118 out.plot = True
119
120 out.verify_metric()
121
122 dvcfile = Dvcfile(self.repo, out.stage.path)
123 dvcfile.dump(out.stage, update_pipeline=True, no_lock=True)
124
125
126 def _collect_plots(repo, targets=None, rev=None):
127 def _targets_to_outs(targets):
128 for t in targets:
129 try:
130 (out,) = repo.find_outs_by_path(t)
131 yield out
132 except OutputNotFoundError:
133 logger.warning(
134 "File '{}' was not found at: '{}'. It will not be "
135 "plotted.".format(t, rev)
136 )
137
138 if targets:
139 outs = _targets_to_outs(targets)
140 else:
141 outs = (out for stage in repo.stages for out in stage.outs if out.plot)
142
143 return {out.path_info: _plot_props(out) for out in outs}
144
145
146 def _plot_props(out):
147 if not out.plot:
148 raise NotAPlotError(out)
149 if isinstance(out.plot, list):
150 raise DvcException("Multiple plots per data file not supported.")
151 if isinstance(out.plot, bool):
152 return {}
153
154 return project(out.plot, PLOT_PROPS)
155
156
157 def _prepare_plots(data, revs, props):
158 """Groups data by plot file.
159
160 Also resolves props conflicts between revs and applies global props.
161 """
162 # we go in order revs are supplied on props conflict first ones win.
163 revs = iter(data) if revs is None else revs
164
165 plots, props_revs = {}, {}
166 for rev in revs:
167 # Asked for revision without data
168 if rev not in data:
169 continue
170
171 for datafile, desc in data[rev].items():
172 # props from command line overwrite plot props from out definition
173 full_props = {**desc["props"], **props}
174
175 if datafile in plots:
176 saved = plots[datafile]
177 if saved["props"] != full_props:
178 logger.warning(
179 f"Inconsistent plot props for '{datafile}' in "
180 f"'{props_revs[datafile]}' and '{rev}'. "
181 f"Going to use ones from '{props_revs[datafile]}'"
182 )
183
184 saved["data"][rev] = desc["data"]
185 else:
186 plots[datafile] = {
187 "props": full_props,
188 "data": {rev: desc["data"]},
189 }
190 # Save rev we got props from
191 props_revs[datafile] = rev
192
193 return plots
194
195
196 def _render(datafile, datas, props, templates):
197 from .data import plot_data, PlotData
198
199 # Copy it to not modify a passed value
200 props = props.copy()
201
202 # Add x and y to fields if set
203 fields = props.get("fields")
204 if fields is not None:
205 fields = {*fields, props.get("x"), props.get("y")} - {None}
206
207 template = templates.load(props.get("template") or "default")
208
209 # If x is not set add index field
210 if not props.get("x") and template.has_anchor("x"):
211 props["append_index"] = True
212 props["x"] = PlotData.INDEX_FIELD
213
214 # Parse all data, preprocess it and collect as a list of dicts
215 data = []
216 for rev, datablob in datas.items():
217 rev_data = plot_data(datafile, rev, datablob).to_datapoints(
218 fields=fields,
219 path=props.get("path"),
220 header=props.get("header", True),
221 append_index=props.get("append_index", False),
222 )
223 data.extend(rev_data)
224
225 # If y is not set then use last field not used yet
226 if not props.get("y") and template.has_anchor("y"):
227 fields = list(first(data))
228 skip = (PlotData.REVISION_FIELD, props.get("x"))
229 props["y"] = first(f for f in reversed(fields) if f not in skip)
230
231 return template.render(data, props=props)
232
[end of dvc/repo/plots/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/repo/plots/__init__.py b/dvc/repo/plots/__init__.py
--- a/dvc/repo/plots/__init__.py
+++ b/dvc/repo/plots/__init__.py
@@ -17,6 +17,10 @@
)
+class PropsNotFoundError(DvcException):
+ pass
+
+
class Plots:
def __init__(self, repo):
self.repo = repo
@@ -93,6 +97,17 @@
return diff(self.repo, *args, **kwargs)
+ @staticmethod
+ def _unset(out, props):
+ missing = list(set(props) - set(out.plot.keys()))
+ if missing:
+ raise PropsNotFoundError(
+ f"display properties {missing} not found in plot '{out}'"
+ )
+
+ for prop in props:
+ out.plot.pop(prop)
+
def modify(self, path, props=None, unset=None):
from dvc.dvcfile import Dvcfile
@@ -109,8 +124,9 @@
if not isinstance(out.plot, dict):
out.plot = {}
- for field in unset or ():
- out.plot.pop(field, None)
+ if unset:
+ self._unset(out, unset)
+
out.plot.update(props)
# Empty dict will move it to non-plots
| {"golden_diff": "diff --git a/dvc/repo/plots/__init__.py b/dvc/repo/plots/__init__.py\n--- a/dvc/repo/plots/__init__.py\n+++ b/dvc/repo/plots/__init__.py\n@@ -17,6 +17,10 @@\n )\n \n \n+class PropsNotFoundError(DvcException):\n+ pass\n+\n+\n class Plots:\n def __init__(self, repo):\n self.repo = repo\n@@ -93,6 +97,17 @@\n \n return diff(self.repo, *args, **kwargs)\n \n+ @staticmethod\n+ def _unset(out, props):\n+ missing = list(set(props) - set(out.plot.keys()))\n+ if missing:\n+ raise PropsNotFoundError(\n+ f\"display properties {missing} not found in plot '{out}'\"\n+ )\n+\n+ for prop in props:\n+ out.plot.pop(prop)\n+\n def modify(self, path, props=None, unset=None):\n from dvc.dvcfile import Dvcfile\n \n@@ -109,8 +124,9 @@\n if not isinstance(out.plot, dict):\n out.plot = {}\n \n- for field in unset or ():\n- out.plot.pop(field, None)\n+ if unset:\n+ self._unset(out, unset)\n+\n out.plot.update(props)\n \n # Empty dict will move it to non-plots\n", "issue": "plots modify minor issues\n```\r\n$ dvc -V\r\n1.0.0a6+4cfc2d\r\n```\r\n\r\n1. ~~Misleading message~~ (#3978)\r\n```\r\n$ dvc plots modify plots.csv --unset y title\r\nAdding stage 'trainme' to 'dvc.yaml'\r\n```\r\n~~It should be `Modifying stage 'trainme' to 'dvc.yaml'`~~\r\n\r\n2. ~~`--unset` should handle errors properly~~\r\n```\r\n$ dvc plots modify plots.csv --unset y_fake_param\r\nAdding stage 'trainme' to 'dvc.yaml'\r\n$ echo $?\r\n0\r\n```\r\n~~It supposed to fail with proper message.~~ https://github.com/iterative/dvc/pull/4044\r\n\r\n3. ~~No template file check before setting~~ (#3995)\r\n```\r\n$ dvc plots modify -t confusion1 plots.csv\r\nAdding stage 'trainme' to 'dvc.yaml'\r\n```\r\nWhile there is no template `confusion1`. It should fail with a proper error message.\r\n\r\n4. ~~not easy to understand `--unset` syntax\r\nIt supposed to be in the end, right? Initially I tried `dvc plots modify --unset y plots.csv` that didn't work.~~ Same as in https://github.com/iterative/dvc/issues/3962\r\n\r\n5. ~~message~~\r\n\r\n```\r\n$ dvc plots -h\r\n modify Modify plot props associated with a target file.\r\n```\r\n~~`target file ` info is not needed. It can be `Modify plot properties`.~~\r\n\r\n6. ~~message~~ (#3983)\r\n\r\n```\r\n$ dvc plots modify -h\r\n```\r\n\r\n~~`props` term is often used. It was not clear what foes it mean. We should probably clarify.~~\n", "before_files": [{"content": "import logging\n\nfrom funcy import first, project\n\nfrom dvc.exceptions import DvcException, NoPlotsError, OutputNotFoundError\nfrom dvc.repo.tree import RepoTree\nfrom dvc.schema import PLOT_PROPS\nfrom dvc.utils import relpath\n\nlogger = logging.getLogger(__name__)\n\n\nclass NotAPlotError(DvcException):\n def __init__(self, out):\n super().__init__(\n f\"'{out}' is not a plot. 
Use `dvc plots modify` to change that.\"\n )\n\n\nclass Plots:\n def __init__(self, repo):\n self.repo = repo\n\n def collect(self, targets=None, revs=None):\n \"\"\"Collects all props and data for plots.\n\n Returns a structure like:\n {rev: {plots.csv: {\n props: {x: ..., \"header\": ..., ...},\n data: \"...data as a string...\",\n }}}\n Data parsing is postponed, since it's affected by props.\n \"\"\"\n targets = [targets] if isinstance(targets, str) else targets or []\n data = {}\n for rev in self.repo.brancher(revs=revs):\n # .brancher() adds unwanted workspace\n if revs is not None and rev not in revs:\n continue\n rev = rev or \"workspace\"\n\n tree = RepoTree(self.repo)\n plots = _collect_plots(self.repo, targets, rev)\n for path_info, props in plots.items():\n datafile = relpath(path_info, self.repo.root_dir)\n if rev not in data:\n data[rev] = {}\n data[rev].update({datafile: {\"props\": props}})\n\n # Load data from git or dvc cache\n try:\n with tree.open(path_info) as fd:\n data[rev][datafile][\"data\"] = fd.read()\n except FileNotFoundError as e:\n # This might happen simply because cache is absent\n print(e)\n pass\n\n return data\n\n @staticmethod\n def render(data, revs=None, props=None, templates=None):\n \"\"\"Renders plots\"\"\"\n props = props or {}\n\n # Merge data by plot file and apply overriding props\n plots = _prepare_plots(data, revs, props)\n\n return {\n datafile: _render(datafile, desc[\"data\"], desc[\"props\"], templates)\n for datafile, desc in plots.items()\n }\n\n def show(self, targets=None, revs=None, props=None):\n from .data import NoMetricInHistoryError\n\n data = self.collect(targets, revs)\n\n # If any mentioned plot doesn't have any data then that's an error\n targets = [targets] if isinstance(targets, str) else targets or []\n for target in targets:\n if not any(\"data\" in d[target] for d in data.values()):\n raise NoMetricInHistoryError(target)\n\n # No data at all is a special error with a special message\n if not data:\n raise NoPlotsError()\n\n return self.render(data, revs, props, self.repo.plot_templates)\n\n def diff(self, *args, **kwargs):\n from .diff import diff\n\n return diff(self.repo, *args, **kwargs)\n\n def modify(self, path, props=None, unset=None):\n from dvc.dvcfile import Dvcfile\n\n props = props or {}\n template = props.get(\"template\")\n if template:\n self.repo.plot_templates.get_template(template)\n\n (out,) = self.repo.find_outs_by_path(path)\n if not out.plot and unset is not None:\n raise NotAPlotError(out)\n\n # This out will become a plot unless it is one already\n if not isinstance(out.plot, dict):\n out.plot = {}\n\n for field in unset or ():\n out.plot.pop(field, None)\n out.plot.update(props)\n\n # Empty dict will move it to non-plots\n if not out.plot:\n out.plot = True\n\n out.verify_metric()\n\n dvcfile = Dvcfile(self.repo, out.stage.path)\n dvcfile.dump(out.stage, update_pipeline=True, no_lock=True)\n\n\ndef _collect_plots(repo, targets=None, rev=None):\n def _targets_to_outs(targets):\n for t in targets:\n try:\n (out,) = repo.find_outs_by_path(t)\n yield out\n except OutputNotFoundError:\n logger.warning(\n \"File '{}' was not found at: '{}'. 
It will not be \"\n \"plotted.\".format(t, rev)\n )\n\n if targets:\n outs = _targets_to_outs(targets)\n else:\n outs = (out for stage in repo.stages for out in stage.outs if out.plot)\n\n return {out.path_info: _plot_props(out) for out in outs}\n\n\ndef _plot_props(out):\n if not out.plot:\n raise NotAPlotError(out)\n if isinstance(out.plot, list):\n raise DvcException(\"Multiple plots per data file not supported.\")\n if isinstance(out.plot, bool):\n return {}\n\n return project(out.plot, PLOT_PROPS)\n\n\ndef _prepare_plots(data, revs, props):\n \"\"\"Groups data by plot file.\n\n Also resolves props conflicts between revs and applies global props.\n \"\"\"\n # we go in order revs are supplied on props conflict first ones win.\n revs = iter(data) if revs is None else revs\n\n plots, props_revs = {}, {}\n for rev in revs:\n # Asked for revision without data\n if rev not in data:\n continue\n\n for datafile, desc in data[rev].items():\n # props from command line overwrite plot props from out definition\n full_props = {**desc[\"props\"], **props}\n\n if datafile in plots:\n saved = plots[datafile]\n if saved[\"props\"] != full_props:\n logger.warning(\n f\"Inconsistent plot props for '{datafile}' in \"\n f\"'{props_revs[datafile]}' and '{rev}'. \"\n f\"Going to use ones from '{props_revs[datafile]}'\"\n )\n\n saved[\"data\"][rev] = desc[\"data\"]\n else:\n plots[datafile] = {\n \"props\": full_props,\n \"data\": {rev: desc[\"data\"]},\n }\n # Save rev we got props from\n props_revs[datafile] = rev\n\n return plots\n\n\ndef _render(datafile, datas, props, templates):\n from .data import plot_data, PlotData\n\n # Copy it to not modify a passed value\n props = props.copy()\n\n # Add x and y to fields if set\n fields = props.get(\"fields\")\n if fields is not None:\n fields = {*fields, props.get(\"x\"), props.get(\"y\")} - {None}\n\n template = templates.load(props.get(\"template\") or \"default\")\n\n # If x is not set add index field\n if not props.get(\"x\") and template.has_anchor(\"x\"):\n props[\"append_index\"] = True\n props[\"x\"] = PlotData.INDEX_FIELD\n\n # Parse all data, preprocess it and collect as a list of dicts\n data = []\n for rev, datablob in datas.items():\n rev_data = plot_data(datafile, rev, datablob).to_datapoints(\n fields=fields,\n path=props.get(\"path\"),\n header=props.get(\"header\", True),\n append_index=props.get(\"append_index\", False),\n )\n data.extend(rev_data)\n\n # If y is not set then use last field not used yet\n if not props.get(\"y\") and template.has_anchor(\"y\"):\n fields = list(first(data))\n skip = (PlotData.REVISION_FIELD, props.get(\"x\"))\n props[\"y\"] = first(f for f in reversed(fields) if f not in skip)\n\n return template.render(data, props=props)\n", "path": "dvc/repo/plots/__init__.py"}]} | 3,257 | 308 |
gh_patches_debug_16620 | rasdani/github-patches | git_diff | flairNLP__flair-666 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
How to specify CPU or GPU?
I'd like to know how to move the model between CPU and GPU. Alternatively, I'd like to know if it's possible to specify the device before instantiating a model.
</issue>
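Flair keeps a module-level `flair.device` that models read at construction time (the `LanguageModel` below calls `self.to(flair.device)` in `__init__`, and `load_language_model` maps checkpoints onto it), so the device can be chosen up front by overriding that attribute; an already-built model is a regular `torch.nn.Module` and can be moved with `.to(...)`. A short sketch:

```python
import torch
import flair

# Choose the device before instantiating or loading any model.
flair.device = torch.device("cpu")        # or torch.device("cuda:0")

# Models created or loaded after this point land on flair.device.
# An existing model can still be moved explicitly, e.g.:
#   model.to(torch.device("cuda:0"))
```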
<code>
[start of flair/models/language_model.py]
1 from pathlib import Path
2
3 import torch.nn as nn
4 import torch
5 import math
6 from typing import Union, Tuple
7 from typing import List
8
9 from torch.optim import Optimizer
10
11 import flair
12 from flair.data import Dictionary
13
14
15 class LanguageModel(nn.Module):
16 """Container module with an encoder, a recurrent module, and a decoder."""
17
18 def __init__(
19 self,
20 dictionary: Dictionary,
21 is_forward_lm: bool,
22 hidden_size: int,
23 nlayers: int,
24 embedding_size: int = 100,
25 nout=None,
26 dropout=0.1,
27 ):
28
29 super(LanguageModel, self).__init__()
30
31 self.dictionary = dictionary
32 self.is_forward_lm: bool = is_forward_lm
33
34 self.dropout = dropout
35 self.hidden_size = hidden_size
36 self.embedding_size = embedding_size
37 self.nlayers = nlayers
38
39 self.drop = nn.Dropout(dropout)
40 self.encoder = nn.Embedding(len(dictionary), embedding_size)
41
42 if nlayers == 1:
43 self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers)
44 else:
45 self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers, dropout=dropout)
46
47 self.hidden = None
48
49 self.nout = nout
50 if nout is not None:
51 self.proj = nn.Linear(hidden_size, nout)
52 self.initialize(self.proj.weight)
53 self.decoder = nn.Linear(nout, len(dictionary))
54 else:
55 self.proj = None
56 self.decoder = nn.Linear(hidden_size, len(dictionary))
57
58 self.init_weights()
59
60 # auto-spawn on GPU if available
61 self.to(flair.device)
62
63 def init_weights(self):
64 initrange = 0.1
65 self.encoder.weight.detach().uniform_(-initrange, initrange)
66 self.decoder.bias.detach().fill_(0)
67 self.decoder.weight.detach().uniform_(-initrange, initrange)
68
69 def set_hidden(self, hidden):
70 self.hidden = hidden
71
72 def forward(self, input, hidden, ordered_sequence_lengths=None):
73 encoded = self.encoder(input)
74 emb = self.drop(encoded)
75
76 self.rnn.flatten_parameters()
77
78 output, hidden = self.rnn(emb, hidden)
79
80 if self.proj is not None:
81 output = self.proj(output)
82
83 output = self.drop(output)
84
85 decoded = self.decoder(
86 output.view(output.size(0) * output.size(1), output.size(2))
87 )
88
89 return (
90 decoded.view(output.size(0), output.size(1), decoded.size(1)),
91 output,
92 hidden,
93 )
94
95 def init_hidden(self, bsz):
96 weight = next(self.parameters()).detach()
97 return (
98 weight.new(self.nlayers, bsz, self.hidden_size).zero_().clone().detach(),
99 weight.new(self.nlayers, bsz, self.hidden_size).zero_().clone().detach(),
100 )
101
102 def get_representation(self, strings: List[str], chars_per_chunk: int = 512):
103
104 # cut up the input into chunks of max charlength = chunk_size
105 longest = len(strings[0])
106 chunks = []
107 splice_begin = 0
108 for splice_end in range(chars_per_chunk, longest, chars_per_chunk):
109 chunks.append([text[splice_begin:splice_end] for text in strings])
110 splice_begin = splice_end
111
112 chunks.append([text[splice_begin:longest] for text in strings])
113 hidden = self.init_hidden(len(chunks[0]))
114
115 output_parts = []
116
117 # push each chunk through the RNN language model
118 for chunk in chunks:
119
120 sequences_as_char_indices: List[List[int]] = []
121 for string in chunk:
122 char_indices = [
123 self.dictionary.get_idx_for_item(char) for char in string
124 ]
125 sequences_as_char_indices.append(char_indices)
126
127 batch = torch.LongTensor(sequences_as_char_indices).transpose(0, 1)
128 batch = batch.to(flair.device)
129
130 prediction, rnn_output, hidden = self.forward(batch, hidden)
131 rnn_output = rnn_output.detach()
132
133 output_parts.append(rnn_output)
134
135 # concatenate all chunks to make final output
136 output = torch.cat(output_parts)
137
138 return output
139
140 def get_output(self, text: str):
141 char_indices = [self.dictionary.get_idx_for_item(char) for char in text]
142 input_vector = torch.LongTensor([char_indices]).transpose(0, 1)
143
144 hidden = self.init_hidden(1)
145 prediction, rnn_output, hidden = self.forward(input_vector, hidden)
146
147 return self.repackage_hidden(hidden)
148
149 def repackage_hidden(self, h):
150 """Wraps hidden states in new Variables, to detach them from their history."""
151 if type(h) == torch.Tensor:
152 return h.clone().detach()
153 else:
154 return tuple(self.repackage_hidden(v) for v in h)
155
156 def initialize(self, matrix):
157 in_, out_ = matrix.size()
158 stdv = math.sqrt(3.0 / (in_ + out_))
159 matrix.detach().uniform_(-stdv, stdv)
160
161 @classmethod
162 def load_language_model(cls, model_file: Union[Path, str]):
163
164 state = torch.load(str(model_file), map_location=flair.device)
165
166 model = LanguageModel(
167 state["dictionary"],
168 state["is_forward_lm"],
169 state["hidden_size"],
170 state["nlayers"],
171 state["embedding_size"],
172 state["nout"],
173 state["dropout"],
174 )
175 model.load_state_dict(state["state_dict"])
176 model.eval()
177 model.to(flair.device)
178
179 return model
180
181 @classmethod
182 def load_checkpoint(cls, model_file: Path):
183 state = torch.load(str(model_file), map_location=flair.device)
184
185 epoch = state["epoch"] if "epoch" in state else None
186 split = state["split"] if "split" in state else None
187 loss = state["loss"] if "loss" in state else None
188 optimizer_state_dict = (
189 state["optimizer_state_dict"] if "optimizer_state_dict" in state else None
190 )
191
192 model = LanguageModel(
193 state["dictionary"],
194 state["is_forward_lm"],
195 state["hidden_size"],
196 state["nlayers"],
197 state["embedding_size"],
198 state["nout"],
199 state["dropout"],
200 )
201 model.load_state_dict(state["state_dict"])
202 model.eval()
203 model.to(flair.device)
204
205 return {
206 "model": model,
207 "epoch": epoch,
208 "split": split,
209 "loss": loss,
210 "optimizer_state_dict": optimizer_state_dict,
211 }
212
213 def save_checkpoint(
214 self, file: Path, optimizer: Optimizer, epoch: int, split: int, loss: float
215 ):
216 model_state = {
217 "state_dict": self.state_dict(),
218 "dictionary": self.dictionary,
219 "is_forward_lm": self.is_forward_lm,
220 "hidden_size": self.hidden_size,
221 "nlayers": self.nlayers,
222 "embedding_size": self.embedding_size,
223 "nout": self.nout,
224 "dropout": self.dropout,
225 "optimizer_state_dict": optimizer.state_dict(),
226 "epoch": epoch,
227 "split": split,
228 "loss": loss,
229 }
230
231 torch.save(model_state, str(file), pickle_protocol=4)
232
233 def save(self, file: Path):
234 model_state = {
235 "state_dict": self.state_dict(),
236 "dictionary": self.dictionary,
237 "is_forward_lm": self.is_forward_lm,
238 "hidden_size": self.hidden_size,
239 "nlayers": self.nlayers,
240 "embedding_size": self.embedding_size,
241 "nout": self.nout,
242 "dropout": self.dropout,
243 }
244
245 torch.save(model_state, str(file), pickle_protocol=4)
246
247 def generate_text(
248 self,
249 prefix: str = "\n",
250 number_of_characters: int = 1000,
251 temperature: float = 1.0,
252 break_on_suffix=None,
253 ) -> Tuple[str, float]:
254
255 if prefix == "":
256 prefix = "\n"
257
258 with torch.no_grad():
259 characters = []
260
261 idx2item = self.dictionary.idx2item
262
263 # initial hidden state
264 hidden = self.init_hidden(1)
265
266 if len(prefix) > 1:
267
268 char_tensors = []
269 for character in prefix[:-1]:
270 char_tensors.append(
271 torch.tensor(self.dictionary.get_idx_for_item(character))
272 .unsqueeze(0)
273 .unsqueeze(0)
274 )
275
276 input = torch.cat(char_tensors)
277 if torch.cuda.is_available():
278 input = input.cuda()
279
280 prediction, _, hidden = self.forward(input, hidden)
281
282 input = (
283 torch.tensor(self.dictionary.get_idx_for_item(prefix[-1]))
284 .unsqueeze(0)
285 .unsqueeze(0)
286 )
287
288 log_prob = 0.0
289
290 for i in range(number_of_characters):
291
292 if torch.cuda.is_available():
293 input = input.cuda()
294
295 # get predicted weights
296 prediction, _, hidden = self.forward(input, hidden)
297 prediction = prediction.squeeze().detach()
298 decoder_output = prediction
299
300 # divide by temperature
301 prediction = prediction.div(temperature)
302
303 # to prevent overflow problem with small temperature values, substract largest value from all
304 # this makes a vector in which the largest value is 0
305 max = torch.max(prediction)
306 prediction -= max
307
308 # compute word weights with exponential function
309 word_weights = prediction.exp().cpu()
310
311 # try sampling multinomial distribution for next character
312 try:
313 word_idx = torch.multinomial(word_weights, 1)[0]
314 except:
315 word_idx = torch.tensor(0)
316
317 # print(word_idx)
318 prob = decoder_output[word_idx]
319 log_prob += prob
320
321 input = word_idx.detach().unsqueeze(0).unsqueeze(0)
322 word = idx2item[word_idx].decode("UTF-8")
323 characters.append(word)
324
325 if break_on_suffix is not None:
326 if "".join(characters).endswith(break_on_suffix):
327 break
328
329 text = prefix + "".join(characters)
330
331 log_prob = log_prob.item()
332 log_prob /= len(characters)
333
334 if not self.is_forward_lm:
335 text = text[::-1]
336
337 return text, log_prob
338
339 def calculate_perplexity(self, text: str) -> float:
340
341 if not self.is_forward_lm:
342 text = text[::-1]
343
344 # input ids
345 input = torch.tensor(
346 [self.dictionary.get_idx_for_item(char) for char in text[:-1]]
347 ).unsqueeze(1)
348 input = input.to(flair.device)
349
350 # push list of character IDs through model
351 hidden = self.init_hidden(1)
352 prediction, _, hidden = self.forward(input, hidden)
353
354 # the target is always the next character
355 targets = torch.tensor(
356 [self.dictionary.get_idx_for_item(char) for char in text[1:]]
357 )
358 targets = targets.to(flair.device)
359
360 # use cross entropy loss to compare output of forward pass with targets
361 cross_entroy_loss = torch.nn.CrossEntropyLoss()
362 loss = cross_entroy_loss(
363 prediction.view(-1, len(self.dictionary)), targets
364 ).item()
365
366 # exponentiate cross-entropy loss to calculate perplexity
367 perplexity = math.exp(loss)
368
369 return perplexity
370
[end of flair/models/language_model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flair/models/language_model.py b/flair/models/language_model.py
--- a/flair/models/language_model.py
+++ b/flair/models/language_model.py
@@ -273,9 +273,7 @@
.unsqueeze(0)
)
- input = torch.cat(char_tensors)
- if torch.cuda.is_available():
- input = input.cuda()
+ input = torch.cat(char_tensors).to(flair.device)
prediction, _, hidden = self.forward(input, hidden)
@@ -289,8 +287,7 @@
for i in range(number_of_characters):
- if torch.cuda.is_available():
- input = input.cuda()
+ input = input.to(flair.device)
# get predicted weights
prediction, _, hidden = self.forward(input, hidden)
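
The change above replaces hard-coded `.cuda()` calls with `flair.device`. A minimal standalone sketch of the same device-selection pattern in plain PyTorch (tensor shapes and names are illustrative):

```python
import torch

# resolve the target device once, then move tensors with .to(device)
# instead of calling .cuda() unconditionally
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

input_ids = torch.zeros(1, 1, dtype=torch.long).to(device)   # stand-in for a character index
hidden = (torch.zeros(1, 1, 8).to(device),
          torch.zeros(1, 1, 8).to(device))                   # stand-in LSTM state
```
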
| {"golden_diff": "diff --git a/flair/models/language_model.py b/flair/models/language_model.py\n--- a/flair/models/language_model.py\n+++ b/flair/models/language_model.py\n@@ -273,9 +273,7 @@\n .unsqueeze(0)\n )\n \n- input = torch.cat(char_tensors)\n- if torch.cuda.is_available():\n- input = input.cuda()\n+ input = torch.cat(char_tensors).to(flair.device)\n \n prediction, _, hidden = self.forward(input, hidden)\n \n@@ -289,8 +287,7 @@\n \n for i in range(number_of_characters):\n \n- if torch.cuda.is_available():\n- input = input.cuda()\n+ input = input.to(flair.device)\n \n # get predicted weights\n prediction, _, hidden = self.forward(input, hidden)\n", "issue": "How to specify CPU or GPU?\nI'd like to know how to move the model between CPU and GPU. Alternatively, I'd like to know if it's possible to specify the device before instantiating a model. \n", "before_files": [{"content": "from pathlib import Path\n\nimport torch.nn as nn\nimport torch\nimport math\nfrom typing import Union, Tuple\nfrom typing import List\n\nfrom torch.optim import Optimizer\n\nimport flair\nfrom flair.data import Dictionary\n\n\nclass LanguageModel(nn.Module):\n \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n\n def __init__(\n self,\n dictionary: Dictionary,\n is_forward_lm: bool,\n hidden_size: int,\n nlayers: int,\n embedding_size: int = 100,\n nout=None,\n dropout=0.1,\n ):\n\n super(LanguageModel, self).__init__()\n\n self.dictionary = dictionary\n self.is_forward_lm: bool = is_forward_lm\n\n self.dropout = dropout\n self.hidden_size = hidden_size\n self.embedding_size = embedding_size\n self.nlayers = nlayers\n\n self.drop = nn.Dropout(dropout)\n self.encoder = nn.Embedding(len(dictionary), embedding_size)\n\n if nlayers == 1:\n self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers)\n else:\n self.rnn = nn.LSTM(embedding_size, hidden_size, nlayers, dropout=dropout)\n\n self.hidden = None\n\n self.nout = nout\n if nout is not None:\n self.proj = nn.Linear(hidden_size, nout)\n self.initialize(self.proj.weight)\n self.decoder = nn.Linear(nout, len(dictionary))\n else:\n self.proj = None\n self.decoder = nn.Linear(hidden_size, len(dictionary))\n\n self.init_weights()\n\n # auto-spawn on GPU if available\n self.to(flair.device)\n\n def init_weights(self):\n initrange = 0.1\n self.encoder.weight.detach().uniform_(-initrange, initrange)\n self.decoder.bias.detach().fill_(0)\n self.decoder.weight.detach().uniform_(-initrange, initrange)\n\n def set_hidden(self, hidden):\n self.hidden = hidden\n\n def forward(self, input, hidden, ordered_sequence_lengths=None):\n encoded = self.encoder(input)\n emb = self.drop(encoded)\n\n self.rnn.flatten_parameters()\n\n output, hidden = self.rnn(emb, hidden)\n\n if self.proj is not None:\n output = self.proj(output)\n\n output = self.drop(output)\n\n decoded = self.decoder(\n output.view(output.size(0) * output.size(1), output.size(2))\n )\n\n return (\n decoded.view(output.size(0), output.size(1), decoded.size(1)),\n output,\n hidden,\n )\n\n def init_hidden(self, bsz):\n weight = next(self.parameters()).detach()\n return (\n weight.new(self.nlayers, bsz, self.hidden_size).zero_().clone().detach(),\n weight.new(self.nlayers, bsz, self.hidden_size).zero_().clone().detach(),\n )\n\n def get_representation(self, strings: List[str], chars_per_chunk: int = 512):\n\n # cut up the input into chunks of max charlength = chunk_size\n longest = len(strings[0])\n chunks = []\n splice_begin = 0\n for splice_end in range(chars_per_chunk, longest, 
chars_per_chunk):\n chunks.append([text[splice_begin:splice_end] for text in strings])\n splice_begin = splice_end\n\n chunks.append([text[splice_begin:longest] for text in strings])\n hidden = self.init_hidden(len(chunks[0]))\n\n output_parts = []\n\n # push each chunk through the RNN language model\n for chunk in chunks:\n\n sequences_as_char_indices: List[List[int]] = []\n for string in chunk:\n char_indices = [\n self.dictionary.get_idx_for_item(char) for char in string\n ]\n sequences_as_char_indices.append(char_indices)\n\n batch = torch.LongTensor(sequences_as_char_indices).transpose(0, 1)\n batch = batch.to(flair.device)\n\n prediction, rnn_output, hidden = self.forward(batch, hidden)\n rnn_output = rnn_output.detach()\n\n output_parts.append(rnn_output)\n\n # concatenate all chunks to make final output\n output = torch.cat(output_parts)\n\n return output\n\n def get_output(self, text: str):\n char_indices = [self.dictionary.get_idx_for_item(char) for char in text]\n input_vector = torch.LongTensor([char_indices]).transpose(0, 1)\n\n hidden = self.init_hidden(1)\n prediction, rnn_output, hidden = self.forward(input_vector, hidden)\n\n return self.repackage_hidden(hidden)\n\n def repackage_hidden(self, h):\n \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n if type(h) == torch.Tensor:\n return h.clone().detach()\n else:\n return tuple(self.repackage_hidden(v) for v in h)\n\n def initialize(self, matrix):\n in_, out_ = matrix.size()\n stdv = math.sqrt(3.0 / (in_ + out_))\n matrix.detach().uniform_(-stdv, stdv)\n\n @classmethod\n def load_language_model(cls, model_file: Union[Path, str]):\n\n state = torch.load(str(model_file), map_location=flair.device)\n\n model = LanguageModel(\n state[\"dictionary\"],\n state[\"is_forward_lm\"],\n state[\"hidden_size\"],\n state[\"nlayers\"],\n state[\"embedding_size\"],\n state[\"nout\"],\n state[\"dropout\"],\n )\n model.load_state_dict(state[\"state_dict\"])\n model.eval()\n model.to(flair.device)\n\n return model\n\n @classmethod\n def load_checkpoint(cls, model_file: Path):\n state = torch.load(str(model_file), map_location=flair.device)\n\n epoch = state[\"epoch\"] if \"epoch\" in state else None\n split = state[\"split\"] if \"split\" in state else None\n loss = state[\"loss\"] if \"loss\" in state else None\n optimizer_state_dict = (\n state[\"optimizer_state_dict\"] if \"optimizer_state_dict\" in state else None\n )\n\n model = LanguageModel(\n state[\"dictionary\"],\n state[\"is_forward_lm\"],\n state[\"hidden_size\"],\n state[\"nlayers\"],\n state[\"embedding_size\"],\n state[\"nout\"],\n state[\"dropout\"],\n )\n model.load_state_dict(state[\"state_dict\"])\n model.eval()\n model.to(flair.device)\n\n return {\n \"model\": model,\n \"epoch\": epoch,\n \"split\": split,\n \"loss\": loss,\n \"optimizer_state_dict\": optimizer_state_dict,\n }\n\n def save_checkpoint(\n self, file: Path, optimizer: Optimizer, epoch: int, split: int, loss: float\n ):\n model_state = {\n \"state_dict\": self.state_dict(),\n \"dictionary\": self.dictionary,\n \"is_forward_lm\": self.is_forward_lm,\n \"hidden_size\": self.hidden_size,\n \"nlayers\": self.nlayers,\n \"embedding_size\": self.embedding_size,\n \"nout\": self.nout,\n \"dropout\": self.dropout,\n \"optimizer_state_dict\": optimizer.state_dict(),\n \"epoch\": epoch,\n \"split\": split,\n \"loss\": loss,\n }\n\n torch.save(model_state, str(file), pickle_protocol=4)\n\n def save(self, file: Path):\n model_state = {\n \"state_dict\": self.state_dict(),\n 
\"dictionary\": self.dictionary,\n \"is_forward_lm\": self.is_forward_lm,\n \"hidden_size\": self.hidden_size,\n \"nlayers\": self.nlayers,\n \"embedding_size\": self.embedding_size,\n \"nout\": self.nout,\n \"dropout\": self.dropout,\n }\n\n torch.save(model_state, str(file), pickle_protocol=4)\n\n def generate_text(\n self,\n prefix: str = \"\\n\",\n number_of_characters: int = 1000,\n temperature: float = 1.0,\n break_on_suffix=None,\n ) -> Tuple[str, float]:\n\n if prefix == \"\":\n prefix = \"\\n\"\n\n with torch.no_grad():\n characters = []\n\n idx2item = self.dictionary.idx2item\n\n # initial hidden state\n hidden = self.init_hidden(1)\n\n if len(prefix) > 1:\n\n char_tensors = []\n for character in prefix[:-1]:\n char_tensors.append(\n torch.tensor(self.dictionary.get_idx_for_item(character))\n .unsqueeze(0)\n .unsqueeze(0)\n )\n\n input = torch.cat(char_tensors)\n if torch.cuda.is_available():\n input = input.cuda()\n\n prediction, _, hidden = self.forward(input, hidden)\n\n input = (\n torch.tensor(self.dictionary.get_idx_for_item(prefix[-1]))\n .unsqueeze(0)\n .unsqueeze(0)\n )\n\n log_prob = 0.0\n\n for i in range(number_of_characters):\n\n if torch.cuda.is_available():\n input = input.cuda()\n\n # get predicted weights\n prediction, _, hidden = self.forward(input, hidden)\n prediction = prediction.squeeze().detach()\n decoder_output = prediction\n\n # divide by temperature\n prediction = prediction.div(temperature)\n\n # to prevent overflow problem with small temperature values, substract largest value from all\n # this makes a vector in which the largest value is 0\n max = torch.max(prediction)\n prediction -= max\n\n # compute word weights with exponential function\n word_weights = prediction.exp().cpu()\n\n # try sampling multinomial distribution for next character\n try:\n word_idx = torch.multinomial(word_weights, 1)[0]\n except:\n word_idx = torch.tensor(0)\n\n # print(word_idx)\n prob = decoder_output[word_idx]\n log_prob += prob\n\n input = word_idx.detach().unsqueeze(0).unsqueeze(0)\n word = idx2item[word_idx].decode(\"UTF-8\")\n characters.append(word)\n\n if break_on_suffix is not None:\n if \"\".join(characters).endswith(break_on_suffix):\n break\n\n text = prefix + \"\".join(characters)\n\n log_prob = log_prob.item()\n log_prob /= len(characters)\n\n if not self.is_forward_lm:\n text = text[::-1]\n\n return text, log_prob\n\n def calculate_perplexity(self, text: str) -> float:\n\n if not self.is_forward_lm:\n text = text[::-1]\n\n # input ids\n input = torch.tensor(\n [self.dictionary.get_idx_for_item(char) for char in text[:-1]]\n ).unsqueeze(1)\n input = input.to(flair.device)\n\n # push list of character IDs through model\n hidden = self.init_hidden(1)\n prediction, _, hidden = self.forward(input, hidden)\n\n # the target is always the next character\n targets = torch.tensor(\n [self.dictionary.get_idx_for_item(char) for char in text[1:]]\n )\n targets = targets.to(flair.device)\n\n # use cross entropy loss to compare output of forward pass with targets\n cross_entroy_loss = torch.nn.CrossEntropyLoss()\n loss = cross_entroy_loss(\n prediction.view(-1, len(self.dictionary)), targets\n ).item()\n\n # exponentiate cross-entropy loss to calculate perplexity\n perplexity = math.exp(loss)\n\n return perplexity\n", "path": "flair/models/language_model.py"}]} | 4,061 | 180 |
gh_patches_debug_60522 | rasdani/github-patches | git_diff | streamlit__streamlit-7257 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Changing start_time of st.video() doesn't work for the same video
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Changing start_time of st.video() doesn't work for the same video.
### Reproducible Code Example
```Python
import streamlit as st
timestamp = st.text_input('timestamp', '6')
st.video('local video path', start_time=int(timestamp))
```
### Steps To Reproduce
1. Replace 'local video path' with your own video path in the provided code, and run the code
2. Type a different timestamp in the text input box
3. The video timestamp doesn't change
### Expected Behavior
The timestamp should change as start_time changes.
### Current Behavior
The video timestamp doesn't change. It always shows the initial timestamp. However, if you change the video to a different one in the source code and rerun the app, the timestamp will change correctly.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.25.0
- Python version: Python 3.10.11
- Operating System: Windows 11 Home 22H2
- Browser: Microsoft Edge Version 115.0.1901.188 (Official build) (64-bit)
### Additional Information
_No response_
</issue>
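
A minimal sketch of the widget-driven reproduction, mirroring the e2e check added in the accompanying diff (the remote sample clip stands in for the local video path, and the widget value is cast to `int` before being passed to `start_time`):

```python
import streamlit as st

url = "https://www.w3schools.com/html/mov_bbb.mp4"   # sample clip in place of a local path
timestamp = st.number_input("Start Time (in seconds)", min_value=0, value=6)
st.video(url, start_time=int(timestamp))
```
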
<code>
[start of e2e/scripts/st_video.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import requests
16
17 import streamlit as st
18
19 url = "https://www.w3schools.com/html/mov_bbb.mp4"
20 file = requests.get(url).content
21 st.video(file)
22
[end of e2e/scripts/st_video.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/e2e/scripts/st_video.py b/e2e/scripts/st_video.py
--- a/e2e/scripts/st_video.py
+++ b/e2e/scripts/st_video.py
@@ -19,3 +19,7 @@
url = "https://www.w3schools.com/html/mov_bbb.mp4"
file = requests.get(url).content
st.video(file)
+
+# Test start time with widget
+timestamp = st.number_input("Start Time (in seconds)", min_value=0, value=6)
+st.video(url, start_time=int(timestamp))
| {"golden_diff": "diff --git a/e2e/scripts/st_video.py b/e2e/scripts/st_video.py\n--- a/e2e/scripts/st_video.py\n+++ b/e2e/scripts/st_video.py\n@@ -19,3 +19,7 @@\n url = \"https://www.w3schools.com/html/mov_bbb.mp4\"\n file = requests.get(url).content\n st.video(file)\n+\n+# Test start time with widget\n+timestamp = st.number_input(\"Start Time (in seconds)\", min_value=0, value=6)\n+st.video(url, start_time=int(timestamp))\n", "issue": "Changing start_time of st.video() doesn't work for the same video\n### Checklist\r\n\r\n- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I have provided sufficient information below to help reproduce this issue.\r\n\r\n### Summary\r\n\r\nChanging start_time of st.video() doesn't work for the same video.\r\n\r\n### Reproducible Code Example\r\n\r\n```Python\r\nimport streamlit as st\r\n\r\ntimestamp = st.text_input('timestamp', '6')\r\nst.video('local video path', start_time=int(timestamp))\r\n```\r\n\r\n\r\n### Steps To Reproduce\r\n\r\n1. Replace 'local video path' with your own video path in the provided code, and run the code\r\n2. Type different timestamp in the text input box\r\n3. The video timestamp doesn't change\r\n\r\n### Expected Behavior\r\n\r\nThe timestamp should change as start_time changes.\r\n\r\n### Current Behavior\r\n\r\nThe video timestamp doesn't change. It always shows the initial timestamp. However, if you change the video to a different one in the source code and rerun the app, the timestamp will change correctly.\r\n\r\n### Is this a regression?\r\n\r\n- [ ] Yes, this used to work in a previous version.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 1.25.0\r\n- Python version: Python 3.10.11\r\n- Operating System: Windows 11 Home 22H2\r\n- Browser: Microsoft Edge Version 115.0.1901.188 (Official build) (64-bit)\r\n\r\n\r\n### Additional Information\r\n\r\n_No response_\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport requests\n\nimport streamlit as st\n\nurl = \"https://www.w3schools.com/html/mov_bbb.mp4\"\nfile = requests.get(url).content\nst.video(file)\n", "path": "e2e/scripts/st_video.py"}]} | 1,111 | 122 |
gh_patches_debug_9299 | rasdani/github-patches | git_diff | certbot__certbot-4857 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Flesh out oldest tests
We should find the oldest versions of all our Python dependencies used in OS packages and add them to the [oldest tests](https://github.com/certbot/certbot/blob/master/tox.ini#L36) in Travis. This will prevent bugs like #3098 and #4040 from slipping into a release.
The two distros I'd check here are CentOS 7 and Debian 8.
</issue>
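
A hypothetical sketch of such an "oldest supported versions" tox environment; the environment name, command, and pins are placeholders, with the pins mirroring the minimums declared in `acme/setup.py` below (unpinned dependencies omitted):

```ini
# hypothetical tox.ini fragment; exact pins would come from CentOS 7 / Debian 8 packages
[testenv:py27-oldest]
deps =
    cryptography==0.8
    PyOpenSSL==0.13
    requests[security]==2.4.1
    setuptools==1.0
commands = nosetests
```
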
<code>
[start of acme/setup.py]
1 import sys
2
3 from setuptools import setup
4 from setuptools import find_packages
5
6
7 version = '0.16.0.dev0'
8
9 # Please update tox.ini when modifying dependency version requirements
10 install_requires = [
11 # load_pem_private/public_key (>=0.6)
12 # rsa_recover_prime_factors (>=0.8)
13 'cryptography>=0.8',
14 # Connection.set_tlsext_host_name (>=0.13)
15 'mock',
16 'PyOpenSSL>=0.13',
17 'pyrfc3339',
18 'pytz',
19 # requests>=2.10 is required to fix
20 # https://github.com/shazow/urllib3/issues/556. This requirement can be
21 # relaxed to 'requests[security]>=2.4.1', however, less useful errors
22 # will be raised for some network/SSL errors.
23 'requests[security]>=2.10',
24 # For pkg_resources. >=1.0 so pip resolves it to a version cryptography
25 # will tolerate; see #2599:
26 'setuptools>=1.0',
27 'six',
28 ]
29
30 # env markers cause problems with older pip and setuptools
31 if sys.version_info < (2, 7):
32 install_requires.extend([
33 'argparse',
34 'ordereddict',
35 ])
36
37 dev_extras = [
38 'nose',
39 'tox',
40 ]
41
42 docs_extras = [
43 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
44 'sphinx_rtd_theme',
45 ]
46
47
48 setup(
49 name='acme',
50 version=version,
51 description='ACME protocol implementation in Python',
52 url='https://github.com/letsencrypt/letsencrypt',
53 author="Certbot Project",
54 author_email='[email protected]',
55 license='Apache License 2.0',
56 classifiers=[
57 'Development Status :: 3 - Alpha',
58 'Intended Audience :: Developers',
59 'License :: OSI Approved :: Apache Software License',
60 'Programming Language :: Python',
61 'Programming Language :: Python :: 2',
62 'Programming Language :: Python :: 2.6',
63 'Programming Language :: Python :: 2.7',
64 'Programming Language :: Python :: 3',
65 'Programming Language :: Python :: 3.3',
66 'Programming Language :: Python :: 3.4',
67 'Programming Language :: Python :: 3.5',
68 'Programming Language :: Python :: 3.6',
69 'Topic :: Internet :: WWW/HTTP',
70 'Topic :: Security',
71 ],
72
73 packages=find_packages(),
74 include_package_data=True,
75 install_requires=install_requires,
76 extras_require={
77 'dev': dev_extras,
78 'docs': docs_extras,
79 },
80 entry_points={
81 'console_scripts': [
82 'jws = acme.jose.jws:CLI.run',
83 ],
84 },
85 test_suite='acme',
86 )
87
[end of acme/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/acme/setup.py b/acme/setup.py
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -16,11 +16,7 @@
'PyOpenSSL>=0.13',
'pyrfc3339',
'pytz',
- # requests>=2.10 is required to fix
- # https://github.com/shazow/urllib3/issues/556. This requirement can be
- # relaxed to 'requests[security]>=2.4.1', however, less useful errors
- # will be raised for some network/SSL errors.
- 'requests[security]>=2.10',
+ 'requests[security]>=2.4.1', # security extras added in 2.4.1
# For pkg_resources. >=1.0 so pip resolves it to a version cryptography
# will tolerate; see #2599:
'setuptools>=1.0',
| {"golden_diff": "diff --git a/acme/setup.py b/acme/setup.py\n--- a/acme/setup.py\n+++ b/acme/setup.py\n@@ -16,11 +16,7 @@\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n- # requests>=2.10 is required to fix\n- # https://github.com/shazow/urllib3/issues/556. This requirement can be\n- # relaxed to 'requests[security]>=2.4.1', however, less useful errors\n- # will be raised for some network/SSL errors.\n- 'requests[security]>=2.10',\n+ 'requests[security]>=2.4.1', # security extras added in 2.4.1\n # For pkg_resources. >=1.0 so pip resolves it to a version cryptography\n # will tolerate; see #2599:\n 'setuptools>=1.0',\n", "issue": "Flesh out oldest tests\nWe should find the oldest versions of all our Python dependencies used in OS packages and add them to the [oldest tests](https://github.com/certbot/certbot/blob/master/tox.ini#L36) in Travis. This will prevent bugs like #3098 and #4040 from slipping into a release.\r\n\r\nThe two distros I'd check here are CentOS 7 and Debian 8.\n", "before_files": [{"content": "import sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.16.0.dev0'\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n # load_pem_private/public_key (>=0.6)\n # rsa_recover_prime_factors (>=0.8)\n 'cryptography>=0.8',\n # Connection.set_tlsext_host_name (>=0.13)\n 'mock',\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n # requests>=2.10 is required to fix\n # https://github.com/shazow/urllib3/issues/556. This requirement can be\n # relaxed to 'requests[security]>=2.4.1', however, less useful errors\n # will be raised for some network/SSL errors.\n 'requests[security]>=2.10',\n # For pkg_resources. >=1.0 so pip resolves it to a version cryptography\n # will tolerate; see #2599:\n 'setuptools>=1.0',\n 'six',\n]\n\n# env markers cause problems with older pip and setuptools\nif sys.version_info < (2, 7):\n install_requires.extend([\n 'argparse',\n 'ordereddict',\n ])\n\ndev_extras = [\n 'nose',\n 'tox',\n]\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n]\n\n\nsetup(\n name='acme',\n version=version,\n description='ACME protocol implementation in Python',\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n entry_points={\n 'console_scripts': [\n 'jws = acme.jose.jws:CLI.run',\n ],\n },\n test_suite='acme',\n)\n", "path": "acme/setup.py"}]} | 1,438 | 219 |
gh_patches_debug_27784 | rasdani/github-patches | git_diff | searx__searx-1186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bing Video search engine doesn't work
Hello,
Yesterday I set up my own instance of searx. In doing so I discovered a problem with the Bing Video search engine. This is the error message shown (the first line is German for "The following search engines cannot retrieve results:"):
```
Die folgenden Suchmaschinen können die Ergebnisse nicht empfangen:
bing videos (unexpected crash: list index out of range)
```
</issue>
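
The "list index out of range" crash is consistent with indexing `[0]` into an empty XPath result in the parser shown below, which happens when Bing changes its markup. A minimal sketch of a defensive guard around such a lookup, inside the result loop (the XPath mirrors the existing parser and is illustrative):

```python
# skip a result instead of raising IndexError when the expected node is missing
thumb_nodes = result.xpath('.//div[@class="vthumb"]/img/@src')
if not thumb_nodes:
    continue  # markup changed or thumbnail absent; drop this result
thumbnail = thumb_nodes[0]
```
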
<code>
[start of searx/engines/bing_videos.py]
1 """
2 Bing (Videos)
3
4 @website https://www.bing.com/videos
5 @provide-api yes (http://datamarket.azure.com/dataset/bing/search)
6
7 @using-api no
8 @results HTML
9 @stable no
10 @parse url, title, content, thumbnail
11 """
12
13 from json import loads
14 from lxml import html
15 from searx.engines.bing_images import _fetch_supported_languages, supported_languages_url, get_region_code
16 from searx.engines.xpath import extract_text
17 from searx.url_utils import urlencode
18
19
20 categories = ['videos']
21 paging = True
22 safesearch = True
23 time_range_support = True
24 number_of_results = 10
25 language_support = True
26
27 search_url = 'https://www.bing.com/videos/asyncv2?{query}&async=content&'\
28 'first={offset}&count={number_of_results}&CW=1366&CH=25&FORM=R5VR5'
29 time_range_string = '&qft=+filterui:videoage-lt{interval}'
30 time_range_dict = {'day': '1440',
31 'week': '10080',
32 'month': '43200',
33 'year': '525600'}
34
35 # safesearch definitions
36 safesearch_types = {2: 'STRICT',
37 1: 'DEMOTE',
38 0: 'OFF'}
39
40
41 # do search-request
42 def request(query, params):
43 offset = (params['pageno'] - 1) * 10 + 1
44
45 # safesearch cookie
46 params['cookies']['SRCHHPGUSR'] = \
47 'ADLT=' + safesearch_types.get(params['safesearch'], 'DEMOTE')
48
49 # language cookie
50 region = get_region_code(params['language'], lang_list=supported_languages)
51 params['cookies']['_EDGE_S'] = 'mkt=' + region + '&F=1'
52
53 # query and paging
54 params['url'] = search_url.format(query=urlencode({'q': query}),
55 offset=offset,
56 number_of_results=number_of_results)
57
58 # time range
59 if params['time_range'] in time_range_dict:
60 params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])
61
62 return params
63
64
65 # get response from search-request
66 def response(resp):
67 results = []
68
69 dom = html.fromstring(resp.text)
70
71 for result in dom.xpath('//div[@class="dg_u"]'):
72
73 # try to extract the url
74 url_container = result.xpath('.//div[@class="sa_wrapper"]/@data-eventpayload')
75 if len(url_container) > 0:
76 url = loads(url_container[0])['purl']
77 else:
78 url = result.xpath('./a/@href')[0]
79
80 # discard results that do not return an external url
81 # very recent results sometimes don't return the video's url
82 if url.startswith('/videos/search?'):
83 continue
84
85 title = extract_text(result.xpath('./a//div[@class="tl"]'))
86 content = extract_text(result.xpath('.//div[@class="pubInfo"]'))
87 thumbnail = result.xpath('.//div[@class="vthumb"]/img/@src')[0]
88
89 results.append({'url': url,
90 'title': title,
91 'content': content,
92 'thumbnail': thumbnail,
93 'template': 'videos.html'})
94
95 # first page ignores requested number of results
96 if len(results) >= number_of_results:
97 break
98
99 return results
100
[end of searx/engines/bing_videos.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/bing_videos.py b/searx/engines/bing_videos.py
--- a/searx/engines/bing_videos.py
+++ b/searx/engines/bing_videos.py
@@ -69,22 +69,11 @@
dom = html.fromstring(resp.text)
for result in dom.xpath('//div[@class="dg_u"]'):
-
- # try to extract the url
- url_container = result.xpath('.//div[@class="sa_wrapper"]/@data-eventpayload')
- if len(url_container) > 0:
- url = loads(url_container[0])['purl']
- else:
- url = result.xpath('./a/@href')[0]
-
- # discard results that do not return an external url
- # very recent results sometimes don't return the video's url
- if url.startswith('/videos/search?'):
- continue
-
- title = extract_text(result.xpath('./a//div[@class="tl"]'))
- content = extract_text(result.xpath('.//div[@class="pubInfo"]'))
- thumbnail = result.xpath('.//div[@class="vthumb"]/img/@src')[0]
+ url = result.xpath('./div[@class="mc_vtvc"]/a/@href')[0]
+ url = 'https://bing.com' + url
+ title = extract_text(result.xpath('./div/a/div/div[@class="mc_vtvc_title"]/@title'))
+ content = extract_text(result.xpath('./div/a/div/div/div/div/text()'))
+ thumbnail = result.xpath('./div/a/div/div/img/@src')[0]
results.append({'url': url,
'title': title,
@@ -92,7 +81,6 @@
'thumbnail': thumbnail,
'template': 'videos.html'})
- # first page ignores requested number of results
if len(results) >= number_of_results:
break
| {"golden_diff": "diff --git a/searx/engines/bing_videos.py b/searx/engines/bing_videos.py\n--- a/searx/engines/bing_videos.py\n+++ b/searx/engines/bing_videos.py\n@@ -69,22 +69,11 @@\n dom = html.fromstring(resp.text)\n \n for result in dom.xpath('//div[@class=\"dg_u\"]'):\n-\n- # try to extract the url\n- url_container = result.xpath('.//div[@class=\"sa_wrapper\"]/@data-eventpayload')\n- if len(url_container) > 0:\n- url = loads(url_container[0])['purl']\n- else:\n- url = result.xpath('./a/@href')[0]\n-\n- # discard results that do not return an external url\n- # very recent results sometimes don't return the video's url\n- if url.startswith('/videos/search?'):\n- continue\n-\n- title = extract_text(result.xpath('./a//div[@class=\"tl\"]'))\n- content = extract_text(result.xpath('.//div[@class=\"pubInfo\"]'))\n- thumbnail = result.xpath('.//div[@class=\"vthumb\"]/img/@src')[0]\n+ url = result.xpath('./div[@class=\"mc_vtvc\"]/a/@href')[0]\n+ url = 'https://bing.com' + url\n+ title = extract_text(result.xpath('./div/a/div/div[@class=\"mc_vtvc_title\"]/@title'))\n+ content = extract_text(result.xpath('./div/a/div/div/div/div/text()'))\n+ thumbnail = result.xpath('./div/a/div/div/img/@src')[0]\n \n results.append({'url': url,\n 'title': title,\n@@ -92,7 +81,6 @@\n 'thumbnail': thumbnail,\n 'template': 'videos.html'})\n \n- # first page ignores requested number of results\n if len(results) >= number_of_results:\n break\n", "issue": "Bing Video search engine doesn't work\nHallo,\r\n\r\nyesterday I've set up my own instance of searx. Thereby I discovered A problem with the Bing Video search engine. This is the shown error message:\r\n\r\n```\r\nDie folgenden Suchmaschinen k\u00f6nnen die Ergebnisse nicht empfangen:\r\nbing videos (unexpected crash: list index out of range)\r\n```\n", "before_files": [{"content": "\"\"\"\n Bing (Videos)\n\n @website https://www.bing.com/videos\n @provide-api yes (http://datamarket.azure.com/dataset/bing/search)\n\n @using-api no\n @results HTML\n @stable no\n @parse url, title, content, thumbnail\n\"\"\"\n\nfrom json import loads\nfrom lxml import html\nfrom searx.engines.bing_images import _fetch_supported_languages, supported_languages_url, get_region_code\nfrom searx.engines.xpath import extract_text\nfrom searx.url_utils import urlencode\n\n\ncategories = ['videos']\npaging = True\nsafesearch = True\ntime_range_support = True\nnumber_of_results = 10\nlanguage_support = True\n\nsearch_url = 'https://www.bing.com/videos/asyncv2?{query}&async=content&'\\\n 'first={offset}&count={number_of_results}&CW=1366&CH=25&FORM=R5VR5'\ntime_range_string = '&qft=+filterui:videoage-lt{interval}'\ntime_range_dict = {'day': '1440',\n 'week': '10080',\n 'month': '43200',\n 'year': '525600'}\n\n# safesearch definitions\nsafesearch_types = {2: 'STRICT',\n 1: 'DEMOTE',\n 0: 'OFF'}\n\n\n# do search-request\ndef request(query, params):\n offset = (params['pageno'] - 1) * 10 + 1\n\n # safesearch cookie\n params['cookies']['SRCHHPGUSR'] = \\\n 'ADLT=' + safesearch_types.get(params['safesearch'], 'DEMOTE')\n\n # language cookie\n region = get_region_code(params['language'], lang_list=supported_languages)\n params['cookies']['_EDGE_S'] = 'mkt=' + region + '&F=1'\n\n # query and paging\n params['url'] = search_url.format(query=urlencode({'q': query}),\n offset=offset,\n number_of_results=number_of_results)\n\n # time range\n if params['time_range'] in time_range_dict:\n params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])\n\n return params\n\n\n# get 
response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.text)\n\n for result in dom.xpath('//div[@class=\"dg_u\"]'):\n\n # try to extract the url\n url_container = result.xpath('.//div[@class=\"sa_wrapper\"]/@data-eventpayload')\n if len(url_container) > 0:\n url = loads(url_container[0])['purl']\n else:\n url = result.xpath('./a/@href')[0]\n\n # discard results that do not return an external url\n # very recent results sometimes don't return the video's url\n if url.startswith('/videos/search?'):\n continue\n\n title = extract_text(result.xpath('./a//div[@class=\"tl\"]'))\n content = extract_text(result.xpath('.//div[@class=\"pubInfo\"]'))\n thumbnail = result.xpath('.//div[@class=\"vthumb\"]/img/@src')[0]\n\n results.append({'url': url,\n 'title': title,\n 'content': content,\n 'thumbnail': thumbnail,\n 'template': 'videos.html'})\n\n # first page ignores requested number of results\n if len(results) >= number_of_results:\n break\n\n return results\n", "path": "searx/engines/bing_videos.py"}]} | 1,601 | 426 |
gh_patches_debug_35308 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-1334 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AZ Legislator with the following id has an invalid phone number AZL000372
State: AZ (be sure to include in ticket title)
This repository is for issues with state data; for feature requests, etc., please visit the contributor guide (see above message) to file the issue in the correct place.
</issue>
<code>
[start of openstates/az/legislators.py]
1 from billy.scrape import NoDataForPeriod
2 from billy.scrape.legislators import LegislatorScraper, Legislator
3 from lxml import html
4
5 import re, datetime
6
7 class AZLegislatorScraper(LegislatorScraper):
8 jurisdiction = 'az'
9 parties = {
10 'R': 'Republican',
11 'D': 'Democratic',
12 'L': 'Libertarian',
13 'I': 'Independent',
14 'G': 'Green'
15 }
16
17 def get_party(self, abbr):
18 return self.parties[abbr]
19
20 def scrape(self, chamber, term):
21 # TODO: old AZ scraper allowed old sessions, they seem to be gone?
22 self.validate_term(term, latest_only=True)
23
24 body = {'lower': 'H', 'upper': 'S'}[chamber]
25 url = 'http://www.azleg.gov/MemberRoster/?body=' + body
26 page = self.get(url).text
27
28 # there is a bad comment closing tag on this page
29 page = page.replace('--!>', '-->')
30
31 root = html.fromstring(page)
32
33 path = '//table//tr'
34 roster = root.xpath(path)[1:]
35 for row in roster:
36 position = ''
37 name, district, party, email, room, phone, = row.xpath('td')
38
39 if email.attrib.get('class') == 'vacantmember':
40 continue # Skip any vacant members.
41
42 link = name.xpath('string(a/@href)')
43 if len(name) == 1:
44 name = name.text_content().strip()
45 else:
46 position = name.tail.strip()
47 name = name[0].text_content().strip()
48 if '--' in name:
49 name = name.split('--')[0].strip()
50
51 linkpage = self.get(link).text
52 linkpage = linkpage.replace('--!>', '-->')
53 linkroot = html.fromstring(linkpage)
54 linkroot.make_links_absolute(link)
55
56 photos = linkroot.xpath("//img[contains(@src, 'MemberPhoto')]")
57
58 if len(photos) != 1:
59 self.warning('no photo on ' + link)
60 photo_url = ''
61 else:
62 photo_url = photos[0].attrib['src']
63
64 district = district.text_content()
65 party = party.text_content().strip()
66 email = email.text_content().strip()
67
68 if email.startswith('Email: '):
69 email = email.replace('Email: ', '').lower() + '@azleg.gov'
70 else:
71 email = ''
72
73 party = self.get_party(party)
74 room = room.text_content().strip()
75 if chamber == 'lower':
76 address = "House of Representatives\n"
77 else:
78 address = "Senate\n"
79 address = address + "1700 West Washington\n Room " + room \
80 + "\nPhoenix, AZ 85007"
81
82 phone = phone.text_content().strip()
83 if not phone.startswith('602'):
84 phone = "602-" + phone
85
86 leg = Legislator(term, chamber, district, full_name=name,
87 party=party, url=link,
88 photo_url=photo_url)
89
90 leg.add_office('capitol', 'Capitol Office', address=address,
91 phone=phone, email=email)
92
93 if position:
94 leg.add_role( position, term, chamber=chamber,
95 district=district, party=party)
96
97 leg.add_source(url)
98
99 #Probably just get this from the committee scraper
100 #self.scrape_member_page(link, session, chamber, leg)
101 self.save_legislator(leg)
102
103 def scrape_member_page(self, url, session, chamber, leg):
104 html = self.get(url).text
105 root = html.fromstring(html)
106 #get the committee membership
107 c = root.xpath('//td/div/strong[contains(text(), "Committee")]')
108 for row in c.xpath('ancestor::table[1]')[1:]:
109 name = row[0].text_content().strip()
110 role = row[1].text_content().strip()
111 leg.add_role(role, session, chamber=chamber, committee=name)
112
113 leg.add_source(url)
114
[end of openstates/az/legislators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openstates/az/legislators.py b/openstates/az/legislators.py
--- a/openstates/az/legislators.py
+++ b/openstates/az/legislators.py
@@ -1,8 +1,7 @@
-from billy.scrape import NoDataForPeriod
from billy.scrape.legislators import LegislatorScraper, Legislator
from lxml import html
+import re
-import re, datetime
class AZLegislatorScraper(LegislatorScraper):
jurisdiction = 'az'
@@ -80,30 +79,30 @@
+ "\nPhoenix, AZ 85007"
phone = phone.text_content().strip()
- if not phone.startswith('602'):
+ if '602' not in re.findall(r'(\d+)', phone):
phone = "602-" + phone
leg = Legislator(term, chamber, district, full_name=name,
- party=party, url=link,
- photo_url=photo_url)
+ party=party, url=link,
+ photo_url=photo_url)
leg.add_office('capitol', 'Capitol Office', address=address,
phone=phone, email=email)
if position:
- leg.add_role( position, term, chamber=chamber,
+ leg.add_role(position, term, chamber=chamber,
district=district, party=party)
leg.add_source(url)
- #Probably just get this from the committee scraper
- #self.scrape_member_page(link, session, chamber, leg)
+ # Probably just get this from the committee scraper
+ # self.scrape_member_page(link, session, chamber, leg)
self.save_legislator(leg)
def scrape_member_page(self, url, session, chamber, leg):
html = self.get(url).text
root = html.fromstring(html)
- #get the committee membership
+ # get the committee membership
c = root.xpath('//td/div/strong[contains(text(), "Committee")]')
for row in c.xpath('ancestor::table[1]')[1:]:
name = row[0].text_content().strip()
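
A quick illustration of why the area-code check changes from `startswith` to a digit search (the sample number is hypothetical):

```python
>>> import re
>>> phone = "(602) 926-5874"                 # hypothetical roster formatting
>>> phone.startswith('602')                  # old check: would prepend another "602-"
False
>>> '602' in re.findall(r'(\d+)', phone)     # new check: sees the existing area code
True
```
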
| {"golden_diff": "diff --git a/openstates/az/legislators.py b/openstates/az/legislators.py\n--- a/openstates/az/legislators.py\n+++ b/openstates/az/legislators.py\n@@ -1,8 +1,7 @@\n-from billy.scrape import NoDataForPeriod\n from billy.scrape.legislators import LegislatorScraper, Legislator\n from lxml import html\n+import re\n \n-import re, datetime\n \n class AZLegislatorScraper(LegislatorScraper):\n jurisdiction = 'az'\n@@ -80,30 +79,30 @@\n + \"\\nPhoenix, AZ 85007\"\n \n phone = phone.text_content().strip()\n- if not phone.startswith('602'):\n+ if '602' not in re.findall(r'(\\d+)', phone):\n phone = \"602-\" + phone\n \n leg = Legislator(term, chamber, district, full_name=name,\n- party=party, url=link,\n- photo_url=photo_url)\n+ party=party, url=link,\n+ photo_url=photo_url)\n \n leg.add_office('capitol', 'Capitol Office', address=address,\n phone=phone, email=email)\n \n if position:\n- leg.add_role( position, term, chamber=chamber,\n+ leg.add_role(position, term, chamber=chamber,\n district=district, party=party)\n \n leg.add_source(url)\n \n- #Probably just get this from the committee scraper\n- #self.scrape_member_page(link, session, chamber, leg)\n+ # Probably just get this from the committee scraper\n+ # self.scrape_member_page(link, session, chamber, leg)\n self.save_legislator(leg)\n \n def scrape_member_page(self, url, session, chamber, leg):\n html = self.get(url).text\n root = html.fromstring(html)\n- #get the committee membership\n+ # get the committee membership\n c = root.xpath('//td/div/strong[contains(text(), \"Committee\")]')\n for row in c.xpath('ancestor::table[1]')[1:]:\n name = row[0].text_content().strip()\n", "issue": "AZ Legislator with the following id has an invalid phone number AZL000372\nState: AZ (be sure to include in ticket title)\r\n\r\nThis repository is for issues with state data, for feature requests, etc.\r\nplease visit the contributor guide (see above message) to file the issue in the correct place.\r\n\n", "before_files": [{"content": "from billy.scrape import NoDataForPeriod\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\nfrom lxml import html\n\nimport re, datetime\n\nclass AZLegislatorScraper(LegislatorScraper):\n jurisdiction = 'az'\n parties = {\n 'R': 'Republican',\n 'D': 'Democratic',\n 'L': 'Libertarian',\n 'I': 'Independent',\n 'G': 'Green'\n }\n\n def get_party(self, abbr):\n return self.parties[abbr]\n\n def scrape(self, chamber, term):\n # TODO: old AZ scraper allowed old sessions, they seem to be gone?\n self.validate_term(term, latest_only=True)\n\n body = {'lower': 'H', 'upper': 'S'}[chamber]\n url = 'http://www.azleg.gov/MemberRoster/?body=' + body\n page = self.get(url).text\n\n # there is a bad comment closing tag on this page\n page = page.replace('--!>', '-->')\n\n root = html.fromstring(page)\n\n path = '//table//tr'\n roster = root.xpath(path)[1:]\n for row in roster:\n position = ''\n name, district, party, email, room, phone, = row.xpath('td')\n\n if email.attrib.get('class') == 'vacantmember':\n continue # Skip any vacant members.\n\n link = name.xpath('string(a/@href)')\n if len(name) == 1:\n name = name.text_content().strip()\n else:\n position = name.tail.strip()\n name = name[0].text_content().strip()\n if '--' in name:\n name = name.split('--')[0].strip()\n\n linkpage = self.get(link).text\n linkpage = linkpage.replace('--!>', '-->')\n linkroot = html.fromstring(linkpage)\n linkroot.make_links_absolute(link)\n\n photos = linkroot.xpath(\"//img[contains(@src, 'MemberPhoto')]\")\n\n if 
len(photos) != 1:\n self.warning('no photo on ' + link)\n photo_url = ''\n else:\n photo_url = photos[0].attrib['src']\n\n district = district.text_content()\n party = party.text_content().strip()\n email = email.text_content().strip()\n\n if email.startswith('Email: '):\n email = email.replace('Email: ', '').lower() + '@azleg.gov'\n else:\n email = ''\n\n party = self.get_party(party)\n room = room.text_content().strip()\n if chamber == 'lower':\n address = \"House of Representatives\\n\"\n else:\n address = \"Senate\\n\"\n address = address + \"1700 West Washington\\n Room \" + room \\\n + \"\\nPhoenix, AZ 85007\"\n\n phone = phone.text_content().strip()\n if not phone.startswith('602'):\n phone = \"602-\" + phone\n\n leg = Legislator(term, chamber, district, full_name=name,\n party=party, url=link,\n photo_url=photo_url)\n\n leg.add_office('capitol', 'Capitol Office', address=address,\n phone=phone, email=email)\n\n if position:\n leg.add_role( position, term, chamber=chamber,\n district=district, party=party)\n\n leg.add_source(url)\n\n #Probably just get this from the committee scraper\n #self.scrape_member_page(link, session, chamber, leg)\n self.save_legislator(leg)\n\n def scrape_member_page(self, url, session, chamber, leg):\n html = self.get(url).text\n root = html.fromstring(html)\n #get the committee membership\n c = root.xpath('//td/div/strong[contains(text(), \"Committee\")]')\n for row in c.xpath('ancestor::table[1]')[1:]:\n name = row[0].text_content().strip()\n role = row[1].text_content().strip()\n leg.add_role(role, session, chamber=chamber, committee=name)\n\n leg.add_source(url)\n", "path": "openstates/az/legislators.py"}]} | 1,754 | 491 |
gh_patches_debug_3289 | rasdani/github-patches | git_diff | mne-tools__mne-bids-272 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JOSS publication
At the MNE-Sprint in Paris, @teonbrooks @jasmainak and I discussed about writing a short report on MNE-BIDS and publishing it in [JOSS](https://joss.theoj.org/about).
JOSS articles generally provide a very high level description of the software and its relevance:
> Your submission should probably be somewhere between 250-1000 words.
It would allow us to properly point to MNE-BIDS in citations and get some scholarly recognition for our work.
I suggest that we take `pybids` as an example and create a [`/paper`](https://github.com/bids-standard/pybids/tree/master/paper) directory in our repository where we prepare the submission.
Publishing at JOSS would mean that mne-bids stays separate from mne-python instead of being integrated eventually. In a short discussion with @agramfort, we all approved of this idea, because it will allow us to stay with our lightweight and "independent" repository, while users can still benefit from mne-bids by using it as a simple "module" to MNE-Python.
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2 """Setup MNE-BIDS."""
3 import os
4 from setuptools import setup, find_packages
5
6 # get the version
7 version = None
8 with open(os.path.join('mne_bids', '__init__.py'), 'r') as fid:
9 for line in (line.strip() for line in fid):
10 if line.startswith('__version__'):
11 version = line.split('=')[1].strip().strip('\'')
12 break
13 if version is None:
14 raise RuntimeError('Could not determine version')
15
16
17 descr = ('An MNE project for organizing and formatting MEG and EEG data '
18 'according to the BIDS specification.')
19
20 DISTNAME = 'mne-bids'
21 DESCRIPTION = descr
22 MAINTAINER = 'Mainak Jas'
23 MAINTAINER_EMAIL = '[email protected]'
24 URL = 'https://mne.tools/mne-bids/'
25 LICENSE = 'BSD (3-clause)'
26 DOWNLOAD_URL = 'https://github.com/mne-tools/mne-bids.git'
27 VERSION = version
28
29 if __name__ == "__main__":
30 setup(name=DISTNAME,
31 maintainer=MAINTAINER,
32 maintainer_email=MAINTAINER_EMAIL,
33 description=DESCRIPTION,
34 license=LICENSE,
35 url=URL,
36 version=VERSION,
37 download_url=DOWNLOAD_URL,
38 long_description=open('README.rst').read(),
39 long_description_content_type='text/x-rst',
40 classifiers=[
41 'Intended Audience :: Science/Research',
42 'Intended Audience :: Developers',
43 'License :: OSI Approved',
44 'Programming Language :: Python',
45 'Topic :: Software Development',
46 'Topic :: Scientific/Engineering',
47 'Operating System :: Microsoft :: Windows',
48 'Operating System :: POSIX',
49 'Operating System :: Unix',
50 'Operating System :: MacOS',
51 ],
52 platforms='any',
53 packages=find_packages(),
54 scripts=['bin/mne_bids'],
55 project_urls={
56 'Documentation': 'https://mne.tools/mne-bids',
57 'Bug Reports': 'https://github.com/mne-tools/mne-bids/issues',
58 'Source': 'https://github.com/mne-tools/mne-bids',
59 },
60 )
61
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,8 +14,8 @@
raise RuntimeError('Could not determine version')
-descr = ('An MNE project for organizing and formatting MEG and EEG data '
- 'according to the BIDS specification.')
+descr = ('MNE-BIDS: Organizing MEG, EEG, and iEEG data according to the BIDS '
+ 'specification and facilitating their analysis with MNE-Python')
DISTNAME = 'mne-bids'
DESCRIPTION = descr
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,8 +14,8 @@\n raise RuntimeError('Could not determine version')\n \n \n-descr = ('An MNE project for organizing and formatting MEG and EEG data '\n- 'according to the BIDS specification.')\n+descr = ('MNE-BIDS: Organizing MEG, EEG, and iEEG data according to the BIDS '\n+ 'specification and facilitating their analysis with MNE-Python')\n \n DISTNAME = 'mne-bids'\n DESCRIPTION = descr\n", "issue": "JOSS publication\nAt the MNE-Sprint in Paris, @teonbrooks @jasmainak and I discussed about writing a short report on MNE-BIDS and publishing it in [JOSS](https://joss.theoj.org/about).\r\n\r\nJOSS articles generally provide a very high level description of the software and its relevance:\r\n\r\n> Your submission should probably be somewhere between 250-1000 words.\r\n\r\nIt would allow us to properly point to MNE-BIDS in citations and get some scholarly recognition for our work.\r\n\r\nI suggest that we take `pybids` as an example and create a [`/paper`](https://github.com/bids-standard/pybids/tree/master/paper) directory in our repository where we prepare the submission.\r\n\r\nPublishing at JOSS would mean that mne-bids stays separate from mne-python instead of being integrated eventually. In a short discussion with @agramfort, we all approved of this idea, because it will allow us to stay with our lightweight and \"independent\" repository, while users can still benefit from mne-bids by using it as a simple \"module\" to MNE-Python.\r\n\r\n\n", "before_files": [{"content": "#! /usr/bin/env python\n\"\"\"Setup MNE-BIDS.\"\"\"\nimport os\nfrom setuptools import setup, find_packages\n\n# get the version\nversion = None\nwith open(os.path.join('mne_bids', '__init__.py'), 'r') as fid:\n for line in (line.strip() for line in fid):\n if line.startswith('__version__'):\n version = line.split('=')[1].strip().strip('\\'')\n break\nif version is None:\n raise RuntimeError('Could not determine version')\n\n\ndescr = ('An MNE project for organizing and formatting MEG and EEG data '\n 'according to the BIDS specification.')\n\nDISTNAME = 'mne-bids'\nDESCRIPTION = descr\nMAINTAINER = 'Mainak Jas'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'https://mne.tools/mne-bids/'\nLICENSE = 'BSD (3-clause)'\nDOWNLOAD_URL = 'https://github.com/mne-tools/mne-bids.git'\nVERSION = version\n\nif __name__ == \"__main__\":\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=open('README.rst').read(),\n long_description_content_type='text/x-rst',\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n ],\n platforms='any',\n packages=find_packages(),\n scripts=['bin/mne_bids'],\n project_urls={\n 'Documentation': 'https://mne.tools/mne-bids',\n 'Bug Reports': 'https://github.com/mne-tools/mne-bids/issues',\n 'Source': 'https://github.com/mne-tools/mne-bids',\n },\n )\n", "path": "setup.py"}]} | 1,343 | 126 |
gh_patches_debug_19572 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-1065 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support mask_zero argument in ElasticDL embedding layer
Support `mask_zero` argument in `elasticdl.Embedding`.
`mask_zero` is an argument in keras Embedding layer and it is used for inputs with different `input_length` in one minibatch. More details can be found in [keras Embedding doc](https://keras.io/layers/embeddings/).
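For context, a minimal sketch of the mask computation this implies (assuming the same semantics as `tf.keras.layers.Embedding`, where id 0 marks padding; the standalone function name is illustrative, not the final API):

``` python
import tensorflow as tf

def compute_mask(inputs, mask_zero=True):
    # Keras-style semantics: id 0 is padding, so the mask is True only for real tokens.
    if not mask_zero:
        return None
    return tf.math.not_equal(inputs, 0)

print(compute_mask(tf.constant([[3, 7, 0, 0], [5, 0, 0, 0]])))
# -> [[ True  True False False]
#     [ True False False False]]
```
Downstream layers that support masking can then consume this mask to skip the padded positions.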
</issue>
<code>
[start of elasticdl/python/elasticdl/layers/embedding.py]
1 import tensorflow as tf
2 from tensorflow.python.keras.utils import tf_utils
3
4
5 class Embedding(tf.keras.layers.Layer):
6 """
7 Input: indexes for the embedding entries with a shape of
8 (batch_size, input_length). Input can be either dense tensor
9 or SparseTensor.
10 Output:
11 corresponding (combined) embeddings with a shape of
12 (batch_size, input_length, output_dim) if combiner is None
13 (batch_size, output_dim) if combiner is not None
14 Arguments:
15 output_dim: the dimension of the embedding vector
16 embedding_initializer: Initializer for embedding table
17 mask_zero: Whether or not the input value 0 is a special "padding"
18 value that should be masked out.
19 If input is SparseTensor, mask_zero must be False.
20 input_length: Length of input sequences, when it is constant.
21 This argument is required if you are going to connect
22 `Flatten` then `Dense` layers upstream
23 (without it, the shape of the dense outputs cannot be computed).
24 combiner: A string specifying the reduction op or None if not used.
25 "mean", "sqrtn" and "sum" are supported for the reduction op.
26 If input is SparseTensor, combiner must set as a reduction op.
27 """
28
29 def __init__(
30 self,
31 output_dim,
32 embedding_initializer="uniform",
33 mask_zero=False,
34 input_length=None,
35 combiner=None,
36 **kwargs
37 ):
38 if "input_shape" not in kwargs and input_length:
39 kwargs["input_shape"] = (input_length,)
40 super(Embedding, self).__init__(**kwargs)
41
42 self.output_dim = output_dim
43 self.embedding_initializer = embedding_initializer
44 # TODO: support mask_zero
45 self.supports_masking = mask_zero
46 self.input_length = input_length
47 self.combiner = combiner
48 self.tape = None
49 self.worker = None
50 self.bet_ids_pair = []
51
52 @tf_utils.shape_type_conversion
53 def compute_output_shape(self, input_shape):
54 # this function is taken from
55 # tf.keras.layers.Embedding.compute_output_shape
56 # https://github.com/tensorflow/tensorflow/blob/3f3c728bf80e0fd6653744318cbbfe1454c6ddca/tensorflow/python/keras/layers/embeddings.py#L156
57 if self.input_length is None:
58 return input_shape + (self.output_dim,)
59 else:
60 if isinstance(self.input_length, (list, tuple)):
61 in_lens = list(self.input_length)
62 else:
63 in_lens = [self.input_length]
64 if len(in_lens) != len(input_shape) - 1:
65 raise ValueError(
66 '"input_length" is %s, '
67 "but received input has shape %s"
68 % (str(self.input_length), str(input_shape))
69 )
70 else:
71 for i, (s1, s2) in enumerate(zip(in_lens, input_shape[1:])):
72 if s1 is not None and s2 is not None and s1 != s2:
73 raise ValueError(
74 '"input_length" is %s, '
75 "but received input has shape %s"
76 % (str(self.input_length), str(input_shape))
77 )
78 elif s1 is None:
79 in_lens[i] = s2
80 return (input_shape[0],) + tuple(in_lens) + (self.output_dim,)
81
82 @property
83 def name(self):
84 return self._name
85
86 @staticmethod
87 def get_key(name_list):
88 return "-".join(map(str, name_list))
89
90 def lookup_embedding(self, unique_ids):
91 batch_embedding = self.worker.embedding_lookup(
92 unique_ids, self._name, self.embedding_initializer
93 )
94 return batch_embedding
95
96 def call(self, input):
97 if isinstance(input, tf.SparseTensor):
98 return self._sparse_input_call(input)
99
100 ids = tf.convert_to_tensor(input, name="embedding_ids")
101 flat_ids = tf.reshape(ids, [-1])
102 unique_ids, idx = tf.unique(flat_ids)
103 batch_embedding_tensor = tf.py_function(
104 self.lookup_embedding, inp=[unique_ids], Tout=tf.float32
105 )
106 if self.tape:
107 # tape.watch works with eager mode only.
108 # Gradient for embeddings is SparseTensor here due to tf.gather op.
109 # tf.gather accesses tensor slices, resulting in sparse tensor
110 # gradient.
111 if not tf.executing_eagerly():
112 raise RuntimeError("tape.watch only works with eager mode")
113 self.tape.watch(batch_embedding_tensor)
114 self.bet_ids_pair.append((batch_embedding_tensor, unique_ids))
115 outputs = tf.gather(batch_embedding_tensor, idx)
116 outputs = tf.reshape(
117 outputs, ids.get_shape().concatenate(self.output_dim)
118 )
119 # TODO: support combiner for dense input
120 return outputs
121
122 def _sparse_input_call(self, sparse_input):
123 if self.combiner not in ["sum", "mean", "sqrtn"]:
124 raise ValueError(
125 "combiner must set sum, mean or sqrtn for sparse input"
126 )
127 unique_ids, idx = tf.unique(sparse_input.values)
128 embeddings = tf.py_function(
129 self.lookup_embedding, inp=[unique_ids], Tout=tf.float32
130 )
131 if self.tape:
132 # tape.watch works with eager mode only
133 # gradient for embeddings is dense tensor for sparse_input_call
134 if not tf.executing_eagerly():
135 raise RuntimeError("tape.watch only works with eager mode")
136 self.tape.watch(embeddings)
137 self.bet_ids_pair.append((embeddings, unique_ids))
138 segment_ids = sparse_input.indices[:, 0]
139 if segment_ids.dtype != tf.int32:
140 segment_ids = tf.cast(segment_ids, tf.int32)
141
142 if self.combiner == "sum":
143 embeddings = tf.sparse.segment_sum(embeddings, idx, segment_ids)
144 elif self.combiner == "mean":
145 embeddings = tf.sparse.segment_mean(embeddings, idx, segment_ids)
146 elif self.combiner == "sqrtn":
147 embeddings = tf.sparse.segment_sqrt_n(embeddings, idx, segment_ids)
148 return embeddings
149
150 def reset(self):
151 self.bet_ids_pair = []
152 self.tape = None
153
154 def set_tape(self, tape):
155 self.tape = tape
156
157 def set_worker(self, worker):
158 self.worker = worker
159
[end of elasticdl/python/elasticdl/layers/embedding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/python/elasticdl/layers/embedding.py b/elasticdl/python/elasticdl/layers/embedding.py
--- a/elasticdl/python/elasticdl/layers/embedding.py
+++ b/elasticdl/python/elasticdl/layers/embedding.py
@@ -41,7 +41,6 @@
self.output_dim = output_dim
self.embedding_initializer = embedding_initializer
- # TODO: support mask_zero
self.supports_masking = mask_zero
self.input_length = input_length
self.combiner = combiner
@@ -147,6 +146,13 @@
embeddings = tf.sparse.segment_sqrt_n(embeddings, idx, segment_ids)
return embeddings
+ def compute_mask(self, inputs, mask=None):
+ if isinstance(input, tf.SparseTensor):
+ raise ValueError("SparseTensor inputs do not support mask_zero")
+ if not self.supports_masking:
+ return None
+ return tf.math.not_equal(inputs, 0)
+
def reset(self):
self.bet_ids_pair = []
self.tape = None
| {"golden_diff": "diff --git a/elasticdl/python/elasticdl/layers/embedding.py b/elasticdl/python/elasticdl/layers/embedding.py\n--- a/elasticdl/python/elasticdl/layers/embedding.py\n+++ b/elasticdl/python/elasticdl/layers/embedding.py\n@@ -41,7 +41,6 @@\n \n self.output_dim = output_dim\n self.embedding_initializer = embedding_initializer\n- # TODO: support mask_zero\n self.supports_masking = mask_zero\n self.input_length = input_length\n self.combiner = combiner\n@@ -147,6 +146,13 @@\n embeddings = tf.sparse.segment_sqrt_n(embeddings, idx, segment_ids)\n return embeddings\n \n+ def compute_mask(self, inputs, mask=None):\n+ if isinstance(input, tf.SparseTensor):\n+ raise ValueError(\"SparseTensor inputs do not support mask_zero\")\n+ if not self.supports_masking:\n+ return None\n+ return tf.math.not_equal(inputs, 0)\n+\n def reset(self):\n self.bet_ids_pair = []\n self.tape = None\n", "issue": "Support mask_zero argument in ElasticDL embedding layer\nSupport `mask_zero` argument in `elasticdl.Embedding`.\r\n\r\n`mask_zero` is an argument in keras Embedding layer and it is used for inputs with different `input_length` in one minibatch. More details can be found in [keras Embedding doc](https://keras.io/layers/embeddings/).\n", "before_files": [{"content": "import tensorflow as tf\nfrom tensorflow.python.keras.utils import tf_utils\n\n\nclass Embedding(tf.keras.layers.Layer):\n \"\"\"\n Input: indexes for the embedding entries with a shape of\n (batch_size, input_length). Input can be either dense tensor\n or SparseTensor.\n Output:\n corresponding (combined) embeddings with a shape of\n (batch_size, input_length, output_dim) if combiner is None\n (batch_size, output_dim) if combiner is not None\n Arguments:\n output_dim: the dimension of the embedding vector\n embedding_initializer: Initializer for embedding table\n mask_zero: Whether or not the input value 0 is a special \"padding\"\n value that should be masked out.\n If input is SparseTensor, mask_zero must be False.\n input_length: Length of input sequences, when it is constant.\n This argument is required if you are going to connect\n `Flatten` then `Dense` layers upstream\n (without it, the shape of the dense outputs cannot be computed).\n combiner: A string specifying the reduction op or None if not used.\n \"mean\", \"sqrtn\" and \"sum\" are supported for the reduction op.\n If input is SparseTensor, combiner must set as a reduction op.\n \"\"\"\n\n def __init__(\n self,\n output_dim,\n embedding_initializer=\"uniform\",\n mask_zero=False,\n input_length=None,\n combiner=None,\n **kwargs\n ):\n if \"input_shape\" not in kwargs and input_length:\n kwargs[\"input_shape\"] = (input_length,)\n super(Embedding, self).__init__(**kwargs)\n\n self.output_dim = output_dim\n self.embedding_initializer = embedding_initializer\n # TODO: support mask_zero\n self.supports_masking = mask_zero\n self.input_length = input_length\n self.combiner = combiner\n self.tape = None\n self.worker = None\n self.bet_ids_pair = []\n\n @tf_utils.shape_type_conversion\n def compute_output_shape(self, input_shape):\n # this function is taken from\n # tf.keras.layers.Embedding.compute_output_shape\n # https://github.com/tensorflow/tensorflow/blob/3f3c728bf80e0fd6653744318cbbfe1454c6ddca/tensorflow/python/keras/layers/embeddings.py#L156\n if self.input_length is None:\n return input_shape + (self.output_dim,)\n else:\n if isinstance(self.input_length, (list, tuple)):\n in_lens = list(self.input_length)\n else:\n in_lens = [self.input_length]\n if len(in_lens) != 
len(input_shape) - 1:\n raise ValueError(\n '\"input_length\" is %s, '\n \"but received input has shape %s\"\n % (str(self.input_length), str(input_shape))\n )\n else:\n for i, (s1, s2) in enumerate(zip(in_lens, input_shape[1:])):\n if s1 is not None and s2 is not None and s1 != s2:\n raise ValueError(\n '\"input_length\" is %s, '\n \"but received input has shape %s\"\n % (str(self.input_length), str(input_shape))\n )\n elif s1 is None:\n in_lens[i] = s2\n return (input_shape[0],) + tuple(in_lens) + (self.output_dim,)\n\n @property\n def name(self):\n return self._name\n\n @staticmethod\n def get_key(name_list):\n return \"-\".join(map(str, name_list))\n\n def lookup_embedding(self, unique_ids):\n batch_embedding = self.worker.embedding_lookup(\n unique_ids, self._name, self.embedding_initializer\n )\n return batch_embedding\n\n def call(self, input):\n if isinstance(input, tf.SparseTensor):\n return self._sparse_input_call(input)\n\n ids = tf.convert_to_tensor(input, name=\"embedding_ids\")\n flat_ids = tf.reshape(ids, [-1])\n unique_ids, idx = tf.unique(flat_ids)\n batch_embedding_tensor = tf.py_function(\n self.lookup_embedding, inp=[unique_ids], Tout=tf.float32\n )\n if self.tape:\n # tape.watch works with eager mode only.\n # Gradient for embeddings is SparseTensor here due to tf.gather op.\n # tf.gather accesses tensor slices, resulting in sparse tensor\n # gradient.\n if not tf.executing_eagerly():\n raise RuntimeError(\"tape.watch only works with eager mode\")\n self.tape.watch(batch_embedding_tensor)\n self.bet_ids_pair.append((batch_embedding_tensor, unique_ids))\n outputs = tf.gather(batch_embedding_tensor, idx)\n outputs = tf.reshape(\n outputs, ids.get_shape().concatenate(self.output_dim)\n )\n # TODO: support combiner for dense input\n return outputs\n\n def _sparse_input_call(self, sparse_input):\n if self.combiner not in [\"sum\", \"mean\", \"sqrtn\"]:\n raise ValueError(\n \"combiner must set sum, mean or sqrtn for sparse input\"\n )\n unique_ids, idx = tf.unique(sparse_input.values)\n embeddings = tf.py_function(\n self.lookup_embedding, inp=[unique_ids], Tout=tf.float32\n )\n if self.tape:\n # tape.watch works with eager mode only\n # gradient for embeddings is dense tensor for sparse_input_call\n if not tf.executing_eagerly():\n raise RuntimeError(\"tape.watch only works with eager mode\")\n self.tape.watch(embeddings)\n self.bet_ids_pair.append((embeddings, unique_ids))\n segment_ids = sparse_input.indices[:, 0]\n if segment_ids.dtype != tf.int32:\n segment_ids = tf.cast(segment_ids, tf.int32)\n\n if self.combiner == \"sum\":\n embeddings = tf.sparse.segment_sum(embeddings, idx, segment_ids)\n elif self.combiner == \"mean\":\n embeddings = tf.sparse.segment_mean(embeddings, idx, segment_ids)\n elif self.combiner == \"sqrtn\":\n embeddings = tf.sparse.segment_sqrt_n(embeddings, idx, segment_ids)\n return embeddings\n\n def reset(self):\n self.bet_ids_pair = []\n self.tape = None\n\n def set_tape(self, tape):\n self.tape = tape\n\n def set_worker(self, worker):\n self.worker = worker\n", "path": "elasticdl/python/elasticdl/layers/embedding.py"}]} | 2,400 | 245 |
gh_patches_debug_15089 | rasdani/github-patches | git_diff | spack__spack-2798 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
spack find does not accept the same specs as other commands
The following works:
```
$ spack find hdf5~mpi
```
but the following equivalent command does not:
```
$ spack find hdf5 -mpi
usage: spack find [-h] [-s | -p | -d] [-l] [-L] [-f] [-e | -E] [-u] [-m] [-v]
[-M] [-N]
[constraint [constraint ...]]
spack find: error: argument -p/--paths: ignored explicit argument 'i'
```
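The error comes from argparse itself: with `nargs='*'` for the `constraint` positional, the token `-mpi` is interpreted as bundled short options (`-m`, then `-p` with a leftover `i`), while `nargs=argparse.REMAINDER` passes it through untouched. A standalone sketch (plain argparse mirroring the relevant flags, not spack code):

``` python
import argparse

parser = argparse.ArgumentParser(prog="spack find")
parser.add_argument("-p", "--paths", action="store_true")
parser.add_argument("-m", "--missing", action="store_true")
# With nargs='*' the call below fails with:
#   error: argument -p/--paths: ignored explicit argument 'i'
parser.add_argument("constraint", nargs=argparse.REMAINDER)

print(parser.parse_args(["hdf5", "-mpi"]).constraint)  # ['hdf5', '-mpi']
```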
</issue>
<code>
[start of lib/spack/spack/cmd/common/arguments.py]
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the LICENSE file for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25
26 import argparse
27
28 import spack.cmd
29 import spack.store
30 import spack.modules
31 from spack.util.pattern import Args
32 __all__ = ['add_common_arguments']
33
34 _arguments = {}
35
36
37 def add_common_arguments(parser, list_of_arguments):
38 for argument in list_of_arguments:
39 if argument not in _arguments:
40 message = 'Trying to add non existing argument "{0}" to a command'
41 raise KeyError(message.format(argument))
42 x = _arguments[argument]
43 parser.add_argument(*x.flags, **x.kwargs)
44
45
46 class ConstraintAction(argparse.Action):
47 """Constructs a list of specs based on a constraint given on the command line
48
49 An instance of this class is supposed to be used as an argument action
50 in a parser. It will read a constraint and will attach a function to the
51 arguments that accepts optional keyword arguments.
52
53 To obtain the specs from a command the function must be called.
54 """
55
56 def __call__(self, parser, namespace, values, option_string=None):
57 # Query specs from command line
58 self.values = values
59 namespace.constraint = values
60 namespace.specs = self._specs
61
62 def _specs(self, **kwargs):
63 qspecs = spack.cmd.parse_specs(self.values)
64
65 # return everything for an empty query.
66 if not qspecs:
67 return spack.store.db.query()
68
69 # Return only matching stuff otherwise.
70 specs = set()
71 for spec in qspecs:
72 for s in spack.store.db.query(spec, **kwargs):
73 specs.add(s)
74 return sorted(specs)
75
76
77 _arguments['constraint'] = Args(
78 'constraint', nargs='*', action=ConstraintAction,
79 help='Constraint to select a subset of installed packages')
80
81 _arguments['module_type'] = Args(
82 '-m', '--module-type', help='Type of module files',
83 default='tcl', choices=spack.modules.module_types)
84
85 _arguments['yes_to_all'] = Args(
86 '-y', '--yes-to-all', action='store_true', dest='yes_to_all',
87 help='Assume "yes" is the answer to every confirmation request.')
88
89 _arguments['recurse_dependencies'] = Args(
90 '-r', '--dependencies', action='store_true', dest='recurse_dependencies',
91 help='Recursively traverse spec dependencies')
92
93 _arguments['clean'] = Args(
94 '--clean', action='store_false', dest='dirty',
95 help='Clean environment before installing package.')
96
97 _arguments['dirty'] = Args(
98 '--dirty', action='store_true', dest='dirty',
99 help='Do NOT clean environment before installing.')
100
101 _arguments['long'] = Args(
102 '-l', '--long', action='store_true',
103 help='Show dependency hashes as well as versions.')
104
105 _arguments['very_long'] = Args(
106 '-L', '--very-long', action='store_true',
107 help='Show full dependency hashes as well as versions.')
108
[end of lib/spack/spack/cmd/common/arguments.py]
[start of lib/spack/spack/cmd/find.py]
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the LICENSE file for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 import sys
26
27 import llnl.util.tty as tty
28 import spack.cmd.common.arguments as arguments
29
30 from spack.cmd import display_specs
31
32 description = "Find installed spack packages"
33
34
35 def setup_parser(subparser):
36 format_group = subparser.add_mutually_exclusive_group()
37 format_group.add_argument('-s', '--short',
38 action='store_const',
39 dest='mode',
40 const='short',
41 default='short',
42 help='Show only specs (default)')
43 format_group.add_argument('-p', '--paths',
44 action='store_const',
45 dest='mode',
46 const='paths',
47 help='Show paths to package install directories')
48 format_group.add_argument(
49 '-d', '--deps',
50 action='store_const',
51 dest='mode',
52 const='deps',
53 help='Show full dependency DAG of installed packages')
54
55 arguments.add_common_arguments(subparser, ['long', 'very_long'])
56
57 subparser.add_argument('-f', '--show-flags',
58 action='store_true',
59 dest='show_flags',
60 help='Show spec compiler flags.')
61 implicit_explicit = subparser.add_mutually_exclusive_group()
62 implicit_explicit.add_argument(
63 '-e', '--explicit',
64 action='store_true',
65 help='Show only specs that were installed explicitly')
66 implicit_explicit.add_argument(
67 '-E', '--implicit',
68 action='store_true',
69 help='Show only specs that were installed as dependencies')
70 subparser.add_argument(
71 '-u', '--unknown',
72 action='store_true',
73 dest='unknown',
74 help='Show only specs Spack does not have a package for.')
75 subparser.add_argument(
76 '-m', '--missing',
77 action='store_true',
78 dest='missing',
79 help='Show missing dependencies as well as installed specs.')
80 subparser.add_argument(
81 '-v', '--variants',
82 action='store_true',
83 dest='variants',
84 help='Show variants in output (can be long)')
85 subparser.add_argument('-M', '--only-missing',
86 action='store_true',
87 dest='only_missing',
88 help='Show only missing dependencies.')
89 subparser.add_argument('-N', '--namespace',
90 action='store_true',
91 help='Show fully qualified package names.')
92
93 arguments.add_common_arguments(subparser, ['constraint'])
94
95
96 def query_arguments(args):
97 # Set up query arguments.
98 installed, known = True, any
99 if args.only_missing:
100 installed = False
101 elif args.missing:
102 installed = any
103 if args.unknown:
104 known = False
105 explicit = any
106 if args.explicit:
107 explicit = True
108 if args.implicit:
109 explicit = False
110 q_args = {'installed': installed, 'known': known, "explicit": explicit}
111 return q_args
112
113
114 def find(parser, args):
115 q_args = query_arguments(args)
116 query_specs = args.specs(**q_args)
117
118 # Exit early if no package matches the constraint
119 if not query_specs and args.constraint:
120 msg = "No package matches the query: {0}".format(args.constraint)
121 tty.msg(msg)
122 return
123
124 # Display the result
125 if sys.stdout.isatty():
126 tty.msg("%d installed packages." % len(query_specs))
127
128 display_specs(query_specs,
129 mode=args.mode,
130 long=args.long,
131 very_long=args.very_long,
132 show_flags=args.show_flags,
133 namespace=args.namespace,
134 variants=args.variants)
135
[end of lib/spack/spack/cmd/find.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/spack/spack/cmd/common/arguments.py b/lib/spack/spack/cmd/common/arguments.py
--- a/lib/spack/spack/cmd/common/arguments.py
+++ b/lib/spack/spack/cmd/common/arguments.py
@@ -75,7 +75,7 @@
_arguments['constraint'] = Args(
- 'constraint', nargs='*', action=ConstraintAction,
+ 'constraint', nargs=argparse.REMAINDER, action=ConstraintAction,
help='Constraint to select a subset of installed packages')
_arguments['module_type'] = Args(
diff --git a/lib/spack/spack/cmd/find.py b/lib/spack/spack/cmd/find.py
--- a/lib/spack/spack/cmd/find.py
+++ b/lib/spack/spack/cmd/find.py
@@ -117,7 +117,8 @@
# Exit early if no package matches the constraint
if not query_specs and args.constraint:
- msg = "No package matches the query: {0}".format(args.constraint)
+ msg = "No package matches the query: {0}".format(
+ ' '.join(args.constraint))
tty.msg(msg)
return
| {"golden_diff": "diff --git a/lib/spack/spack/cmd/common/arguments.py b/lib/spack/spack/cmd/common/arguments.py\n--- a/lib/spack/spack/cmd/common/arguments.py\n+++ b/lib/spack/spack/cmd/common/arguments.py\n@@ -75,7 +75,7 @@\n \n \n _arguments['constraint'] = Args(\n- 'constraint', nargs='*', action=ConstraintAction,\n+ 'constraint', nargs=argparse.REMAINDER, action=ConstraintAction,\n help='Constraint to select a subset of installed packages')\n \n _arguments['module_type'] = Args(\ndiff --git a/lib/spack/spack/cmd/find.py b/lib/spack/spack/cmd/find.py\n--- a/lib/spack/spack/cmd/find.py\n+++ b/lib/spack/spack/cmd/find.py\n@@ -117,7 +117,8 @@\n \n # Exit early if no package matches the constraint\n if not query_specs and args.constraint:\n- msg = \"No package matches the query: {0}\".format(args.constraint)\n+ msg = \"No package matches the query: {0}\".format(\n+ ' '.join(args.constraint))\n tty.msg(msg)\n return\n", "issue": "spack find does not accept same specs as other commands\nThe following works:\r\n```\r\n$ spack find hdf5~mpi\r\n```\r\nbut the following equivalent command does not:\r\n```\r\n$ spack find hdf5 -mpi\r\nusage: spack find [-h] [-s | -p | -d] [-l] [-L] [-f] [-e | -E] [-u] [-m] [-v]\r\n [-M] [-N]\r\n [constraint [constraint ...]]\r\nspack find: error: argument -p/--paths: ignored explicit argument 'i'\r\n```\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\n\nimport argparse\n\nimport spack.cmd\nimport spack.store\nimport spack.modules\nfrom spack.util.pattern import Args\n__all__ = ['add_common_arguments']\n\n_arguments = {}\n\n\ndef add_common_arguments(parser, list_of_arguments):\n for argument in list_of_arguments:\n if argument not in _arguments:\n message = 'Trying to add non existing argument \"{0}\" to a command'\n raise KeyError(message.format(argument))\n x = _arguments[argument]\n parser.add_argument(*x.flags, **x.kwargs)\n\n\nclass ConstraintAction(argparse.Action):\n \"\"\"Constructs a list of specs based on a constraint given on the command line\n\n An instance of this class is supposed to be used as an argument action\n in a parser. 
It will read a constraint and will attach a function to the\n arguments that accepts optional keyword arguments.\n\n To obtain the specs from a command the function must be called.\n \"\"\"\n\n def __call__(self, parser, namespace, values, option_string=None):\n # Query specs from command line\n self.values = values\n namespace.constraint = values\n namespace.specs = self._specs\n\n def _specs(self, **kwargs):\n qspecs = spack.cmd.parse_specs(self.values)\n\n # return everything for an empty query.\n if not qspecs:\n return spack.store.db.query()\n\n # Return only matching stuff otherwise.\n specs = set()\n for spec in qspecs:\n for s in spack.store.db.query(spec, **kwargs):\n specs.add(s)\n return sorted(specs)\n\n\n_arguments['constraint'] = Args(\n 'constraint', nargs='*', action=ConstraintAction,\n help='Constraint to select a subset of installed packages')\n\n_arguments['module_type'] = Args(\n '-m', '--module-type', help='Type of module files',\n default='tcl', choices=spack.modules.module_types)\n\n_arguments['yes_to_all'] = Args(\n '-y', '--yes-to-all', action='store_true', dest='yes_to_all',\n help='Assume \"yes\" is the answer to every confirmation request.')\n\n_arguments['recurse_dependencies'] = Args(\n '-r', '--dependencies', action='store_true', dest='recurse_dependencies',\n help='Recursively traverse spec dependencies')\n\n_arguments['clean'] = Args(\n '--clean', action='store_false', dest='dirty',\n help='Clean environment before installing package.')\n\n_arguments['dirty'] = Args(\n '--dirty', action='store_true', dest='dirty',\n help='Do NOT clean environment before installing.')\n\n_arguments['long'] = Args(\n '-l', '--long', action='store_true',\n help='Show dependency hashes as well as versions.')\n\n_arguments['very_long'] = Args(\n '-L', '--very-long', action='store_true',\n help='Show full dependency hashes as well as versions.')\n", "path": "lib/spack/spack/cmd/common/arguments.py"}, {"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nimport sys\n\nimport llnl.util.tty as tty\nimport spack.cmd.common.arguments as arguments\n\nfrom spack.cmd import display_specs\n\ndescription = \"Find installed spack packages\"\n\n\ndef setup_parser(subparser):\n format_group = subparser.add_mutually_exclusive_group()\n format_group.add_argument('-s', '--short',\n action='store_const',\n dest='mode',\n const='short',\n default='short',\n help='Show only specs (default)')\n format_group.add_argument('-p', '--paths',\n action='store_const',\n dest='mode',\n const='paths',\n help='Show paths to package install directories')\n format_group.add_argument(\n '-d', '--deps',\n action='store_const',\n dest='mode',\n const='deps',\n help='Show full dependency DAG of installed packages')\n\n arguments.add_common_arguments(subparser, ['long', 'very_long'])\n\n subparser.add_argument('-f', '--show-flags',\n action='store_true',\n dest='show_flags',\n help='Show spec compiler flags.')\n implicit_explicit = subparser.add_mutually_exclusive_group()\n implicit_explicit.add_argument(\n '-e', '--explicit',\n action='store_true',\n help='Show only specs that were installed explicitly')\n implicit_explicit.add_argument(\n '-E', '--implicit',\n action='store_true',\n help='Show only specs that were installed as dependencies')\n subparser.add_argument(\n '-u', '--unknown',\n action='store_true',\n dest='unknown',\n help='Show only specs Spack does not have a package for.')\n subparser.add_argument(\n '-m', '--missing',\n action='store_true',\n dest='missing',\n help='Show missing dependencies as well as installed specs.')\n subparser.add_argument(\n '-v', '--variants',\n action='store_true',\n dest='variants',\n help='Show variants in output (can be long)')\n subparser.add_argument('-M', '--only-missing',\n action='store_true',\n dest='only_missing',\n help='Show only missing dependencies.')\n subparser.add_argument('-N', '--namespace',\n action='store_true',\n help='Show fully qualified package names.')\n\n arguments.add_common_arguments(subparser, ['constraint'])\n\n\ndef query_arguments(args):\n # Set up query arguments.\n installed, known = True, any\n if args.only_missing:\n installed = False\n elif args.missing:\n installed = any\n if args.unknown:\n known = False\n explicit = any\n if args.explicit:\n explicit = True\n if args.implicit:\n explicit = False\n q_args = {'installed': installed, 'known': known, \"explicit\": explicit}\n return q_args\n\n\ndef find(parser, args):\n q_args = query_arguments(args)\n query_specs = args.specs(**q_args)\n\n # Exit early if no package matches the constraint\n if not query_specs and args.constraint:\n msg = \"No package matches the query: {0}\".format(args.constraint)\n tty.msg(msg)\n return\n\n # Display the result\n if sys.stdout.isatty():\n tty.msg(\"%d installed packages.\" % len(query_specs))\n\n display_specs(query_specs,\n mode=args.mode,\n long=args.long,\n very_long=args.very_long,\n show_flags=args.show_flags,\n namespace=args.namespace,\n variants=args.variants)\n", "path": "lib/spack/spack/cmd/find.py"}]} | 3,139 | 255 |
gh_patches_debug_24877 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-2101 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
add data-base-url attribute to HTML body tag in Plone 4
Since plone 4.3.12, the `<base href` attribute in HTML generated by Plone no longer always points to the context URL as it used to prior to the change. This change broke Plone and some add-ons. More breakage may still surface. Fixes have varied because no alternative was provided when the change was made.
For a lengthy background, see the [discussion](https://community.plone.org/t/how-to-get-context-url-in-js-on-plone-4-3-12/4031).
Rather than rolling back the change which was done to support some other things and would require reverting them, I suggest providing a future-proof alternative (thanks @rodfersou for suggesting using a new attribute):
Plone 5 has removed `<base href` completely. Instead, Plone 5 adds a `data-base-url` attribute to the HTML `body` tag, which points to the context URL.
So, I suggest the same be done for Plone 4. That way, anything in Plone core and/or add-ons that needs the context URL in JavaScript has a future-proof way of getting it from here on.
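As a rough illustration of one shape such an alternative could take (illustrative names, not the actual Plone 4 code), the context URL could be exposed to scripts alongside the existing `portal_url` variable, independently of `<base href`:

``` python
# Minimal sketch: expose a base_url JS variable the way portal_url is exposed today.
TEMPLATE = (
    "var portal_url = '%(portal_url)s';\n"
    "var base_url = '%(base_url)s';\n"
)

def render_js_variables(portal_url, base_url):
    # In a real browser view, base_url would come from context.absolute_url().
    return TEMPLATE % dict(portal_url=portal_url, base_url=base_url)

print(render_js_variables("http://localhost:8080/Plone",
                          "http://localhost:8080/Plone/front-page"))
```
The same value could equally be rendered as a `data-base-url` attribute on the `body` tag, which is what Plone 5 does.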
@@sharing is broken on Page objects in Plone 4.3.12 and 4.3.14
## BUG/PROBLEM REPORT (OR OTHER COMMON ISSUE)
### What I did:
1. Create vanilla Plone 4.3.14 site.
2. Add private Page with title Test at the top of the site
3. Navigate to sharing tab for that page
4. Type some characters into the Search box
5. Kaboom: exception
### What I expect to happen:
List of potential users as search results
### What actually happened:
Large python back trace because the search form AJAX accessed:
http://localhost:8080/Plone2/test/@@sharing/@@updateSharingInfo
rather than the following used by Plone 4.3.11:
http://localhost:8080/Plone2/test/@@updateSharingInfo
The root cause appears to be:
https://pypi.python.org/pypi/plone.app.layout/2.3.17
2.3.15 (2016-06-28)
Fixes:
_Fix base tag differs from actual URL (fixes [86](https://github.com/plone/plone.app.layout/issues/86)). [rodfersou]_
which was actually made **after** plone.app.layout 2.3.15 was released, (December 2016): that comment is placed incorrectly in the README file. I'm happy to make a bug report there as well.
### What version of Plone/ Addons I am using:
Vanilla Plone 4.3.14, which uses plone.app.layout 2.3.17. So does Plone 4.3.12, and I see exactly the same problem there.
(NB: Pinning plone.app.layout 2.3.15 in Plone 4.3.14 resolves the problem).
Update: appears to be the same issue discussed in: https://github.com/plone/Products.CMFPlone/issues/2051
</issue>
<code>
[start of Products/CMFPlone/browser/jsvariables.py]
1 from zope.i18n import translate
2 from zope.publisher.browser import BrowserView
3
4 from Products.CMFCore.utils import getToolByName
5 from Products.CMFPlone import PloneMessageFactory as _
6
7
8 TEMPLATE = """\
9 var portal_url = '%(portal_url)s';
10 var form_modified_message = '%(form_modified)s';
11 var form_resubmit_message = '%(form_resubmit)s';
12 var external_links_open_new_window = '%(open_links)s';
13 var mark_special_links = '%(mark_links)s';
14 var ajax_noresponse_message = '%(ajax_noresponse)s';
15 """
16
17 FORM_MODIFIED = _(u'text_form_modified_message',
18 default=u'Your form has not been saved. All changes you '
19 u'have made will be lost.')
20
21 FORM_RESUBMIT = _(u'text_form_resubmit_message',
22 default=u'You already clicked the submit button. Do you '
23 u'really want to submit this form again?')
24
25 AJAX_NORESPONSE = _(u'text_ajax_noresponse_message',
26 default=u'No response from server. Please try again '
27 u'later.')
28
29
30 class JSVariables(BrowserView):
31
32 def __call__(self, *args, **kwargs):
33 context = self.context
34 response = self.request.response
35 response.setHeader('content-type', 'text/javascript;;charset=utf-8')
36
37 props = getToolByName(context, 'portal_properties').site_properties
38 portal_url = getToolByName(context, 'portal_url')()
39
40 # the following are flags for mark_special_links.js
41 # links get the target="_blank" attribute
42 open_links = props.getProperty('external_links_open_new_window',
43 'false')
44 mark_links = props.getProperty('mark_special_links', 'false')
45
46 form_modified = translate(FORM_MODIFIED, context=self.request)
47 form_resubmit = translate(FORM_RESUBMIT, context=self.request)
48 ajax_noresponse = translate(AJAX_NORESPONSE, context=self.request)
49
50 # escape_for_js
51 form_modified = form_modified.replace("'", "\\'")
52 form_resubmit = form_resubmit.replace("'", "\\'")
53 ajax_noresponse = ajax_noresponse.replace("'", "\\'")
54
55 return TEMPLATE % dict(
56 portal_url=portal_url,
57 open_links=open_links,
58 mark_links=mark_links,
59 form_modified=form_modified,
60 form_resubmit=form_resubmit,
61 ajax_noresponse=ajax_noresponse,
62 )
63
[end of Products/CMFPlone/browser/jsvariables.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/browser/jsvariables.py b/Products/CMFPlone/browser/jsvariables.py
--- a/Products/CMFPlone/browser/jsvariables.py
+++ b/Products/CMFPlone/browser/jsvariables.py
@@ -7,6 +7,7 @@
TEMPLATE = """\
var portal_url = '%(portal_url)s';
+var base_url = '%(base_url)s';
var form_modified_message = '%(form_modified)s';
var form_resubmit_message = '%(form_resubmit)s';
var external_links_open_new_window = '%(open_links)s';
@@ -36,6 +37,7 @@
props = getToolByName(context, 'portal_properties').site_properties
portal_url = getToolByName(context, 'portal_url')()
+ base_url = self.request['HTTP_REFERER']
# the following are flags for mark_special_links.js
# links get the target="_blank" attribute
@@ -54,6 +56,7 @@
return TEMPLATE % dict(
portal_url=portal_url,
+ base_url=base_url,
open_links=open_links,
mark_links=mark_links,
form_modified=form_modified,
| {"golden_diff": "diff --git a/Products/CMFPlone/browser/jsvariables.py b/Products/CMFPlone/browser/jsvariables.py\n--- a/Products/CMFPlone/browser/jsvariables.py\n+++ b/Products/CMFPlone/browser/jsvariables.py\n@@ -7,6 +7,7 @@\n \n TEMPLATE = \"\"\"\\\n var portal_url = '%(portal_url)s';\n+var base_url = '%(base_url)s';\n var form_modified_message = '%(form_modified)s';\n var form_resubmit_message = '%(form_resubmit)s';\n var external_links_open_new_window = '%(open_links)s';\n@@ -36,6 +37,7 @@\n \n props = getToolByName(context, 'portal_properties').site_properties\n portal_url = getToolByName(context, 'portal_url')()\n+ base_url = self.request['HTTP_REFERER']\n \n # the following are flags for mark_special_links.js\n # links get the target=\"_blank\" attribute\n@@ -54,6 +56,7 @@\n \n return TEMPLATE % dict(\n portal_url=portal_url,\n+ base_url=base_url,\n open_links=open_links,\n mark_links=mark_links,\n form_modified=form_modified,\n", "issue": "add data-base-url attribute to HTML body tag in Plone 4\nSince plone 4.3.12, the `<base href` attribute in HTML generated by Plone no longer always points to the context URL as it used to prior to the change. This change broke Plone and some add-ons. More breakage may still surface. Fixes have varied because no alternative was provided when the change was made.\r\n\r\nFor a lengthy background, see the [discussion](https://community.plone.org/t/how-to-get-context-url-in-js-on-plone-4-3-12/4031). \r\n\r\nRather than rolling back the change which was done to support some other things and would require reverting them, I suggest providing a future-proof alternative (thanks @rodfersou for suggesting using a new attribute):\r\n\r\nPlone 5 has removed `<base href` completely. Instead Plone 5 has added a `data-base-url` attribute to the HTML `body` tag. Which points to the context URL.\r\n\r\nSo, I suggest same be done for Plone 4. That way, anything in Plone core and/or add-ons needing context URL in Javascript have a future-proof way of getting it from here on.\r\n\r\n\n@@sharing is broken on Page objects in Plone 4.3.12 and 4.3.14\n## BUG/PROBLEM REPORT (OR OTHER COMMON ISSUE)\r\n\r\n### What I did:\r\n\r\n1. Create vanilla Plone 4.3.14 site.\r\n2. Add private Page with title Test at the top of the site\r\n3. Navigate to sharing tab for that page\r\n4. Type some characters into the Search box\r\n5. Kaboom: exception\r\n\r\n### What I expect to happen:\r\n\r\nList of potential users as search results\r\n\r\n### What actually happened:\r\n\r\nLarge python back trace because the search form AJAX accessed:\r\n\r\n http://localhost:8080/Plone2/test/@@sharing/@@updateSharingInfo\r\n\r\nrather than the following used by Plone 4.3.11:\r\n\r\n http://localhost:8080/Plone2/test/@@updateSharingInfo\r\n \r\nThe root cause appears to be:\r\n\r\nhttps://pypi.python.org/pypi/plone.app.layout/2.3.17\r\n\r\n 2.3.15 (2016-06-28)\r\n Fixes:\r\n _Fix base tag differs from actual URL (fixes [86](https://github.com/plone/plone.app.layout/issues/86)). [rodfersou]_\r\n\r\nwhich was actually made **after** plone.app.layout 2.3.15 was released, (December 2016): that comment is placed incorrectly in the README file. I'm happy to make a bug report there as well.\r\n\r\n### What version of Plone/ Addons I am using:\r\n\r\nVanilla Plone 4.3.14, which uses plone.app.layout 2.3.17. 
So does Plone 4.3.12, and I see exactly the same problem there.\r\n\r\n(NB: Pinning plone.app.layout 2.3.15 in Plone 4.3.14 resolves the problem).\r\n\r\nUpdate: appears to be the same issue discussed in: https://github.com/plone/Products.CMFPlone/issues/2051\r\n\n", "before_files": [{"content": "from zope.i18n import translate\nfrom zope.publisher.browser import BrowserView\n\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone import PloneMessageFactory as _\n\n\nTEMPLATE = \"\"\"\\\nvar portal_url = '%(portal_url)s';\nvar form_modified_message = '%(form_modified)s';\nvar form_resubmit_message = '%(form_resubmit)s';\nvar external_links_open_new_window = '%(open_links)s';\nvar mark_special_links = '%(mark_links)s';\nvar ajax_noresponse_message = '%(ajax_noresponse)s';\n\"\"\"\n\nFORM_MODIFIED = _(u'text_form_modified_message',\n default=u'Your form has not been saved. All changes you '\n u'have made will be lost.')\n\nFORM_RESUBMIT = _(u'text_form_resubmit_message',\n default=u'You already clicked the submit button. Do you '\n u'really want to submit this form again?')\n\nAJAX_NORESPONSE = _(u'text_ajax_noresponse_message',\n default=u'No response from server. Please try again '\n u'later.')\n\n\nclass JSVariables(BrowserView):\n\n def __call__(self, *args, **kwargs):\n context = self.context\n response = self.request.response\n response.setHeader('content-type', 'text/javascript;;charset=utf-8')\n\n props = getToolByName(context, 'portal_properties').site_properties\n portal_url = getToolByName(context, 'portal_url')()\n\n # the following are flags for mark_special_links.js\n # links get the target=\"_blank\" attribute\n open_links = props.getProperty('external_links_open_new_window',\n 'false')\n mark_links = props.getProperty('mark_special_links', 'false')\n\n form_modified = translate(FORM_MODIFIED, context=self.request)\n form_resubmit = translate(FORM_RESUBMIT, context=self.request)\n ajax_noresponse = translate(AJAX_NORESPONSE, context=self.request)\n\n # escape_for_js\n form_modified = form_modified.replace(\"'\", \"\\\\'\")\n form_resubmit = form_resubmit.replace(\"'\", \"\\\\'\")\n ajax_noresponse = ajax_noresponse.replace(\"'\", \"\\\\'\")\n\n return TEMPLATE % dict(\n portal_url=portal_url,\n open_links=open_links,\n mark_links=mark_links,\n form_modified=form_modified,\n form_resubmit=form_resubmit,\n ajax_noresponse=ajax_noresponse,\n )\n", "path": "Products/CMFPlone/browser/jsvariables.py"}]} | 1,890 | 261 |
gh_patches_debug_12490 | rasdani/github-patches | git_diff | elastic__apm-agent-python-2002 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'Retry' object has no attribute 'copy'
The `elasticapm` Python client fails with `AttributeError: 'Retry' object has no attribute 'copy'` during HTTP retries.
The Elastic APM UI points towards the following lines being responsible for the crash https://github.com/elastic/apm-agent-python/blob/c9a1f7a74b4b8e39af06f9b383037a3ae49ca42b/elasticapm/instrumentation/packages/urllib3.py#L66-L68
**To Reproduce**
Configure a `requests` session with [urllib3 retries](https://urllib3.readthedocs.io/en/stable/reference/urllib3.util.html) as described below. Then point it at a flaky endpoint.
``` python
from typing import Any, Collection

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


class TimeoutHTTPAdapter(HTTPAdapter):
def __init__(self, retries: int, *args: Any, **kwargs: Any) -> None:
self.timeout = kwargs.pop("timeout", 30)
self._retries = retries
super().__init__(*args, **kwargs)
def send(
self, request: requests.PreparedRequest, **kwargs: Any
) -> requests.Response:
kwargs.setdefault("timeout", self.timeout)
return super().send(request, **kwargs)
def session_with_retries(
retries: int = 3,
backoff_factor: float = 1,
timeout: int = 30,
status_forcelist: Collection[int] = (408, 429, 500, 502, 503, 504),
) -> requests.Session:
retry_strategy = Retry(
total=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
allowed_methods=Retry.DEFAULT_ALLOWED_METHODS.union(["POST", "PATCH"]),
)
adapter = TimeoutHTTPAdapter(
retries=retries, max_retries=retry_strategy, timeout=timeout
)
s = requests.Session()
s.mount("http://", adapter)
s.mount("https://", adapter)
return s
```
**Environment**
- OS: Linux
- Python version: Python 3.11
- Framework and version: `urllib3==2.0.6` and `requests==2.31.0`
- APM Server version: 8.10.4
- Agent version: `elastic-apm==6.19.0`
**Background**
* This appears to be related to the recent urllib3 fixes https://github.com/elastic/apm-agent-python/pull/1822 and https://github.com/elastic/apm-agent-python/issues/1816
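A plausible reading of the traceback (an assumption based on the public `urlopen` signature, not on the agent internals): `HTTPConnectionPool.urlopen` keeps the same positional order in urllib3 1.x and 2.x — `method, url, body, headers, retries, ...` — so `args[3]` is still the headers dict and `args[4]` is the `Retry` object that urllib3 passes positionally when it re-invokes `urlopen` for a retry, which is where `.copy()` fails. A quick local check:

``` python
# Prints the parameter order of urlopen; headers comes before retries in both
# urllib3 1.26.x and 2.x (requires urllib3 to be installed).
import inspect

from urllib3.connectionpool import HTTPConnectionPool

print(inspect.signature(HTTPConnectionPool.urlopen))
```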
</issue>
<code>
[start of elasticapm/instrumentation/packages/urllib3.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 import itertools
32
33 from elasticapm.conf import constants
34 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
35 from elasticapm.traces import DroppedSpan, capture_span, execution_context
36 from elasticapm.utils import default_ports
37 from elasticapm.utils.disttracing import TracingOptions
38
39
40 def _set_disttracing_headers(headers, trace_parent, transaction) -> None:
41 trace_parent_str = trace_parent.to_string()
42 headers[constants.TRACEPARENT_HEADER_NAME] = trace_parent_str
43 if transaction.tracer.config.use_elastic_traceparent_header:
44 headers[constants.TRACEPARENT_LEGACY_HEADER_NAME] = trace_parent_str
45 if trace_parent.tracestate:
46 headers[constants.TRACESTATE_HEADER_NAME] = trace_parent.tracestate
47
48
49 def update_headers(args, kwargs, instance, transaction, trace_parent):
50 """
51 The headers might be in 3 different places: as 4th positional argument, as "headers" keyword argument,
52 or, if none of the former two are provided, as instance variable on the HTTPConnection object.
53
54 If the headers are in the positional arguments tuple, a new tuple with updated headers will be returned.
55 If they are in the keyword arguments or on the instance, an updated kwargs dict will be returned
56
57 :param args: list of positional arguments
58 :param kwargs: dict of keyword arguments
59 :param instance: the HTTPConnection instance
60 :param transaction: the Transaction object
61 :param trace_parent: the TraceParent object
62 :return: an (args, kwargs) tuple
63 """
64 from urllib3._version import __version__ as urllib3_version
65
66 if urllib3_version.startswith("2") and len(args) >= 5 and args[4]:
67 headers = args[4].copy()
68 args = tuple(itertools.chain((args[:4]), (headers,), args[5:]))
69 elif len(args) >= 4 and args[3]:
70 headers = args[3].copy()
71 args = tuple(itertools.chain((args[:3]), (headers,), args[4:]))
72 elif "headers" in kwargs and kwargs["headers"]:
73 headers = kwargs["headers"].copy()
74 kwargs["headers"] = headers
75 else:
76 headers = instance.headers.copy() if instance.headers else {}
77 # we don't want to change the instance headers, so we'll cheat and
78 # set the headers as keywords. This slightly changes how the wrapped
79 # method is called compared to uninstrumented code.
80 kwargs["headers"] = headers
81 _set_disttracing_headers(headers, trace_parent, transaction)
82 return args, kwargs
83
84
85 class Urllib3Instrumentation(AbstractInstrumentedModule):
86 name = "urllib3"
87
88 instrument_list = [
89 ("urllib3.connectionpool", "HTTPConnectionPool.urlopen"),
90 # packages that vendor or vendored urllib3 in the past
91 ("requests.packages.urllib3.connectionpool", "HTTPConnectionPool.urlopen"),
92 ("botocore.vendored.requests.packages.urllib3.connectionpool", "HTTPConnectionPool.urlopen"),
93 ]
94
95 def call(self, module, method, wrapped, instance, args, kwargs):
96 if "method" in kwargs:
97 method = kwargs["method"]
98 else:
99 method = args[0]
100
101 host = instance.host
102
103 if instance.port != default_ports.get(instance.scheme):
104 host += ":" + str(instance.port)
105
106 if "url" in kwargs:
107 url = kwargs["url"]
108 else:
109 url = args[1]
110
111 signature = method.upper() + " " + host
112
113 if url.startswith("/"):
114 url = "%s://%s%s" % (instance.scheme, host, url)
115
116 transaction = execution_context.get_transaction()
117
118 with capture_span(
119 signature,
120 span_type="external",
121 span_subtype="http",
122 extra={"http": {"url": url}},
123 leaf=True,
124 ) as span:
125 # if urllib3 has been called in a leaf span, this span might be a DroppedSpan.
126 leaf_span = span
127 while isinstance(leaf_span, DroppedSpan):
128 leaf_span = leaf_span.parent
129
130 parent_id = leaf_span.id if leaf_span else transaction.id
131 trace_parent = transaction.trace_parent.copy_from(
132 span_id=parent_id, trace_options=TracingOptions(recorded=True)
133 )
134 args, kwargs = update_headers(args, kwargs, instance, transaction, trace_parent)
135 if leaf_span:
136 leaf_span.dist_tracing_propagated = True
137 response = wrapped(*args, **kwargs)
138 if response:
139 if span.context:
140 span.context["http"]["status_code"] = response.status
141 span.set_success() if response.status < 400 else span.set_failure()
142 return response
143
144 def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):
145 # since we don't have a span, we set the span id to the transaction id
146 trace_parent = transaction.trace_parent.copy_from(
147 span_id=transaction.id, trace_options=TracingOptions(recorded=False)
148 )
149 return update_headers(args, kwargs, instance, transaction, trace_parent)
150
[end of elasticapm/instrumentation/packages/urllib3.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/instrumentation/packages/urllib3.py b/elasticapm/instrumentation/packages/urllib3.py
--- a/elasticapm/instrumentation/packages/urllib3.py
+++ b/elasticapm/instrumentation/packages/urllib3.py
@@ -61,12 +61,7 @@
:param trace_parent: the TraceParent object
:return: an (args, kwargs) tuple
"""
- from urllib3._version import __version__ as urllib3_version
-
- if urllib3_version.startswith("2") and len(args) >= 5 and args[4]:
- headers = args[4].copy()
- args = tuple(itertools.chain((args[:4]), (headers,), args[5:]))
- elif len(args) >= 4 and args[3]:
+ if len(args) >= 4 and args[3]:
headers = args[3].copy()
args = tuple(itertools.chain((args[:3]), (headers,), args[4:]))
elif "headers" in kwargs and kwargs["headers"]:
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/urllib3.py b/elasticapm/instrumentation/packages/urllib3.py\n--- a/elasticapm/instrumentation/packages/urllib3.py\n+++ b/elasticapm/instrumentation/packages/urllib3.py\n@@ -61,12 +61,7 @@\n :param trace_parent: the TraceParent object\n :return: an (args, kwargs) tuple\n \"\"\"\n- from urllib3._version import __version__ as urllib3_version\n-\n- if urllib3_version.startswith(\"2\") and len(args) >= 5 and args[4]:\n- headers = args[4].copy()\n- args = tuple(itertools.chain((args[:4]), (headers,), args[5:]))\n- elif len(args) >= 4 and args[3]:\n+ if len(args) >= 4 and args[3]:\n headers = args[3].copy()\n args = tuple(itertools.chain((args[:3]), (headers,), args[4:]))\n elif \"headers\" in kwargs and kwargs[\"headers\"]:\n", "issue": "AttributeError: 'Retry' object has no attribute 'copy'\nThe `elasticapm` Python client fails with `AttributeError: 'Retry' object has no attribute 'copy'` during HTTP retries.\r\n\r\nThe Elastic APM UI points towards the following lines being responsible for the crash https://github.com/elastic/apm-agent-python/blob/c9a1f7a74b4b8e39af06f9b383037a3ae49ca42b/elasticapm/instrumentation/packages/urllib3.py#L66-L68 \r\n\r\n**To Reproduce**\r\n\r\nConfigure a `requests` session with [urllib3 retries](https://urllib3.readthedocs.io/en/stable/reference/urllib3.util.html) as described below. Then point it at a flaky endpoint.\r\n\r\n``` python\r\nclass TimeoutHTTPAdapter(HTTPAdapter):\r\n def __init__(self, retries: int, *args: Any, **kwargs: Any) -> None:\r\n self.timeout = kwargs.pop(\"timeout\", 30)\r\n self._retries = retries\r\n super().__init__(*args, **kwargs)\r\n\r\n def send( \r\n self, request: requests.PreparedRequest, **kwargs: Any\r\n ) -> requests.Response:\r\n kwargs.setdefault(\"timeout\", self.timeout)\r\n return super().send(request, **kwargs)\r\n\r\ndef session_with_retries(\r\n retries: int = 3,\r\n backoff_factor: float = 1,\r\n timeout: int = 30,\r\n status_forcelist: Collection[int] = (408, 429, 500, 502, 503, 504),\r\n) -> requests.Session:\r\n\r\n retry_strategy = Retry(\r\n total=retries,\r\n backoff_factor=backoff_factor,\r\n status_forcelist=status_forcelist,\r\n allowed_methods=Retry.DEFAULT_ALLOWED_METHODS.union([\"POST\", \"PATCH\"]),\r\n )\r\n adapter = TimeoutHTTPAdapter(\r\n retries=retries, max_retries=retry_strategy, timeout=timeout\r\n )\r\n\r\n s = requests.Session()\r\n s.mount(\"http://\", adapter)\r\n s.mount(\"https://\", adapter)\r\n return s\r\n```\r\n\r\n**Environment**\r\n- OS: Linux\r\n- Python version: Python 3.11\r\n- Framework and version: `urllib3==2.0.6` and `requests==2.31.0`\r\n- APM Server version: 8.10.4\r\n- Agent version: `elastic-apm==6.19.0`\r\n\r\n**Background**\r\n* This appears to be related to the recent urllib3 fixes https://github.com/elastic/apm-agent-python/pull/1822 and https://github.com/elastic/apm-agent-python/issues/1816 \nAttributeError: 'Retry' object has no attribute 'copy'\nThe `elasticapm` Python client fails with `AttributeError: 'Retry' object has no attribute 'copy'` during HTTP retries.\r\n\r\nThe Elastic APM UI points towards the following lines being responsible for the crash https://github.com/elastic/apm-agent-python/blob/c9a1f7a74b4b8e39af06f9b383037a3ae49ca42b/elasticapm/instrumentation/packages/urllib3.py#L66-L68 \r\n\r\n**To Reproduce**\r\n\r\nConfigure a `requests` session with [urllib3 retries](https://urllib3.readthedocs.io/en/stable/reference/urllib3.util.html) as described below. 
Then point it at a flaky endpoint.\r\n\r\n``` python\r\nclass TimeoutHTTPAdapter(HTTPAdapter):\r\n def __init__(self, retries: int, *args: Any, **kwargs: Any) -> None:\r\n self.timeout = kwargs.pop(\"timeout\", 30)\r\n self._retries = retries\r\n super().__init__(*args, **kwargs)\r\n\r\n def send( \r\n self, request: requests.PreparedRequest, **kwargs: Any\r\n ) -> requests.Response:\r\n kwargs.setdefault(\"timeout\", self.timeout)\r\n return super().send(request, **kwargs)\r\n\r\ndef session_with_retries(\r\n retries: int = 3,\r\n backoff_factor: float = 1,\r\n timeout: int = 30,\r\n status_forcelist: Collection[int] = (408, 429, 500, 502, 503, 504),\r\n) -> requests.Session:\r\n\r\n retry_strategy = Retry(\r\n total=retries,\r\n backoff_factor=backoff_factor,\r\n status_forcelist=status_forcelist,\r\n allowed_methods=Retry.DEFAULT_ALLOWED_METHODS.union([\"POST\", \"PATCH\"]),\r\n )\r\n adapter = TimeoutHTTPAdapter(\r\n retries=retries, max_retries=retry_strategy, timeout=timeout\r\n )\r\n\r\n s = requests.Session()\r\n s.mount(\"http://\", adapter)\r\n s.mount(\"https://\", adapter)\r\n return s\r\n```\r\n\r\n**Environment**\r\n- OS: Linux\r\n- Python version: Python 3.11\r\n- Framework and version: `urllib3==2.0.6` and `requests==2.31.0`\r\n- APM Server version: 8.10.4\r\n- Agent version: `elastic-apm==6.19.0`\r\n\r\n**Background**\r\n* This appears to be related to the recent urllib3 fixes https://github.com/elastic/apm-agent-python/pull/1822 and https://github.com/elastic/apm-agent-python/issues/1816 \n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport itertools\n\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import DroppedSpan, capture_span, execution_context\nfrom elasticapm.utils import default_ports\nfrom elasticapm.utils.disttracing import TracingOptions\n\n\ndef _set_disttracing_headers(headers, trace_parent, transaction) -> None:\n trace_parent_str = trace_parent.to_string()\n headers[constants.TRACEPARENT_HEADER_NAME] = trace_parent_str\n if transaction.tracer.config.use_elastic_traceparent_header:\n headers[constants.TRACEPARENT_LEGACY_HEADER_NAME] = trace_parent_str\n if trace_parent.tracestate:\n headers[constants.TRACESTATE_HEADER_NAME] = trace_parent.tracestate\n\n\ndef update_headers(args, kwargs, instance, transaction, trace_parent):\n \"\"\"\n The headers might be in 3 different places: as 4th positional argument, as \"headers\" keyword argument,\n or, if none of the former two are provided, as instance variable on the HTTPConnection object.\n\n If the headers are in the positional arguments tuple, a new tuple with updated headers will be returned.\n If they are in the keyword arguments or on the instance, an updated kwargs dict will be returned\n\n :param args: list of positional arguments\n :param kwargs: dict of keyword arguments\n :param instance: the HTTPConnection instance\n :param transaction: the Transaction object\n :param trace_parent: the TraceParent object\n :return: an (args, kwargs) tuple\n \"\"\"\n from urllib3._version import __version__ as urllib3_version\n\n if urllib3_version.startswith(\"2\") and len(args) >= 5 and args[4]:\n headers = args[4].copy()\n args = tuple(itertools.chain((args[:4]), (headers,), args[5:]))\n elif len(args) >= 4 and args[3]:\n headers = args[3].copy()\n args = tuple(itertools.chain((args[:3]), (headers,), args[4:]))\n elif \"headers\" in kwargs and kwargs[\"headers\"]:\n headers = kwargs[\"headers\"].copy()\n kwargs[\"headers\"] = headers\n else:\n headers = instance.headers.copy() if instance.headers else {}\n # we don't want to change the instance headers, so we'll cheat and\n # set the headers as keywords. 
This slightly changes how the wrapped\n # method is called compared to uninstrumented code.\n kwargs[\"headers\"] = headers\n _set_disttracing_headers(headers, trace_parent, transaction)\n return args, kwargs\n\n\nclass Urllib3Instrumentation(AbstractInstrumentedModule):\n name = \"urllib3\"\n\n instrument_list = [\n (\"urllib3.connectionpool\", \"HTTPConnectionPool.urlopen\"),\n # packages that vendor or vendored urllib3 in the past\n (\"requests.packages.urllib3.connectionpool\", \"HTTPConnectionPool.urlopen\"),\n (\"botocore.vendored.requests.packages.urllib3.connectionpool\", \"HTTPConnectionPool.urlopen\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"method\" in kwargs:\n method = kwargs[\"method\"]\n else:\n method = args[0]\n\n host = instance.host\n\n if instance.port != default_ports.get(instance.scheme):\n host += \":\" + str(instance.port)\n\n if \"url\" in kwargs:\n url = kwargs[\"url\"]\n else:\n url = args[1]\n\n signature = method.upper() + \" \" + host\n\n if url.startswith(\"/\"):\n url = \"%s://%s%s\" % (instance.scheme, host, url)\n\n transaction = execution_context.get_transaction()\n\n with capture_span(\n signature,\n span_type=\"external\",\n span_subtype=\"http\",\n extra={\"http\": {\"url\": url}},\n leaf=True,\n ) as span:\n # if urllib3 has been called in a leaf span, this span might be a DroppedSpan.\n leaf_span = span\n while isinstance(leaf_span, DroppedSpan):\n leaf_span = leaf_span.parent\n\n parent_id = leaf_span.id if leaf_span else transaction.id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=parent_id, trace_options=TracingOptions(recorded=True)\n )\n args, kwargs = update_headers(args, kwargs, instance, transaction, trace_parent)\n if leaf_span:\n leaf_span.dist_tracing_propagated = True\n response = wrapped(*args, **kwargs)\n if response:\n if span.context:\n span.context[\"http\"][\"status_code\"] = response.status\n span.set_success() if response.status < 400 else span.set_failure()\n return response\n\n def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):\n # since we don't have a span, we set the span id to the transaction id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=transaction.id, trace_options=TracingOptions(recorded=False)\n )\n return update_headers(args, kwargs, instance, transaction, trace_parent)\n", "path": "elasticapm/instrumentation/packages/urllib3.py"}]} | 3,535 | 241 |
gh_patches_debug_56062 | rasdani/github-patches | git_diff | pypa__pip-12373 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
After upgrading pip to version 23.3+ can't use hg+http with current branch
### Description
After upgrading pip to version 23.3+ I can't install my package from the repository when a branch is specified.
With an older pip version, or without specifying a branch, installation works fine; I have also tried a tag and a hash instead of the branch.
### Expected behavior
pip installs the package from the specified branch/revision, as it did before 23.3
### pip version
23.3
### Python version
3.9
### OS
Linux
### How to Reproduce
Try to install a package from an hg repository, specifying a branch, tag, or hash.
For example:
pip install hg+http://mylink/hg/folder/[email protected]
### Output
Collecting hg+http://mylink/hg/folder/[email protected]
Cloning hg http://mylink/hg/folder/mypackage (to revision 0.1.3) to /tmp/pip-req-build-hmzsjinv
Running command hg clone --noupdate --quiet http://mylink/hg/folder/mypackage /tmp/pip-req-build-hmzsjinv
Running command hg update --quiet -r=0.1.3
hg: parse error at 0: not a prefix: =
(=0.1.3
^ here)
error: subprocess-exited-with-error
× hg update --quiet -r=0.1.3 did not run successfully.
│ exit code: 255
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× hg update --quiet -r=0.1.3 did not run successfully.
│ exit code: 255
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
### Code of Conduct
- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).
</issue>
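The parse error in the output above comes from how the revision flag is spelled: Mercurial's short option `-r` keeps the `=` as part of the value, so `-r=0.1.3` reaches the revset parser as `=0.1.3`, while the long form `--rev=0.1.3` (or `-r 0.1.3`) is split by hg itself. A minimal sketch of the two invocations, assuming a local clone at a placeholder path (paths and revision are illustrative only):

```python
import subprocess

rev = "0.1.3"
failing = ["hg", "update", "--quiet", f"-r={rev}"]     # hg: parse error at 0: not a prefix: =
working = ["hg", "update", "--quiet", f"--rev={rev}"]  # long-form flag; hg strips the '=' itself
# subprocess.run(failing, cwd="/tmp/clone", check=True)   # would exit with code 255
# subprocess.run(working, cwd="/tmp/clone", check=True)
```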
<code>
[start of src/pip/_internal/vcs/mercurial.py]
1 import configparser
2 import logging
3 import os
4 from typing import List, Optional, Tuple
5
6 from pip._internal.exceptions import BadCommand, InstallationError
7 from pip._internal.utils.misc import HiddenText, display_path
8 from pip._internal.utils.subprocess import make_command
9 from pip._internal.utils.urls import path_to_url
10 from pip._internal.vcs.versioncontrol import (
11 RevOptions,
12 VersionControl,
13 find_path_to_project_root_from_repo_root,
14 vcs,
15 )
16
17 logger = logging.getLogger(__name__)
18
19
20 class Mercurial(VersionControl):
21 name = "hg"
22 dirname = ".hg"
23 repo_name = "clone"
24 schemes = (
25 "hg+file",
26 "hg+http",
27 "hg+https",
28 "hg+ssh",
29 "hg+static-http",
30 )
31
32 @staticmethod
33 def get_base_rev_args(rev: str) -> List[str]:
34 return [f"-r={rev}"]
35
36 def fetch_new(
37 self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
38 ) -> None:
39 rev_display = rev_options.to_display()
40 logger.info(
41 "Cloning hg %s%s to %s",
42 url,
43 rev_display,
44 display_path(dest),
45 )
46 if verbosity <= 0:
47 flags: Tuple[str, ...] = ("--quiet",)
48 elif verbosity == 1:
49 flags = ()
50 elif verbosity == 2:
51 flags = ("--verbose",)
52 else:
53 flags = ("--verbose", "--debug")
54 self.run_command(make_command("clone", "--noupdate", *flags, url, dest))
55 self.run_command(
56 make_command("update", *flags, rev_options.to_args()),
57 cwd=dest,
58 )
59
60 def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
61 repo_config = os.path.join(dest, self.dirname, "hgrc")
62 config = configparser.RawConfigParser()
63 try:
64 config.read(repo_config)
65 config.set("paths", "default", url.secret)
66 with open(repo_config, "w") as config_file:
67 config.write(config_file)
68 except (OSError, configparser.NoSectionError) as exc:
69 logger.warning("Could not switch Mercurial repository to %s: %s", url, exc)
70 else:
71 cmd_args = make_command("update", "-q", rev_options.to_args())
72 self.run_command(cmd_args, cwd=dest)
73
74 def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
75 self.run_command(["pull", "-q"], cwd=dest)
76 cmd_args = make_command("update", "-q", rev_options.to_args())
77 self.run_command(cmd_args, cwd=dest)
78
79 @classmethod
80 def get_remote_url(cls, location: str) -> str:
81 url = cls.run_command(
82 ["showconfig", "paths.default"],
83 show_stdout=False,
84 stdout_only=True,
85 cwd=location,
86 ).strip()
87 if cls._is_local_repository(url):
88 url = path_to_url(url)
89 return url.strip()
90
91 @classmethod
92 def get_revision(cls, location: str) -> str:
93 """
94 Return the repository-local changeset revision number, as an integer.
95 """
96 current_revision = cls.run_command(
97 ["parents", "--template={rev}"],
98 show_stdout=False,
99 stdout_only=True,
100 cwd=location,
101 ).strip()
102 return current_revision
103
104 @classmethod
105 def get_requirement_revision(cls, location: str) -> str:
106 """
107 Return the changeset identification hash, as a 40-character
108 hexadecimal string
109 """
110 current_rev_hash = cls.run_command(
111 ["parents", "--template={node}"],
112 show_stdout=False,
113 stdout_only=True,
114 cwd=location,
115 ).strip()
116 return current_rev_hash
117
118 @classmethod
119 def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
120 """Always assume the versions don't match"""
121 return False
122
123 @classmethod
124 def get_subdirectory(cls, location: str) -> Optional[str]:
125 """
126 Return the path to Python project root, relative to the repo root.
127 Return None if the project root is in the repo root.
128 """
129 # find the repo root
130 repo_root = cls.run_command(
131 ["root"], show_stdout=False, stdout_only=True, cwd=location
132 ).strip()
133 if not os.path.isabs(repo_root):
134 repo_root = os.path.abspath(os.path.join(location, repo_root))
135 return find_path_to_project_root_from_repo_root(location, repo_root)
136
137 @classmethod
138 def get_repository_root(cls, location: str) -> Optional[str]:
139 loc = super().get_repository_root(location)
140 if loc:
141 return loc
142 try:
143 r = cls.run_command(
144 ["root"],
145 cwd=location,
146 show_stdout=False,
147 stdout_only=True,
148 on_returncode="raise",
149 log_failed_cmd=False,
150 )
151 except BadCommand:
152 logger.debug(
153 "could not determine if %s is under hg control "
154 "because hg is not available",
155 location,
156 )
157 return None
158 except InstallationError:
159 return None
160 return os.path.normpath(r.rstrip("\r\n"))
161
162
163 vcs.register(Mercurial)
164
[end of src/pip/_internal/vcs/mercurial.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pip/_internal/vcs/mercurial.py b/src/pip/_internal/vcs/mercurial.py
--- a/src/pip/_internal/vcs/mercurial.py
+++ b/src/pip/_internal/vcs/mercurial.py
@@ -31,7 +31,7 @@
@staticmethod
def get_base_rev_args(rev: str) -> List[str]:
- return [f"-r={rev}"]
+ return [f"--rev={rev}"]
def fetch_new(
self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
| {"golden_diff": "diff --git a/src/pip/_internal/vcs/mercurial.py b/src/pip/_internal/vcs/mercurial.py\n--- a/src/pip/_internal/vcs/mercurial.py\n+++ b/src/pip/_internal/vcs/mercurial.py\n@@ -31,7 +31,7 @@\n \n @staticmethod\n def get_base_rev_args(rev: str) -> List[str]:\n- return [f\"-r={rev}\"]\n+ return [f\"--rev={rev}\"]\n \n def fetch_new(\n self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int\n", "issue": "After uprade pip to version 23.3+ cant use hg+http with current branch\n### Description\n\nAfter uprade pip to version 23.3+ can`t isntall my package from repository with current branch\r\nif i use older version or without branch works fine, olready use tag and hash instead of branch\n\n### Expected behavior\n\npip will work correctly\n\n### pip version\n\n23.3\n\n### Python version\n\n3.9\n\n### OS\n\nLinux\n\n### How to Reproduce\n\ntry to install from hg repository using branch or hash\r\nfor example \r\npip install hg+http://mylink/hg/folder/[email protected]\n\n### Output\n\nCollecting hg+http://mylink/hg/folder/[email protected]\r\n Cloning hg http://mylink/hg/folder/mypackage (to revision 0.1.3) to /tmp/pip-req-build-hmzsjinv\r\n Running command hg clone --noupdate --quiet http://mylink/hg/folder/mypackage /tmp/pip-req-build-hmzsjinv\r\n\r\n Running command hg update --quiet -r=0.1.3\r\n hg: parse error at 0: not a prefix: =\r\n (=0.1.3\r\n ^ here)\r\n error: subprocess-exited-with-error\r\n \r\n \u00d7 hg update --quiet -r=0.1.3 did not run successfully.\r\n \u2502 exit code: 255\r\n \u2570\u2500> See above for output.\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\nerror: subprocess-exited-with-error\r\n\r\n\u00d7 hg update --quiet -r=0.1.3 did not run successfully.\r\n\u2502 exit code: 255\r\n\u2570\u2500> See above for output.\r\n\r\nnote: This error originates from a subprocess, and is likely not a problem with pip.\n\n### Code of Conduct\n\n- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).\n", "before_files": [{"content": "import configparser\nimport logging\nimport os\nfrom typing import List, Optional, Tuple\n\nfrom pip._internal.exceptions import BadCommand, InstallationError\nfrom pip._internal.utils.misc import HiddenText, display_path\nfrom pip._internal.utils.subprocess import make_command\nfrom pip._internal.utils.urls import path_to_url\nfrom pip._internal.vcs.versioncontrol import (\n RevOptions,\n VersionControl,\n find_path_to_project_root_from_repo_root,\n vcs,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass Mercurial(VersionControl):\n name = \"hg\"\n dirname = \".hg\"\n repo_name = \"clone\"\n schemes = (\n \"hg+file\",\n \"hg+http\",\n \"hg+https\",\n \"hg+ssh\",\n \"hg+static-http\",\n )\n\n @staticmethod\n def get_base_rev_args(rev: str) -> List[str]:\n return [f\"-r={rev}\"]\n\n def fetch_new(\n self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int\n ) -> None:\n rev_display = rev_options.to_display()\n logger.info(\n \"Cloning hg %s%s to %s\",\n url,\n rev_display,\n display_path(dest),\n )\n if verbosity <= 0:\n flags: Tuple[str, ...] 
= (\"--quiet\",)\n elif verbosity == 1:\n flags = ()\n elif verbosity == 2:\n flags = (\"--verbose\",)\n else:\n flags = (\"--verbose\", \"--debug\")\n self.run_command(make_command(\"clone\", \"--noupdate\", *flags, url, dest))\n self.run_command(\n make_command(\"update\", *flags, rev_options.to_args()),\n cwd=dest,\n )\n\n def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:\n repo_config = os.path.join(dest, self.dirname, \"hgrc\")\n config = configparser.RawConfigParser()\n try:\n config.read(repo_config)\n config.set(\"paths\", \"default\", url.secret)\n with open(repo_config, \"w\") as config_file:\n config.write(config_file)\n except (OSError, configparser.NoSectionError) as exc:\n logger.warning(\"Could not switch Mercurial repository to %s: %s\", url, exc)\n else:\n cmd_args = make_command(\"update\", \"-q\", rev_options.to_args())\n self.run_command(cmd_args, cwd=dest)\n\n def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:\n self.run_command([\"pull\", \"-q\"], cwd=dest)\n cmd_args = make_command(\"update\", \"-q\", rev_options.to_args())\n self.run_command(cmd_args, cwd=dest)\n\n @classmethod\n def get_remote_url(cls, location: str) -> str:\n url = cls.run_command(\n [\"showconfig\", \"paths.default\"],\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n ).strip()\n if cls._is_local_repository(url):\n url = path_to_url(url)\n return url.strip()\n\n @classmethod\n def get_revision(cls, location: str) -> str:\n \"\"\"\n Return the repository-local changeset revision number, as an integer.\n \"\"\"\n current_revision = cls.run_command(\n [\"parents\", \"--template={rev}\"],\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n ).strip()\n return current_revision\n\n @classmethod\n def get_requirement_revision(cls, location: str) -> str:\n \"\"\"\n Return the changeset identification hash, as a 40-character\n hexadecimal string\n \"\"\"\n current_rev_hash = cls.run_command(\n [\"parents\", \"--template={node}\"],\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n ).strip()\n return current_rev_hash\n\n @classmethod\n def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:\n \"\"\"Always assume the versions don't match\"\"\"\n return False\n\n @classmethod\n def get_subdirectory(cls, location: str) -> Optional[str]:\n \"\"\"\n Return the path to Python project root, relative to the repo root.\n Return None if the project root is in the repo root.\n \"\"\"\n # find the repo root\n repo_root = cls.run_command(\n [\"root\"], show_stdout=False, stdout_only=True, cwd=location\n ).strip()\n if not os.path.isabs(repo_root):\n repo_root = os.path.abspath(os.path.join(location, repo_root))\n return find_path_to_project_root_from_repo_root(location, repo_root)\n\n @classmethod\n def get_repository_root(cls, location: str) -> Optional[str]:\n loc = super().get_repository_root(location)\n if loc:\n return loc\n try:\n r = cls.run_command(\n [\"root\"],\n cwd=location,\n show_stdout=False,\n stdout_only=True,\n on_returncode=\"raise\",\n log_failed_cmd=False,\n )\n except BadCommand:\n logger.debug(\n \"could not determine if %s is under hg control \"\n \"because hg is not available\",\n location,\n )\n return None\n except InstallationError:\n return None\n return os.path.normpath(r.rstrip(\"\\r\\n\"))\n\n\nvcs.register(Mercurial)\n", "path": "src/pip/_internal/vcs/mercurial.py"}]} | 2,544 | 142 |