problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_38772
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.aws-928
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add partition strategy to placement groups module
### Summary
Add partition as a strategy for the community.aws.ec2_placement_group module.
Also add an option to choose the actual number of partitions (min 2 which is the default and a max of 7). This option would be taken into account when the strategy is set to partition.
### Issue Type
Feature Idea
### Component Name
ec2_placement_group
### Additional Information
Possible module definition
```yaml (paste below)
- name: Create a Spread placement group.
community.aws.ec2_placement_group:
name: my-cluster
state: present
strategy: partition
partition_number: 4
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/modules/ec2_placement_group.py]
1 #!/usr/bin/python
2 # Copyright (c) 2017 Ansible Project
3 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
4
5 from __future__ import absolute_import, division, print_function
6 __metaclass__ = type
7
8
9 DOCUMENTATION = '''
10 ---
11 module: ec2_placement_group
12 version_added: 1.0.0
13 short_description: Create or delete an EC2 Placement Group
14 description:
15 - Create an EC2 Placement Group; if the placement group already exists,
16 nothing is done. Or, delete an existing placement group. If the placement
17 group is absent, do nothing. See also
18 U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html)
19 author: "Brad Macpherson (@iiibrad)"
20 options:
21 name:
22 description:
23 - The name for the placement group.
24 required: true
25 type: str
26 state:
27 description:
28 - Create or delete placement group.
29 default: present
30 choices: [ 'present', 'absent' ]
31 type: str
32 strategy:
33 description:
34 - Placement group strategy. Cluster will cluster instances into a
35 low-latency group in a single Availability Zone, while Spread spreads
36 instances across underlying hardware.
37 default: cluster
38 choices: [ 'cluster', 'spread' ]
39 type: str
40 extends_documentation_fragment:
41 - amazon.aws.aws
42 - amazon.aws.ec2
43
44 '''
45
46 EXAMPLES = '''
47 # Note: These examples do not set authentication details, see the AWS Guide
48 # for details.
49
50 - name: Create a placement group.
51 community.aws.ec2_placement_group:
52 name: my-cluster
53 state: present
54
55 - name: Create a Spread placement group.
56 community.aws.ec2_placement_group:
57 name: my-cluster
58 state: present
59 strategy: spread
60
61 - name: Delete a placement group.
62 community.aws.ec2_placement_group:
63 name: my-cluster
64 state: absent
65
66 '''
67
68
69 RETURN = '''
70 placement_group:
71 description: Placement group attributes
72 returned: when state != absent
73 type: complex
74 contains:
75 name:
76 description: PG name
77 type: str
78 sample: my-cluster
79 state:
80 description: PG state
81 type: str
82 sample: "available"
83 strategy:
84 description: PG strategy
85 type: str
86 sample: "cluster"
87
88 '''
89
90 try:
91 import botocore
92 except ImportError:
93 pass # caught by AnsibleAWSModule
94
95 from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
96 from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
97 from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
98
99
100 @AWSRetry.exponential_backoff()
101 def get_placement_group_details(connection, module):
102 name = module.params.get("name")
103 try:
104 response = connection.describe_placement_groups(
105 Filters=[{
106 "Name": "group-name",
107 "Values": [name]
108 }])
109 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
110 module.fail_json_aws(
111 e,
112 msg="Couldn't find placement group named [%s]" % name)
113
114 if len(response['PlacementGroups']) != 1:
115 return None
116 else:
117 placement_group = response['PlacementGroups'][0]
118 return {
119 "name": placement_group['GroupName'],
120 "state": placement_group['State'],
121 "strategy": placement_group['Strategy'],
122 }
123
124
125 @AWSRetry.exponential_backoff()
126 def create_placement_group(connection, module):
127 name = module.params.get("name")
128 strategy = module.params.get("strategy")
129
130 try:
131 connection.create_placement_group(
132 GroupName=name, Strategy=strategy, DryRun=module.check_mode)
133 except is_boto3_error_code('DryRunOperation'):
134 module.exit_json(changed=True, placement_group={
135 "name": name,
136 "state": 'DryRun',
137 "strategy": strategy,
138 })
139 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
140 module.fail_json_aws(
141 e,
142 msg="Couldn't create placement group [%s]" % name)
143
144 module.exit_json(changed=True,
145 placement_group=get_placement_group_details(
146 connection, module
147 ))
148
149
150 @AWSRetry.exponential_backoff()
151 def delete_placement_group(connection, module):
152 name = module.params.get("name")
153
154 try:
155 connection.delete_placement_group(
156 GroupName=name, DryRun=module.check_mode)
157 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
158 module.fail_json_aws(
159 e,
160 msg="Couldn't delete placement group [%s]" % name)
161
162 module.exit_json(changed=True)
163
164
165 def main():
166 argument_spec = dict(
167 name=dict(required=True, type='str'),
168 state=dict(default='present', choices=['present', 'absent']),
169 strategy=dict(default='cluster', choices=['cluster', 'spread'])
170 )
171
172 module = AnsibleAWSModule(
173 argument_spec=argument_spec,
174 supports_check_mode=True
175 )
176
177 connection = module.client('ec2')
178
179 state = module.params.get("state")
180
181 if state == 'present':
182 placement_group = get_placement_group_details(connection, module)
183 if placement_group is None:
184 create_placement_group(connection, module)
185 else:
186 strategy = module.params.get("strategy")
187 if placement_group['strategy'] == strategy:
188 module.exit_json(
189 changed=False, placement_group=placement_group)
190 else:
191 name = module.params.get("name")
192 module.fail_json(
193 msg=("Placement group '{}' exists, can't change strategy" +
194 " from '{}' to '{}'").format(
195 name,
196 placement_group['strategy'],
197 strategy))
198
199 elif state == 'absent':
200 placement_group = get_placement_group_details(connection, module)
201 if placement_group is None:
202 module.exit_json(changed=False)
203 else:
204 delete_placement_group(connection, module)
205
206
207 if __name__ == '__main__':
208 main()
209
[end of plugins/modules/ec2_placement_group.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/modules/ec2_placement_group.py b/plugins/modules/ec2_placement_group.py
--- a/plugins/modules/ec2_placement_group.py
+++ b/plugins/modules/ec2_placement_group.py
@@ -23,6 +23,13 @@
- The name for the placement group.
required: true
type: str
+ partition_count:
+ description:
+ - The number of partitions.
+ - Valid only when I(Strategy) is set to C(partition).
+ - Must be a value between C(1) and C(7).
+ type: int
+ version_added: 3.1.0
state:
description:
- Create or delete placement group.
@@ -35,7 +42,7 @@
low-latency group in a single Availability Zone, while Spread spreads
instances across underlying hardware.
default: cluster
- choices: [ 'cluster', 'spread' ]
+ choices: [ 'cluster', 'spread', 'partition' ]
type: str
extends_documentation_fragment:
- amazon.aws.aws
@@ -58,6 +65,13 @@
state: present
strategy: spread
+- name: Create a Partition strategy placement group.
+ community.aws.ec2_placement_group:
+ name: my-cluster
+ state: present
+ strategy: partition
+ partition_count: 3
+
- name: Delete a placement group.
community.aws.ec2_placement_group:
name: my-cluster
@@ -126,10 +140,21 @@
def create_placement_group(connection, module):
name = module.params.get("name")
strategy = module.params.get("strategy")
+ partition_count = module.params.get("partition_count")
+
+ if strategy != 'partition' and partition_count:
+ module.fail_json(
+ msg="'partition_count' can only be set when strategy is set to 'partition'.")
+
+ params = {}
+ params['GroupName'] = name
+ params['Strategy'] = strategy
+ if partition_count:
+ params['PartitionCount'] = partition_count
+ params['DryRun'] = module.check_mode
try:
- connection.create_placement_group(
- GroupName=name, Strategy=strategy, DryRun=module.check_mode)
+ connection.create_placement_group(**params)
except is_boto3_error_code('DryRunOperation'):
module.exit_json(changed=True, placement_group={
"name": name,
@@ -165,8 +190,9 @@
def main():
argument_spec = dict(
name=dict(required=True, type='str'),
+ partition_count=dict(type='int'),
state=dict(default='present', choices=['present', 'absent']),
- strategy=dict(default='cluster', choices=['cluster', 'spread'])
+ strategy=dict(default='cluster', choices=['cluster', 'spread', 'partition'])
)
module = AnsibleAWSModule(
|
{"golden_diff": "diff --git a/plugins/modules/ec2_placement_group.py b/plugins/modules/ec2_placement_group.py\n--- a/plugins/modules/ec2_placement_group.py\n+++ b/plugins/modules/ec2_placement_group.py\n@@ -23,6 +23,13 @@\n - The name for the placement group.\n required: true\n type: str\n+ partition_count:\n+ description:\n+ - The number of partitions.\n+ - Valid only when I(Strategy) is set to C(partition).\n+ - Must be a value between C(1) and C(7).\n+ type: int\n+ version_added: 3.1.0\n state:\n description:\n - Create or delete placement group.\n@@ -35,7 +42,7 @@\n low-latency group in a single Availability Zone, while Spread spreads\n instances across underlying hardware.\n default: cluster\n- choices: [ 'cluster', 'spread' ]\n+ choices: [ 'cluster', 'spread', 'partition' ]\n type: str\n extends_documentation_fragment:\n - amazon.aws.aws\n@@ -58,6 +65,13 @@\n state: present\n strategy: spread\n \n+- name: Create a Partition strategy placement group.\n+ community.aws.ec2_placement_group:\n+ name: my-cluster\n+ state: present\n+ strategy: partition\n+ partition_count: 3\n+\n - name: Delete a placement group.\n community.aws.ec2_placement_group:\n name: my-cluster\n@@ -126,10 +140,21 @@\n def create_placement_group(connection, module):\n name = module.params.get(\"name\")\n strategy = module.params.get(\"strategy\")\n+ partition_count = module.params.get(\"partition_count\")\n+\n+ if strategy != 'partition' and partition_count:\n+ module.fail_json(\n+ msg=\"'partition_count' can only be set when strategy is set to 'partition'.\")\n+\n+ params = {}\n+ params['GroupName'] = name\n+ params['Strategy'] = strategy\n+ if partition_count:\n+ params['PartitionCount'] = partition_count\n+ params['DryRun'] = module.check_mode\n \n try:\n- connection.create_placement_group(\n- GroupName=name, Strategy=strategy, DryRun=module.check_mode)\n+ connection.create_placement_group(**params)\n except is_boto3_error_code('DryRunOperation'):\n module.exit_json(changed=True, placement_group={\n \"name\": name,\n@@ -165,8 +190,9 @@\n def main():\n argument_spec = dict(\n name=dict(required=True, type='str'),\n+ partition_count=dict(type='int'),\n state=dict(default='present', choices=['present', 'absent']),\n- strategy=dict(default='cluster', choices=['cluster', 'spread'])\n+ strategy=dict(default='cluster', choices=['cluster', 'spread', 'partition'])\n )\n \n module = AnsibleAWSModule(\n", "issue": "Add partition strategy to placement groups module\n### Summary\n\nAdd partition as a strategy for the community.aws.ec2_placement_group module.\r\n\r\nAlso add an option to choose the actual number of partitions (min 2 which is the default and a max of 7). 
This option would be taken into account when the strategy is set to partition.\n\n### Issue Type\n\nFeature Idea\n\n### Component Name\n\nec2_placement_group\n\n### Additional Information\n\nPossible module definition\r\n```yaml (paste below)\r\n- name: Create a Spread placement group.\r\n community.aws.ec2_placement_group:\r\n name: my-cluster\r\n state: present\r\n strategy: partition\r\n partition_number: 4\r\n```\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# Copyright (c) 2017 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ec2_placement_group\nversion_added: 1.0.0\nshort_description: Create or delete an EC2 Placement Group\ndescription:\n - Create an EC2 Placement Group; if the placement group already exists,\n nothing is done. Or, delete an existing placement group. If the placement\n group is absent, do nothing. See also\n U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html)\nauthor: \"Brad Macpherson (@iiibrad)\"\noptions:\n name:\n description:\n - The name for the placement group.\n required: true\n type: str\n state:\n description:\n - Create or delete placement group.\n default: present\n choices: [ 'present', 'absent' ]\n type: str\n strategy:\n description:\n - Placement group strategy. Cluster will cluster instances into a\n low-latency group in a single Availability Zone, while Spread spreads\n instances across underlying hardware.\n default: cluster\n choices: [ 'cluster', 'spread' ]\n type: str\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n\n'''\n\nEXAMPLES = '''\n# Note: These examples do not set authentication details, see the AWS Guide\n# for details.\n\n- name: Create a placement group.\n community.aws.ec2_placement_group:\n name: my-cluster\n state: present\n\n- name: Create a Spread placement group.\n community.aws.ec2_placement_group:\n name: my-cluster\n state: present\n strategy: spread\n\n- name: Delete a placement group.\n community.aws.ec2_placement_group:\n name: my-cluster\n state: absent\n\n'''\n\n\nRETURN = '''\nplacement_group:\n description: Placement group attributes\n returned: when state != absent\n type: complex\n contains:\n name:\n description: PG name\n type: str\n sample: my-cluster\n state:\n description: PG state\n type: str\n sample: \"available\"\n strategy:\n description: PG strategy\n type: str\n sample: \"cluster\"\n\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # caught by AnsibleAWSModule\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\[email protected]_backoff()\ndef get_placement_group_details(connection, module):\n name = module.params.get(\"name\")\n try:\n response = connection.describe_placement_groups(\n Filters=[{\n \"Name\": \"group-name\",\n \"Values\": [name]\n }])\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e,\n msg=\"Couldn't find placement group named [%s]\" % name)\n\n if len(response['PlacementGroups']) != 1:\n return None\n else:\n placement_group = response['PlacementGroups'][0]\n return {\n \"name\": 
placement_group['GroupName'],\n \"state\": placement_group['State'],\n \"strategy\": placement_group['Strategy'],\n }\n\n\[email protected]_backoff()\ndef create_placement_group(connection, module):\n name = module.params.get(\"name\")\n strategy = module.params.get(\"strategy\")\n\n try:\n connection.create_placement_group(\n GroupName=name, Strategy=strategy, DryRun=module.check_mode)\n except is_boto3_error_code('DryRunOperation'):\n module.exit_json(changed=True, placement_group={\n \"name\": name,\n \"state\": 'DryRun',\n \"strategy\": strategy,\n })\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n module.fail_json_aws(\n e,\n msg=\"Couldn't create placement group [%s]\" % name)\n\n module.exit_json(changed=True,\n placement_group=get_placement_group_details(\n connection, module\n ))\n\n\[email protected]_backoff()\ndef delete_placement_group(connection, module):\n name = module.params.get(\"name\")\n\n try:\n connection.delete_placement_group(\n GroupName=name, DryRun=module.check_mode)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e,\n msg=\"Couldn't delete placement group [%s]\" % name)\n\n module.exit_json(changed=True)\n\n\ndef main():\n argument_spec = dict(\n name=dict(required=True, type='str'),\n state=dict(default='present', choices=['present', 'absent']),\n strategy=dict(default='cluster', choices=['cluster', 'spread'])\n )\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n connection = module.client('ec2')\n\n state = module.params.get(\"state\")\n\n if state == 'present':\n placement_group = get_placement_group_details(connection, module)\n if placement_group is None:\n create_placement_group(connection, module)\n else:\n strategy = module.params.get(\"strategy\")\n if placement_group['strategy'] == strategy:\n module.exit_json(\n changed=False, placement_group=placement_group)\n else:\n name = module.params.get(\"name\")\n module.fail_json(\n msg=(\"Placement group '{}' exists, can't change strategy\" +\n \" from '{}' to '{}'\").format(\n name,\n placement_group['strategy'],\n strategy))\n\n elif state == 'absent':\n placement_group = get_placement_group_details(connection, module)\n if placement_group is None:\n module.exit_json(changed=False)\n else:\n delete_placement_group(connection, module)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/ec2_placement_group.py"}]}
| 2,564 | 640 |
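The golden diff in this row only allows `partition_count` alongside the `partition` strategy before the parameters reach `create_placement_group`. A minimal sketch of that gating logic on its own, with a plain `ValueError` standing in for the module's `fail_json` call (the helper name is illustrative, not part of the collection):

```python
def build_placement_group_params(name, strategy, partition_count=None, check_mode=False):
    """Build kwargs for EC2 create_placement_group, mirroring the patched validation."""
    # The patch rejects partition_count unless the 'partition' strategy is selected.
    if strategy != 'partition' and partition_count:
        raise ValueError(
            "'partition_count' can only be set when strategy is set to 'partition'.")

    params = {'GroupName': name, 'Strategy': strategy, 'DryRun': check_mode}
    if partition_count:
        params['PartitionCount'] = partition_count
    return params


print(build_placement_group_params('my-cluster', 'partition', partition_count=3))
# {'GroupName': 'my-cluster', 'Strategy': 'partition', 'DryRun': False, 'PartitionCount': 3}
```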
gh_patches_debug_9221
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-2030
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
multiple mutable rev warnings issued on `autoupdate`
when running `pre-commit autoupdate` I get 2 warnings per mutable rev, when I expected 0 see #974
```sh
~/projects/pytest-cov pre-commit-autoupdate pipx run pre-commit autoupdate
[WARNING] The 'rev' field of repo 'https://github.com/pre-commit/pre-commit-hooks' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.
[WARNING] The 'rev' field of repo 'https://github.com/timothycrosley/isort' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.
[WARNING] The 'rev' field of repo 'https://gitlab.com/pycqa/flake8' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.
[WARNING] The 'rev' field of repo 'https://github.com/pre-commit/pre-commit-hooks' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.
[WARNING] The 'rev' field of repo 'https://github.com/timothycrosley/isort' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.
[WARNING] The 'rev' field of repo 'https://gitlab.com/pycqa/flake8' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.
Updating https://github.com/pre-commit/pre-commit-hooks ... updating master -> v4.0.1.
Updating https://github.com/timothycrosley/isort ... updating master -> 5.9.3.
Updating https://gitlab.com/pycqa/flake8 ... updating master -> 3.9.2.
```
</issue>
<code>
[start of pre_commit/commands/migrate_config.py]
1 import re
2 import textwrap
3
4 import yaml
5
6 from pre_commit.clientlib import load_config
7 from pre_commit.util import yaml_load
8
9
10 def _is_header_line(line: str) -> bool:
11 return line.startswith(('#', '---')) or not line.strip()
12
13
14 def _migrate_map(contents: str) -> str:
15 if isinstance(yaml_load(contents), list):
16 # Find the first non-header line
17 lines = contents.splitlines(True)
18 i = 0
19 # Only loop on non empty configuration file
20 while i < len(lines) and _is_header_line(lines[i]):
21 i += 1
22
23 header = ''.join(lines[:i])
24 rest = ''.join(lines[i:])
25
26 # If they are using the "default" flow style of yaml, this operation
27 # will yield a valid configuration
28 try:
29 trial_contents = f'{header}repos:\n{rest}'
30 yaml_load(trial_contents)
31 contents = trial_contents
32 except yaml.YAMLError:
33 contents = f'{header}repos:\n{textwrap.indent(rest, " " * 4)}'
34
35 return contents
36
37
38 def _migrate_sha_to_rev(contents: str) -> str:
39 return re.sub(r'(\n\s+)sha:', r'\1rev:', contents)
40
41
42 def migrate_config(config_file: str, quiet: bool = False) -> int:
43 # ensure that the configuration is a valid pre-commit configuration
44 load_config(config_file)
45
46 with open(config_file) as f:
47 orig_contents = contents = f.read()
48
49 contents = _migrate_map(contents)
50 contents = _migrate_sha_to_rev(contents)
51
52 if contents != orig_contents:
53 with open(config_file, 'w') as f:
54 f.write(contents)
55
56 print('Configuration has been migrated.')
57 elif not quiet:
58 print('Configuration is already migrated.')
59 return 0
60
[end of pre_commit/commands/migrate_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/commands/migrate_config.py b/pre_commit/commands/migrate_config.py
--- a/pre_commit/commands/migrate_config.py
+++ b/pre_commit/commands/migrate_config.py
@@ -3,7 +3,6 @@
import yaml
-from pre_commit.clientlib import load_config
from pre_commit.util import yaml_load
@@ -40,9 +39,6 @@
def migrate_config(config_file: str, quiet: bool = False) -> int:
- # ensure that the configuration is a valid pre-commit configuration
- load_config(config_file)
-
with open(config_file) as f:
orig_contents = contents = f.read()
|
{"golden_diff": "diff --git a/pre_commit/commands/migrate_config.py b/pre_commit/commands/migrate_config.py\n--- a/pre_commit/commands/migrate_config.py\n+++ b/pre_commit/commands/migrate_config.py\n@@ -3,7 +3,6 @@\n \n import yaml\n \n-from pre_commit.clientlib import load_config\n from pre_commit.util import yaml_load\n \n \n@@ -40,9 +39,6 @@\n \n \n def migrate_config(config_file: str, quiet: bool = False) -> int:\n- # ensure that the configuration is a valid pre-commit configuration\n- load_config(config_file)\n-\n with open(config_file) as f:\n orig_contents = contents = f.read()\n", "issue": "multiple mutable rev warnings issued on `autoupdate`\nwhen running `pre-commit autoupdate` I get 2 warnings per mutable rev, when I expected 0 see #974\r\n\r\n```sh\r\n~/projects/pytest-cov \ue0b0 \ue0a0 pre-commit-autoupdate \ue0b0 pipx run pre-commit autoupdate \r\n[WARNING] The 'rev' field of repo 'https://github.com/pre-commit/pre-commit-hooks' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.\r\n[WARNING] The 'rev' field of repo 'https://github.com/timothycrosley/isort' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.\r\n[WARNING] The 'rev' field of repo 'https://gitlab.com/pycqa/flake8' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.\r\n[WARNING] The 'rev' field of repo 'https://github.com/pre-commit/pre-commit-hooks' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.\r\n[WARNING] The 'rev' field of repo 'https://github.com/timothycrosley/isort' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.\r\n[WARNING] The 'rev' field of repo 'https://gitlab.com/pycqa/flake8' appears to be a mutable reference (moving tag / branch). Mutable references are never updated after first install and are not supported. See https://pre-commit.com/#using-the-latest-version-for-a-repository for more details. Hint: `pre-commit autoupdate` often fixes this.\r\nUpdating https://github.com/pre-commit/pre-commit-hooks ... updating master -> v4.0.1.\r\nUpdating https://github.com/timothycrosley/isort ... updating master -> 5.9.3.\r\nUpdating https://gitlab.com/pycqa/flake8 ... 
updating master -> 3.9.2.\r\n```\n", "before_files": [{"content": "import re\nimport textwrap\n\nimport yaml\n\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.util import yaml_load\n\n\ndef _is_header_line(line: str) -> bool:\n return line.startswith(('#', '---')) or not line.strip()\n\n\ndef _migrate_map(contents: str) -> str:\n if isinstance(yaml_load(contents), list):\n # Find the first non-header line\n lines = contents.splitlines(True)\n i = 0\n # Only loop on non empty configuration file\n while i < len(lines) and _is_header_line(lines[i]):\n i += 1\n\n header = ''.join(lines[:i])\n rest = ''.join(lines[i:])\n\n # If they are using the \"default\" flow style of yaml, this operation\n # will yield a valid configuration\n try:\n trial_contents = f'{header}repos:\\n{rest}'\n yaml_load(trial_contents)\n contents = trial_contents\n except yaml.YAMLError:\n contents = f'{header}repos:\\n{textwrap.indent(rest, \" \" * 4)}'\n\n return contents\n\n\ndef _migrate_sha_to_rev(contents: str) -> str:\n return re.sub(r'(\\n\\s+)sha:', r'\\1rev:', contents)\n\n\ndef migrate_config(config_file: str, quiet: bool = False) -> int:\n # ensure that the configuration is a valid pre-commit configuration\n load_config(config_file)\n\n with open(config_file) as f:\n orig_contents = contents = f.read()\n\n contents = _migrate_map(contents)\n contents = _migrate_sha_to_rev(contents)\n\n if contents != orig_contents:\n with open(config_file, 'w') as f:\n f.write(contents)\n\n print('Configuration has been migrated.')\n elif not quiet:\n print('Configuration is already migrated.')\n return 0\n", "path": "pre_commit/commands/migrate_config.py"}]}
| 1,709 | 146 |
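The fix in this row drops the extra `load_config` call from `migrate_config`, so the configuration file is validated, and its mutable-rev warning emitted, only once per `autoupdate` run. A small self-contained illustration of why the second validation pass doubled the warnings, using the standard `warnings` module instead of pre-commit's real logger (all names here are hypothetical stand-ins, not pre-commit's API):

```python
import warnings


def validate(config):
    # Stand-in for clientlib.load_config: warns when a rev looks mutable.
    if config.get('rev') in {'master', 'main'}:
        warnings.warn(f"rev {config['rev']!r} appears to be a mutable reference")
    return config


def migrate(config, validate_first):
    if validate_first:  # pre-patch behaviour: a redundant validation, hence a second warning
        validate(config)
    return config


def autoupdate(config, validate_in_migrate):
    validate(config)  # the one intentional validation pass
    migrate(config, validate_first=validate_in_migrate)


for redundant in (True, False):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        autoupdate({'rev': 'master'}, validate_in_migrate=redundant)
    print(len(caught))  # 2 before the patch, 1 after
```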
gh_patches_debug_7821
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-1324
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AsyncPG Instrumentation Span Names are too long when using query string
**Is your feature request related to a problem?**
Not a problem per se however, the `asyncpg` instrumentation uses sets span names as the query string which results in some very messing looking trace names in jaeger, datadog, etc and outright doesn't work with promscale due to long queries exhaust the available bytes for btree indexes.
**Describe the solution you'd like**
- The ability to change the name of the span with a hook or something similar. The `httpx` instrumentation provides hooks that receive the span and the name can be updated there.
- Just use a shorter or truncated version of the query as the name.
Which alternative solutions or features have you considered?
Not using the `asyncpg` instrumentation and manually instrumenting specific queries.
</issue>
<code>
[start of instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This library allows tracing PostgreSQL queries made by the
17 `asyncpg <https://magicstack.github.io/asyncpg/current/>`_ library.
18
19 Usage
20 -----
21
22 .. code-block:: python
23
24 import asyncpg
25 from opentelemetry.instrumentation.asyncpg import AsyncPGInstrumentor
26
27 # You can optionally pass a custom TracerProvider to AsyncPGInstrumentor.instrument()
28 AsyncPGInstrumentor().instrument()
29 conn = await asyncpg.connect(user='user', password='password',
30 database='database', host='127.0.0.1')
31 values = await conn.fetch('''SELECT 42;''')
32
33 API
34 ---
35 """
36
37 from typing import Collection
38
39 import asyncpg
40 import wrapt
41
42 from opentelemetry import trace
43 from opentelemetry.instrumentation.asyncpg.package import _instruments
44 from opentelemetry.instrumentation.asyncpg.version import __version__
45 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
46 from opentelemetry.instrumentation.utils import unwrap
47 from opentelemetry.semconv.trace import (
48 DbSystemValues,
49 NetTransportValues,
50 SpanAttributes,
51 )
52 from opentelemetry.trace import SpanKind
53 from opentelemetry.trace.status import Status, StatusCode
54
55
56 def _hydrate_span_from_args(connection, query, parameters) -> dict:
57 """Get network and database attributes from connection."""
58 span_attributes = {
59 SpanAttributes.DB_SYSTEM: DbSystemValues.POSTGRESQL.value
60 }
61
62 # connection contains _params attribute which is a namedtuple ConnectionParameters.
63 # https://github.com/MagicStack/asyncpg/blob/master/asyncpg/connection.py#L68
64
65 params = getattr(connection, "_params", None)
66 dbname = getattr(params, "database", None)
67 if dbname:
68 span_attributes[SpanAttributes.DB_NAME] = dbname
69 user = getattr(params, "user", None)
70 if user:
71 span_attributes[SpanAttributes.DB_USER] = user
72
73 # connection contains _addr attribute which is either a host/port tuple, or unix socket string
74 # https://magicstack.github.io/asyncpg/current/_modules/asyncpg/connection.html
75 addr = getattr(connection, "_addr", None)
76 if isinstance(addr, tuple):
77 span_attributes[SpanAttributes.NET_PEER_NAME] = addr[0]
78 span_attributes[SpanAttributes.NET_PEER_PORT] = addr[1]
79 span_attributes[
80 SpanAttributes.NET_TRANSPORT
81 ] = NetTransportValues.IP_TCP.value
82 elif isinstance(addr, str):
83 span_attributes[SpanAttributes.NET_PEER_NAME] = addr
84 span_attributes[
85 SpanAttributes.NET_TRANSPORT
86 ] = NetTransportValues.UNIX.value
87
88 if query is not None:
89 span_attributes[SpanAttributes.DB_STATEMENT] = query
90
91 if parameters is not None and len(parameters) > 0:
92 span_attributes["db.statement.parameters"] = str(parameters)
93
94 return span_attributes
95
96
97 class AsyncPGInstrumentor(BaseInstrumentor):
98 def __init__(self, capture_parameters=False):
99 super().__init__()
100 self.capture_parameters = capture_parameters
101 self._tracer = None
102
103 def instrumentation_dependencies(self) -> Collection[str]:
104 return _instruments
105
106 def _instrument(self, **kwargs):
107 tracer_provider = kwargs.get("tracer_provider")
108 self._tracer = trace.get_tracer(__name__, __version__, tracer_provider)
109
110 for method in [
111 "Connection.execute",
112 "Connection.executemany",
113 "Connection.fetch",
114 "Connection.fetchval",
115 "Connection.fetchrow",
116 ]:
117 wrapt.wrap_function_wrapper(
118 "asyncpg.connection", method, self._do_execute
119 )
120
121 def _uninstrument(self, **__):
122 for method in [
123 "execute",
124 "executemany",
125 "fetch",
126 "fetchval",
127 "fetchrow",
128 ]:
129 unwrap(asyncpg.Connection, method)
130
131 async def _do_execute(self, func, instance, args, kwargs):
132
133 exception = None
134 params = getattr(instance, "_params", {})
135 name = args[0] if args[0] else params.get("database", "postgresql")
136
137 with self._tracer.start_as_current_span(
138 name, kind=SpanKind.CLIENT
139 ) as span:
140 if span.is_recording():
141 span_attributes = _hydrate_span_from_args(
142 instance,
143 args[0],
144 args[1:] if self.capture_parameters else None,
145 )
146 for attribute, value in span_attributes.items():
147 span.set_attribute(attribute, value)
148
149 try:
150 result = await func(*args, **kwargs)
151 except Exception as exc: # pylint: disable=W0703
152 exception = exc
153 raise
154 finally:
155 if span.is_recording() and exception is not None:
156 span.set_status(Status(StatusCode.ERROR))
157
158 return result
159
[end of instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py b/instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py
@@ -134,6 +134,11 @@
params = getattr(instance, "_params", {})
name = args[0] if args[0] else params.get("database", "postgresql")
+ try:
+ name = name.split()[0]
+ except IndexError:
+ name = ""
+
with self._tracer.start_as_current_span(
name, kind=SpanKind.CLIENT
) as span:
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py b/instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py\n@@ -134,6 +134,11 @@\n params = getattr(instance, \"_params\", {})\n name = args[0] if args[0] else params.get(\"database\", \"postgresql\")\n \n+ try:\n+ name = name.split()[0]\n+ except IndexError:\n+ name = \"\"\n+\n with self._tracer.start_as_current_span(\n name, kind=SpanKind.CLIENT\n ) as span:\n", "issue": "AsyncPG Instrumentation Span Names are too long when using query string\n**Is your feature request related to a problem?**\r\nNot a problem per se however, the `asyncpg` instrumentation uses sets span names as the query string which results in some very messing looking trace names in jaeger, datadog, etc and outright doesn't work with promscale due to long queries exhaust the available bytes for btree indexes.\r\n\r\n**Describe the solution you'd like**\r\n- The ability to change the name of the span with a hook or something similar. The `httpx` instrumentation provides hooks that receive the span and the name can be updated there.\r\n- Just use a shorter or truncated version of the query as the name.\r\n\r\nWhich alternative solutions or features have you considered?\r\nNot using the `asyncpg` instrumentation and manually instrumenting specific queries.\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows tracing PostgreSQL queries made by the\n`asyncpg <https://magicstack.github.io/asyncpg/current/>`_ library.\n\nUsage\n-----\n\n.. 
code-block:: python\n\n import asyncpg\n from opentelemetry.instrumentation.asyncpg import AsyncPGInstrumentor\n\n # You can optionally pass a custom TracerProvider to AsyncPGInstrumentor.instrument()\n AsyncPGInstrumentor().instrument()\n conn = await asyncpg.connect(user='user', password='password',\n database='database', host='127.0.0.1')\n values = await conn.fetch('''SELECT 42;''')\n\nAPI\n---\n\"\"\"\n\nfrom typing import Collection\n\nimport asyncpg\nimport wrapt\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.asyncpg.package import _instruments\nfrom opentelemetry.instrumentation.asyncpg.version import __version__\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.utils import unwrap\nfrom opentelemetry.semconv.trace import (\n DbSystemValues,\n NetTransportValues,\n SpanAttributes,\n)\nfrom opentelemetry.trace import SpanKind\nfrom opentelemetry.trace.status import Status, StatusCode\n\n\ndef _hydrate_span_from_args(connection, query, parameters) -> dict:\n \"\"\"Get network and database attributes from connection.\"\"\"\n span_attributes = {\n SpanAttributes.DB_SYSTEM: DbSystemValues.POSTGRESQL.value\n }\n\n # connection contains _params attribute which is a namedtuple ConnectionParameters.\n # https://github.com/MagicStack/asyncpg/blob/master/asyncpg/connection.py#L68\n\n params = getattr(connection, \"_params\", None)\n dbname = getattr(params, \"database\", None)\n if dbname:\n span_attributes[SpanAttributes.DB_NAME] = dbname\n user = getattr(params, \"user\", None)\n if user:\n span_attributes[SpanAttributes.DB_USER] = user\n\n # connection contains _addr attribute which is either a host/port tuple, or unix socket string\n # https://magicstack.github.io/asyncpg/current/_modules/asyncpg/connection.html\n addr = getattr(connection, \"_addr\", None)\n if isinstance(addr, tuple):\n span_attributes[SpanAttributes.NET_PEER_NAME] = addr[0]\n span_attributes[SpanAttributes.NET_PEER_PORT] = addr[1]\n span_attributes[\n SpanAttributes.NET_TRANSPORT\n ] = NetTransportValues.IP_TCP.value\n elif isinstance(addr, str):\n span_attributes[SpanAttributes.NET_PEER_NAME] = addr\n span_attributes[\n SpanAttributes.NET_TRANSPORT\n ] = NetTransportValues.UNIX.value\n\n if query is not None:\n span_attributes[SpanAttributes.DB_STATEMENT] = query\n\n if parameters is not None and len(parameters) > 0:\n span_attributes[\"db.statement.parameters\"] = str(parameters)\n\n return span_attributes\n\n\nclass AsyncPGInstrumentor(BaseInstrumentor):\n def __init__(self, capture_parameters=False):\n super().__init__()\n self.capture_parameters = capture_parameters\n self._tracer = None\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n tracer_provider = kwargs.get(\"tracer_provider\")\n self._tracer = trace.get_tracer(__name__, __version__, tracer_provider)\n\n for method in [\n \"Connection.execute\",\n \"Connection.executemany\",\n \"Connection.fetch\",\n \"Connection.fetchval\",\n \"Connection.fetchrow\",\n ]:\n wrapt.wrap_function_wrapper(\n \"asyncpg.connection\", method, self._do_execute\n )\n\n def _uninstrument(self, **__):\n for method in [\n \"execute\",\n \"executemany\",\n \"fetch\",\n \"fetchval\",\n \"fetchrow\",\n ]:\n unwrap(asyncpg.Connection, method)\n\n async def _do_execute(self, func, instance, args, kwargs):\n\n exception = None\n params = getattr(instance, \"_params\", {})\n name = args[0] if args[0] else params.get(\"database\", 
\"postgresql\")\n\n with self._tracer.start_as_current_span(\n name, kind=SpanKind.CLIENT\n ) as span:\n if span.is_recording():\n span_attributes = _hydrate_span_from_args(\n instance,\n args[0],\n args[1:] if self.capture_parameters else None,\n )\n for attribute, value in span_attributes.items():\n span.set_attribute(attribute, value)\n\n try:\n result = await func(*args, **kwargs)\n except Exception as exc: # pylint: disable=W0703\n exception = exc\n raise\n finally:\n if span.is_recording() and exception is not None:\n span.set_status(Status(StatusCode.ERROR))\n\n return result\n", "path": "instrumentation/opentelemetry-instrumentation-asyncpg/src/opentelemetry/instrumentation/asyncpg/__init__.py"}]}
| 2,274 | 211 |
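The patch in this row shortens the asyncpg span name to the first word of the query, falling back to an empty string when the query is blank. A sketch of that truncation rule as a standalone function, assuming plain Python strings and a `postgresql` default comparable to the module's database-name fallback:

```python
def span_name_from_query(query, default='postgresql'):
    """Derive a short span name from a SQL string, mirroring the patched behaviour."""
    name = query if query else default
    try:
        return name.split()[0]  # e.g. 'SELECT', 'INSERT', 'UPDATE'
    except IndexError:          # query was all whitespace
        return ''


print(span_name_from_query('SELECT * FROM some_very_long_table WHERE id = $1'))  # SELECT
print(span_name_from_query(''))     # postgresql
print(span_name_from_query('   '))  # (empty string)
```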
gh_patches_debug_67093
|
rasdani/github-patches
|
git_diff
|
celery__celery-6774
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Events command always fails when camera is specified
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 5.05 and master (2411504f4164ac9acfa20007038d37591c6f57e5)
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:5.1.0b2 (singularity) kombu:5.1.0b1 py:3.9.5
billiard:3.6.4.0 py-amqp:5.0.6
platform -> system:Darwin arch:64bit
kernel version:20.4.0 imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:disabled
deprecated_settings: None
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: Unknown, tested on 3.9
* **Minimal Celery Version**: Unknown, tested on 5.05 and master
* **Minimal Kombu Version**: N/A
* **Minimal Broker Version**: N/A
* **Minimal Result Backend Version**: N/A
* **Minimal OS and/or Kernel Version**: N/A
* **Minimal Broker Client Version**: N/A
* **Minimal Result Backend Client Version**: N/A
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
amqp==5.0.6
billiard==3.6.4.0
celery @ git+https://github.com/celery/celery.git@2411504f4164ac9acfa20007038d37591c6f57e5
click==7.1.2
click-didyoumean==0.0.3
click-plugins==1.1.1
click-repl==0.1.6
kombu==5.1.0b1
prompt-toolkit==3.0.18
pytz==2021.1
six==1.16.0
vine==5.0.0
wcwidth==0.2.5
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
`app.py`:
```python
import celery
app = celery.Celery()
```
`camera.py`:
```python
from celery.events.snapshot import Polaroid
class Camera(Polaroid):
def on_shutter(self, _):
print("Hello!")
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
The following command should (attempt to) start the event camera:
```
$ celery -A app events -c camera
...
ModuleNotFoundError: No module named 'camera'
```
(The bug is independent of whether the module `camera` exists.)
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
A backtrace is produced:
```
Traceback (most recent call last):
File "/Users/user/Desktop/tmp/venv/bin/celery", line 8, in <module>
sys.exit(main())
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/__main__.py", line 15, in main
sys.exit(_main())
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/bin/celery.py", line 213, in main
return celery(auto_envvar_prefix="CELERY")
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/bin/base.py", line 132, in caller
return f(ctx, *args, **kwargs)
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/bin/events.py", line 90, in events
return _run_evcam(camera, app=app, freq=frequency, maxrate=maxrate,
File "/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/bin/events.py", line 37, in _run_evcam
return cam()
TypeError: evcam() got an unexpected keyword argument 'executable'
```
</issue>
<code>
[start of celery/events/snapshot.py]
1 """Periodically store events in a database.
2
3 Consuming the events as a stream isn't always suitable
4 so this module implements a system to take snapshots of the
5 state of a cluster at regular intervals. There's a full
6 implementation of this writing the snapshots to a database
7 in :mod:`djcelery.snapshots` in the `django-celery` distribution.
8 """
9 from kombu.utils.limits import TokenBucket
10
11 from celery import platforms
12 from celery.app import app_or_default
13 from celery.utils.dispatch import Signal
14 from celery.utils.imports import instantiate
15 from celery.utils.log import get_logger
16 from celery.utils.time import rate
17 from celery.utils.timer2 import Timer
18
19 __all__ = ('Polaroid', 'evcam')
20
21 logger = get_logger('celery.evcam')
22
23
24 class Polaroid:
25 """Record event snapshots."""
26
27 timer = None
28 shutter_signal = Signal(name='shutter_signal', providing_args={'state'})
29 cleanup_signal = Signal(name='cleanup_signal')
30 clear_after = False
31
32 _tref = None
33 _ctref = None
34
35 def __init__(self, state, freq=1.0, maxrate=None,
36 cleanup_freq=3600.0, timer=None, app=None):
37 self.app = app_or_default(app)
38 self.state = state
39 self.freq = freq
40 self.cleanup_freq = cleanup_freq
41 self.timer = timer or self.timer or Timer()
42 self.logger = logger
43 self.maxrate = maxrate and TokenBucket(rate(maxrate))
44
45 def install(self):
46 self._tref = self.timer.call_repeatedly(self.freq, self.capture)
47 self._ctref = self.timer.call_repeatedly(
48 self.cleanup_freq, self.cleanup,
49 )
50
51 def on_shutter(self, state):
52 pass
53
54 def on_cleanup(self):
55 pass
56
57 def cleanup(self):
58 logger.debug('Cleanup: Running...')
59 self.cleanup_signal.send(sender=self.state)
60 self.on_cleanup()
61
62 def shutter(self):
63 if self.maxrate is None or self.maxrate.can_consume():
64 logger.debug('Shutter: %s', self.state)
65 self.shutter_signal.send(sender=self.state)
66 self.on_shutter(self.state)
67
68 def capture(self):
69 self.state.freeze_while(self.shutter, clear_after=self.clear_after)
70
71 def cancel(self):
72 if self._tref:
73 self._tref() # flush all received events.
74 self._tref.cancel()
75 if self._ctref:
76 self._ctref.cancel()
77
78 def __enter__(self):
79 self.install()
80 return self
81
82 def __exit__(self, *exc_info):
83 self.cancel()
84
85
86 def evcam(camera, freq=1.0, maxrate=None, loglevel=0,
87 logfile=None, pidfile=None, timer=None, app=None):
88 """Start snapshot recorder."""
89 app = app_or_default(app)
90
91 if pidfile:
92 platforms.create_pidlock(pidfile)
93
94 app.log.setup_logging_subsystem(loglevel, logfile)
95
96 print(f'-> evcam: Taking snapshots with {camera} (every {freq} secs.)')
97 state = app.events.State()
98 cam = instantiate(camera, state, app=app, freq=freq,
99 maxrate=maxrate, timer=timer)
100 cam.install()
101 conn = app.connection_for_read()
102 recv = app.events.Receiver(conn, handlers={'*': state.event})
103 try:
104 try:
105 recv.capture(limit=None)
106 except KeyboardInterrupt:
107 raise SystemExit
108 finally:
109 cam.cancel()
110 conn.close()
111
[end of celery/events/snapshot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/celery/events/snapshot.py b/celery/events/snapshot.py
--- a/celery/events/snapshot.py
+++ b/celery/events/snapshot.py
@@ -84,7 +84,8 @@
def evcam(camera, freq=1.0, maxrate=None, loglevel=0,
- logfile=None, pidfile=None, timer=None, app=None):
+ logfile=None, pidfile=None, timer=None, app=None,
+ **kwargs):
"""Start snapshot recorder."""
app = app_or_default(app)
|
{"golden_diff": "diff --git a/celery/events/snapshot.py b/celery/events/snapshot.py\n--- a/celery/events/snapshot.py\n+++ b/celery/events/snapshot.py\n@@ -84,7 +84,8 @@\n \n \n def evcam(camera, freq=1.0, maxrate=None, loglevel=0,\n- logfile=None, pidfile=None, timer=None, app=None):\n+ logfile=None, pidfile=None, timer=None, app=None,\n+ **kwargs):\n \"\"\"Start snapshot recorder.\"\"\"\n app = app_or_default(app)\n", "issue": "Events command always fails when camera is specified\n<!--\r\nPlease fill this template entirely and do not erase parts of it.\r\nWe reserve the right to close without a response\r\nbug reports which are incomplete.\r\n-->\r\n# Checklist\r\n<!--\r\nTo check an item on the list replace [ ] with [x].\r\n-->\r\n- [x] I have verified that the issue exists against the `master` branch of Celery.\r\n- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.\r\n- [x] I have read the relevant section in the\r\n [contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)\r\n on reporting bugs.\r\n- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)\r\n for similar or identical bug reports.\r\n- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)\r\n for existing proposed fixes.\r\n- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)\r\n to find out if the bug was already fixed in the master branch.\r\n- [x] I have included all related issues and possible duplicate issues\r\n in this issue (If there are none, check this box anyway).\r\n\r\n## Mandatory Debugging Information\r\n\r\n- [x] I have included the output of ``celery -A proj report`` in the issue.\r\n (if you are not able to do this, then at least specify the Celery\r\n version affected).\r\n- [x] I have verified that the issue exists against the `master` branch of Celery.\r\n- [x] I have included the contents of ``pip freeze`` in the issue.\r\n- [x] I have included all the versions of all the external dependencies required\r\n to reproduce this bug.\r\n\r\n## Optional Debugging Information\r\n<!--\r\nTry some of the below if you think they are relevant.\r\nIt will help us figure out the scope of the bug and how many users it affects.\r\n-->\r\n- [ ] I have tried reproducing the issue on more than one Python version\r\n and/or implementation.\r\n- [ ] I have tried reproducing the issue on more than one message broker and/or\r\n result backend.\r\n- [ ] I have tried reproducing the issue on more than one version of the message\r\n broker and/or result backend.\r\n- [ ] I have tried reproducing the issue on more than one operating system.\r\n- [ ] I have tried reproducing the issue on more than one workers pool.\r\n- [ ] I have tried reproducing the issue with autoscaling, retries,\r\n ETA/Countdown & rate limits disabled.\r\n- [ ] I have tried reproducing the issue after downgrading\r\n and/or upgrading Celery and its dependencies.\r\n\r\n## Related Issues and Possible Duplicates\r\n<!--\r\nPlease make sure to search and mention any related issues\r\nor possible duplicates to this issue as requested by the checklist above.\r\n\r\nThis may or may not include issues in other repositories that the Celery project\r\nmaintains or other repositories that are 
dependencies of Celery.\r\n\r\nIf you don't know how to mention issues, please refer to Github's documentation\r\non the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests\r\n-->\r\n\r\n#### Related Issues\r\n\r\n- None\r\n\r\n#### Possible Duplicates\r\n\r\n- None\r\n\r\n## Environment & Settings\r\n<!-- Include the contents of celery --version below -->\r\n**Celery version**: 5.05 and master (2411504f4164ac9acfa20007038d37591c6f57e5)\r\n<!-- Include the output of celery -A proj report below -->\r\n<details>\r\n<summary><b><code>celery report</code> Output:</b></summary>\r\n<p>\r\n\r\n```\r\nsoftware -> celery:5.1.0b2 (singularity) kombu:5.1.0b1 py:3.9.5\r\n billiard:3.6.4.0 py-amqp:5.0.6\r\nplatform -> system:Darwin arch:64bit\r\n kernel version:20.4.0 imp:CPython\r\nloader -> celery.loaders.app.AppLoader\r\nsettings -> transport:amqp results:disabled\r\n\r\ndeprecated_settings: None\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n# Steps to Reproduce\r\n\r\n## Required Dependencies\r\n<!-- Please fill the required dependencies to reproduce this issue -->\r\n* **Minimal Python Version**: Unknown, tested on 3.9\r\n* **Minimal Celery Version**: Unknown, tested on 5.05 and master\r\n* **Minimal Kombu Version**: N/A \r\n* **Minimal Broker Version**: N/A\r\n* **Minimal Result Backend Version**: N/A\r\n* **Minimal OS and/or Kernel Version**: N/A\r\n* **Minimal Broker Client Version**: N/A\r\n* **Minimal Result Backend Client Version**: N/A\r\n\r\n### Python Packages\r\n<!-- Please fill the contents of pip freeze below -->\r\n<details>\r\n<summary><b><code>pip freeze</code> Output:</b></summary>\r\n<p>\r\n\r\n```\r\namqp==5.0.6\r\nbilliard==3.6.4.0\r\ncelery @ git+https://github.com/celery/celery.git@2411504f4164ac9acfa20007038d37591c6f57e5\r\nclick==7.1.2\r\nclick-didyoumean==0.0.3\r\nclick-plugins==1.1.1\r\nclick-repl==0.1.6\r\nkombu==5.1.0b1\r\nprompt-toolkit==3.0.18\r\npytz==2021.1\r\nsix==1.16.0\r\nvine==5.0.0\r\nwcwidth==0.2.5\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n### Other Dependencies\r\n<!--\r\nPlease provide system dependencies, configuration files\r\nand other dependency information if applicable\r\n-->\r\n<details>\r\n<p>\r\nN/A\r\n</p>\r\n</details>\r\n\r\n## Minimally Reproducible Test Case\r\n<!--\r\nPlease provide a reproducible test case.\r\nRefer to the Reporting Bugs section in our contribution guide.\r\n\r\nWe prefer submitting test cases in the form of a PR to our integration test suite.\r\nIf you can provide one, please mention the PR number below.\r\nIf not, please attach the most minimal code example required to reproduce the issue below.\r\nIf the test case is too large, please include a link to a gist or a repository below.\r\n-->\r\n\r\n<details>\r\n<p>\r\n\r\n`app.py`:\r\n\r\n```python\r\nimport celery\r\n\r\napp = celery.Celery()\r\n```\r\n\r\n`camera.py`:\r\n\r\n```python\r\nfrom celery.events.snapshot import Polaroid\r\n\r\nclass Camera(Polaroid):\r\n def on_shutter(self, _):\r\n print(\"Hello!\")\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n# Expected Behavior\r\n<!-- Describe in detail what you expect to happen -->\r\n\r\nThe following command should (attempt to) start the event camera:\r\n\r\n```\r\n$ celery -A app events -c camera\r\n...\r\nModuleNotFoundError: No module named 'camera'\r\n```\r\n\r\n(The bug is independent of whether the module `camera` exists.)\r\n\r\n# Actual Behavior\r\n<!--\r\nDescribe in detail what actually happened.\r\nPlease include a backtrace and surround it with triple backticks (```).\r\nIn 
addition, include the Celery daemon logs, the broker logs,\r\nthe result backend logs and system logs below if they will help us debug\r\nthe issue.\r\n-->\r\n\r\nA backtrace is produced:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/user/Desktop/tmp/venv/bin/celery\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/__main__.py\", line 15, in main\r\n sys.exit(_main())\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/bin/celery.py\", line 213, in main\r\n return celery(auto_envvar_prefix=\"CELERY\")\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/click/decorators.py\", line 21, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/bin/base.py\", line 132, in caller\r\n return f(ctx, *args, **kwargs)\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/bin/events.py\", line 90, in events\r\n return _run_evcam(camera, app=app, freq=frequency, maxrate=maxrate,\r\n File \"/Users/user/Desktop/tmp/venv/lib/python3.9/site-packages/celery/bin/events.py\", line 37, in _run_evcam\r\n return cam()\r\nTypeError: evcam() got an unexpected keyword argument 'executable'\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Periodically store events in a database.\n\nConsuming the events as a stream isn't always suitable\nso this module implements a system to take snapshots of the\nstate of a cluster at regular intervals. 
There's a full\nimplementation of this writing the snapshots to a database\nin :mod:`djcelery.snapshots` in the `django-celery` distribution.\n\"\"\"\nfrom kombu.utils.limits import TokenBucket\n\nfrom celery import platforms\nfrom celery.app import app_or_default\nfrom celery.utils.dispatch import Signal\nfrom celery.utils.imports import instantiate\nfrom celery.utils.log import get_logger\nfrom celery.utils.time import rate\nfrom celery.utils.timer2 import Timer\n\n__all__ = ('Polaroid', 'evcam')\n\nlogger = get_logger('celery.evcam')\n\n\nclass Polaroid:\n \"\"\"Record event snapshots.\"\"\"\n\n timer = None\n shutter_signal = Signal(name='shutter_signal', providing_args={'state'})\n cleanup_signal = Signal(name='cleanup_signal')\n clear_after = False\n\n _tref = None\n _ctref = None\n\n def __init__(self, state, freq=1.0, maxrate=None,\n cleanup_freq=3600.0, timer=None, app=None):\n self.app = app_or_default(app)\n self.state = state\n self.freq = freq\n self.cleanup_freq = cleanup_freq\n self.timer = timer or self.timer or Timer()\n self.logger = logger\n self.maxrate = maxrate and TokenBucket(rate(maxrate))\n\n def install(self):\n self._tref = self.timer.call_repeatedly(self.freq, self.capture)\n self._ctref = self.timer.call_repeatedly(\n self.cleanup_freq, self.cleanup,\n )\n\n def on_shutter(self, state):\n pass\n\n def on_cleanup(self):\n pass\n\n def cleanup(self):\n logger.debug('Cleanup: Running...')\n self.cleanup_signal.send(sender=self.state)\n self.on_cleanup()\n\n def shutter(self):\n if self.maxrate is None or self.maxrate.can_consume():\n logger.debug('Shutter: %s', self.state)\n self.shutter_signal.send(sender=self.state)\n self.on_shutter(self.state)\n\n def capture(self):\n self.state.freeze_while(self.shutter, clear_after=self.clear_after)\n\n def cancel(self):\n if self._tref:\n self._tref() # flush all received events.\n self._tref.cancel()\n if self._ctref:\n self._ctref.cancel()\n\n def __enter__(self):\n self.install()\n return self\n\n def __exit__(self, *exc_info):\n self.cancel()\n\n\ndef evcam(camera, freq=1.0, maxrate=None, loglevel=0,\n logfile=None, pidfile=None, timer=None, app=None):\n \"\"\"Start snapshot recorder.\"\"\"\n app = app_or_default(app)\n\n if pidfile:\n platforms.create_pidlock(pidfile)\n\n app.log.setup_logging_subsystem(loglevel, logfile)\n\n print(f'-> evcam: Taking snapshots with {camera} (every {freq} secs.)')\n state = app.events.State()\n cam = instantiate(camera, state, app=app, freq=freq,\n maxrate=maxrate, timer=timer)\n cam.install()\n conn = app.connection_for_read()\n recv = app.events.Receiver(conn, handlers={'*': state.event})\n try:\n try:\n recv.capture(limit=None)\n except KeyboardInterrupt:\n raise SystemExit\n finally:\n cam.cancel()\n conn.close()\n", "path": "celery/events/snapshot.py"}]}
| 3,768 | 122 |
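A note on the fix above: the golden diff resolves the `TypeError` by letting `evcam()` accept and ignore arbitrary extra keyword arguments, since the Click-based CLI forwards options (such as `executable`, per the traceback) that the function did not previously declare. The sketch below is a minimal, Celery-free illustration of that pattern; the option name comes from the traceback and everything else is hypothetical.

```python
# Minimal, Celery-free sketch of the failure and the fix. The option name
# 'executable' comes from the traceback above; everything else is illustrative.

def evcam_old(camera, freq=1.0, maxrate=None, loglevel=0,
              logfile=None, pidfile=None, timer=None, app=None):
    return f"snapshots via {camera} every {freq}s"

def evcam_new(camera, freq=1.0, maxrate=None, loglevel=0,
              logfile=None, pidfile=None, timer=None, app=None,
              **kwargs):
    # Unknown CLI options (e.g. 'executable') are absorbed and ignored here.
    return f"snapshots via {camera} every {freq}s"

cli_options = {"freq": 2.0, "executable": "/usr/bin/python3"}

try:
    evcam_old("camera.Camera", **cli_options)
except TypeError as exc:
    print("before the patch:", exc)
    # before the patch: evcam_old() got an unexpected keyword argument 'executable'

print("after the patch:", evcam_new("camera.Camera", **cli_options))
# after the patch: snapshots via camera.Camera every 2.0s
```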
gh_patches_debug_23894
|
rasdani/github-patches
|
git_diff
|
fedora-infra__bodhi-2168
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bodhi should not erroneously 'downgrade' bug status from VERIFIED
We frequently play a game of pong with Bodhi, where we verify a fix when a bug is in `MODIFIED` state (i.e. not yet pushed to updates-testing) and change the state to `VERIFIED`, only for Bodhi to come along and pong the state back to `ON_QA` when it pushes the update. For example, every single damn F25 blocker fix:
https://bugzilla.redhat.com/show_bug.cgi?id=1393110
https://bugzilla.redhat.com/show_bug.cgi?id=1376471
https://bugzilla.redhat.com/show_bug.cgi?id=1390607
https://bugzilla.redhat.com/show_bug.cgi?id=1378156
https://bugzilla.redhat.com/show_bug.cgi?id=1392654
https://bugzilla.redhat.com/show_bug.cgi?id=1393109
When Bodhi is only making the `ON_QA` change for an update being pushed, and the update's contents have not changed since it was submitted, it should avoid changing the bug's status if it is already `VERIFIED`.
</issue>
<code>
[start of bodhi/server/bugs.py]
1 # -*- coding: utf-8 -*-
2 # Copyright © 2013-2017 Red Hat, Inc. and others.
3 #
4 # This file is part of Bodhi.
5 #
6 # This program is free software; you can redistribute it and/or
7 # modify it under the terms of the GNU General Public License
8 # as published by the Free Software Foundation; either version 2
9 # of the License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program; if not, write to the Free Software
18 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
19 """Defines utilities for accessing Bugzilla."""
20
21 import logging
22
23 from collections import namedtuple
24 from kitchen.text.converters import to_unicode
25 import bugzilla
26 import six
27 from six.moves import xmlrpc_client
28
29 from bodhi.server.config import config
30
31
32 bugtracker = None
33 log = logging.getLogger('bodhi')
34 FakeBug = namedtuple('FakeBug', ['bug_id'])
35
36
37 class BugTracker(object):
38 """A superclass to share between FakeBugTracker and Bugzilla."""
39
40 def _(self, *args, **kw): # pragma: no cover
41 """
42 Raise NotImplementedError.
43
44 Raises:
45 NotImplementedError: Always.
46 """
47 raise NotImplementedError
48
49 getbug = update_details = modified = on_qa = close = update_details = _
50
51
52 class FakeBugTracker(BugTracker):
53 """Provide an API similar to bugzilla.base.Bugzilla without doing anything."""
54
55 def getbug(self, bug_id, *args, **kw):
56 """
57 Return a FakeBug representing the requested bug id.
58
59 Args:
60 bug_id (basestring or int): The requested bug id.
61 args (list): Unused.
62 kwargs (dict): Unused.
63 """
64 return FakeBug(bug_id=int(bug_id))
65
66 def __noop__(self, *args, **kw):
67 """
68 Log the method call at debug.
69
70 Args:
71 args (list): The list of args passed to the method.
72 kwargs (dict): The kwargs passed to the method.
73 """
74 log.debug('__noop__(%s)' % str(args))
75
76 comment = update_details = modified = close = on_qa = __noop__
77
78
79 class InvalidComment(Exception):
80 """Exception thrown when the comment posted is invalid (for example too long)."""
81
82
83 class Bugzilla(BugTracker):
84 """Provide methods for Bodhi's frequent Bugzilla operations."""
85
86 def __init__(self):
87 """Initialize self._bz as None."""
88 self._bz = None
89
90 def _connect(self):
91 """Create a Bugzilla client instance and store it on self._bz."""
92 user = config.get('bodhi_email')
93 password = config.get('bodhi_password')
94 url = config.get("bz_server")
95 log.info("Using BZ URL %s" % url)
96 if user and password:
97 self._bz = bugzilla.Bugzilla(url=url,
98 user=user, password=password,
99 cookiefile=None, tokenfile=None)
100 else:
101 self._bz = bugzilla.Bugzilla(url=url,
102 cookiefile=None, tokenfile=None)
103
104 @property
105 def bz(self):
106 """
107 Ensure we have connected to Bugzilla and return the client instance.
108
109 Returns:
110 bugzilla.base.Bugzilla: A client Bugzilla instance.
111 """
112 if self._bz is None:
113 self._connect()
114 return self._bz
115
116 def get_url(self, bug_id):
117 """
118 Generate and return a URL to the given bug.
119
120 Args:
121 bug_id (basestring or int): The id of the bug you want a URl for.
122 Returns:
123 basestring: The requested URL.
124 """
125 return "%s/show_bug.cgi?id=%s" % (config['bz_baseurl'], bug_id)
126
127 def getbug(self, bug_id):
128 """
129 Retrieve a bug from Bugzilla.
130
131 Args:
132 bug_id (int): The id of the bug you wish to retreive.
133 Returns:
134 bugzilla.bug.Bug: A Bug instance representing the bug in Bugzilla.
135 """
136 return self.bz.getbug(bug_id)
137
138 def comment(self, bug_id, comment):
139 """
140 Add a comment to the given bug.
141
142 Args:
143 bug_id (int): The id of the bug you wish to comment on.
144 comment (basestring): The comment to add to the bug.
145 """
146 try:
147 if len(comment) > 65535:
148 raise InvalidComment("Comment is too long: %s" % comment)
149 bug = self.bz.getbug(bug_id)
150 attempts = 0
151 while attempts < 5:
152 try:
153 bug.addcomment(comment)
154 break
155 except xmlrpc_client.Fault as e:
156 attempts += 1
157 log.exception(
158 "\nA fault has occurred \nFault code: %d \nFault string: %s" %
159 (e.faultCode, e.faultString))
160 except InvalidComment:
161 log.exception(
162 "Comment too long for bug #%d: %s" % (bug_id, comment))
163 except Exception:
164 log.exception("Unable to add comment to bug #%d" % bug_id)
165
166 def on_qa(self, bug_id, comment):
167 """
168 Change the status of this bug to ON_QA.
169
170 This will also comment on the bug with some details on how to test and provide feedback for
171 this update.
172
173 Args:
174 bug_id (int): The bug id you wish to set to ON_QA.
175 comment (basestring): The comment to be included with the state change.
176 """
177 log.debug("Setting Bug #%d to ON_QA" % bug_id)
178 try:
179 bug = self.bz.getbug(bug_id)
180 bug.setstatus('ON_QA', comment=comment)
181 except Exception:
182 log.exception("Unable to alter bug #%d" % bug_id)
183
184 def close(self, bug_id, versions, comment):
185 """
186 Close the bug given by bug_id, mark it as fixed in the given versions, and add a comment.
187
188 Args:
189 bug_id (int): The ID of the bug you wish to close.
190 versions (dict): A mapping of package names to nvrs of those packages that close the
191 bug.
192 comment (basestring): A comment to leave on the bug when closing it.
193 """
194 args = {'comment': comment}
195 try:
196 bug = self.bz.getbug(bug_id)
197 # If this bug is for one of these builds...
198 if bug.component in versions:
199 version = versions[bug.component]
200 # Get the existing list
201 fixedin = [v.strip() for v in bug.fixed_in.split()]
202 # Strip out any empty strings (already stripped)
203 fixedin = [v for v in fixedin if v]
204 # And add our build if its not already there
205 if version not in fixedin:
206 fixedin.append(version)
207
208 # There are Red Hat preferences to how this field should be
209 # structured. We should use:
210 # - the full NVR as it appears in koji
211 # - space-separated if there's more than one.
212 args['fixedin'] = " ".join(fixedin)
213
214 bug.close('ERRATA', **args)
215 except xmlrpc_client.Fault:
216 log.exception("Unable to close bug #%d" % bug_id)
217
218 def update_details(self, bug, bug_entity):
219 """
220 Update the details on bug_entity to match what is found in Bugzilla.
221
222 Args:
223 bug (bugzilla.bug.Bug or None): The Bugzilla Bug we will use to update our own Bug
224 object from. If None, bug_entity.bug_id will be used to fetch the object from
225 Bugzilla.
226 bug_entity(bodhi.server.models.Bug): The bug we wish to update.
227 """
228 if not bug:
229 try:
230 bug = self.bz.getbug(bug_entity.bug_id)
231 except xmlrpc_client.Fault:
232 bug_entity.title = 'Invalid bug number'
233 log.exception("Got fault from Bugzilla")
234 return
235 except Exception:
236 log.exception("Unknown exception from Bugzilla")
237 return
238 if bug.product == 'Security Response':
239 bug_entity.parent = True
240 bug_entity.title = to_unicode(bug.short_desc)
241 if isinstance(bug.keywords, six.string_types):
242 keywords = bug.keywords.split()
243 else: # python-bugzilla 0.8.0+
244 keywords = bug.keywords
245 if 'security' in [keyword.lower() for keyword in keywords]:
246 bug_entity.security = True
247
248 def modified(self, bug_id):
249 """
250 Mark the given bug as MODIFIED.
251
252 The bug will only be marked MODIFIED if it is not already MODIFIED, VERIFIED, or CLOSED.
253
254 Args:
255 bug_id (basestring or int): The bug you wish to mark MODIFIED.
256 """
257 try:
258 bug = self.bz.getbug(bug_id)
259 if bug.product not in config.get('bz_products'):
260 log.info("Skipping %r bug" % bug.product)
261 return
262 if bug.bug_status not in ('MODIFIED', 'VERIFIED', 'CLOSED'):
263 log.info('Setting bug #%d status to MODIFIED' % bug_id)
264 bug.setstatus('MODIFIED')
265 except Exception:
266 log.exception("Unable to alter bug #%d" % bug_id)
267
268
269 def set_bugtracker():
270 """Set the module-level bugtracker attribute to the correct bugtracker, based on the config."""
271 global bugtracker
272 if config.get('bugtracker') == 'bugzilla':
273 log.info('Using python-bugzilla')
274 bugtracker = Bugzilla()
275 else:
276 log.info('Using the FakeBugTracker')
277 bugtracker = FakeBugTracker()
278
[end of bodhi/server/bugs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bodhi/server/bugs.py b/bodhi/server/bugs.py
--- a/bodhi/server/bugs.py
+++ b/bodhi/server/bugs.py
@@ -165,7 +165,7 @@
def on_qa(self, bug_id, comment):
"""
- Change the status of this bug to ON_QA.
+ Change the status of this bug to ON_QA if it is not already ON_QA, VERIFIED, or CLOSED.
This will also comment on the bug with some details on how to test and provide feedback for
this update.
@@ -174,10 +174,13 @@
bug_id (int): The bug id you wish to set to ON_QA.
comment (basestring): The comment to be included with the state change.
"""
- log.debug("Setting Bug #%d to ON_QA" % bug_id)
try:
bug = self.bz.getbug(bug_id)
- bug.setstatus('ON_QA', comment=comment)
+ if bug.bug_status not in ('ON_QA', 'VERIFIED', 'CLOSED'):
+ log.debug("Setting Bug #%d to ON_QA" % bug_id)
+ bug.setstatus('ON_QA', comment=comment)
+ else:
+ bug.addcomment(comment)
except Exception:
log.exception("Unable to alter bug #%d" % bug_id)
|
{"golden_diff": "diff --git a/bodhi/server/bugs.py b/bodhi/server/bugs.py\n--- a/bodhi/server/bugs.py\n+++ b/bodhi/server/bugs.py\n@@ -165,7 +165,7 @@\n \n def on_qa(self, bug_id, comment):\n \"\"\"\n- Change the status of this bug to ON_QA.\n+ Change the status of this bug to ON_QA if it is not already ON_QA, VERIFIED, or CLOSED.\n \n This will also comment on the bug with some details on how to test and provide feedback for\n this update.\n@@ -174,10 +174,13 @@\n bug_id (int): The bug id you wish to set to ON_QA.\n comment (basestring): The comment to be included with the state change.\n \"\"\"\n- log.debug(\"Setting Bug #%d to ON_QA\" % bug_id)\n try:\n bug = self.bz.getbug(bug_id)\n- bug.setstatus('ON_QA', comment=comment)\n+ if bug.bug_status not in ('ON_QA', 'VERIFIED', 'CLOSED'):\n+ log.debug(\"Setting Bug #%d to ON_QA\" % bug_id)\n+ bug.setstatus('ON_QA', comment=comment)\n+ else:\n+ bug.addcomment(comment)\n except Exception:\n log.exception(\"Unable to alter bug #%d\" % bug_id)\n", "issue": "Bodhi should not erroneously 'downgrade' bug status from VERIFIED\nWe frequently play a game of pong with Bodhi, where we verify a fix when a bug is in `MODIFIED` state (i.e. not yet pushed to u-t) and change the state to `VERIFIED`, only for Bodhi to come along and pong the state back to `ON_QA` when it pushes the update. For e.g., every single damn F25 blocker fix:\r\n\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1393110\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1376471\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1390607\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1378156\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1392654\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1393109\r\n\r\nwhen Bodhi is only doing the `ON_QA` change for an update being pushed, whose contents have not changed since it was submitted, it should avoid changing the bug's status if it's `VERIFIED`.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright \u00a9 2013-2017 Red Hat, Inc. and others.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"Defines utilities for accessing Bugzilla.\"\"\"\n\nimport logging\n\nfrom collections import namedtuple\nfrom kitchen.text.converters import to_unicode\nimport bugzilla\nimport six\nfrom six.moves import xmlrpc_client\n\nfrom bodhi.server.config import config\n\n\nbugtracker = None\nlog = logging.getLogger('bodhi')\nFakeBug = namedtuple('FakeBug', ['bug_id'])\n\n\nclass BugTracker(object):\n \"\"\"A superclass to share between FakeBugTracker and Bugzilla.\"\"\"\n\n def _(self, *args, **kw): # pragma: no cover\n \"\"\"\n Raise NotImplementedError.\n\n Raises:\n NotImplementedError: Always.\n \"\"\"\n raise NotImplementedError\n\n getbug = update_details = modified = on_qa = close = update_details = _\n\n\nclass FakeBugTracker(BugTracker):\n \"\"\"Provide an API similar to bugzilla.base.Bugzilla without doing anything.\"\"\"\n\n def getbug(self, bug_id, *args, **kw):\n \"\"\"\n Return a FakeBug representing the requested bug id.\n\n Args:\n bug_id (basestring or int): The requested bug id.\n args (list): Unused.\n kwargs (dict): Unused.\n \"\"\"\n return FakeBug(bug_id=int(bug_id))\n\n def __noop__(self, *args, **kw):\n \"\"\"\n Log the method call at debug.\n\n Args:\n args (list): The list of args passed to the method.\n kwargs (dict): The kwargs passed to the method.\n \"\"\"\n log.debug('__noop__(%s)' % str(args))\n\n comment = update_details = modified = close = on_qa = __noop__\n\n\nclass InvalidComment(Exception):\n \"\"\"Exception thrown when the comment posted is invalid (for example too long).\"\"\"\n\n\nclass Bugzilla(BugTracker):\n \"\"\"Provide methods for Bodhi's frequent Bugzilla operations.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize self._bz as None.\"\"\"\n self._bz = None\n\n def _connect(self):\n \"\"\"Create a Bugzilla client instance and store it on self._bz.\"\"\"\n user = config.get('bodhi_email')\n password = config.get('bodhi_password')\n url = config.get(\"bz_server\")\n log.info(\"Using BZ URL %s\" % url)\n if user and password:\n self._bz = bugzilla.Bugzilla(url=url,\n user=user, password=password,\n cookiefile=None, tokenfile=None)\n else:\n self._bz = bugzilla.Bugzilla(url=url,\n cookiefile=None, tokenfile=None)\n\n @property\n def bz(self):\n \"\"\"\n Ensure we have connected to Bugzilla and return the client instance.\n\n Returns:\n bugzilla.base.Bugzilla: A client Bugzilla instance.\n \"\"\"\n if self._bz is None:\n self._connect()\n return self._bz\n\n def get_url(self, bug_id):\n \"\"\"\n Generate and return a URL to the given bug.\n\n Args:\n bug_id (basestring or int): The id of the bug you want a URl for.\n Returns:\n basestring: The requested URL.\n \"\"\"\n return \"%s/show_bug.cgi?id=%s\" % (config['bz_baseurl'], bug_id)\n\n def getbug(self, bug_id):\n \"\"\"\n Retrieve a bug from Bugzilla.\n\n Args:\n bug_id (int): The id of the bug you wish to retreive.\n Returns:\n bugzilla.bug.Bug: A Bug instance representing the bug in Bugzilla.\n \"\"\"\n return self.bz.getbug(bug_id)\n\n def comment(self, bug_id, comment):\n \"\"\"\n Add a comment to the given bug.\n\n Args:\n bug_id (int): The id of the bug you wish to comment on.\n comment (basestring): The comment to add to the bug.\n \"\"\"\n try:\n if len(comment) > 65535:\n raise InvalidComment(\"Comment is too long: %s\" % 
comment)\n bug = self.bz.getbug(bug_id)\n attempts = 0\n while attempts < 5:\n try:\n bug.addcomment(comment)\n break\n except xmlrpc_client.Fault as e:\n attempts += 1\n log.exception(\n \"\\nA fault has occurred \\nFault code: %d \\nFault string: %s\" %\n (e.faultCode, e.faultString))\n except InvalidComment:\n log.exception(\n \"Comment too long for bug #%d: %s\" % (bug_id, comment))\n except Exception:\n log.exception(\"Unable to add comment to bug #%d\" % bug_id)\n\n def on_qa(self, bug_id, comment):\n \"\"\"\n Change the status of this bug to ON_QA.\n\n This will also comment on the bug with some details on how to test and provide feedback for\n this update.\n\n Args:\n bug_id (int): The bug id you wish to set to ON_QA.\n comment (basestring): The comment to be included with the state change.\n \"\"\"\n log.debug(\"Setting Bug #%d to ON_QA\" % bug_id)\n try:\n bug = self.bz.getbug(bug_id)\n bug.setstatus('ON_QA', comment=comment)\n except Exception:\n log.exception(\"Unable to alter bug #%d\" % bug_id)\n\n def close(self, bug_id, versions, comment):\n \"\"\"\n Close the bug given by bug_id, mark it as fixed in the given versions, and add a comment.\n\n Args:\n bug_id (int): The ID of the bug you wish to close.\n versions (dict): A mapping of package names to nvrs of those packages that close the\n bug.\n comment (basestring): A comment to leave on the bug when closing it.\n \"\"\"\n args = {'comment': comment}\n try:\n bug = self.bz.getbug(bug_id)\n # If this bug is for one of these builds...\n if bug.component in versions:\n version = versions[bug.component]\n # Get the existing list\n fixedin = [v.strip() for v in bug.fixed_in.split()]\n # Strip out any empty strings (already stripped)\n fixedin = [v for v in fixedin if v]\n # And add our build if its not already there\n if version not in fixedin:\n fixedin.append(version)\n\n # There are Red Hat preferences to how this field should be\n # structured. We should use:\n # - the full NVR as it appears in koji\n # - space-separated if there's more than one.\n args['fixedin'] = \" \".join(fixedin)\n\n bug.close('ERRATA', **args)\n except xmlrpc_client.Fault:\n log.exception(\"Unable to close bug #%d\" % bug_id)\n\n def update_details(self, bug, bug_entity):\n \"\"\"\n Update the details on bug_entity to match what is found in Bugzilla.\n\n Args:\n bug (bugzilla.bug.Bug or None): The Bugzilla Bug we will use to update our own Bug\n object from. 
If None, bug_entity.bug_id will be used to fetch the object from\n Bugzilla.\n bug_entity(bodhi.server.models.Bug): The bug we wish to update.\n \"\"\"\n if not bug:\n try:\n bug = self.bz.getbug(bug_entity.bug_id)\n except xmlrpc_client.Fault:\n bug_entity.title = 'Invalid bug number'\n log.exception(\"Got fault from Bugzilla\")\n return\n except Exception:\n log.exception(\"Unknown exception from Bugzilla\")\n return\n if bug.product == 'Security Response':\n bug_entity.parent = True\n bug_entity.title = to_unicode(bug.short_desc)\n if isinstance(bug.keywords, six.string_types):\n keywords = bug.keywords.split()\n else: # python-bugzilla 0.8.0+\n keywords = bug.keywords\n if 'security' in [keyword.lower() for keyword in keywords]:\n bug_entity.security = True\n\n def modified(self, bug_id):\n \"\"\"\n Mark the given bug as MODIFIED.\n\n The bug will only be marked MODIFIED if it is not already MODIFIED, VERIFIED, or CLOSED.\n\n Args:\n bug_id (basestring or int): The bug you wish to mark MODIFIED.\n \"\"\"\n try:\n bug = self.bz.getbug(bug_id)\n if bug.product not in config.get('bz_products'):\n log.info(\"Skipping %r bug\" % bug.product)\n return\n if bug.bug_status not in ('MODIFIED', 'VERIFIED', 'CLOSED'):\n log.info('Setting bug #%d status to MODIFIED' % bug_id)\n bug.setstatus('MODIFIED')\n except Exception:\n log.exception(\"Unable to alter bug #%d\" % bug_id)\n\n\ndef set_bugtracker():\n \"\"\"Set the module-level bugtracker attribute to the correct bugtracker, based on the config.\"\"\"\n global bugtracker\n if config.get('bugtracker') == 'bugzilla':\n log.info('Using python-bugzilla')\n bugtracker = Bugzilla()\n else:\n log.info('Using the FakeBugTracker')\n bugtracker = FakeBugTracker()\n", "path": "bodhi/server/bugs.py"}]}
| 3,734 | 316 |
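A note on the fix above: the golden diff makes `on_qa()` guard the status transition the same way `modified()` already does, so a bug that is already `ON_QA`, `VERIFIED`, or `CLOSED` only receives the comment. The sketch below illustrates that guard with a stand-in bug object; it is not python-bugzilla's real API.

```python
# Standalone sketch of the guard added by the patch. The Bug class is a
# stand-in for python-bugzilla's bug object, not the real API.

class Bug:
    def __init__(self, bug_id, bug_status):
        self.bug_id = bug_id
        self.bug_status = bug_status
        self.comments = []

    def setstatus(self, status, comment=None):
        self.bug_status = status
        if comment:
            self.comments.append(comment)

    def addcomment(self, comment):
        self.comments.append(comment)


def on_qa(bug, comment):
    # Never downgrade a bug that is already ON_QA, VERIFIED, or CLOSED;
    # just leave the comment instead.
    if bug.bug_status not in ('ON_QA', 'VERIFIED', 'CLOSED'):
        bug.setstatus('ON_QA', comment=comment)
    else:
        bug.addcomment(comment)


verified = Bug(1393110, 'VERIFIED')
on_qa(verified, 'update pushed to updates-testing')
print(verified.bug_status, len(verified.comments))  # VERIFIED 1

modified = Bug(1376471, 'MODIFIED')
on_qa(modified, 'update pushed to updates-testing')
print(modified.bug_status, len(modified.comments))  # ON_QA 1
```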
gh_patches_debug_10376
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-2222
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UniversalImageQualityIndex & SpectralDistortionIndex can still give NaN results
## 🐛 Bug
I noticed the issue [#1520](https://github.com/Lightning-AI/torchmetrics/issues/1520).
The fix may still cause a "division by zero" problem because eps is not added directly to the full divisor.
### To Reproduce
```python
upper = 2 * sigma_pred_target
lower = sigma_pred_sq + sigma_target_sq + torch.finfo(sigma_pred_sq.dtype).eps
uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower)
```
Here `(mu_pred_sq + mu_target_sq) * lower` may still contain zero values. For example, if there are pictures with large black areas, both `mu_pred_sq` and `mu_target_sq` can be zero, so the expression divides zero by zero. The two images attached to the original issue can be used to reproduce the problem.


### Expected behavior
```python
upper = 2 * sigma_pred_target
lower = sigma_pred_sq + sigma_target_sq
uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower + torch.finfo(lower.dtype).eps)
```
This fix is very easy to understand. Just thanks to your excellent jobs!
### Environment
|Name|Version|Build|Channel|
|:-:|:-:|:-:|:-:|
|_anaconda_depends|2023.07|py311_0|https://repo.anaconda.com/pkgs/main|
|conda|23.9.0|py311haa95532_0|https://repo.anaconda.com/pkgs/main|
|python|3.10.13|he1021f5_0|defaults|
|pytorch|2.1.0|py3.10_cuda12.1_cudnn8_0|pytorch|
|pytorch-cuda|12.1|hde6ce7c_5|pytorch|
|pytorch-mutex|1.0|cuda|pytorch|
|torchaudio|2.1.0|pypi_0|pypi|
|torchmetrics|1.2.0|pyhd8ed1ab_0|conda-forge|
|torchvision| 0.16.0|pypi_0|pypi|
</issue>
<code>
[start of src/torchmetrics/functional/image/uqi.py]
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Optional, Sequence, Tuple
15
16 import torch
17 from torch import Tensor
18 from torch.nn import functional as F # noqa: N812
19 from typing_extensions import Literal
20
21 from torchmetrics.functional.image.helper import _gaussian_kernel_2d
22 from torchmetrics.utilities.checks import _check_same_shape
23 from torchmetrics.utilities.distributed import reduce
24
25
26 def _uqi_update(preds: Tensor, target: Tensor) -> Tuple[Tensor, Tensor]:
27 """Update and returns variables required to compute Universal Image Quality Index.
28
29 Args:
30 preds: Predicted tensor
31 target: Ground truth tensor
32
33 """
34 if preds.dtype != target.dtype:
35 raise TypeError(
36 "Expected `preds` and `target` to have the same data type."
37 f" Got preds: {preds.dtype} and target: {target.dtype}."
38 )
39 _check_same_shape(preds, target)
40 if len(preds.shape) != 4:
41 raise ValueError(
42 "Expected `preds` and `target` to have BxCxHxW shape."
43 f" Got preds: {preds.shape} and target: {target.shape}."
44 )
45 return preds, target
46
47
48 def _uqi_compute(
49 preds: Tensor,
50 target: Tensor,
51 kernel_size: Sequence[int] = (11, 11),
52 sigma: Sequence[float] = (1.5, 1.5),
53 reduction: Optional[Literal["elementwise_mean", "sum", "none"]] = "elementwise_mean",
54 ) -> Tensor:
55 """Compute Universal Image Quality Index.
56
57 Args:
58 preds: estimated image
59 target: ground truth image
60 kernel_size: size of the gaussian kernel
61 sigma: Standard deviation of the gaussian kernel
62 reduction: a method to reduce metric score over labels.
63
64 - ``'elementwise_mean'``: takes the mean (default)
65 - ``'sum'``: takes the sum
66 - ``'none'`` or ``None``: no reduction will be applied
67
68 Example:
69 >>> preds = torch.rand([16, 1, 16, 16])
70 >>> target = preds * 0.75
71 >>> preds, target = _uqi_update(preds, target)
72 >>> _uqi_compute(preds, target)
73 tensor(0.9216)
74
75 """
76 if len(kernel_size) != 2 or len(sigma) != 2:
77 raise ValueError(
78 "Expected `kernel_size` and `sigma` to have the length of two."
79 f" Got kernel_size: {len(kernel_size)} and sigma: {len(sigma)}."
80 )
81
82 if any(x % 2 == 0 or x <= 0 for x in kernel_size):
83 raise ValueError(f"Expected `kernel_size` to have odd positive number. Got {kernel_size}.")
84
85 if any(y <= 0 for y in sigma):
86 raise ValueError(f"Expected `sigma` to have positive number. Got {sigma}.")
87
88 device = preds.device
89 channel = preds.size(1)
90 dtype = preds.dtype
91 kernel = _gaussian_kernel_2d(channel, kernel_size, sigma, dtype, device)
92 pad_h = (kernel_size[0] - 1) // 2
93 pad_w = (kernel_size[1] - 1) // 2
94
95 preds = F.pad(preds, (pad_h, pad_h, pad_w, pad_w), mode="reflect")
96 target = F.pad(target, (pad_h, pad_h, pad_w, pad_w), mode="reflect")
97
98 input_list = torch.cat((preds, target, preds * preds, target * target, preds * target)) # (5 * B, C, H, W)
99 outputs = F.conv2d(input_list, kernel, groups=channel)
100 output_list = outputs.split(preds.shape[0])
101
102 mu_pred_sq = output_list[0].pow(2)
103 mu_target_sq = output_list[1].pow(2)
104 mu_pred_target = output_list[0] * output_list[1]
105
106 sigma_pred_sq = output_list[2] - mu_pred_sq
107 sigma_target_sq = output_list[3] - mu_target_sq
108 sigma_pred_target = output_list[4] - mu_pred_target
109
110 upper = 2 * sigma_pred_target
111 lower = sigma_pred_sq + sigma_target_sq + torch.finfo(sigma_pred_sq.dtype).eps
112
113 uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower)
114 uqi_idx = uqi_idx[..., pad_h:-pad_h, pad_w:-pad_w]
115
116 return reduce(uqi_idx, reduction)
117
118
119 def universal_image_quality_index(
120 preds: Tensor,
121 target: Tensor,
122 kernel_size: Sequence[int] = (11, 11),
123 sigma: Sequence[float] = (1.5, 1.5),
124 reduction: Optional[Literal["elementwise_mean", "sum", "none"]] = "elementwise_mean",
125 ) -> Tensor:
126 """Universal Image Quality Index.
127
128 Args:
129 preds: estimated image
130 target: ground truth image
131 kernel_size: size of the gaussian kernel
132 sigma: Standard deviation of the gaussian kernel
133 reduction: a method to reduce metric score over labels.
134
135 - ``'elementwise_mean'``: takes the mean (default)
136 - ``'sum'``: takes the sum
137 - ``'none'`` or ``None``: no reduction will be applied
138
139 Return:
140 Tensor with UniversalImageQualityIndex score
141
142 Raises:
143 TypeError:
144 If ``preds`` and ``target`` don't have the same data type.
145 ValueError:
146 If ``preds`` and ``target`` don't have ``BxCxHxW shape``.
147 ValueError:
148 If the length of ``kernel_size`` or ``sigma`` is not ``2``.
149 ValueError:
150 If one of the elements of ``kernel_size`` is not an ``odd positive number``.
151 ValueError:
152 If one of the elements of ``sigma`` is not a ``positive number``.
153
154 Example:
155 >>> from torchmetrics.functional.image import universal_image_quality_index
156 >>> preds = torch.rand([16, 1, 16, 16])
157 >>> target = preds * 0.75
158 >>> universal_image_quality_index(preds, target)
159 tensor(0.9216)
160
161 References:
162 [1] Zhou Wang and A. C. Bovik, "A universal image quality index," in IEEE Signal Processing Letters, vol. 9,
163 no. 3, pp. 81-84, March 2002, doi: 10.1109/97.995823.
164
165 [2] Zhou Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: from error visibility
166 to structural similarity," in IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004,
167 doi: 10.1109/TIP.2003.819861.
168
169 """
170 preds, target = _uqi_update(preds, target)
171 return _uqi_compute(preds, target, kernel_size, sigma, reduction)
172
[end of src/torchmetrics/functional/image/uqi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/torchmetrics/functional/image/uqi.py b/src/torchmetrics/functional/image/uqi.py
--- a/src/torchmetrics/functional/image/uqi.py
+++ b/src/torchmetrics/functional/image/uqi.py
@@ -108,9 +108,9 @@
sigma_pred_target = output_list[4] - mu_pred_target
upper = 2 * sigma_pred_target
- lower = sigma_pred_sq + sigma_target_sq + torch.finfo(sigma_pred_sq.dtype).eps
-
- uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower)
+ lower = sigma_pred_sq + sigma_target_sq
+ eps = torch.finfo(sigma_pred_sq.dtype).eps
+ uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower + eps)
uqi_idx = uqi_idx[..., pad_h:-pad_h, pad_w:-pad_w]
return reduce(uqi_idx, reduction)
|
{"golden_diff": "diff --git a/src/torchmetrics/functional/image/uqi.py b/src/torchmetrics/functional/image/uqi.py\n--- a/src/torchmetrics/functional/image/uqi.py\n+++ b/src/torchmetrics/functional/image/uqi.py\n@@ -108,9 +108,9 @@\n sigma_pred_target = output_list[4] - mu_pred_target\n \n upper = 2 * sigma_pred_target\n- lower = sigma_pred_sq + sigma_target_sq + torch.finfo(sigma_pred_sq.dtype).eps\n-\n- uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower)\n+ lower = sigma_pred_sq + sigma_target_sq\n+ eps = torch.finfo(sigma_pred_sq.dtype).eps\n+ uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower + eps)\n uqi_idx = uqi_idx[..., pad_h:-pad_h, pad_w:-pad_w]\n \n return reduce(uqi_idx, reduction)\n", "issue": "UniversalImageQualityIndex & SpectralDistortionIndex are still possible to give NaN results \n## \ud83d\udc1b Bug\r\n\r\nI noticed the issue [#1520](https://github.com/Lightning-AI/torchmetrics/issues/1520).\r\nThe fix may still cause \"divided by zero\" problem because eps is not directly added to the divisor.\r\n\r\n### To Reproduce\r\n\r\n```python\r\n upper = 2 * sigma_pred_target\r\n lower = sigma_pred_sq + sigma_target_sq + torch.finfo(sigma_pred_sq.dtype).eps\r\n\r\n uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower)\r\n```\r\n\r\nHere `(mu_pred_sq + mu_target_sq) * lower` still may contain zero values. For example, if there are several pictures with large black area, both `mu_pred_sq` and `mu_target_sq` can be zeros. To reproduce the problem, these two images can be as examples.\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\n```python\r\n upper = 2 * sigma_pred_target\r\n lower = sigma_pred_sq + sigma_target_sq\r\n\r\n uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower + torch.finfo(lower.dtype).eps)\r\n```\r\n\r\nThis fix is very easy to understand. 
Just thanks to your excellent jobs!\r\n\r\n### Environment\r\n|Name|Version|Build|Channel|\r\n|:-:|:-:|:-:|:-:|\r\n|_anaconda_depends|2023.07|py311_0|https://repo.anaconda.com/pkgs/main|\r\n|conda|23.9.0|py311haa95532_0|https://repo.anaconda.com/pkgs/main|\r\n|python|3.10.13|he1021f5_0|defaults|\r\n|pytorch|2.1.0|py3.10_cuda12.1_cudnn8_0|pytorch|\r\n|pytorch-cuda|12.1|hde6ce7c_5|pytorch|\r\n|pytorch-mutex|1.0|cuda|pytorch|\r\n|torchaudio|2.1.0|pypi_0|pypi|\r\n|torchmetrics|1.2.0|pyhd8ed1ab_0|conda-forge|\r\n|torchvision| 0.16.0|pypi_0|pypi|\r\n\r\n\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional, Sequence, Tuple\n\nimport torch\nfrom torch import Tensor\nfrom torch.nn import functional as F # noqa: N812\nfrom typing_extensions import Literal\n\nfrom torchmetrics.functional.image.helper import _gaussian_kernel_2d\nfrom torchmetrics.utilities.checks import _check_same_shape\nfrom torchmetrics.utilities.distributed import reduce\n\n\ndef _uqi_update(preds: Tensor, target: Tensor) -> Tuple[Tensor, Tensor]:\n \"\"\"Update and returns variables required to compute Universal Image Quality Index.\n\n Args:\n preds: Predicted tensor\n target: Ground truth tensor\n\n \"\"\"\n if preds.dtype != target.dtype:\n raise TypeError(\n \"Expected `preds` and `target` to have the same data type.\"\n f\" Got preds: {preds.dtype} and target: {target.dtype}.\"\n )\n _check_same_shape(preds, target)\n if len(preds.shape) != 4:\n raise ValueError(\n \"Expected `preds` and `target` to have BxCxHxW shape.\"\n f\" Got preds: {preds.shape} and target: {target.shape}.\"\n )\n return preds, target\n\n\ndef _uqi_compute(\n preds: Tensor,\n target: Tensor,\n kernel_size: Sequence[int] = (11, 11),\n sigma: Sequence[float] = (1.5, 1.5),\n reduction: Optional[Literal[\"elementwise_mean\", \"sum\", \"none\"]] = \"elementwise_mean\",\n) -> Tensor:\n \"\"\"Compute Universal Image Quality Index.\n\n Args:\n preds: estimated image\n target: ground truth image\n kernel_size: size of the gaussian kernel\n sigma: Standard deviation of the gaussian kernel\n reduction: a method to reduce metric score over labels.\n\n - ``'elementwise_mean'``: takes the mean (default)\n - ``'sum'``: takes the sum\n - ``'none'`` or ``None``: no reduction will be applied\n\n Example:\n >>> preds = torch.rand([16, 1, 16, 16])\n >>> target = preds * 0.75\n >>> preds, target = _uqi_update(preds, target)\n >>> _uqi_compute(preds, target)\n tensor(0.9216)\n\n \"\"\"\n if len(kernel_size) != 2 or len(sigma) != 2:\n raise ValueError(\n \"Expected `kernel_size` and `sigma` to have the length of two.\"\n f\" Got kernel_size: {len(kernel_size)} and sigma: {len(sigma)}.\"\n )\n\n if any(x % 2 == 0 or x <= 0 for x in kernel_size):\n raise ValueError(f\"Expected `kernel_size` to have odd positive number. Got {kernel_size}.\")\n\n if any(y <= 0 for y in sigma):\n raise ValueError(f\"Expected `sigma` to have positive number. 
Got {sigma}.\")\n\n device = preds.device\n channel = preds.size(1)\n dtype = preds.dtype\n kernel = _gaussian_kernel_2d(channel, kernel_size, sigma, dtype, device)\n pad_h = (kernel_size[0] - 1) // 2\n pad_w = (kernel_size[1] - 1) // 2\n\n preds = F.pad(preds, (pad_h, pad_h, pad_w, pad_w), mode=\"reflect\")\n target = F.pad(target, (pad_h, pad_h, pad_w, pad_w), mode=\"reflect\")\n\n input_list = torch.cat((preds, target, preds * preds, target * target, preds * target)) # (5 * B, C, H, W)\n outputs = F.conv2d(input_list, kernel, groups=channel)\n output_list = outputs.split(preds.shape[0])\n\n mu_pred_sq = output_list[0].pow(2)\n mu_target_sq = output_list[1].pow(2)\n mu_pred_target = output_list[0] * output_list[1]\n\n sigma_pred_sq = output_list[2] - mu_pred_sq\n sigma_target_sq = output_list[3] - mu_target_sq\n sigma_pred_target = output_list[4] - mu_pred_target\n\n upper = 2 * sigma_pred_target\n lower = sigma_pred_sq + sigma_target_sq + torch.finfo(sigma_pred_sq.dtype).eps\n\n uqi_idx = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower)\n uqi_idx = uqi_idx[..., pad_h:-pad_h, pad_w:-pad_w]\n\n return reduce(uqi_idx, reduction)\n\n\ndef universal_image_quality_index(\n preds: Tensor,\n target: Tensor,\n kernel_size: Sequence[int] = (11, 11),\n sigma: Sequence[float] = (1.5, 1.5),\n reduction: Optional[Literal[\"elementwise_mean\", \"sum\", \"none\"]] = \"elementwise_mean\",\n) -> Tensor:\n \"\"\"Universal Image Quality Index.\n\n Args:\n preds: estimated image\n target: ground truth image\n kernel_size: size of the gaussian kernel\n sigma: Standard deviation of the gaussian kernel\n reduction: a method to reduce metric score over labels.\n\n - ``'elementwise_mean'``: takes the mean (default)\n - ``'sum'``: takes the sum\n - ``'none'`` or ``None``: no reduction will be applied\n\n Return:\n Tensor with UniversalImageQualityIndex score\n\n Raises:\n TypeError:\n If ``preds`` and ``target`` don't have the same data type.\n ValueError:\n If ``preds`` and ``target`` don't have ``BxCxHxW shape``.\n ValueError:\n If the length of ``kernel_size`` or ``sigma`` is not ``2``.\n ValueError:\n If one of the elements of ``kernel_size`` is not an ``odd positive number``.\n ValueError:\n If one of the elements of ``sigma`` is not a ``positive number``.\n\n Example:\n >>> from torchmetrics.functional.image import universal_image_quality_index\n >>> preds = torch.rand([16, 1, 16, 16])\n >>> target = preds * 0.75\n >>> universal_image_quality_index(preds, target)\n tensor(0.9216)\n\n References:\n [1] Zhou Wang and A. C. Bovik, \"A universal image quality index,\" in IEEE Signal Processing Letters, vol. 9,\n no. 3, pp. 81-84, March 2002, doi: 10.1109/97.995823.\n\n [2] Zhou Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, \"Image quality assessment: from error visibility\n to structural similarity,\" in IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004,\n doi: 10.1109/TIP.2003.819861.\n\n \"\"\"\n preds, target = _uqi_update(preds, target)\n return _uqi_compute(preds, target, kernel_size, sigma, reduction)\n", "path": "src/torchmetrics/functional/image/uqi.py"}]}
| 3,373 | 231 |
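A note on the fix above: the golden diff moves `eps` out of `lower` and adds it to the complete divisor, which is what actually prevents the 0/0 case. The snippet below is a standalone numerical illustration (assuming PyTorch is installed), using zero tensors to mimic a local window over a flat black region.

```python
# Numerical illustration only; assumes PyTorch is installed. The tensors mimic a
# local window over a flat black region, where means and variances are all zero.
import torch

mu_pred_sq = torch.zeros(3)
mu_target_sq = torch.zeros(3)
mu_pred_target = torch.zeros(3)
sigma_pred_sq = torch.zeros(3)
sigma_target_sq = torch.zeros(3)
sigma_pred_target = torch.zeros(3)

eps = torch.finfo(sigma_pred_sq.dtype).eps
upper = 2 * sigma_pred_target

# Pre-fix: eps only inside `lower`, so multiplying by the zero mean term still gives 0.
lower = sigma_pred_sq + sigma_target_sq + eps
old = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower)
print(old)  # tensor([nan, nan, nan])  -> 0/0

# Post-fix: eps added to the full divisor, so the ratio stays finite (0 here).
lower = sigma_pred_sq + sigma_target_sq
new = ((2 * mu_pred_target) * upper) / ((mu_pred_sq + mu_target_sq) * lower + eps)
print(new)  # tensor([0., 0., 0.])
```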
gh_patches_debug_26064
|
rasdani/github-patches
|
git_diff
|
holoviz__holoviews-671
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dependencies missing
Hi,
I think that the holoviews pip package does not correctly state its dependencies. These are packages that holoviews complained about not finding when I tried importing it:
- jinja2
- nbformat
- nbconvert
- matplotlib
After installing them manually via pip, I can import holoviews fine.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import sys, os
4 try:
5 from setuptools import setup
6 except ImportError:
7 from distutils.core import setup
8
9
10 setup_args = {}
11 install_requires = ['param>=1.3.2', 'numpy>=1.0']
12 extras_require={}
13
14 # Notebook dependencies of IPython 3
15 extras_require['notebook-dependencies'] = ['ipython', 'pyzmq', 'jinja2', 'tornado',
16 'jsonschema', 'ipython', 'pygments']
17 # IPython Notebook + matplotlib + Lancet
18 extras_require['recommended'] = (extras_require['notebook-dependencies']
19 + ['matplotlib', 'lancet-ioam'])
20 # Additional, useful third-party packages
21 extras_require['extras'] = (['pandas', 'seaborn', 'mpld3', 'bokeh']
22 + extras_require['recommended'])
23 # Everything including cyordereddict (optimization) and nosetests
24 extras_require['all'] = (extras_require['recommended']
25 + extras_require['extras']
26 + ['cyordereddict', 'nose'])
27
28 setup_args.update(dict(
29 name='holoviews',
30 version="1.4.3",
31 install_requires = install_requires,
32 extras_require = extras_require,
33 description='Stop plotting your data - annotate your data and let it visualize itself.',
34 long_description=open('README.rst').read() if os.path.isfile('README.rst') else 'Consult README.rst',
35 author= "Jean-Luc Stevens and Philipp Rudiger",
36 author_email= "[email protected]",
37 maintainer= "IOAM",
38 maintainer_email= "[email protected]",
39 platforms=['Windows', 'Mac OS X', 'Linux'],
40 license='BSD',
41 url='http://ioam.github.com/holoviews/',
42 packages = ["holoviews",
43 "holoviews.core",
44 "holoviews.core.data",
45 "holoviews.element",
46 "holoviews.interface",
47 "holoviews.ipython",
48 "holoviews.operation",
49 "holoviews.plotting",
50 "holoviews.plotting.mpl",
51 "holoviews.plotting.bokeh",
52 "holoviews.plotting.widgets"],
53 package_data={'holoviews.ipython': ['*.html'],
54 'holoviews.plotting.mpl': ['*.mplstyle', '*.jinja', '*.js'],
55 'holoviews.plotting.bokeh': ['*.js', '*.css'],
56 'holoviews.plotting.widgets': ['*.jinja', '*.js', '*.css']},
57 classifiers = [
58 "License :: OSI Approved :: BSD License",
59 "Development Status :: 5 - Production/Stable",
60 "Programming Language :: Python :: 2.7",
61 "Programming Language :: Python :: 3.3",
62 "Programming Language :: Python :: 3.4",
63 "Operating System :: OS Independent",
64 "Intended Audience :: Science/Research",
65 "Intended Audience :: Developers",
66 "Natural Language :: English",
67 "Topic :: Scientific/Engineering",
68 "Topic :: Software Development :: Libraries"]
69 ))
70
71 def check_pseudo_package(path):
72 """
73 Verifies that a fake subpackage path for assets (notebooks, svgs,
74 pngs etc) both exists and is populated with files.
75 """
76 if not os.path.isdir(path):
77 raise Exception("Please make sure pseudo-package %s exists." % path)
78 else:
79 assets = os.listdir(path)
80 if len(assets) == 0:
81 raise Exception("Please make sure pseudo-package %s is populated." % path)
82
83
84 if __name__=="__main__":
85
86 if 'HOLOVIEWS_RELEASE' in os.environ:
87 # Make sure to create these directories and populate them before upload
88 setup_args['packages'] += ["holoviews.assets", 'holoviews.notebooks']
89
90 # Add unit tests
91 setup_args['packages'].append('holoviews.tests')
92
93 setup_args['package_data']['holoviews.assets'] = ['*.png', '*.svg', '*.rst']
94 setup_args['package_data']['holoviews.notebooks'] = ['*.ipynb', '*.npy']
95
96 if ('upload' in sys.argv) or ('sdist' in sys.argv):
97 check_pseudo_package(os.path.join('.', 'holoviews', 'tests'))
98 check_pseudo_package(os.path.join('.', 'holoviews', 'assets'))
99 check_pseudo_package(os.path.join('.', 'holoviews', 'notebooks'))
100
101 import holoviews
102 holoviews.__version__.verify(setup_args['version'])
103
104 setup(**setup_args)
105
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -13,7 +13,7 @@
# Notebook dependencies of IPython 3
extras_require['notebook-dependencies'] = ['ipython', 'pyzmq', 'jinja2', 'tornado',
- 'jsonschema', 'ipython', 'pygments']
+ 'jsonschema', 'notebook', 'pygments']
# IPython Notebook + matplotlib + Lancet
extras_require['recommended'] = (extras_require['notebook-dependencies']
+ ['matplotlib', 'lancet-ioam'])
@@ -101,4 +101,22 @@
import holoviews
holoviews.__version__.verify(setup_args['version'])
+
+ if 'install' in sys.argv:
+ header = "HOLOVIEWS INSTALLATION INFORMATION"
+ bars = "="*len(header)
+
+ extras = '\n'.join('holoviews[%s]' % e for e in setup_args['extras_require'])
+
+ print("%s\n%s\n%s" % (bars, header, bars))
+
+ print("\nHoloViews supports the following installation types:\n")
+ print("%s\n" % extras)
+ print("Users should consider using one of these options.\n")
+ print("By default only a core installation is performed and ")
+ print("only the minimal set of dependencies are fetched.\n\n")
+ print("For more information please visit http://holoviews.org/install.html\n")
+ print(bars+'\n')
+
+
setup(**setup_args)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -13,7 +13,7 @@\n \n # Notebook dependencies of IPython 3\n extras_require['notebook-dependencies'] = ['ipython', 'pyzmq', 'jinja2', 'tornado',\n- 'jsonschema', 'ipython', 'pygments']\n+ 'jsonschema', 'notebook', 'pygments']\n # IPython Notebook + matplotlib + Lancet\n extras_require['recommended'] = (extras_require['notebook-dependencies']\n + ['matplotlib', 'lancet-ioam'])\n@@ -101,4 +101,22 @@\n import holoviews\n holoviews.__version__.verify(setup_args['version'])\n \n+\n+ if 'install' in sys.argv:\n+ header = \"HOLOVIEWS INSTALLATION INFORMATION\"\n+ bars = \"=\"*len(header)\n+\n+ extras = '\\n'.join('holoviews[%s]' % e for e in setup_args['extras_require'])\n+\n+ print(\"%s\\n%s\\n%s\" % (bars, header, bars))\n+\n+ print(\"\\nHoloViews supports the following installation types:\\n\")\n+ print(\"%s\\n\" % extras)\n+ print(\"Users should consider using one of these options.\\n\")\n+ print(\"By default only a core installation is performed and \")\n+ print(\"only the minimal set of dependencies are fetched.\\n\\n\")\n+ print(\"For more information please visit http://holoviews.org/install.html\\n\")\n+ print(bars+'\\n')\n+\n+\n setup(**setup_args)\n", "issue": "Dependencies missing\nHi,\n\nI think that the holoviews pip package does not correctly state its dependencies. These are packages that holoviews complained about not finding when I tried importing it:\n- jinja2 \n- nbformat \n- nbconvert \n- matplotlib\n\nAfter installing them manually via pip, I can import holoviews fine.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport sys, os\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n\nsetup_args = {}\ninstall_requires = ['param>=1.3.2', 'numpy>=1.0']\nextras_require={}\n\n# Notebook dependencies of IPython 3\nextras_require['notebook-dependencies'] = ['ipython', 'pyzmq', 'jinja2', 'tornado',\n 'jsonschema', 'ipython', 'pygments']\n# IPython Notebook + matplotlib + Lancet\nextras_require['recommended'] = (extras_require['notebook-dependencies']\n + ['matplotlib', 'lancet-ioam'])\n# Additional, useful third-party packages\nextras_require['extras'] = (['pandas', 'seaborn', 'mpld3', 'bokeh']\n + extras_require['recommended'])\n# Everything including cyordereddict (optimization) and nosetests\nextras_require['all'] = (extras_require['recommended']\n + extras_require['extras']\n + ['cyordereddict', 'nose'])\n\nsetup_args.update(dict(\n name='holoviews',\n version=\"1.4.3\",\n install_requires = install_requires,\n extras_require = extras_require,\n description='Stop plotting your data - annotate your data and let it visualize itself.',\n long_description=open('README.rst').read() if os.path.isfile('README.rst') else 'Consult README.rst',\n author= \"Jean-Luc Stevens and Philipp Rudiger\",\n author_email= \"[email protected]\",\n maintainer= \"IOAM\",\n maintainer_email= \"[email protected]\",\n platforms=['Windows', 'Mac OS X', 'Linux'],\n license='BSD',\n url='http://ioam.github.com/holoviews/',\n packages = [\"holoviews\",\n \"holoviews.core\",\n \"holoviews.core.data\",\n \"holoviews.element\",\n \"holoviews.interface\",\n \"holoviews.ipython\",\n \"holoviews.operation\",\n \"holoviews.plotting\",\n \"holoviews.plotting.mpl\",\n \"holoviews.plotting.bokeh\",\n \"holoviews.plotting.widgets\"],\n package_data={'holoviews.ipython': ['*.html'],\n 'holoviews.plotting.mpl': ['*.mplstyle', '*.jinja', '*.js'],\n 'holoviews.plotting.bokeh': ['*.js', 
'*.css'],\n 'holoviews.plotting.widgets': ['*.jinja', '*.js', '*.css']},\n classifiers = [\n \"License :: OSI Approved :: BSD License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Operating System :: OS Independent\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development :: Libraries\"]\n))\n\ndef check_pseudo_package(path):\n \"\"\"\n Verifies that a fake subpackage path for assets (notebooks, svgs,\n pngs etc) both exists and is populated with files.\n \"\"\"\n if not os.path.isdir(path):\n raise Exception(\"Please make sure pseudo-package %s exists.\" % path)\n else:\n assets = os.listdir(path)\n if len(assets) == 0:\n raise Exception(\"Please make sure pseudo-package %s is populated.\" % path)\n\n\nif __name__==\"__main__\":\n\n if 'HOLOVIEWS_RELEASE' in os.environ:\n # Make sure to create these directories and populate them before upload\n setup_args['packages'] += [\"holoviews.assets\", 'holoviews.notebooks']\n\n # Add unit tests\n setup_args['packages'].append('holoviews.tests')\n\n setup_args['package_data']['holoviews.assets'] = ['*.png', '*.svg', '*.rst']\n setup_args['package_data']['holoviews.notebooks'] = ['*.ipynb', '*.npy']\n\n if ('upload' in sys.argv) or ('sdist' in sys.argv):\n check_pseudo_package(os.path.join('.', 'holoviews', 'tests'))\n check_pseudo_package(os.path.join('.', 'holoviews', 'assets'))\n check_pseudo_package(os.path.join('.', 'holoviews', 'notebooks'))\n\n import holoviews\n holoviews.__version__.verify(setup_args['version'])\n\n setup(**setup_args)\n", "path": "setup.py"}]}
| 1,832 | 362 |
gh_patches_debug_40298
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-3339
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consent stage shown every time: Something went wrong! Please try again later. (DB Key already exists)
**Describe the bug**
The consent stage is shown every time, although consent was already given and did not expire, which leads to an exception.
**To Reproduce**
Steps to reproduce the behavior:
1. Log into authentik
2. Click on an application
3. Give consent
4. Log out
5. Click on the same application
6. Consent stage will appear again
7. Giving consent leads to: "Whoops! Something went wrong! Please try again later."
**Expected behavior**
Consent stage should only be shown once in x days (as configured) and not every time. This worked perfectly in versions before authentik 2022.7.
**Logs**
<details>
<summary>Stacktrace from authentik</summary>
```
Traceback (most recent call last):
File "/authentik/flows/views/executor.py", line 337, in post
stage_response = self.current_stage_view.post(request, *args, **kwargs)
File "/authentik/flows/stage.py", line 121, in post
return self.challenge_valid(challenge)
File "/authentik/stages/consent/stage.py", line 127, in challenge_valid
UserConsent.objects.create(
File "/usr/local/lib/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 514, in create
obj.save(force_insert=True, using=self.db)
File "/usr/local/lib/python3.10/site-packages/django/db/models/base.py", line 806, in save
self.save_base(
File "/usr/local/lib/python3.10/site-packages/django/db/models/base.py", line 857, in save_base
updated = self._save_table(
File "/usr/local/lib/python3.10/site-packages/django/db/models/base.py", line 1000, in _save_table
results = self._do_insert(
File "/usr/local/lib/python3.10/site-packages/django/db/models/base.py", line 1041, in _do_insert
return manager._insert(
File "/usr/local/lib/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 1434, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1621, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.10/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/usr/local/lib/python3.10/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.10/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/local/lib/python3.10/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.10/site-packages/django_prometheus/db/common.py", line 71, in execute
return super().execute(*args, **kwargs)
django.db.utils.IntegrityError: duplicate key value violates unique constraint "authentik_stages_consent_user_id_application_id_p_e44c6458_uniq"
DETAIL: Key (user_id, application_id, permissions)=(79, 86534e2a-befb-40ae-8581-c2387b4ba06b, ) already exists.
```
</details>
**Version and Deployment (please complete the following information):**
- authentik version: 2022.7.2
- Deployment: docker-compose
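
(Side note, not necessarily the fix the project will pick: in generic Django terms, the duplicate-key INSERT above can be avoided by upserting on the unique fields instead of calling `create()` a second time — rough sketch below; `upsert_consent` is a made-up helper name, and the imports are the ones already used in `authentik/stages/consent/stage.py`.)

```python
# Hypothetical sketch only -- illustrates the unique constraint
# (user, application, permissions) behind the IntegrityError above.
from django.utils.timezone import now

from authentik.lib.utils.time import timedelta_from_string
from authentik.stages.consent.models import UserConsent


def upsert_consent(user, application, permissions_string, expire_in):
    """Create or refresh a consent row instead of INSERTing a duplicate."""
    return UserConsent.objects.update_or_create(
        user=user,
        application=application,
        permissions=permissions_string,
        defaults={"expires": now() + timedelta_from_string(expire_in)},
    )
```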
</issue>
<code>
[start of authentik/stages/consent/stage.py]
1 """authentik consent stage"""
2 from typing import Optional
3
4 from django.http import HttpRequest, HttpResponse
5 from django.utils.timezone import now
6 from rest_framework.fields import CharField
7
8 from authentik.flows.challenge import (
9 Challenge,
10 ChallengeResponse,
11 ChallengeTypes,
12 PermissionSerializer,
13 WithUserInfoChallenge,
14 )
15 from authentik.flows.planner import PLAN_CONTEXT_APPLICATION, PLAN_CONTEXT_PENDING_USER
16 from authentik.flows.stage import ChallengeStageView
17 from authentik.lib.utils.time import timedelta_from_string
18 from authentik.stages.consent.models import ConsentMode, ConsentStage, UserConsent
19
20 PLAN_CONTEXT_CONSENT_TITLE = "consent_title"
21 PLAN_CONTEXT_CONSENT_HEADER = "consent_header"
22 PLAN_CONTEXT_CONSENT_PERMISSIONS = "consent_permissions"
23 PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS = "consent_additional_permissions"
24
25
26 class ConsentChallenge(WithUserInfoChallenge):
27 """Challenge info for consent screens"""
28
29 header_text = CharField(required=False)
30 permissions = PermissionSerializer(many=True)
31 additional_permissions = PermissionSerializer(many=True)
32 component = CharField(default="ak-stage-consent")
33
34
35 class ConsentChallengeResponse(ChallengeResponse):
36 """Consent challenge response, any valid response request is valid"""
37
38 component = CharField(default="ak-stage-consent")
39
40
41 class ConsentStageView(ChallengeStageView):
42 """Simple consent checker."""
43
44 response_class = ConsentChallengeResponse
45
46 def get_challenge(self) -> Challenge:
47 data = {
48 "type": ChallengeTypes.NATIVE.value,
49 "permissions": self.executor.plan.context.get(PLAN_CONTEXT_CONSENT_PERMISSIONS, []),
50 "additional_permissions": self.executor.plan.context.get(
51 PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS, []
52 ),
53 }
54 if PLAN_CONTEXT_CONSENT_TITLE in self.executor.plan.context:
55 data["title"] = self.executor.plan.context[PLAN_CONTEXT_CONSENT_TITLE]
56 if PLAN_CONTEXT_CONSENT_HEADER in self.executor.plan.context:
57 data["header_text"] = self.executor.plan.context[PLAN_CONTEXT_CONSENT_HEADER]
58 challenge = ConsentChallenge(data=data)
59 return challenge
60
61 def get(self, request: HttpRequest, *args, **kwargs) -> HttpResponse:
62 current_stage: ConsentStage = self.executor.current_stage
63 # Make this StageView work when injected, in which case `current_stage` is an instance
64 # of the base class, and we don't save any consent, as it is assumed to be a one-time
65 # prompt
66 if not isinstance(current_stage, ConsentStage):
67 return super().get(request, *args, **kwargs)
68 # For always require, we always return the challenge
69 if current_stage.mode == ConsentMode.ALWAYS_REQUIRE:
70 return super().get(request, *args, **kwargs)
71 # at this point we need to check consent from database
72 if PLAN_CONTEXT_APPLICATION not in self.executor.plan.context:
73 # No application in this plan, hence we can't check DB and require user consent
74 return super().get(request, *args, **kwargs)
75
76 application = self.executor.plan.context[PLAN_CONTEXT_APPLICATION]
77
78 user = self.request.user
79 if PLAN_CONTEXT_PENDING_USER in self.executor.plan.context:
80 user = self.executor.plan.context[PLAN_CONTEXT_PENDING_USER]
81
82 consent: Optional[UserConsent] = UserConsent.filter_not_expired(
83 user=user, application=application
84 ).first()
85
86 if consent:
87 perms = self.executor.plan.context.get(PLAN_CONTEXT_CONSENT_PERMISSIONS, [])
88 allowed_perms = set(consent.permissions.split(" "))
89 requested_perms = set(x["id"] for x in perms)
90
91 if allowed_perms != requested_perms:
92 self.executor.plan.context[PLAN_CONTEXT_CONSENT_PERMISSIONS] = [
93 x for x in perms if x["id"] in allowed_perms
94 ]
95 self.executor.plan.context[PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS] = [
96 x for x in perms if x["id"] in requested_perms.difference(allowed_perms)
97 ]
98 return super().get(request, *args, **kwargs)
99 return self.executor.stage_ok()
100
101 # No consent found, return consent prompt
102 return super().get(request, *args, **kwargs)
103
104 def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:
105 current_stage: ConsentStage = self.executor.current_stage
106 if PLAN_CONTEXT_APPLICATION not in self.executor.plan.context:
107 return self.executor.stage_ok()
108 application = self.executor.plan.context[PLAN_CONTEXT_APPLICATION]
109 permissions = self.executor.plan.context.get(
110 PLAN_CONTEXT_CONSENT_PERMISSIONS, []
111 ) + self.executor.plan.context.get(PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS, [])
112 permissions_string = " ".join(x["id"] for x in permissions)
113 # Make this StageView work when injected, in which case `current_stage` is an instance
114 # of the base class, and we don't save any consent, as it is assumed to be a one-time
115 # prompt
116 if not isinstance(current_stage, ConsentStage):
117 return self.executor.stage_ok()
118 # Since we only get here when no consent exists, we can create it without update
119 if current_stage.mode == ConsentMode.PERMANENT:
120 UserConsent.objects.create(
121 user=self.request.user,
122 application=application,
123 expiring=False,
124 permissions=permissions_string,
125 )
126 if current_stage.mode == ConsentMode.EXPIRING:
127 UserConsent.objects.create(
128 user=self.request.user,
129 application=application,
130 expires=now() + timedelta_from_string(current_stage.consent_expire_in),
131 permissions=permissions_string,
132 )
133 return self.executor.stage_ok()
134
[end of authentik/stages/consent/stage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/stages/consent/stage.py b/authentik/stages/consent/stage.py
--- a/authentik/stages/consent/stage.py
+++ b/authentik/stages/consent/stage.py
@@ -1,5 +1,6 @@
"""authentik consent stage"""
from typing import Optional
+from uuid import uuid4
from django.http import HttpRequest, HttpResponse
from django.utils.timezone import now
@@ -21,6 +22,7 @@
PLAN_CONTEXT_CONSENT_HEADER = "consent_header"
PLAN_CONTEXT_CONSENT_PERMISSIONS = "consent_permissions"
PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS = "consent_additional_permissions"
+SESSION_KEY_CONSENT_TOKEN = "authentik/stages/consent/token" # nosec
class ConsentChallenge(WithUserInfoChallenge):
@@ -30,12 +32,14 @@
permissions = PermissionSerializer(many=True)
additional_permissions = PermissionSerializer(many=True)
component = CharField(default="ak-stage-consent")
+ token = CharField(required=True)
class ConsentChallengeResponse(ChallengeResponse):
"""Consent challenge response, any valid response request is valid"""
component = CharField(default="ak-stage-consent")
+ token = CharField(required=True)
class ConsentStageView(ChallengeStageView):
@@ -44,12 +48,15 @@
response_class = ConsentChallengeResponse
def get_challenge(self) -> Challenge:
+ token = str(uuid4())
+ self.request.session[SESSION_KEY_CONSENT_TOKEN] = token
data = {
"type": ChallengeTypes.NATIVE.value,
"permissions": self.executor.plan.context.get(PLAN_CONTEXT_CONSENT_PERMISSIONS, []),
"additional_permissions": self.executor.plan.context.get(
PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS, []
),
+ "token": token,
}
if PLAN_CONTEXT_CONSENT_TITLE in self.executor.plan.context:
data["title"] = self.executor.plan.context[PLAN_CONTEXT_CONSENT_TITLE]
@@ -102,6 +109,8 @@
return super().get(request, *args, **kwargs)
def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:
+ if response.data["token"] != self.request.session[SESSION_KEY_CONSENT_TOKEN]:
+ return self.get(self.request)
current_stage: ConsentStage = self.executor.current_stage
if PLAN_CONTEXT_APPLICATION not in self.executor.plan.context:
return self.executor.stage_ok()
|
{"golden_diff": "diff --git a/authentik/stages/consent/stage.py b/authentik/stages/consent/stage.py\n--- a/authentik/stages/consent/stage.py\n+++ b/authentik/stages/consent/stage.py\n@@ -1,5 +1,6 @@\n \"\"\"authentik consent stage\"\"\"\n from typing import Optional\n+from uuid import uuid4\n \n from django.http import HttpRequest, HttpResponse\n from django.utils.timezone import now\n@@ -21,6 +22,7 @@\n PLAN_CONTEXT_CONSENT_HEADER = \"consent_header\"\n PLAN_CONTEXT_CONSENT_PERMISSIONS = \"consent_permissions\"\n PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS = \"consent_additional_permissions\"\n+SESSION_KEY_CONSENT_TOKEN = \"authentik/stages/consent/token\" # nosec\n \n \n class ConsentChallenge(WithUserInfoChallenge):\n@@ -30,12 +32,14 @@\n permissions = PermissionSerializer(many=True)\n additional_permissions = PermissionSerializer(many=True)\n component = CharField(default=\"ak-stage-consent\")\n+ token = CharField(required=True)\n \n \n class ConsentChallengeResponse(ChallengeResponse):\n \"\"\"Consent challenge response, any valid response request is valid\"\"\"\n \n component = CharField(default=\"ak-stage-consent\")\n+ token = CharField(required=True)\n \n \n class ConsentStageView(ChallengeStageView):\n@@ -44,12 +48,15 @@\n response_class = ConsentChallengeResponse\n \n def get_challenge(self) -> Challenge:\n+ token = str(uuid4())\n+ self.request.session[SESSION_KEY_CONSENT_TOKEN] = token\n data = {\n \"type\": ChallengeTypes.NATIVE.value,\n \"permissions\": self.executor.plan.context.get(PLAN_CONTEXT_CONSENT_PERMISSIONS, []),\n \"additional_permissions\": self.executor.plan.context.get(\n PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS, []\n ),\n+ \"token\": token,\n }\n if PLAN_CONTEXT_CONSENT_TITLE in self.executor.plan.context:\n data[\"title\"] = self.executor.plan.context[PLAN_CONTEXT_CONSENT_TITLE]\n@@ -102,6 +109,8 @@\n return super().get(request, *args, **kwargs)\n \n def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:\n+ if response.data[\"token\"] != self.request.session[SESSION_KEY_CONSENT_TOKEN]:\n+ return self.get(self.request)\n current_stage: ConsentStage = self.executor.current_stage\n if PLAN_CONTEXT_APPLICATION not in self.executor.plan.context:\n return self.executor.stage_ok()\n", "issue": "Consent stage shown every time: Something went wrong! Please try again later. (DB Key already exists)\n**Describe the bug**\r\nThe consent stage is shown every time, although consent was already given and did not expire, which leads to an exception.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Log into authentik\r\n2. Click on an application\r\n3. Give consent\r\n4. Log out\r\n5. Click on the same application\r\n6. Consent stage will appear again\r\n7. Giving consent leads into: \"Whoops! Something went wrong! Please try again later.\"\r\n\r\n**Expected behavior**\r\nConsent stage should only be shown once in x days (as configured) and not always. 
Did work perfectly in versions before authentik 2022.7.\r\n\r\n**Logs**\r\n<details>\r\n <summary>Stacktrace from authentik</summary>\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/authentik/flows/views/executor.py\", line 337, in post\r\n stage_response = self.current_stage_view.post(request, *args, **kwargs)\r\n File \"/authentik/flows/stage.py\", line 121, in post\r\n return self.challenge_valid(challenge)\r\n File \"/authentik/stages/consent/stage.py\", line 127, in challenge_valid\r\n UserConsent.objects.create(\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/manager.py\", line 85, in manager_method\r\n return getattr(self.get_queryset(), name)(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/query.py\", line 514, in create\r\n obj.save(force_insert=True, using=self.db)\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/base.py\", line 806, in save\r\n self.save_base(\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/base.py\", line 857, in save_base\r\n updated = self._save_table(\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/base.py\", line 1000, in _save_table\r\n results = self._do_insert(\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/base.py\", line 1041, in _do_insert\r\n return manager._insert(\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/manager.py\", line 85, in manager_method\r\n return getattr(self.get_queryset(), name)(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/query.py\", line 1434, in _insert\r\n return query.get_compiler(using=using).execute_sql(returning_fields)\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py\", line 1621, in execute_sql\r\n cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/backends/utils.py\", line 67, in execute\r\n return self._execute_with_wrappers(\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/backends/utils.py\", line 80, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/backends/utils.py\", line 84, in _execute\r\n with self.db.wrap_database_errors:\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/utils.py\", line 91, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from exc_value\r\n File \"/usr/local/lib/python3.10/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.10/site-packages/django_prometheus/db/common.py\", line 71, in execute\r\n return super().execute(*args, **kwargs)\r\ndjango.db.utils.IntegrityError: duplicate key value violates unique constraint \"authentik_stages_consent_user_id_application_id_p_e44c6458_uniq\"\r\nDETAIL: Key (user_id, application_id, permissions)=(79, 86534e2a-befb-40ae-8581-c2387b4ba06b, ) already exists.\r\n\r\n```\r\n</details>\r\n\r\n\r\n**Version and Deployment (please complete the following information):**\r\n- authentik version: 2022.7.2\r\n- Deployment: docker-compose\r\n \n", "before_files": [{"content": "\"\"\"authentik consent stage\"\"\"\nfrom typing import Optional\n\nfrom django.http import HttpRequest, HttpResponse\nfrom django.utils.timezone import now\nfrom rest_framework.fields import CharField\n\nfrom authentik.flows.challenge import (\n Challenge,\n ChallengeResponse,\n ChallengeTypes,\n 
PermissionSerializer,\n WithUserInfoChallenge,\n)\nfrom authentik.flows.planner import PLAN_CONTEXT_APPLICATION, PLAN_CONTEXT_PENDING_USER\nfrom authentik.flows.stage import ChallengeStageView\nfrom authentik.lib.utils.time import timedelta_from_string\nfrom authentik.stages.consent.models import ConsentMode, ConsentStage, UserConsent\n\nPLAN_CONTEXT_CONSENT_TITLE = \"consent_title\"\nPLAN_CONTEXT_CONSENT_HEADER = \"consent_header\"\nPLAN_CONTEXT_CONSENT_PERMISSIONS = \"consent_permissions\"\nPLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS = \"consent_additional_permissions\"\n\n\nclass ConsentChallenge(WithUserInfoChallenge):\n \"\"\"Challenge info for consent screens\"\"\"\n\n header_text = CharField(required=False)\n permissions = PermissionSerializer(many=True)\n additional_permissions = PermissionSerializer(many=True)\n component = CharField(default=\"ak-stage-consent\")\n\n\nclass ConsentChallengeResponse(ChallengeResponse):\n \"\"\"Consent challenge response, any valid response request is valid\"\"\"\n\n component = CharField(default=\"ak-stage-consent\")\n\n\nclass ConsentStageView(ChallengeStageView):\n \"\"\"Simple consent checker.\"\"\"\n\n response_class = ConsentChallengeResponse\n\n def get_challenge(self) -> Challenge:\n data = {\n \"type\": ChallengeTypes.NATIVE.value,\n \"permissions\": self.executor.plan.context.get(PLAN_CONTEXT_CONSENT_PERMISSIONS, []),\n \"additional_permissions\": self.executor.plan.context.get(\n PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS, []\n ),\n }\n if PLAN_CONTEXT_CONSENT_TITLE in self.executor.plan.context:\n data[\"title\"] = self.executor.plan.context[PLAN_CONTEXT_CONSENT_TITLE]\n if PLAN_CONTEXT_CONSENT_HEADER in self.executor.plan.context:\n data[\"header_text\"] = self.executor.plan.context[PLAN_CONTEXT_CONSENT_HEADER]\n challenge = ConsentChallenge(data=data)\n return challenge\n\n def get(self, request: HttpRequest, *args, **kwargs) -> HttpResponse:\n current_stage: ConsentStage = self.executor.current_stage\n # Make this StageView work when injected, in which case `current_stage` is an instance\n # of the base class, and we don't save any consent, as it is assumed to be a one-time\n # prompt\n if not isinstance(current_stage, ConsentStage):\n return super().get(request, *args, **kwargs)\n # For always require, we always return the challenge\n if current_stage.mode == ConsentMode.ALWAYS_REQUIRE:\n return super().get(request, *args, **kwargs)\n # at this point we need to check consent from database\n if PLAN_CONTEXT_APPLICATION not in self.executor.plan.context:\n # No application in this plan, hence we can't check DB and require user consent\n return super().get(request, *args, **kwargs)\n\n application = self.executor.plan.context[PLAN_CONTEXT_APPLICATION]\n\n user = self.request.user\n if PLAN_CONTEXT_PENDING_USER in self.executor.plan.context:\n user = self.executor.plan.context[PLAN_CONTEXT_PENDING_USER]\n\n consent: Optional[UserConsent] = UserConsent.filter_not_expired(\n user=user, application=application\n ).first()\n\n if consent:\n perms = self.executor.plan.context.get(PLAN_CONTEXT_CONSENT_PERMISSIONS, [])\n allowed_perms = set(consent.permissions.split(\" \"))\n requested_perms = set(x[\"id\"] for x in perms)\n\n if allowed_perms != requested_perms:\n self.executor.plan.context[PLAN_CONTEXT_CONSENT_PERMISSIONS] = [\n x for x in perms if x[\"id\"] in allowed_perms\n ]\n self.executor.plan.context[PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS] = [\n x for x in perms if x[\"id\"] in requested_perms.difference(allowed_perms)\n ]\n return 
super().get(request, *args, **kwargs)\n return self.executor.stage_ok()\n\n # No consent found, return consent prompt\n return super().get(request, *args, **kwargs)\n\n def challenge_valid(self, response: ChallengeResponse) -> HttpResponse:\n current_stage: ConsentStage = self.executor.current_stage\n if PLAN_CONTEXT_APPLICATION not in self.executor.plan.context:\n return self.executor.stage_ok()\n application = self.executor.plan.context[PLAN_CONTEXT_APPLICATION]\n permissions = self.executor.plan.context.get(\n PLAN_CONTEXT_CONSENT_PERMISSIONS, []\n ) + self.executor.plan.context.get(PLAN_CONTEXT_CONSNET_EXTRA_PERMISSIONS, [])\n permissions_string = \" \".join(x[\"id\"] for x in permissions)\n # Make this StageView work when injected, in which case `current_stage` is an instance\n # of the base class, and we don't save any consent, as it is assumed to be a one-time\n # prompt\n if not isinstance(current_stage, ConsentStage):\n return self.executor.stage_ok()\n # Since we only get here when no consent exists, we can create it without update\n if current_stage.mode == ConsentMode.PERMANENT:\n UserConsent.objects.create(\n user=self.request.user,\n application=application,\n expiring=False,\n permissions=permissions_string,\n )\n if current_stage.mode == ConsentMode.EXPIRING:\n UserConsent.objects.create(\n user=self.request.user,\n application=application,\n expires=now() + timedelta_from_string(current_stage.consent_expire_in),\n permissions=permissions_string,\n )\n return self.executor.stage_ok()\n", "path": "authentik/stages/consent/stage.py"}]}
| 3,060 | 540 |
gh_patches_debug_7427
|
rasdani/github-patches
|
git_diff
|
microsoft__torchgeo-1921
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add instructions on downloading the DeepGlobeLandCover dataset
### Issue
The [dataset](https://torchgeo.readthedocs.io/en/stable/api/datasets.html#torchgeo.datasets.DeepGlobeLandCover) docs state `The dataset that we use with a custom train/test split can be downloaded from Kaggle` - however, downloading it manually is a necessity, as you cannot pass `download=True`
### Fix
Suggest documenting the steps using the kaggle CLI (below), or at least stating that this manual step must be performed. Alternatively, host the data on Huggingface and automate the download.
```
pip install kaggle # place api key at ~/.kaggle/kaggle.json
cd data
kaggle datasets download -d geoap96/deepglobe2018-landcover-segmentation-traindataset
unzip deepglobe2018-landcover-segmentation-traindataset.zip
```
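
Once the archive is extracted so that `root` contains the `data/` folder (or a `data.zip`) that `_verify` looks for, loading it is straightforward (rough sketch, not an official example):

```python
# Assumes ./data/data/{training_data,test_data}/... exists after extraction.
from torchgeo.datasets import DeepGlobeLandCover

ds = DeepGlobeLandCover(root="data", split="train")
print(len(ds), ds[0]["image"].shape, ds[0]["mask"].shape)
```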
</issue>
<code>
[start of torchgeo/datasets/deepglobelandcover.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 """DeepGlobe Land Cover Classification Challenge dataset."""
5
6 import os
7 from typing import Callable, Optional
8
9 import matplotlib.pyplot as plt
10 import numpy as np
11 import torch
12 from matplotlib.figure import Figure
13 from PIL import Image
14 from torch import Tensor
15
16 from .geo import NonGeoDataset
17 from .utils import (
18 DatasetNotFoundError,
19 check_integrity,
20 draw_semantic_segmentation_masks,
21 extract_archive,
22 rgb_to_mask,
23 )
24
25
26 class DeepGlobeLandCover(NonGeoDataset):
27 """DeepGlobe Land Cover Classification Challenge dataset.
28
29 The `DeepGlobe Land Cover Classification Challenge
30 <https://competitions.codalab.org/competitions/18468>`__ dataset
31 offers high-resolution sub-meter satellite imagery focusing for the task of
32 semantic segmentation to detect areas of urban, agriculture, rangeland, forest,
33 water, barren, and unknown. It contains 1,146 satellite images of size
34 2448 x 2448 pixels in total, split into training/validation/test sets, the original
35 dataset can be downloaded from `Kaggle <https://www.kaggle.com/datasets/balraj98/
36 deepglobe-land-cover-classification-dataset>`__.
37 However, we only use the training dataset with 803 images since the original test
38 and valid dataset are not accompanied by labels. The dataset that we use with a
39 custom train/test split can be downloaded from `Kaggle <https://www.kaggle.com/
40 datasets/geoap96/deepglobe2018-landcover-segmentation-traindataset>`__ (created as a
41 part of Computer Vision by Deep Learning (CS4245) course offered at TU Delft).
42
43 Dataset format:
44
45 * images are RGB data
46 * masks are RGB image with with unique RGB values representing the class
47
48 Dataset classes:
49
50 0. Urban land
51 1. Agriculture land
52 2. Rangeland
53 3. Forest land
54 4. Water
55 5. Barren land
56 6. Unknown
57
58 File names for satellite images and the corresponding mask image are id_sat.jpg and
59 id_mask.png, where id is an integer assigned to every image.
60
61 If you use this dataset in your research, please cite the following paper:
62
63 * https://arxiv.org/pdf/1805.06561.pdf
64
65 .. versionadded:: 0.3
66 """
67
68 filename = "data.zip"
69 data_root = "data"
70 md5 = "f32684b0b2bf6f8d604cd359a399c061"
71 splits = ["train", "test"]
72 classes = [
73 "Urban land",
74 "Agriculture land",
75 "Rangeland",
76 "Forest land",
77 "Water",
78 "Barren land",
79 "Unknown",
80 ]
81 colormap = [
82 (0, 255, 255),
83 (255, 255, 0),
84 (255, 0, 255),
85 (0, 255, 0),
86 (0, 0, 255),
87 (255, 255, 255),
88 (0, 0, 0),
89 ]
90
91 def __init__(
92 self,
93 root: str = "data",
94 split: str = "train",
95 transforms: Optional[Callable[[dict[str, Tensor]], dict[str, Tensor]]] = None,
96 checksum: bool = False,
97 ) -> None:
98 """Initialize a new DeepGlobeLandCover dataset instance.
99
100 Args:
101 root: root directory where dataset can be found
102 split: one of "train" or "test"
103 transforms: a function/transform that takes input sample and its target as
104 entry and returns a transformed version
105 checksum: if True, check the MD5 of the downloaded files (may be slow)
106
107 Raises:
108 DatasetNotFoundError: If dataset is not found.
109 """
110 assert split in self.splits
111 self.root = root
112 self.split = split
113 self.transforms = transforms
114 self.checksum = checksum
115
116 self._verify()
117 if split == "train":
118 split_folder = "training_data"
119 else:
120 split_folder = "test_data"
121
122 self.image_fns = []
123 self.mask_fns = []
124 for image in sorted(
125 os.listdir(os.path.join(root, self.data_root, split_folder, "images"))
126 ):
127 if image.endswith(".jpg"):
128 id = image[:-8]
129 image_path = os.path.join(
130 root, self.data_root, split_folder, "images", image
131 )
132 mask_path = os.path.join(
133 root, self.data_root, split_folder, "masks", str(id) + "_mask.png"
134 )
135
136 self.image_fns.append(image_path)
137 self.mask_fns.append(mask_path)
138
139 def __getitem__(self, index: int) -> dict[str, Tensor]:
140 """Return an index within the dataset.
141
142 Args:
143 index: index to return
144
145 Returns:
146 data and label at that index
147 """
148 image = self._load_image(index)
149 mask = self._load_target(index)
150 sample = {"image": image, "mask": mask}
151
152 if self.transforms is not None:
153 sample = self.transforms(sample)
154
155 return sample
156
157 def __len__(self) -> int:
158 """Return the number of data points in the dataset.
159
160 Returns:
161 length of the dataset
162 """
163 return len(self.image_fns)
164
165 def _load_image(self, index: int) -> Tensor:
166 """Load a single image.
167
168 Args:
169 index: index to return
170
171 Returns:
172 the image
173 """
174 path = self.image_fns[index]
175
176 with Image.open(path) as img:
177 array: "np.typing.NDArray[np.int_]" = np.array(img)
178 tensor = torch.from_numpy(array)
179 # Convert from HxWxC to CxHxW
180 tensor = tensor.permute((2, 0, 1)).to(torch.float32)
181 return tensor
182
183 def _load_target(self, index: int) -> Tensor:
184 """Load the target mask for a single image.
185
186 Args:
187 index: index to return
188
189 Returns:
190 the target mask
191 """
192 path = self.mask_fns[index]
193 with Image.open(path) as img:
194 array: "np.typing.NDArray[np.uint8]" = np.array(img)
195 array = rgb_to_mask(array, self.colormap)
196 tensor = torch.from_numpy(array)
197 # Convert from HxWxC to CxHxW
198 tensor = tensor.to(torch.long)
199 return tensor
200
201 def _verify(self) -> None:
202 """Verify the integrity of the dataset."""
203 # Check if the files already exist
204 if os.path.exists(os.path.join(self.root, self.data_root)):
205 return
206
207 # Check if .zip file already exists (if so extract)
208 filepath = os.path.join(self.root, self.filename)
209
210 if os.path.isfile(filepath):
211 if self.checksum and not check_integrity(filepath, self.md5):
212 raise RuntimeError("Dataset found, but corrupted.")
213 extract_archive(filepath)
214 return
215
216 raise DatasetNotFoundError(self)
217
218 def plot(
219 self,
220 sample: dict[str, Tensor],
221 show_titles: bool = True,
222 suptitle: Optional[str] = None,
223 alpha: float = 0.5,
224 ) -> Figure:
225 """Plot a sample from the dataset.
226
227 Args:
228 sample: a sample returned by :meth:`__getitem__`
229 show_titles: flag indicating whether to show titles above each panel
230 suptitle: optional string to use as a suptitle
231 alpha: opacity with which to render predictions on top of the imagery
232
233 Returns:
234 a matplotlib Figure with the rendered sample
235 """
236 ncols = 1
237 image1 = draw_semantic_segmentation_masks(
238 sample["image"], sample["mask"], alpha=alpha, colors=self.colormap
239 )
240 if "prediction" in sample:
241 ncols += 1
242 image2 = draw_semantic_segmentation_masks(
243 sample["image"], sample["prediction"], alpha=alpha, colors=self.colormap
244 )
245
246 fig, axs = plt.subplots(ncols=ncols, figsize=(ncols * 10, 10))
247 if ncols > 1:
248 (ax0, ax1) = axs
249 else:
250 ax0 = axs
251
252 ax0.imshow(image1)
253 ax0.axis("off")
254 if ncols > 1:
255 ax1.imshow(image2)
256 ax1.axis("off")
257
258 if show_titles:
259 ax0.set_title("Ground Truth")
260 if ncols > 1:
261 ax1.set_title("Predictions")
262
263 if suptitle is not None:
264 plt.suptitle(suptitle)
265
266 return fig
267
[end of torchgeo/datasets/deepglobelandcover.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchgeo/datasets/deepglobelandcover.py b/torchgeo/datasets/deepglobelandcover.py
--- a/torchgeo/datasets/deepglobelandcover.py
+++ b/torchgeo/datasets/deepglobelandcover.py
@@ -62,8 +62,18 @@
* https://arxiv.org/pdf/1805.06561.pdf
+ .. note::
+
+ This dataset can be downloaded using:
+
+ .. code-block:: console
+
+ $ pip install kaggle # place api key at ~/.kaggle/kaggle.json
+ $ kaggle datasets download -d geoap96/deepglobe2018-landcover-segmentation-traindataset
+ $ unzip deepglobe2018-landcover-segmentation-traindataset.zip
+
.. versionadded:: 0.3
- """
+ """ # noqa: E501
filename = "data.zip"
data_root = "data"
|
{"golden_diff": "diff --git a/torchgeo/datasets/deepglobelandcover.py b/torchgeo/datasets/deepglobelandcover.py\n--- a/torchgeo/datasets/deepglobelandcover.py\n+++ b/torchgeo/datasets/deepglobelandcover.py\n@@ -62,8 +62,18 @@\n \n * https://arxiv.org/pdf/1805.06561.pdf\n \n+ .. note::\n+\n+ This dataset can be downloaded using:\n+\n+ .. code-block:: console\n+\n+ $ pip install kaggle # place api key at ~/.kaggle/kaggle.json\n+ $ kaggle datasets download -d geoap96/deepglobe2018-landcover-segmentation-traindataset\n+ $ unzip deepglobe2018-landcover-segmentation-traindataset.zip\n+\n .. versionadded:: 0.3\n- \"\"\"\n+ \"\"\" # noqa: E501\n \n filename = \"data.zip\"\n data_root = \"data\"\n", "issue": "Add instructions on downloading the DeepGlobeLandCover dataset\n### Issue\r\n\r\nThe [dataset](https://torchgeo.readthedocs.io/en/stable/api/datasets.html#torchgeo.datasets.DeepGlobeLandCover) docs state `The dataset that we use with a custom train/test split can be downloaded from Kaggle` - however this is a necessity as you cannot pass `download=True`\r\n\r\n### Fix\r\n\r\nSuggest documenting the steps using kaggle CLI (below), or just to state that this must be performed? Alternatively host on Huggingface and automate the download\r\n\r\n```\r\npip install kaggle #\u00a0place api key at ~/.kaggle/kaggle.json\r\ncd data\r\nkaggle datasets download -d geoap96/deepglobe2018-landcover-segmentation-traindataset\r\nunzip deepglobe2018-landcover-segmentation-traindataset.zip\r\n```\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"DeepGlobe Land Cover Classification Challenge dataset.\"\"\"\n\nimport os\nfrom typing import Callable, Optional\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nfrom matplotlib.figure import Figure\nfrom PIL import Image\nfrom torch import Tensor\n\nfrom .geo import NonGeoDataset\nfrom .utils import (\n DatasetNotFoundError,\n check_integrity,\n draw_semantic_segmentation_masks,\n extract_archive,\n rgb_to_mask,\n)\n\n\nclass DeepGlobeLandCover(NonGeoDataset):\n \"\"\"DeepGlobe Land Cover Classification Challenge dataset.\n\n The `DeepGlobe Land Cover Classification Challenge\n <https://competitions.codalab.org/competitions/18468>`__ dataset\n offers high-resolution sub-meter satellite imagery focusing for the task of\n semantic segmentation to detect areas of urban, agriculture, rangeland, forest,\n water, barren, and unknown. It contains 1,146 satellite images of size\n 2448 x 2448 pixels in total, split into training/validation/test sets, the original\n dataset can be downloaded from `Kaggle <https://www.kaggle.com/datasets/balraj98/\n deepglobe-land-cover-classification-dataset>`__.\n However, we only use the training dataset with 803 images since the original test\n and valid dataset are not accompanied by labels. The dataset that we use with a\n custom train/test split can be downloaded from `Kaggle <https://www.kaggle.com/\n datasets/geoap96/deepglobe2018-landcover-segmentation-traindataset>`__ (created as a\n part of Computer Vision by Deep Learning (CS4245) course offered at TU Delft).\n\n Dataset format:\n\n * images are RGB data\n * masks are RGB image with with unique RGB values representing the class\n\n Dataset classes:\n\n 0. Urban land\n 1. Agriculture land\n 2. Rangeland\n 3. Forest land\n 4. Water\n 5. Barren land\n 6. 
Unknown\n\n File names for satellite images and the corresponding mask image are id_sat.jpg and\n id_mask.png, where id is an integer assigned to every image.\n\n If you use this dataset in your research, please cite the following paper:\n\n * https://arxiv.org/pdf/1805.06561.pdf\n\n .. versionadded:: 0.3\n \"\"\"\n\n filename = \"data.zip\"\n data_root = \"data\"\n md5 = \"f32684b0b2bf6f8d604cd359a399c061\"\n splits = [\"train\", \"test\"]\n classes = [\n \"Urban land\",\n \"Agriculture land\",\n \"Rangeland\",\n \"Forest land\",\n \"Water\",\n \"Barren land\",\n \"Unknown\",\n ]\n colormap = [\n (0, 255, 255),\n (255, 255, 0),\n (255, 0, 255),\n (0, 255, 0),\n (0, 0, 255),\n (255, 255, 255),\n (0, 0, 0),\n ]\n\n def __init__(\n self,\n root: str = \"data\",\n split: str = \"train\",\n transforms: Optional[Callable[[dict[str, Tensor]], dict[str, Tensor]]] = None,\n checksum: bool = False,\n ) -> None:\n \"\"\"Initialize a new DeepGlobeLandCover dataset instance.\n\n Args:\n root: root directory where dataset can be found\n split: one of \"train\" or \"test\"\n transforms: a function/transform that takes input sample and its target as\n entry and returns a transformed version\n checksum: if True, check the MD5 of the downloaded files (may be slow)\n\n Raises:\n DatasetNotFoundError: If dataset is not found.\n \"\"\"\n assert split in self.splits\n self.root = root\n self.split = split\n self.transforms = transforms\n self.checksum = checksum\n\n self._verify()\n if split == \"train\":\n split_folder = \"training_data\"\n else:\n split_folder = \"test_data\"\n\n self.image_fns = []\n self.mask_fns = []\n for image in sorted(\n os.listdir(os.path.join(root, self.data_root, split_folder, \"images\"))\n ):\n if image.endswith(\".jpg\"):\n id = image[:-8]\n image_path = os.path.join(\n root, self.data_root, split_folder, \"images\", image\n )\n mask_path = os.path.join(\n root, self.data_root, split_folder, \"masks\", str(id) + \"_mask.png\"\n )\n\n self.image_fns.append(image_path)\n self.mask_fns.append(mask_path)\n\n def __getitem__(self, index: int) -> dict[str, Tensor]:\n \"\"\"Return an index within the dataset.\n\n Args:\n index: index to return\n\n Returns:\n data and label at that index\n \"\"\"\n image = self._load_image(index)\n mask = self._load_target(index)\n sample = {\"image\": image, \"mask\": mask}\n\n if self.transforms is not None:\n sample = self.transforms(sample)\n\n return sample\n\n def __len__(self) -> int:\n \"\"\"Return the number of data points in the dataset.\n\n Returns:\n length of the dataset\n \"\"\"\n return len(self.image_fns)\n\n def _load_image(self, index: int) -> Tensor:\n \"\"\"Load a single image.\n\n Args:\n index: index to return\n\n Returns:\n the image\n \"\"\"\n path = self.image_fns[index]\n\n with Image.open(path) as img:\n array: \"np.typing.NDArray[np.int_]\" = np.array(img)\n tensor = torch.from_numpy(array)\n # Convert from HxWxC to CxHxW\n tensor = tensor.permute((2, 0, 1)).to(torch.float32)\n return tensor\n\n def _load_target(self, index: int) -> Tensor:\n \"\"\"Load the target mask for a single image.\n\n Args:\n index: index to return\n\n Returns:\n the target mask\n \"\"\"\n path = self.mask_fns[index]\n with Image.open(path) as img:\n array: \"np.typing.NDArray[np.uint8]\" = np.array(img)\n array = rgb_to_mask(array, self.colormap)\n tensor = torch.from_numpy(array)\n # Convert from HxWxC to CxHxW\n tensor = tensor.to(torch.long)\n return tensor\n\n def _verify(self) -> None:\n \"\"\"Verify the integrity of the dataset.\"\"\"\n # Check 
if the files already exist\n if os.path.exists(os.path.join(self.root, self.data_root)):\n return\n\n # Check if .zip file already exists (if so extract)\n filepath = os.path.join(self.root, self.filename)\n\n if os.path.isfile(filepath):\n if self.checksum and not check_integrity(filepath, self.md5):\n raise RuntimeError(\"Dataset found, but corrupted.\")\n extract_archive(filepath)\n return\n\n raise DatasetNotFoundError(self)\n\n def plot(\n self,\n sample: dict[str, Tensor],\n show_titles: bool = True,\n suptitle: Optional[str] = None,\n alpha: float = 0.5,\n ) -> Figure:\n \"\"\"Plot a sample from the dataset.\n\n Args:\n sample: a sample returned by :meth:`__getitem__`\n show_titles: flag indicating whether to show titles above each panel\n suptitle: optional string to use as a suptitle\n alpha: opacity with which to render predictions on top of the imagery\n\n Returns:\n a matplotlib Figure with the rendered sample\n \"\"\"\n ncols = 1\n image1 = draw_semantic_segmentation_masks(\n sample[\"image\"], sample[\"mask\"], alpha=alpha, colors=self.colormap\n )\n if \"prediction\" in sample:\n ncols += 1\n image2 = draw_semantic_segmentation_masks(\n sample[\"image\"], sample[\"prediction\"], alpha=alpha, colors=self.colormap\n )\n\n fig, axs = plt.subplots(ncols=ncols, figsize=(ncols * 10, 10))\n if ncols > 1:\n (ax0, ax1) = axs\n else:\n ax0 = axs\n\n ax0.imshow(image1)\n ax0.axis(\"off\")\n if ncols > 1:\n ax1.imshow(image2)\n ax1.axis(\"off\")\n\n if show_titles:\n ax0.set_title(\"Ground Truth\")\n if ncols > 1:\n ax1.set_title(\"Predictions\")\n\n if suptitle is not None:\n plt.suptitle(suptitle)\n\n return fig\n", "path": "torchgeo/datasets/deepglobelandcover.py"}]}
| 3,451 | 233 |
gh_patches_debug_16634
|
rasdani/github-patches
|
git_diff
|
cowrie__cowrie-1426
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Command output logged before command in ttylog
**Describe the bug**
When using ssh in 'execcmd' mode, the command output is logged before the command itself.
Here is a snippet from `src/cowrie/insults/insults.py` with my comments:
```
if self.type == 'e':
cmd = self.terminalProtocol.execcmd.encode('utf8') <-- during this call, the output is logged
if self.ttylogEnabled:
ttylog.ttylog_write(self.ttylogFile, len(cmd), ttylog.TYPE_INTERACT, time.time(), cmd) <-- here the command is logged
```
**To Reproduce**
Steps to reproduce the behavior:
1. Connect using an ssh client and execute a command (a paramiko equivalent is sketched after these steps):
```
$ ssh root@cowrie "cat /proc/cpuinfo | grep name | wc -l"
root@cowrie's password:
2
```
2. A snippet from the cowrie log
```
executing command "b'cat /proc/cpuinfo | grep name | wc -l'"
CMD: cat /proc/cpuinfo | grep name | wc -l
Command found: wc -l
Command found: grep name
Command found: cat /proc/cpuinfo
exitCode: 0
sending request b'exit-status'
sending close 0
remote close
Closing TTY Log: var/lib/cowrie/tty/3f1f9a5db692d999bb3d576b5e9956a242136e961ff3f52ba6202b1254ccdb99 after 0 seconds
```
3. Run playlog on the new ttylog:
```
$ bin/playlog -b var/lib/cowrie/tty/3f1f9a5db692d999bb3d576b5e9956a242136e961ff3f52ba6202b1254ccdb99
2
cat /proc/cpuinfo | grep name | wc -l
```
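
For completeness, the same reproduction without an interactive client (paramiko sketch; host, port and credentials are placeholders for whatever the honeypot is configured with):

```python
# Placeholder connection details -- any SSH client issuing an exec request shows this.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("cowrie", port=2222, username="root", password="root")
stdin, stdout, stderr = client.exec_command("cat /proc/cpuinfo | grep name | wc -l")
print(stdout.read().decode())
client.close()
```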
**Expected behavior**
The command should be logged first and the output should be logged at the time when it is available.
</issue>
<code>
[start of src/cowrie/insults/insults.py]
1 # Copyright (c) 2009-2014 Upi Tamminen <[email protected]>
2 # See the COPYRIGHT file for more information
3
4 from __future__ import absolute_import, division
5
6 import hashlib
7 import os
8 import time
9
10 from twisted.conch.insults import insults
11 from twisted.python import log
12
13 from cowrie.core import ttylog
14 from cowrie.core.config import CowrieConfig
15 from cowrie.shell import protocol
16
17
18 class LoggingServerProtocol(insults.ServerProtocol):
19 """
20 Wrapper for ServerProtocol that implements TTY logging
21 """
22 redirlogOpen = False # it will be set at core/protocol.py
23 stdinlogOpen = False
24 ttylogOpen = False
25 ttylogPath = CowrieConfig().get('honeypot', 'ttylog_path')
26 downloadPath = CowrieConfig().get('honeypot', 'download_path')
27 ttylogEnabled = CowrieConfig().getboolean('honeypot', 'ttylog', fallback=True)
28 bytesReceivedLimit = CowrieConfig().getint('honeypot', 'download_limit_size', fallback=0)
29 bytesReceived = 0
30 redirFiles = set()
31
32 def __init__(self, prot=None, *a, **kw):
33 insults.ServerProtocol.__init__(self, prot, *a, **kw)
34
35 if prot is protocol.HoneyPotExecProtocol:
36 self.type = 'e' # Execcmd
37 else:
38 self.type = 'i' # Interactive
39
40 def getSessionId(self):
41 transportId = self.transport.session.conn.transport.transportId
42 channelId = self.transport.session.id
43 return (transportId, channelId)
44
45 def connectionMade(self):
46 transportId, channelId = self.getSessionId()
47 self.startTime = time.time()
48
49 if self.ttylogEnabled:
50 self.ttylogFile = '%s/%s-%s-%s%s.log' % \
51 (self.ttylogPath, time.strftime('%Y%m%d-%H%M%S'),
52 transportId, channelId, self.type)
53 ttylog.ttylog_open(self.ttylogFile, self.startTime)
54 self.ttylogOpen = True
55 self.ttylogSize = 0
56
57 self.stdinlogFile = '%s/%s-%s-%s-stdin.log' % \
58 (self.downloadPath, time.strftime('%Y%m%d-%H%M%S'), transportId, channelId)
59
60 if self.type == 'e':
61 self.stdinlogOpen = True
62 else:
63 self.stdinlogOpen = False
64
65 insults.ServerProtocol.connectionMade(self)
66
67 if self.type == 'e':
68 cmd = self.terminalProtocol.execcmd.encode('utf8')
69 if self.ttylogEnabled:
70 ttylog.ttylog_write(self.ttylogFile, len(cmd), ttylog.TYPE_INTERACT, time.time(), cmd)
71
72 def write(self, data):
73 if self.ttylogEnabled and self.ttylogOpen:
74 ttylog.ttylog_write(self.ttylogFile, len(data), ttylog.TYPE_OUTPUT, time.time(), data)
75 self.ttylogSize += len(data)
76
77 insults.ServerProtocol.write(self, data)
78
79 def dataReceived(self, data):
80 """
81 Input received from user
82 """
83 self.bytesReceived += len(data)
84 if self.bytesReceivedLimit and self.bytesReceived > self.bytesReceivedLimit:
85 log.msg(format='Data upload limit reached')
86 self.eofReceived()
87 return
88
89 if self.stdinlogOpen:
90 with open(self.stdinlogFile, 'ab') as f:
91 f.write(data)
92 elif self.ttylogEnabled and self.ttylogOpen:
93 ttylog.ttylog_write(self.ttylogFile, len(data), ttylog.TYPE_INPUT, time.time(), data)
94
95 # prevent crash if something like this was passed:
96 # echo cmd ; exit; \n\n
97 if self.terminalProtocol:
98 insults.ServerProtocol.dataReceived(self, data)
99
100 def eofReceived(self):
101 """
102 Receive channel close and pass on to terminal
103 """
104 if self.terminalProtocol:
105 self.terminalProtocol.eofReceived()
106
107 def loseConnection(self):
108 """
109 Override super to remove the terminal reset on logout
110 """
111 self.transport.loseConnection()
112
113 def connectionLost(self, reason):
114 """
115 FIXME: this method is called 4 times on logout....
116 it's called once from Avatar.closed() if disconnected
117 """
118 if self.stdinlogOpen:
119 try:
120 with open(self.stdinlogFile, 'rb') as f:
121 shasum = hashlib.sha256(f.read()).hexdigest()
122 shasumfile = os.path.join(self.downloadPath, shasum)
123 if os.path.exists(shasumfile):
124 os.remove(self.stdinlogFile)
125 duplicate = True
126 else:
127 os.rename(self.stdinlogFile, shasumfile)
128 duplicate = False
129
130 log.msg(eventid='cowrie.session.file_download',
131 format='Saved stdin contents with SHA-256 %(shasum)s to %(outfile)s',
132 duplicate=duplicate,
133 outfile=shasumfile,
134 shasum=shasum,
135 destfile='')
136 except IOError:
137 pass
138 finally:
139 self.stdinlogOpen = False
140
141 if self.redirFiles:
142 for rp in self.redirFiles:
143
144 rf = rp[0]
145
146 if rp[1]:
147 url = rp[1]
148 else:
149 url = rf[rf.find('redir_') + len('redir_'):]
150
151 try:
152 if not os.path.exists(rf):
153 continue
154
155 if os.path.getsize(rf) == 0:
156 os.remove(rf)
157 continue
158
159 with open(rf, 'rb') as f:
160 shasum = hashlib.sha256(f.read()).hexdigest()
161 shasumfile = os.path.join(self.downloadPath, shasum)
162 if os.path.exists(shasumfile):
163 os.remove(rf)
164 duplicate = True
165 else:
166 os.rename(rf, shasumfile)
167 duplicate = False
168 log.msg(eventid='cowrie.session.file_download',
169 format='Saved redir contents with SHA-256 %(shasum)s to %(outfile)s',
170 duplicate=duplicate,
171 outfile=shasumfile,
172 shasum=shasum,
173 destfile=url)
174 except IOError:
175 pass
176 self.redirFiles.clear()
177
178 if self.ttylogEnabled and self.ttylogOpen:
179 ttylog.ttylog_close(self.ttylogFile, time.time())
180 self.ttylogOpen = False
181 shasum = ttylog.ttylog_inputhash(self.ttylogFile)
182 shasumfile = os.path.join(self.ttylogPath, shasum)
183
184 if os.path.exists(shasumfile):
185 duplicate = True
186 os.remove(self.ttylogFile)
187 else:
188 duplicate = False
189 os.rename(self.ttylogFile, shasumfile)
190 umask = os.umask(0)
191 os.umask(umask)
192 os.chmod(shasumfile, 0o666 & ~umask)
193
194 log.msg(eventid='cowrie.log.closed',
195 format='Closing TTY Log: %(ttylog)s after %(duration)d seconds',
196 ttylog=shasumfile,
197 size=self.ttylogSize,
198 shasum=shasum,
199 duplicate=duplicate,
200 duration=time.time() - self.startTime)
201
202 insults.ServerProtocol.connectionLost(self, reason)
203
204
205 class LoggingTelnetServerProtocol(LoggingServerProtocol):
206 """
207 Wrap LoggingServerProtocol with single method to fetch session id for Telnet
208 """
209
210 def getSessionId(self):
211 transportId = self.transport.session.transportId
212 sn = self.transport.session.transport.transport.sessionno
213 return (transportId, sn)
214
[end of src/cowrie/insults/insults.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cowrie/insults/insults.py b/src/cowrie/insults/insults.py
--- a/src/cowrie/insults/insults.py
+++ b/src/cowrie/insults/insults.py
@@ -59,15 +59,17 @@
if self.type == 'e':
self.stdinlogOpen = True
+ # log the command into ttylog
+ if self.ttylogEnabled:
+ (sess, cmd) = self.protocolArgs
+ ttylog.ttylog_write(self.ttylogFile, len(cmd), ttylog.TYPE_INTERACT, time.time(), cmd)
else:
self.stdinlogOpen = False
insults.ServerProtocol.connectionMade(self)
if self.type == 'e':
- cmd = self.terminalProtocol.execcmd.encode('utf8')
- if self.ttylogEnabled:
- ttylog.ttylog_write(self.ttylogFile, len(cmd), ttylog.TYPE_INTERACT, time.time(), cmd)
+ self.terminalProtocol.execcmd.encode('utf8')
def write(self, data):
if self.ttylogEnabled and self.ttylogOpen:
|
{"golden_diff": "diff --git a/src/cowrie/insults/insults.py b/src/cowrie/insults/insults.py\n--- a/src/cowrie/insults/insults.py\n+++ b/src/cowrie/insults/insults.py\n@@ -59,15 +59,17 @@\n \n if self.type == 'e':\n self.stdinlogOpen = True\n+ # log the command into ttylog\n+ if self.ttylogEnabled:\n+ (sess, cmd) = self.protocolArgs\n+ ttylog.ttylog_write(self.ttylogFile, len(cmd), ttylog.TYPE_INTERACT, time.time(), cmd)\n else:\n self.stdinlogOpen = False\n \n insults.ServerProtocol.connectionMade(self)\n \n if self.type == 'e':\n- cmd = self.terminalProtocol.execcmd.encode('utf8')\n- if self.ttylogEnabled:\n- ttylog.ttylog_write(self.ttylogFile, len(cmd), ttylog.TYPE_INTERACT, time.time(), cmd)\n+ self.terminalProtocol.execcmd.encode('utf8')\n \n def write(self, data):\n if self.ttylogEnabled and self.ttylogOpen:\n", "issue": "Command output logged before command in ttylog\n**Describe the bug**\r\nWhen using ssh in 'execcmd' mode, the command output is logged before the command itself.\r\n\r\nHere a snippet from `src/cowrie/insults/insults.py` with my comments:\r\n```\r\nif self.type == 'e':\r\n cmd = self.terminalProtocol.execcmd.encode('utf8') <-- during this call, the output is logged\r\n if self.ttylogEnabled:\r\n ttylog.ttylog_write(self.ttylogFile, len(cmd), ttylog.TYPE_INTERACT, time.time(), cmd) <-- here the command is logged\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Connect using ssh client and execute a command:\r\n```\r\n$ ssh root@cowrie \"cat /proc/cpuinfo | grep name | wc -l\"\r\nroot@cowrie's password:\r\n2\r\n```\r\n2. A snippet from the cowrie log\r\n```\r\nexecuting command \"b'cat /proc/cpuinfo | grep name | wc -l'\"\r\nCMD: cat /proc/cpuinfo | grep name | wc -l\r\nCommand found: wc -l\r\nCommand found: grep name\r\nCommand found: cat /proc/cpuinfo\r\nexitCode: 0\r\nsending request b'exit-status'\r\nsending close 0\r\nremote close\r\nClosing TTY Log: var/lib/cowrie/tty/3f1f9a5db692d999bb3d576b5e9956a242136e961ff3f52ba6202b1254ccdb99 after 0 seconds\r\n```\r\n3. 
Run playlog on the new ttylog:\r\n```\r\n$ bin/playlog -b var/lib/cowrie/tty/3f1f9a5db692d999bb3d576b5e9956a242136e961ff3f52ba6202b1254ccdb99\r\n2\r\ncat /proc/cpuinfo | grep name | wc -l\r\n```\r\n**Expected behavior**\r\nThe command should be logged first and the output should be logged at the time when it is available.\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2009-2014 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\nfrom __future__ import absolute_import, division\n\nimport hashlib\nimport os\nimport time\n\nfrom twisted.conch.insults import insults\nfrom twisted.python import log\n\nfrom cowrie.core import ttylog\nfrom cowrie.core.config import CowrieConfig\nfrom cowrie.shell import protocol\n\n\nclass LoggingServerProtocol(insults.ServerProtocol):\n \"\"\"\n Wrapper for ServerProtocol that implements TTY logging\n \"\"\"\n redirlogOpen = False # it will be set at core/protocol.py\n stdinlogOpen = False\n ttylogOpen = False\n ttylogPath = CowrieConfig().get('honeypot', 'ttylog_path')\n downloadPath = CowrieConfig().get('honeypot', 'download_path')\n ttylogEnabled = CowrieConfig().getboolean('honeypot', 'ttylog', fallback=True)\n bytesReceivedLimit = CowrieConfig().getint('honeypot', 'download_limit_size', fallback=0)\n bytesReceived = 0\n redirFiles = set()\n\n def __init__(self, prot=None, *a, **kw):\n insults.ServerProtocol.__init__(self, prot, *a, **kw)\n\n if prot is protocol.HoneyPotExecProtocol:\n self.type = 'e' # Execcmd\n else:\n self.type = 'i' # Interactive\n\n def getSessionId(self):\n transportId = self.transport.session.conn.transport.transportId\n channelId = self.transport.session.id\n return (transportId, channelId)\n\n def connectionMade(self):\n transportId, channelId = self.getSessionId()\n self.startTime = time.time()\n\n if self.ttylogEnabled:\n self.ttylogFile = '%s/%s-%s-%s%s.log' % \\\n (self.ttylogPath, time.strftime('%Y%m%d-%H%M%S'),\n transportId, channelId, self.type)\n ttylog.ttylog_open(self.ttylogFile, self.startTime)\n self.ttylogOpen = True\n self.ttylogSize = 0\n\n self.stdinlogFile = '%s/%s-%s-%s-stdin.log' % \\\n (self.downloadPath, time.strftime('%Y%m%d-%H%M%S'), transportId, channelId)\n\n if self.type == 'e':\n self.stdinlogOpen = True\n else:\n self.stdinlogOpen = False\n\n insults.ServerProtocol.connectionMade(self)\n\n if self.type == 'e':\n cmd = self.terminalProtocol.execcmd.encode('utf8')\n if self.ttylogEnabled:\n ttylog.ttylog_write(self.ttylogFile, len(cmd), ttylog.TYPE_INTERACT, time.time(), cmd)\n\n def write(self, data):\n if self.ttylogEnabled and self.ttylogOpen:\n ttylog.ttylog_write(self.ttylogFile, len(data), ttylog.TYPE_OUTPUT, time.time(), data)\n self.ttylogSize += len(data)\n\n insults.ServerProtocol.write(self, data)\n\n def dataReceived(self, data):\n \"\"\"\n Input received from user\n \"\"\"\n self.bytesReceived += len(data)\n if self.bytesReceivedLimit and self.bytesReceived > self.bytesReceivedLimit:\n log.msg(format='Data upload limit reached')\n self.eofReceived()\n return\n\n if self.stdinlogOpen:\n with open(self.stdinlogFile, 'ab') as f:\n f.write(data)\n elif self.ttylogEnabled and self.ttylogOpen:\n ttylog.ttylog_write(self.ttylogFile, len(data), ttylog.TYPE_INPUT, time.time(), data)\n\n # prevent crash if something like this was passed:\n # echo cmd ; exit; \\n\\n\n if self.terminalProtocol:\n insults.ServerProtocol.dataReceived(self, data)\n\n def eofReceived(self):\n \"\"\"\n Receive channel close and pass on to terminal\n \"\"\"\n if self.terminalProtocol:\n 
self.terminalProtocol.eofReceived()\n\n def loseConnection(self):\n \"\"\"\n Override super to remove the terminal reset on logout\n \"\"\"\n self.transport.loseConnection()\n\n def connectionLost(self, reason):\n \"\"\"\n FIXME: this method is called 4 times on logout....\n it's called once from Avatar.closed() if disconnected\n \"\"\"\n if self.stdinlogOpen:\n try:\n with open(self.stdinlogFile, 'rb') as f:\n shasum = hashlib.sha256(f.read()).hexdigest()\n shasumfile = os.path.join(self.downloadPath, shasum)\n if os.path.exists(shasumfile):\n os.remove(self.stdinlogFile)\n duplicate = True\n else:\n os.rename(self.stdinlogFile, shasumfile)\n duplicate = False\n\n log.msg(eventid='cowrie.session.file_download',\n format='Saved stdin contents with SHA-256 %(shasum)s to %(outfile)s',\n duplicate=duplicate,\n outfile=shasumfile,\n shasum=shasum,\n destfile='')\n except IOError:\n pass\n finally:\n self.stdinlogOpen = False\n\n if self.redirFiles:\n for rp in self.redirFiles:\n\n rf = rp[0]\n\n if rp[1]:\n url = rp[1]\n else:\n url = rf[rf.find('redir_') + len('redir_'):]\n\n try:\n if not os.path.exists(rf):\n continue\n\n if os.path.getsize(rf) == 0:\n os.remove(rf)\n continue\n\n with open(rf, 'rb') as f:\n shasum = hashlib.sha256(f.read()).hexdigest()\n shasumfile = os.path.join(self.downloadPath, shasum)\n if os.path.exists(shasumfile):\n os.remove(rf)\n duplicate = True\n else:\n os.rename(rf, shasumfile)\n duplicate = False\n log.msg(eventid='cowrie.session.file_download',\n format='Saved redir contents with SHA-256 %(shasum)s to %(outfile)s',\n duplicate=duplicate,\n outfile=shasumfile,\n shasum=shasum,\n destfile=url)\n except IOError:\n pass\n self.redirFiles.clear()\n\n if self.ttylogEnabled and self.ttylogOpen:\n ttylog.ttylog_close(self.ttylogFile, time.time())\n self.ttylogOpen = False\n shasum = ttylog.ttylog_inputhash(self.ttylogFile)\n shasumfile = os.path.join(self.ttylogPath, shasum)\n\n if os.path.exists(shasumfile):\n duplicate = True\n os.remove(self.ttylogFile)\n else:\n duplicate = False\n os.rename(self.ttylogFile, shasumfile)\n umask = os.umask(0)\n os.umask(umask)\n os.chmod(shasumfile, 0o666 & ~umask)\n\n log.msg(eventid='cowrie.log.closed',\n format='Closing TTY Log: %(ttylog)s after %(duration)d seconds',\n ttylog=shasumfile,\n size=self.ttylogSize,\n shasum=shasum,\n duplicate=duplicate,\n duration=time.time() - self.startTime)\n\n insults.ServerProtocol.connectionLost(self, reason)\n\n\nclass LoggingTelnetServerProtocol(LoggingServerProtocol):\n \"\"\"\n Wrap LoggingServerProtocol with single method to fetch session id for Telnet\n \"\"\"\n\n def getSessionId(self):\n transportId = self.transport.session.transportId\n sn = self.transport.session.transport.transport.sessionno\n return (transportId, sn)\n", "path": "src/cowrie/insults/insults.py"}]}
| 3,285 | 271 |
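A minimal sketch of the ordering fix in the cowrie record above: a logging wrapper must write the command to the log at the moment it is known, before delegating to the protocol that produces output, so a replay shows the command first. The classes below (`LogWriter`, `LoggingExecWrapper`) are hypothetical stand-ins, not cowrie's ttylog or Twisted APIs.

```python
import time


class LogWriter:
    """Hypothetical append-only log, standing in for cowrie's ttylog module."""

    def __init__(self):
        self.records = []

    def write(self, kind, data):
        self.records.append((time.time(), kind, data))


class LoggingExecWrapper:
    """Wraps a protocol object whose execute(cmd) yields output chunks."""

    def __init__(self, log, protocol):
        self.log = log
        self.protocol = protocol

    def run(self, cmd: bytes):
        # Log the command first, so a replay shows it before its output.
        self.log.write("input", cmd)
        for chunk in self.protocol.execute(cmd):
            self.log.write("output", chunk)
```

Replaying `log.records` in order then reproduces the session in the order the user saw it.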
gh_patches_debug_51500
|
rasdani/github-patches
|
git_diff
|
holoviz__holoviews-4491
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
hv.extension('bokeh', inline=False) doesn't load javascript from CDN
#### ALL software version info
* Holoviews 1.13.2
* jupyterlab 2.1.0
* bokeh 2.0.1
* panel 0.9.5
#### Description of expected behavior and the observed behavior
To reduce the size of the notebooks, I use `holoviews.extension('bokeh', inline=False)`, but the size of the notebook doesn't change.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import holoviews as hv
hv.extension('bokeh', inline=False)
```
and then check the size of the notebook.
I found how to fix this:
https://github.com/holoviz/holoviews/blob/0693a07f2af1095bca84be9c9a8a2503d1ded9ab/holoviews/ipython/__init__.py#L188
change the line to `Renderer.load_nb(inline=p.inline)`.
</issue>
<code>
[start of holoviews/ipython/__init__.py]
1 import os
2 from unittest import SkipTest
3
4 import param
5 import holoviews
6
7 from IPython import version_info
8 from IPython.core.completer import IPCompleter
9 from IPython.display import HTML, publish_display_data
10 from param import ipython as param_ext
11
12 from ..core.dimension import LabelledData
13 from ..core.tree import AttrTree
14 from ..core.options import Store
15 from ..element.comparison import ComparisonTestCase
16 from ..util import extension
17 from ..plotting.renderer import Renderer
18 from .magics import load_magics
19 from .display_hooks import display # noqa (API import)
20 from .display_hooks import pprint_display, png_display, svg_display
21
22
23 AttrTree._disabled_prefixes = ['_repr_','_ipython_canary_method_should_not_exist']
24
25 def show_traceback():
26 """
27 Display the full traceback after an abbreviated traceback has occurred.
28 """
29 from .display_hooks import FULL_TRACEBACK
30 print(FULL_TRACEBACK)
31
32
33 class IPTestCase(ComparisonTestCase):
34 """
35 This class extends ComparisonTestCase to handle IPython specific
36 objects and support the execution of cells and magic.
37 """
38
39 def setUp(self):
40 super(IPTestCase, self).setUp()
41 try:
42 import IPython
43 from IPython.display import HTML, SVG
44 self.ip = IPython.InteractiveShell()
45 if self.ip is None:
46 raise TypeError()
47 except Exception:
48 raise SkipTest("IPython could not be started")
49
50 self.addTypeEqualityFunc(HTML, self.skip_comparison)
51 self.addTypeEqualityFunc(SVG, self.skip_comparison)
52
53 def skip_comparison(self, obj1, obj2, msg): pass
54
55 def get_object(self, name):
56 obj = self.ip._object_find(name).obj
57 if obj is None:
58 raise self.failureException("Could not find object %s" % name)
59 return obj
60
61
62 def cell(self, line):
63 "Run an IPython cell"
64 self.ip.run_cell(line, silent=True)
65
66 def cell_magic(self, *args, **kwargs):
67 "Run an IPython cell magic"
68 self.ip.run_cell_magic(*args, **kwargs)
69
70
71 def line_magic(self, *args, **kwargs):
72 "Run an IPython line magic"
73 self.ip.run_line_magic(*args, **kwargs)
74
75
76 class notebook_extension(extension):
77 """
78 Notebook specific extension to hv.extension that offers options for
79 controlling the notebook environment.
80 """
81
82 css = param.String(default='', doc="Optional CSS rule set to apply to the notebook.")
83
84 logo = param.Boolean(default=True, doc="Toggles display of HoloViews logo")
85
86 inline = param.Boolean(default=True, doc="""
87 Whether to inline JS and CSS resources.
88 If disabled, resources are loaded from CDN if one is available.""")
89
90 width = param.Number(default=None, bounds=(0, 100), doc="""
91 Width of the notebook as a percentage of the browser screen window width.""")
92
93 display_formats = param.List(default=['html'], doc="""
94 A list of formats that are rendered to the notebook where
95 multiple formats may be selected at once (although only one
96 format will be displayed).
97
98 Although the 'html' format is supported across backends, other
99 formats supported by the current backend (e.g 'png' and 'svg'
100 using the matplotlib backend) may be used. This may be useful to
101 export figures to other formats such as PDF with nbconvert. """)
102
103 allow_jedi_completion = param.Boolean(default=False, doc="""
104 Whether to allow jedi tab-completion to be enabled in IPython.
105 Disabled by default because many HoloViews features rely on
106 tab-completion machinery not supported when using jedi.""")
107
108 case_sensitive_completion = param.Boolean(default=False, doc="""
109 Whether to monkey patch IPython to use the correct tab-completion
110 behavior. """)
111
112 _loaded = False
113
114 def __call__(self, *args, **params):
115 comms = params.pop('comms', None)
116 super(notebook_extension, self).__call__(*args, **params)
117 # Abort if IPython not found
118 try:
119 ip = params.pop('ip', None) or get_ipython() # noqa (get_ipython)
120 except:
121 return
122
123 # Notebook archive relies on display hooks being set to work.
124 try:
125 if version_info[0] >= 4:
126 import nbformat # noqa (ensures availability)
127 else:
128 from IPython import nbformat # noqa (ensures availability)
129 try:
130 from .archive import notebook_archive
131 holoviews.archive = notebook_archive
132 except AttributeError as e:
133 if str(e) != "module 'tornado.web' has no attribute 'asynchronous'":
134 raise
135
136 except ImportError:
137 pass
138
139 # Not quite right, should be set when switching backends
140 if 'matplotlib' in Store.renderers and not notebook_extension._loaded:
141 svg_exporter = Store.renderers['matplotlib'].instance(holomap=None,fig='svg')
142 holoviews.archive.exporters = [svg_exporter] + holoviews.archive.exporters
143
144 p = param.ParamOverrides(self, {k:v for k,v in params.items() if k!='config'})
145 if p.case_sensitive_completion:
146 from IPython.core import completer
147 completer.completions_sorting_key = self.completions_sorting_key
148 if not p.allow_jedi_completion and hasattr(IPCompleter, 'use_jedi'):
149 ip.run_line_magic('config', 'IPCompleter.use_jedi = False')
150
151 resources = self._get_resources(args, params)
152
153 Store.display_formats = p.display_formats
154 if 'html' not in p.display_formats and len(p.display_formats) > 1:
155 msg = ('Output magic unable to control displayed format '
156 'as IPython notebook uses fixed precedence '
157 'between %r' % p.display_formats)
158 display(HTML('<b>Warning</b>: %s' % msg))
159
160 loaded = notebook_extension._loaded
161 if loaded == False:
162 param_ext.load_ipython_extension(ip, verbose=False)
163 load_magics(ip)
164 Store.output_settings.initialize(list(Store.renderers.keys()))
165 Store.set_display_hook('html+js', LabelledData, pprint_display)
166 Store.set_display_hook('png', LabelledData, png_display)
167 Store.set_display_hook('svg', LabelledData, svg_display)
168 notebook_extension._loaded = True
169
170 css = ''
171 if p.width is not None:
172 css += '<style>div.container { width: %s%% }</style>' % p.width
173 if p.css:
174 css += '<style>%s</style>' % p.css
175
176 if css:
177 display(HTML(css))
178
179 resources = list(resources)
180 if len(resources) == 0: return
181
182 from panel import config
183 if hasattr(config, 'comms') and comms:
184 config.comms = comms
185
186 for r in [r for r in resources if r != 'holoviews']:
187 Store.renderers[r].load_nb(inline=p.inline)
188 Renderer.load_nb()
189
190 if hasattr(ip, 'kernel') and not loaded:
191 Renderer.comm_manager.get_client_comm(notebook_extension._process_comm_msg,
192 "hv-extension-comm")
193
194 # Create a message for the logo (if shown)
195 self.load_hvjs(logo=p.logo,
196 bokeh_logo= p.logo and ('bokeh' in resources),
197 mpl_logo= p.logo and (('matplotlib' in resources)
198 or resources==['holoviews']),
199 plotly_logo= p.logo and ('plotly' in resources))
200
201 @classmethod
202 def completions_sorting_key(cls, word):
203 "Fixed version of IPyton.completer.completions_sorting_key"
204 prio1, prio2 = 0, 0
205 if word.startswith('__'): prio1 = 2
206 elif word.startswith('_'): prio1 = 1
207 if word.endswith('='): prio1 = -1
208 if word.startswith('%%'):
209 if not "%" in word[2:]:
210 word = word[2:]; prio2 = 2
211 elif word.startswith('%'):
212 if not "%" in word[1:]:
213 word = word[1:]; prio2 = 1
214 return prio1, word, prio2
215
216
217 def _get_resources(self, args, params):
218 """
219 Finds the list of resources from the keyword parameters and pops
220 them out of the params dictionary.
221 """
222 resources = []
223 disabled = []
224 for resource in ['holoviews'] + list(Store.renderers.keys()):
225 if resource in args:
226 resources.append(resource)
227
228 if resource in params:
229 setting = params.pop(resource)
230 if setting is True and resource != 'matplotlib':
231 if resource not in resources:
232 resources.append(resource)
233 if setting is False:
234 disabled.append(resource)
235
236 unmatched_args = set(args) - set(resources)
237 if unmatched_args:
238 display(HTML('<b>Warning:</b> Unrecognized resources %s'
239 % ', '.join(unmatched_args)))
240
241 resources = [r for r in resources if r not in disabled]
242 if ('holoviews' not in disabled) and ('holoviews' not in resources):
243 resources = ['holoviews'] + resources
244 return resources
245
246 @classmethod
247 def load_hvjs(cls, logo=False, bokeh_logo=False, mpl_logo=False, plotly_logo=False,
248 JS=True, message='HoloViewsJS successfully loaded.'):
249 """
250 Displays javascript and CSS to initialize HoloViews widgets.
251 """
252 import jinja2
253
254 templateLoader = jinja2.FileSystemLoader(os.path.dirname(os.path.abspath(__file__)))
255 jinjaEnv = jinja2.Environment(loader=templateLoader)
256 template = jinjaEnv.get_template('load_notebook.html')
257 html = template.render({'logo': logo,
258 'bokeh_logo': bokeh_logo,
259 'mpl_logo': mpl_logo,
260 'plotly_logo': plotly_logo,
261 'message': message})
262 publish_display_data(data={'text/html': html})
263
264
265 notebook_extension.add_delete_action(Renderer._delete_plot)
266
267
268 def load_ipython_extension(ip):
269 notebook_extension(ip=ip)
270
271 def unload_ipython_extension(ip):
272 notebook_extension._loaded = False
273
[end of holoviews/ipython/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/holoviews/ipython/__init__.py b/holoviews/ipython/__init__.py
--- a/holoviews/ipython/__init__.py
+++ b/holoviews/ipython/__init__.py
@@ -185,7 +185,7 @@
for r in [r for r in resources if r != 'holoviews']:
Store.renderers[r].load_nb(inline=p.inline)
- Renderer.load_nb()
+ Renderer.load_nb(inline=p.inline)
if hasattr(ip, 'kernel') and not loaded:
Renderer.comm_manager.get_client_comm(notebook_extension._process_comm_msg,
|
{"golden_diff": "diff --git a/holoviews/ipython/__init__.py b/holoviews/ipython/__init__.py\n--- a/holoviews/ipython/__init__.py\n+++ b/holoviews/ipython/__init__.py\n@@ -185,7 +185,7 @@\n \n for r in [r for r in resources if r != 'holoviews']:\n Store.renderers[r].load_nb(inline=p.inline)\n- Renderer.load_nb()\n+ Renderer.load_nb(inline=p.inline)\n \n if hasattr(ip, 'kernel') and not loaded:\n Renderer.comm_manager.get_client_comm(notebook_extension._process_comm_msg,\n", "issue": "hv.extension('bokeh', inline=False) doesn't load javascript from CDN\n#### ALL software version info\r\n\r\n* Holoviews 1.13.2\r\n* jupyterlab 2.1.0\r\n* bokeh 2.0.1\r\n* panel 0.9.5\r\n\r\n#### Description of expected behavior and the observed behavior\r\n\r\nTo reduce the size of the notebooks, I use `holoviews.extension('bokeh', inline=False)`, but the size of the notebook doesn't change.\r\n\r\n#### Complete, minimal, self-contained example code that reproduces the issue\r\n\r\n```python\r\nimport holoviews as hv\r\nhv.extension('bokeh', inline=False)\r\n```\r\n\r\nand then check the size of the notebook.\r\n\r\nI found how to fix this:\r\n\r\nhttps://github.com/holoviz/holoviews/blob/0693a07f2af1095bca84be9c9a8a2503d1ded9ab/holoviews/ipython/__init__.py#L188\r\n\r\nchange the line to `Renderer.load_nb(inline=p.inline)`.\n", "before_files": [{"content": "import os\nfrom unittest import SkipTest\n\nimport param\nimport holoviews\n\nfrom IPython import version_info\nfrom IPython.core.completer import IPCompleter\nfrom IPython.display import HTML, publish_display_data\nfrom param import ipython as param_ext\n\nfrom ..core.dimension import LabelledData\nfrom ..core.tree import AttrTree\nfrom ..core.options import Store\nfrom ..element.comparison import ComparisonTestCase\nfrom ..util import extension\nfrom ..plotting.renderer import Renderer\nfrom .magics import load_magics\nfrom .display_hooks import display # noqa (API import)\nfrom .display_hooks import pprint_display, png_display, svg_display\n\n\nAttrTree._disabled_prefixes = ['_repr_','_ipython_canary_method_should_not_exist']\n\ndef show_traceback():\n \"\"\"\n Display the full traceback after an abbreviated traceback has occurred.\n \"\"\"\n from .display_hooks import FULL_TRACEBACK\n print(FULL_TRACEBACK)\n\n\nclass IPTestCase(ComparisonTestCase):\n \"\"\"\n This class extends ComparisonTestCase to handle IPython specific\n objects and support the execution of cells and magic.\n \"\"\"\n\n def setUp(self):\n super(IPTestCase, self).setUp()\n try:\n import IPython\n from IPython.display import HTML, SVG\n self.ip = IPython.InteractiveShell()\n if self.ip is None:\n raise TypeError()\n except Exception:\n raise SkipTest(\"IPython could not be started\")\n\n self.addTypeEqualityFunc(HTML, self.skip_comparison)\n self.addTypeEqualityFunc(SVG, self.skip_comparison)\n\n def skip_comparison(self, obj1, obj2, msg): pass\n\n def get_object(self, name):\n obj = self.ip._object_find(name).obj\n if obj is None:\n raise self.failureException(\"Could not find object %s\" % name)\n return obj\n\n\n def cell(self, line):\n \"Run an IPython cell\"\n self.ip.run_cell(line, silent=True)\n\n def cell_magic(self, *args, **kwargs):\n \"Run an IPython cell magic\"\n self.ip.run_cell_magic(*args, **kwargs)\n\n\n def line_magic(self, *args, **kwargs):\n \"Run an IPython line magic\"\n self.ip.run_line_magic(*args, **kwargs)\n\n\nclass notebook_extension(extension):\n \"\"\"\n Notebook specific extension to hv.extension that offers options for\n controlling the notebook 
environment.\n \"\"\"\n\n css = param.String(default='', doc=\"Optional CSS rule set to apply to the notebook.\")\n\n logo = param.Boolean(default=True, doc=\"Toggles display of HoloViews logo\")\n\n inline = param.Boolean(default=True, doc=\"\"\"\n Whether to inline JS and CSS resources. \n If disabled, resources are loaded from CDN if one is available.\"\"\")\n\n width = param.Number(default=None, bounds=(0, 100), doc=\"\"\"\n Width of the notebook as a percentage of the browser screen window width.\"\"\")\n\n display_formats = param.List(default=['html'], doc=\"\"\"\n A list of formats that are rendered to the notebook where\n multiple formats may be selected at once (although only one\n format will be displayed).\n\n Although the 'html' format is supported across backends, other\n formats supported by the current backend (e.g 'png' and 'svg'\n using the matplotlib backend) may be used. This may be useful to\n export figures to other formats such as PDF with nbconvert. \"\"\")\n\n allow_jedi_completion = param.Boolean(default=False, doc=\"\"\"\n Whether to allow jedi tab-completion to be enabled in IPython.\n Disabled by default because many HoloViews features rely on\n tab-completion machinery not supported when using jedi.\"\"\")\n\n case_sensitive_completion = param.Boolean(default=False, doc=\"\"\"\n Whether to monkey patch IPython to use the correct tab-completion\n behavior. \"\"\")\n\n _loaded = False\n\n def __call__(self, *args, **params):\n comms = params.pop('comms', None)\n super(notebook_extension, self).__call__(*args, **params)\n # Abort if IPython not found\n try:\n ip = params.pop('ip', None) or get_ipython() # noqa (get_ipython)\n except:\n return\n\n # Notebook archive relies on display hooks being set to work.\n try:\n if version_info[0] >= 4:\n import nbformat # noqa (ensures availability)\n else:\n from IPython import nbformat # noqa (ensures availability)\n try:\n from .archive import notebook_archive\n holoviews.archive = notebook_archive\n except AttributeError as e:\n if str(e) != \"module 'tornado.web' has no attribute 'asynchronous'\":\n raise\n\n except ImportError:\n pass\n\n # Not quite right, should be set when switching backends\n if 'matplotlib' in Store.renderers and not notebook_extension._loaded:\n svg_exporter = Store.renderers['matplotlib'].instance(holomap=None,fig='svg')\n holoviews.archive.exporters = [svg_exporter] + holoviews.archive.exporters\n\n p = param.ParamOverrides(self, {k:v for k,v in params.items() if k!='config'})\n if p.case_sensitive_completion:\n from IPython.core import completer\n completer.completions_sorting_key = self.completions_sorting_key\n if not p.allow_jedi_completion and hasattr(IPCompleter, 'use_jedi'):\n ip.run_line_magic('config', 'IPCompleter.use_jedi = False')\n\n resources = self._get_resources(args, params)\n\n Store.display_formats = p.display_formats\n if 'html' not in p.display_formats and len(p.display_formats) > 1:\n msg = ('Output magic unable to control displayed format '\n 'as IPython notebook uses fixed precedence '\n 'between %r' % p.display_formats)\n display(HTML('<b>Warning</b>: %s' % msg))\n\n loaded = notebook_extension._loaded\n if loaded == False:\n param_ext.load_ipython_extension(ip, verbose=False)\n load_magics(ip)\n Store.output_settings.initialize(list(Store.renderers.keys()))\n Store.set_display_hook('html+js', LabelledData, pprint_display)\n Store.set_display_hook('png', LabelledData, png_display)\n Store.set_display_hook('svg', LabelledData, svg_display)\n notebook_extension._loaded = 
True\n\n css = ''\n if p.width is not None:\n css += '<style>div.container { width: %s%% }</style>' % p.width\n if p.css:\n css += '<style>%s</style>' % p.css\n\n if css:\n display(HTML(css))\n\n resources = list(resources)\n if len(resources) == 0: return\n\n from panel import config\n if hasattr(config, 'comms') and comms:\n config.comms = comms\n\n for r in [r for r in resources if r != 'holoviews']:\n Store.renderers[r].load_nb(inline=p.inline)\n Renderer.load_nb()\n\n if hasattr(ip, 'kernel') and not loaded:\n Renderer.comm_manager.get_client_comm(notebook_extension._process_comm_msg,\n \"hv-extension-comm\")\n\n # Create a message for the logo (if shown)\n self.load_hvjs(logo=p.logo,\n bokeh_logo= p.logo and ('bokeh' in resources),\n mpl_logo= p.logo and (('matplotlib' in resources)\n or resources==['holoviews']),\n plotly_logo= p.logo and ('plotly' in resources))\n\n @classmethod\n def completions_sorting_key(cls, word):\n \"Fixed version of IPyton.completer.completions_sorting_key\"\n prio1, prio2 = 0, 0\n if word.startswith('__'): prio1 = 2\n elif word.startswith('_'): prio1 = 1\n if word.endswith('='): prio1 = -1\n if word.startswith('%%'):\n if not \"%\" in word[2:]:\n word = word[2:]; prio2 = 2\n elif word.startswith('%'):\n if not \"%\" in word[1:]:\n word = word[1:]; prio2 = 1\n return prio1, word, prio2\n\n\n def _get_resources(self, args, params):\n \"\"\"\n Finds the list of resources from the keyword parameters and pops\n them out of the params dictionary.\n \"\"\"\n resources = []\n disabled = []\n for resource in ['holoviews'] + list(Store.renderers.keys()):\n if resource in args:\n resources.append(resource)\n\n if resource in params:\n setting = params.pop(resource)\n if setting is True and resource != 'matplotlib':\n if resource not in resources:\n resources.append(resource)\n if setting is False:\n disabled.append(resource)\n\n unmatched_args = set(args) - set(resources)\n if unmatched_args:\n display(HTML('<b>Warning:</b> Unrecognized resources %s'\n % ', '.join(unmatched_args)))\n\n resources = [r for r in resources if r not in disabled]\n if ('holoviews' not in disabled) and ('holoviews' not in resources):\n resources = ['holoviews'] + resources\n return resources\n\n @classmethod\n def load_hvjs(cls, logo=False, bokeh_logo=False, mpl_logo=False, plotly_logo=False,\n JS=True, message='HoloViewsJS successfully loaded.'):\n \"\"\"\n Displays javascript and CSS to initialize HoloViews widgets.\n \"\"\"\n import jinja2\n\n templateLoader = jinja2.FileSystemLoader(os.path.dirname(os.path.abspath(__file__)))\n jinjaEnv = jinja2.Environment(loader=templateLoader)\n template = jinjaEnv.get_template('load_notebook.html')\n html = template.render({'logo': logo,\n 'bokeh_logo': bokeh_logo,\n 'mpl_logo': mpl_logo,\n 'plotly_logo': plotly_logo,\n 'message': message})\n publish_display_data(data={'text/html': html})\n\n\nnotebook_extension.add_delete_action(Renderer._delete_plot)\n\n\ndef load_ipython_extension(ip):\n notebook_extension(ip=ip)\n\ndef unload_ipython_extension(ip):\n notebook_extension._loaded = False\n", "path": "holoviews/ipython/__init__.py"}]}
| 3,766 | 151 |
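A minimal sketch of the option-forwarding bug fixed in the holoviews record above, using a hypothetical renderer rather than holoviews' `Renderer` class: every downstream loader call must receive the user's `inline` choice, otherwise one call silently falls back to its default.

```python
class FakeRenderer:
    """Hypothetical stand-in for a plotting backend's renderer."""

    def load_nb(self, inline: bool = True):
        source = "inline" if inline else "CDN"
        print(f"loading JS/CSS from {source}")


def load_notebook_resources(renderers, inline: bool = True):
    for renderer in renderers:
        # Calling renderer.load_nb() with no argument here would drop the
        # user's choice and always inline the resources (the reported bug).
        renderer.load_nb(inline=inline)


load_notebook_resources([FakeRenderer()], inline=False)  # prints: loading JS/CSS from CDN
```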
gh_patches_debug_14663
|
rasdani/github-patches
|
git_diff
|
pytorch__text-1889
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error running unit tests when building with setup.py install
## 🐛 Bug
**Describe the bug**
When building with python setup.py install, running pytest from either the project root directory or the test/ directory causes the error `ImportError: torchtext C++ Extension is not found`. This can be worked around by renaming the torchtext subdirectory, or by instead using python setup.py develop like the CI does (see .circleci/unittest/linux/scripts/install.sh#L36).
**To Reproduce** Steps to reproduce the behavior:
1. Follow the build steps like normal, running python setup.py install
2. Run pytest
3. Every test fails with the error `ImportError: torchtext C++ Extension is not found`.
**Expected behavior**
The tests should succeed even when installing with setup.py install, either running pytest from the project root or the test/ directory (this is the case in pytorch) without having to rename the torchtext subdirectory.
**Environment**
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py) (or
fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
python -c "import torchtext; print(\"torchtext version is \", torchtext.__version__)"
```
- PyTorch Version (e.g., 1.0): 1.12
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): Compiled from source
- Build command you used (if compiling from source): python3 ./setup.py install
- Python version: 3.7.13
- CUDA/cuDNN version: ROCm version 5.2
- GPU models and configuration: N/A
- Any other relevant information:
</issue>
<code>
[start of tools/setup_helpers/extension.py]
1 import distutils.sysconfig
2 import os
3 import platform
4 import subprocess
5 from pathlib import Path
6
7 import torch
8 from setuptools import Extension
9 from setuptools.command.build_ext import build_ext
10
11
12 __all__ = [
13 "get_ext_modules",
14 "CMakeBuild",
15 ]
16
17
18 _LIBTORCHTEXT_NAME = "torchtext.lib.libtorchtext"
19 _EXT_NAME = "torchtext._torchtext"
20 _THIS_DIR = Path(__file__).parent.resolve()
21 _ROOT_DIR = _THIS_DIR.parent.parent.resolve()
22
23
24 def get_ext_modules():
25 modules = [
26 Extension(name=_LIBTORCHTEXT_NAME, sources=[]),
27 Extension(name=_EXT_NAME, sources=[]),
28 ]
29 return modules
30
31
32 # Based off of
33 # https://github.com/pybind/cmake_example/blob/580c5fd29d4651db99d8874714b07c0c49a53f8a/setup.py
34
35
36 class CMakeBuild(build_ext):
37 def run(self):
38 try:
39 subprocess.check_output(["cmake", "--version"])
40 except OSError:
41 raise RuntimeError("CMake is not available.") from None
42 super().run()
43
44 def build_extension(self, ext):
45 # Since two library files (libtorchaudio and _torchaudio) need to be
46 # recognized by setuptools, we instantiate `Extension` twice. (see `get_ext_modules`)
47 # This leads to the situation where this `build_extension` method is called twice.
48 # However, the following `cmake` command will build all of them at the same time,
49 # so, we do not need to perform `cmake` twice.
50 # Therefore we call `cmake` only for `torchaudio._torchaudio`.
51 if ext.name != "torchtext._torchtext":
52 return
53
54 extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
55
56 # required for auto-detection of auxiliary "native" libs
57 if not extdir.endswith(os.path.sep):
58 extdir += os.path.sep
59
60 cfg = "Debug" if self.debug else "Release"
61
62 cmake_args = [
63 f"-DCMAKE_BUILD_TYPE={cfg}",
64 f"-DCMAKE_PREFIX_PATH={torch.utils.cmake_prefix_path}",
65 f"-DCMAKE_INSTALL_PREFIX={extdir}",
66 "-DCMAKE_VERBOSE_MAKEFILE=ON",
67 f"-DPython_INCLUDE_DIR={distutils.sysconfig.get_python_inc()}",
68 f"-DTORCH_INSTALL_PREFIX:STRING={os.path.dirname(torch.__file__)}",
69 "-DBUILD_TORCHTEXT_PYTHON_EXTENSION:BOOL=ON",
70 "-DRE2_BUILD_TESTING:BOOL=OFF",
71 "-DBUILD_TESTING:BOOL=OFF",
72 "-DBUILD_SHARED_LIBS=OFF",
73 "-DCMAKE_POLICY_DEFAULT_CMP0063=NEW",
74 "-DSPM_ENABLE_SHARED=OFF",
75 ]
76 build_args = ["--target", "install"]
77
78 # Default to Ninja
79 if "CMAKE_GENERATOR" not in os.environ or platform.system() == "Windows":
80 cmake_args += ["-GNinja"]
81 if platform.system() == "Windows":
82 import sys
83
84 python_version = sys.version_info
85 cmake_args += [
86 "-DCMAKE_C_COMPILER=cl",
87 "-DCMAKE_CXX_COMPILER=cl",
88 f"-DPYTHON_VERSION={python_version.major}.{python_version.minor}",
89 ]
90
91 # Set CMAKE_BUILD_PARALLEL_LEVEL to control the parallel build level
92 # across all generators.
93 if "CMAKE_BUILD_PARALLEL_LEVEL" not in os.environ:
94 # self.parallel is a Python 3 only way to set parallel jobs by hand
95 # using -j in the build_ext call, not supported by pip or PyPA-build.
96 if hasattr(self, "parallel") and self.parallel:
97 # CMake 3.12+ only.
98 build_args += ["-j{}".format(self.parallel)]
99
100 if not os.path.exists(self.build_temp):
101 os.makedirs(self.build_temp)
102
103 subprocess.check_call(["cmake", str(_ROOT_DIR)] + cmake_args, cwd=self.build_temp)
104 subprocess.check_call(["cmake", "--build", "."] + build_args, cwd=self.build_temp)
105
106 def get_ext_filename(self, fullname):
107 ext_filename = super().get_ext_filename(fullname)
108 ext_filename_parts = ext_filename.split(".")
109 without_abi = ext_filename_parts[:-2] + ext_filename_parts[-1:]
110 ext_filename = ".".join(without_abi)
111 return ext_filename
112
[end of tools/setup_helpers/extension.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/setup_helpers/extension.py b/tools/setup_helpers/extension.py
--- a/tools/setup_helpers/extension.py
+++ b/tools/setup_helpers/extension.py
@@ -21,6 +21,10 @@
_ROOT_DIR = _THIS_DIR.parent.parent.resolve()
+def _get_cxx11_abi():
+ return "-D_GLIBCXX_USE_CXX11_ABI=" + str(int(torch.compiled_with_cxx11_abi()))
+
+
def get_ext_modules():
modules = [
Extension(name=_LIBTORCHTEXT_NAME, sources=[]),
@@ -72,6 +76,7 @@
"-DBUILD_SHARED_LIBS=OFF",
"-DCMAKE_POLICY_DEFAULT_CMP0063=NEW",
"-DSPM_ENABLE_SHARED=OFF",
+ f"-DTORCH_COMPILED_WITH_CXX_ABI={_get_cxx11_abi()}",
]
build_args = ["--target", "install"]
|
{"golden_diff": "diff --git a/tools/setup_helpers/extension.py b/tools/setup_helpers/extension.py\n--- a/tools/setup_helpers/extension.py\n+++ b/tools/setup_helpers/extension.py\n@@ -21,6 +21,10 @@\n _ROOT_DIR = _THIS_DIR.parent.parent.resolve()\n \n \n+def _get_cxx11_abi():\n+ return \"-D_GLIBCXX_USE_CXX11_ABI=\" + str(int(torch.compiled_with_cxx11_abi()))\n+\n+\n def get_ext_modules():\n modules = [\n Extension(name=_LIBTORCHTEXT_NAME, sources=[]),\n@@ -72,6 +76,7 @@\n \"-DBUILD_SHARED_LIBS=OFF\",\n \"-DCMAKE_POLICY_DEFAULT_CMP0063=NEW\",\n \"-DSPM_ENABLE_SHARED=OFF\",\n+ f\"-DTORCH_COMPILED_WITH_CXX_ABI={_get_cxx11_abi()}\",\n ]\n build_args = [\"--target\", \"install\"]\n", "issue": "Error running unit tests when building with setup.py install\n## \ud83d\udc1b Bug\r\n\r\n**Describe the bug** A clear and concise description of what the bug is.\r\nWhen building with python setup.py install, running pytest from either the project root directory or the test/ directory causes the error `ImportError: torchtext C++ Extension is not found`. This can be worked-around by renaming the torchtext subdirectory, or by instead using python setup.py develop like the CI does (see .circleci/unittest/linux/scripts/install.sh#L36).\r\n\r\n**To Reproduce** Steps to reproduce the behavior:\r\n\r\n1. Follow the build steps like normal, running python setup.py install\r\n2. Run pytest\r\n3. Every test fails with the error `ImportError: torchtext C++ Extension is not found`.\r\n\r\n**Expected behavior** A clear and concise description of what you expected to happen.\r\n\r\nThe tests should succeed even when installing with setup.py install, either running pytest from the project root or the test/ directory (this is the case in pytorch) without having to rename the torchtext subdirectory.\r\n\r\n**Screenshots** If applicable, add screenshots to help explain your problem.\r\n\r\n**Environment**\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py) (or\r\nfill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\npython -c \"import torchtext; print(\\\"torchtext version is \\\", torchtext.__version__)\"\r\n```\r\n\r\n- PyTorch Version (e.g., 1.0): 1.12\r\n- OS (e.g., Linux): Linux\r\n- How you installed PyTorch (`conda`, `pip`, source): Compiled from source\r\n- Build command you used (if compiling from source): python3 ./setup.py install\r\n- Python version: 3.7.13\r\n- CUDA/cuDNN version: ROCm version 5.2\r\n- GPU models and configuration: N/A\r\n- Any other relevant information:\r\n\r\n**Additional context** Add any other context about the problem here.\r\n\n", "before_files": [{"content": "import distutils.sysconfig\nimport os\nimport platform\nimport subprocess\nfrom pathlib import Path\n\nimport torch\nfrom setuptools import Extension\nfrom setuptools.command.build_ext import build_ext\n\n\n__all__ = [\n \"get_ext_modules\",\n \"CMakeBuild\",\n]\n\n\n_LIBTORCHTEXT_NAME = \"torchtext.lib.libtorchtext\"\n_EXT_NAME = \"torchtext._torchtext\"\n_THIS_DIR = Path(__file__).parent.resolve()\n_ROOT_DIR = _THIS_DIR.parent.parent.resolve()\n\n\ndef get_ext_modules():\n modules = [\n Extension(name=_LIBTORCHTEXT_NAME, sources=[]),\n Extension(name=_EXT_NAME, 
sources=[]),\n ]\n return modules\n\n\n# Based off of\n# https://github.com/pybind/cmake_example/blob/580c5fd29d4651db99d8874714b07c0c49a53f8a/setup.py\n\n\nclass CMakeBuild(build_ext):\n def run(self):\n try:\n subprocess.check_output([\"cmake\", \"--version\"])\n except OSError:\n raise RuntimeError(\"CMake is not available.\") from None\n super().run()\n\n def build_extension(self, ext):\n # Since two library files (libtorchaudio and _torchaudio) need to be\n # recognized by setuptools, we instantiate `Extension` twice. (see `get_ext_modules`)\n # This leads to the situation where this `build_extension` method is called twice.\n # However, the following `cmake` command will build all of them at the same time,\n # so, we do not need to perform `cmake` twice.\n # Therefore we call `cmake` only for `torchaudio._torchaudio`.\n if ext.name != \"torchtext._torchtext\":\n return\n\n extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))\n\n # required for auto-detection of auxiliary \"native\" libs\n if not extdir.endswith(os.path.sep):\n extdir += os.path.sep\n\n cfg = \"Debug\" if self.debug else \"Release\"\n\n cmake_args = [\n f\"-DCMAKE_BUILD_TYPE={cfg}\",\n f\"-DCMAKE_PREFIX_PATH={torch.utils.cmake_prefix_path}\",\n f\"-DCMAKE_INSTALL_PREFIX={extdir}\",\n \"-DCMAKE_VERBOSE_MAKEFILE=ON\",\n f\"-DPython_INCLUDE_DIR={distutils.sysconfig.get_python_inc()}\",\n f\"-DTORCH_INSTALL_PREFIX:STRING={os.path.dirname(torch.__file__)}\",\n \"-DBUILD_TORCHTEXT_PYTHON_EXTENSION:BOOL=ON\",\n \"-DRE2_BUILD_TESTING:BOOL=OFF\",\n \"-DBUILD_TESTING:BOOL=OFF\",\n \"-DBUILD_SHARED_LIBS=OFF\",\n \"-DCMAKE_POLICY_DEFAULT_CMP0063=NEW\",\n \"-DSPM_ENABLE_SHARED=OFF\",\n ]\n build_args = [\"--target\", \"install\"]\n\n # Default to Ninja\n if \"CMAKE_GENERATOR\" not in os.environ or platform.system() == \"Windows\":\n cmake_args += [\"-GNinja\"]\n if platform.system() == \"Windows\":\n import sys\n\n python_version = sys.version_info\n cmake_args += [\n \"-DCMAKE_C_COMPILER=cl\",\n \"-DCMAKE_CXX_COMPILER=cl\",\n f\"-DPYTHON_VERSION={python_version.major}.{python_version.minor}\",\n ]\n\n # Set CMAKE_BUILD_PARALLEL_LEVEL to control the parallel build level\n # across all generators.\n if \"CMAKE_BUILD_PARALLEL_LEVEL\" not in os.environ:\n # self.parallel is a Python 3 only way to set parallel jobs by hand\n # using -j in the build_ext call, not supported by pip or PyPA-build.\n if hasattr(self, \"parallel\") and self.parallel:\n # CMake 3.12+ only.\n build_args += [\"-j{}\".format(self.parallel)]\n\n if not os.path.exists(self.build_temp):\n os.makedirs(self.build_temp)\n\n subprocess.check_call([\"cmake\", str(_ROOT_DIR)] + cmake_args, cwd=self.build_temp)\n subprocess.check_call([\"cmake\", \"--build\", \".\"] + build_args, cwd=self.build_temp)\n\n def get_ext_filename(self, fullname):\n ext_filename = super().get_ext_filename(fullname)\n ext_filename_parts = ext_filename.split(\".\")\n without_abi = ext_filename_parts[:-2] + ext_filename_parts[-1:]\n ext_filename = \".\".join(without_abi)\n return ext_filename\n", "path": "tools/setup_helpers/extension.py"}]}
| 2,234 | 209 |
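A standalone sketch of the helper the torchtext patch above introduces: it derives the C++11 ABI setting from the installed torch build (`torch.compiled_with_cxx11_abi()` is a real PyTorch API and is the call used in the patch) and forwards it to CMake so the extension is compiled with the same ABI it links against. The surrounding argument list is abbreviated here.

```python
import torch


def cxx11_abi_define() -> str:
    # PyTorch reports whether it was built with the new libstdc++ ABI; the
    # C++ extension must be compiled with the same setting to link correctly.
    return "-D_GLIBCXX_USE_CXX11_ABI=" + str(int(torch.compiled_with_cxx11_abi()))


cmake_args = [
    "-DCMAKE_BUILD_TYPE=Release",
    f"-DTORCH_COMPILED_WITH_CXX_ABI={cxx11_abi_define()}",
]
print(cmake_args)
```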
gh_patches_debug_9928
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-flash-377
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError: __init__() got an unexpected keyword argument 'global_pool'
## 🐛 Bug
Flash's backbones module hardcodes the `global_pool` argument, but this isn't relevant for Vision Transformer backbones and causes them to crash:
`TypeError: __init__() got an unexpected keyword argument 'global_pool'`
### To Reproduce
`model = ImageClassifier(backbone="vit_small_patch16_224", num_classes=10, serializer=Labels())`
### Expected behavior
`global_pool` should only be passed if the backbone expects it; the model backbone should then initialize instead of crashing.
### Environment
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
</issue>
<code>
[start of flash/image/backbones.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import functools
15 import os
16 import urllib.error
17 import warnings
18 from functools import partial
19 from typing import Tuple
20
21 import torch
22 from pytorch_lightning import LightningModule
23 from pytorch_lightning.utilities import _BOLTS_AVAILABLE, rank_zero_warn
24 from torch import nn as nn
25
26 from flash.core.registry import FlashRegistry
27 from flash.core.utilities.imports import _IMAGE_AVAILABLE, _TIMM_AVAILABLE, _TORCHVISION_AVAILABLE
28
29 if _TIMM_AVAILABLE:
30 import timm
31
32 if _TORCHVISION_AVAILABLE:
33 import torchvision
34 from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
35
36 if _BOLTS_AVAILABLE:
37 if os.getenv("WARN_MISSING_PACKAGE") == "0":
38 with warnings.catch_warnings(record=True) as w:
39 from pl_bolts.models.self_supervised import SimCLR, SwAV
40 else:
41 from pl_bolts.models.self_supervised import SimCLR, SwAV
42
43 ROOT_S3_BUCKET = "https://pl-bolts-weights.s3.us-east-2.amazonaws.com"
44
45 MOBILENET_MODELS = ["mobilenet_v2"]
46 VGG_MODELS = ["vgg11", "vgg13", "vgg16", "vgg19"]
47 RESNET_MODELS = ["resnet18", "resnet34", "resnet50", "resnet101", "resnet152", "resnext50_32x4d", "resnext101_32x8d"]
48 DENSENET_MODELS = ["densenet121", "densenet169", "densenet161"]
49 TORCHVISION_MODELS = MOBILENET_MODELS + VGG_MODELS + RESNET_MODELS + DENSENET_MODELS
50 BOLTS_MODELS = ["simclr-imagenet", "swav-imagenet"]
51
52 IMAGE_CLASSIFIER_BACKBONES = FlashRegistry("backbones")
53 OBJ_DETECTION_BACKBONES = FlashRegistry("backbones")
54
55
56 def catch_url_error(fn):
57
58 @functools.wraps(fn)
59 def wrapper(*args, pretrained=False, **kwargs):
60 try:
61 return fn(*args, pretrained=pretrained, **kwargs)
62 except urllib.error.URLError:
63 result = fn(*args, pretrained=False, **kwargs)
64 rank_zero_warn(
65 "Failed to download pretrained weights for the selected backbone. The backbone has been created with"
66 " `pretrained=False` instead. If you are loading from a local checkpoint, this warning can be safely"
67 " ignored.", UserWarning
68 )
69 return result
70
71 return wrapper
72
73
74 @IMAGE_CLASSIFIER_BACKBONES(name="simclr-imagenet", namespace="vision", package="bolts")
75 def load_simclr_imagenet(path_or_url: str = f"{ROOT_S3_BUCKET}/simclr/bolts_simclr_imagenet/simclr_imagenet.ckpt", **_):
76 simclr: LightningModule = SimCLR.load_from_checkpoint(path_or_url, strict=False)
77 # remove the last two layers & turn it into a Sequential model
78 backbone = nn.Sequential(*list(simclr.encoder.children())[:-2])
79 return backbone, 2048
80
81
82 @IMAGE_CLASSIFIER_BACKBONES(name="swav-imagenet", namespace="vision", package="bolts")
83 def load_swav_imagenet(
84 path_or_url: str = f"{ROOT_S3_BUCKET}/swav/swav_imagenet/swav_imagenet.pth.tar",
85 **_,
86 ) -> Tuple[nn.Module, int]:
87 swav: LightningModule = SwAV.load_from_checkpoint(path_or_url, strict=True)
88 # remove the last two layers & turn it into a Sequential model
89 backbone = nn.Sequential(*list(swav.model.children())[:-2])
90 return backbone, 2048
91
92
93 if _TORCHVISION_AVAILABLE:
94
95 def _fn_mobilenet_vgg(model_name: str, pretrained: bool = True) -> Tuple[nn.Module, int]:
96 model: nn.Module = getattr(torchvision.models, model_name, None)(pretrained)
97 backbone = model.features
98 num_features = 512 if model_name in VGG_MODELS else model.classifier[-1].in_features
99 return backbone, num_features
100
101 for model_name in MOBILENET_MODELS + VGG_MODELS:
102 _type = "mobilenet" if model_name in MOBILENET_MODELS else "vgg"
103
104 IMAGE_CLASSIFIER_BACKBONES(
105 fn=catch_url_error(partial(_fn_mobilenet_vgg, model_name)),
106 name=model_name,
107 namespace="vision",
108 package="torchvision",
109 type=_type
110 )
111
112 def _fn_resnet(model_name: str, pretrained: bool = True) -> Tuple[nn.Module, int]:
113 model: nn.Module = getattr(torchvision.models, model_name, None)(pretrained)
114 backbone = nn.Sequential(*list(model.children())[:-2])
115 num_features = model.fc.in_features
116 return backbone, num_features
117
118 def _fn_resnet_fpn(
119 model_name: str,
120 pretrained: bool = True,
121 trainable_layers: bool = True,
122 **kwargs,
123 ) -> Tuple[nn.Module, int]:
124 backbone = resnet_fpn_backbone(model_name, pretrained=pretrained, trainable_layers=trainable_layers, **kwargs)
125 return backbone, 256
126
127 for model_name in RESNET_MODELS:
128 IMAGE_CLASSIFIER_BACKBONES(
129 fn=catch_url_error(partial(_fn_resnet, model_name)),
130 name=model_name,
131 namespace="vision",
132 package="torchvision",
133 type="resnet"
134 )
135
136 OBJ_DETECTION_BACKBONES(
137 fn=catch_url_error(partial(_fn_resnet_fpn, model_name)),
138 name=model_name,
139 package="torchvision",
140 type="resnet-fpn"
141 )
142
143 def _fn_densenet(model_name: str, pretrained: bool = True) -> Tuple[nn.Module, int]:
144 model: nn.Module = getattr(torchvision.models, model_name, None)(pretrained)
145 backbone = nn.Sequential(*model.features, nn.ReLU(inplace=True))
146 num_features = model.classifier.in_features
147 return backbone, num_features
148
149 for model_name in DENSENET_MODELS:
150 IMAGE_CLASSIFIER_BACKBONES(
151 fn=catch_url_error(partial(_fn_densenet, model_name)),
152 name=model_name,
153 namespace="vision",
154 package="torchvision",
155 type="densenet"
156 )
157
158 if _TIMM_AVAILABLE:
159
160 def _fn_timm(
161 model_name: str,
162 pretrained: bool = True,
163 num_classes: int = 0,
164 global_pool: str = '',
165 ) -> Tuple[nn.Module, int]:
166 backbone = timm.create_model(
167 model_name, pretrained=pretrained, num_classes=num_classes, global_pool=global_pool
168 )
169 num_features = backbone.num_features
170 return backbone, num_features
171
172 for model_name in timm.list_models():
173
174 if model_name in TORCHVISION_MODELS:
175 continue
176
177 IMAGE_CLASSIFIER_BACKBONES(
178 fn=catch_url_error(partial(_fn_timm, model_name)), name=model_name, namespace="vision", package="timm"
179 )
180
181
182 # Paper: Emerging Properties in Self-Supervised Vision Transformers
183 # https://arxiv.org/abs/2104.14294 from Mathilde Caron and al. (29 Apr 2021)
184 # weights from https://github.com/facebookresearch/dino
185 def dino_deits16(*_, **__):
186 backbone = torch.hub.load('facebookresearch/dino:main', 'dino_deits16')
187 return backbone, 384
188
189
190 def dino_deits8(*_, **__):
191 backbone = torch.hub.load('facebookresearch/dino:main', 'dino_deits8')
192 return backbone, 384
193
194
195 def dino_vitb16(*_, **__):
196 backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vitb16')
197 return backbone, 768
198
199
200 def dino_vitb8(*_, **__):
201 backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vitb8')
202 return backbone, 768
203
204
205 IMAGE_CLASSIFIER_BACKBONES(dino_deits16)
206 IMAGE_CLASSIFIER_BACKBONES(dino_deits8)
207 IMAGE_CLASSIFIER_BACKBONES(dino_vitb16)
208 IMAGE_CLASSIFIER_BACKBONES(dino_vitb8)
209
[end of flash/image/backbones.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flash/image/backbones.py b/flash/image/backbones.py
--- a/flash/image/backbones.py
+++ b/flash/image/backbones.py
@@ -161,11 +161,9 @@
model_name: str,
pretrained: bool = True,
num_classes: int = 0,
- global_pool: str = '',
+ **kwargs,
) -> Tuple[nn.Module, int]:
- backbone = timm.create_model(
- model_name, pretrained=pretrained, num_classes=num_classes, global_pool=global_pool
- )
+ backbone = timm.create_model(model_name, pretrained=pretrained, num_classes=num_classes, **kwargs)
num_features = backbone.num_features
return backbone, num_features
|
{"golden_diff": "diff --git a/flash/image/backbones.py b/flash/image/backbones.py\n--- a/flash/image/backbones.py\n+++ b/flash/image/backbones.py\n@@ -161,11 +161,9 @@\n model_name: str,\n pretrained: bool = True,\n num_classes: int = 0,\n- global_pool: str = '',\n+ **kwargs,\n ) -> Tuple[nn.Module, int]:\n- backbone = timm.create_model(\n- model_name, pretrained=pretrained, num_classes=num_classes, global_pool=global_pool\n- )\n+ backbone = timm.create_model(model_name, pretrained=pretrained, num_classes=num_classes, **kwargs)\n num_features = backbone.num_features\n return backbone, num_features\n", "issue": "TypeError: __init__() got an unexpected keyword argument 'global_pool'\n## \ud83d\udc1b Bug\r\n\r\nFlash backbones is hardcoding the global_pool argument but this isn't relevant for Visual Transformer backbones and causes them to crash \r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'global_pool'`\r\n\r\n### To Reproduce\r\n\r\n`model = ImageClassifier(backbone=\"vit_small_patch16_224\", num_classes=10, serializer=Labels())`\r\n\r\n### Expected behavior\r\n\r\nglobal_pool should only be passed if expected model backbone should initialize and not crash\r\n\r\n### Environment\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport functools\nimport os\nimport urllib.error\nimport warnings\nfrom functools import partial\nfrom typing import Tuple\n\nimport torch\nfrom pytorch_lightning import LightningModule\nfrom pytorch_lightning.utilities import _BOLTS_AVAILABLE, rank_zero_warn\nfrom torch import nn as nn\n\nfrom flash.core.registry import FlashRegistry\nfrom flash.core.utilities.imports import _IMAGE_AVAILABLE, _TIMM_AVAILABLE, _TORCHVISION_AVAILABLE\n\nif _TIMM_AVAILABLE:\n import timm\n\nif _TORCHVISION_AVAILABLE:\n import torchvision\n from torchvision.models.detection.backbone_utils import resnet_fpn_backbone\n\nif _BOLTS_AVAILABLE:\n if os.getenv(\"WARN_MISSING_PACKAGE\") == \"0\":\n with warnings.catch_warnings(record=True) as w:\n from pl_bolts.models.self_supervised import SimCLR, SwAV\n else:\n from pl_bolts.models.self_supervised import SimCLR, SwAV\n\nROOT_S3_BUCKET = \"https://pl-bolts-weights.s3.us-east-2.amazonaws.com\"\n\nMOBILENET_MODELS = [\"mobilenet_v2\"]\nVGG_MODELS = [\"vgg11\", \"vgg13\", \"vgg16\", \"vgg19\"]\nRESNET_MODELS = [\"resnet18\", \"resnet34\", \"resnet50\", \"resnet101\", \"resnet152\", \"resnext50_32x4d\", \"resnext101_32x8d\"]\nDENSENET_MODELS = [\"densenet121\", \"densenet169\", \"densenet161\"]\nTORCHVISION_MODELS = MOBILENET_MODELS + VGG_MODELS + RESNET_MODELS + 
DENSENET_MODELS\nBOLTS_MODELS = [\"simclr-imagenet\", \"swav-imagenet\"]\n\nIMAGE_CLASSIFIER_BACKBONES = FlashRegistry(\"backbones\")\nOBJ_DETECTION_BACKBONES = FlashRegistry(\"backbones\")\n\n\ndef catch_url_error(fn):\n\n @functools.wraps(fn)\n def wrapper(*args, pretrained=False, **kwargs):\n try:\n return fn(*args, pretrained=pretrained, **kwargs)\n except urllib.error.URLError:\n result = fn(*args, pretrained=False, **kwargs)\n rank_zero_warn(\n \"Failed to download pretrained weights for the selected backbone. The backbone has been created with\"\n \" `pretrained=False` instead. If you are loading from a local checkpoint, this warning can be safely\"\n \" ignored.\", UserWarning\n )\n return result\n\n return wrapper\n\n\n@IMAGE_CLASSIFIER_BACKBONES(name=\"simclr-imagenet\", namespace=\"vision\", package=\"bolts\")\ndef load_simclr_imagenet(path_or_url: str = f\"{ROOT_S3_BUCKET}/simclr/bolts_simclr_imagenet/simclr_imagenet.ckpt\", **_):\n simclr: LightningModule = SimCLR.load_from_checkpoint(path_or_url, strict=False)\n # remove the last two layers & turn it into a Sequential model\n backbone = nn.Sequential(*list(simclr.encoder.children())[:-2])\n return backbone, 2048\n\n\n@IMAGE_CLASSIFIER_BACKBONES(name=\"swav-imagenet\", namespace=\"vision\", package=\"bolts\")\ndef load_swav_imagenet(\n path_or_url: str = f\"{ROOT_S3_BUCKET}/swav/swav_imagenet/swav_imagenet.pth.tar\",\n **_,\n) -> Tuple[nn.Module, int]:\n swav: LightningModule = SwAV.load_from_checkpoint(path_or_url, strict=True)\n # remove the last two layers & turn it into a Sequential model\n backbone = nn.Sequential(*list(swav.model.children())[:-2])\n return backbone, 2048\n\n\nif _TORCHVISION_AVAILABLE:\n\n def _fn_mobilenet_vgg(model_name: str, pretrained: bool = True) -> Tuple[nn.Module, int]:\n model: nn.Module = getattr(torchvision.models, model_name, None)(pretrained)\n backbone = model.features\n num_features = 512 if model_name in VGG_MODELS else model.classifier[-1].in_features\n return backbone, num_features\n\n for model_name in MOBILENET_MODELS + VGG_MODELS:\n _type = \"mobilenet\" if model_name in MOBILENET_MODELS else \"vgg\"\n\n IMAGE_CLASSIFIER_BACKBONES(\n fn=catch_url_error(partial(_fn_mobilenet_vgg, model_name)),\n name=model_name,\n namespace=\"vision\",\n package=\"torchvision\",\n type=_type\n )\n\n def _fn_resnet(model_name: str, pretrained: bool = True) -> Tuple[nn.Module, int]:\n model: nn.Module = getattr(torchvision.models, model_name, None)(pretrained)\n backbone = nn.Sequential(*list(model.children())[:-2])\n num_features = model.fc.in_features\n return backbone, num_features\n\n def _fn_resnet_fpn(\n model_name: str,\n pretrained: bool = True,\n trainable_layers: bool = True,\n **kwargs,\n ) -> Tuple[nn.Module, int]:\n backbone = resnet_fpn_backbone(model_name, pretrained=pretrained, trainable_layers=trainable_layers, **kwargs)\n return backbone, 256\n\n for model_name in RESNET_MODELS:\n IMAGE_CLASSIFIER_BACKBONES(\n fn=catch_url_error(partial(_fn_resnet, model_name)),\n name=model_name,\n namespace=\"vision\",\n package=\"torchvision\",\n type=\"resnet\"\n )\n\n OBJ_DETECTION_BACKBONES(\n fn=catch_url_error(partial(_fn_resnet_fpn, model_name)),\n name=model_name,\n package=\"torchvision\",\n type=\"resnet-fpn\"\n )\n\n def _fn_densenet(model_name: str, pretrained: bool = True) -> Tuple[nn.Module, int]:\n model: nn.Module = getattr(torchvision.models, model_name, None)(pretrained)\n backbone = nn.Sequential(*model.features, nn.ReLU(inplace=True))\n num_features = 
model.classifier.in_features\n return backbone, num_features\n\n for model_name in DENSENET_MODELS:\n IMAGE_CLASSIFIER_BACKBONES(\n fn=catch_url_error(partial(_fn_densenet, model_name)),\n name=model_name,\n namespace=\"vision\",\n package=\"torchvision\",\n type=\"densenet\"\n )\n\nif _TIMM_AVAILABLE:\n\n def _fn_timm(\n model_name: str,\n pretrained: bool = True,\n num_classes: int = 0,\n global_pool: str = '',\n ) -> Tuple[nn.Module, int]:\n backbone = timm.create_model(\n model_name, pretrained=pretrained, num_classes=num_classes, global_pool=global_pool\n )\n num_features = backbone.num_features\n return backbone, num_features\n\n for model_name in timm.list_models():\n\n if model_name in TORCHVISION_MODELS:\n continue\n\n IMAGE_CLASSIFIER_BACKBONES(\n fn=catch_url_error(partial(_fn_timm, model_name)), name=model_name, namespace=\"vision\", package=\"timm\"\n )\n\n\n# Paper: Emerging Properties in Self-Supervised Vision Transformers\n# https://arxiv.org/abs/2104.14294 from Mathilde Caron and al. (29 Apr 2021)\n# weights from https://github.com/facebookresearch/dino\ndef dino_deits16(*_, **__):\n backbone = torch.hub.load('facebookresearch/dino:main', 'dino_deits16')\n return backbone, 384\n\n\ndef dino_deits8(*_, **__):\n backbone = torch.hub.load('facebookresearch/dino:main', 'dino_deits8')\n return backbone, 384\n\n\ndef dino_vitb16(*_, **__):\n backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vitb16')\n return backbone, 768\n\n\ndef dino_vitb8(*_, **__):\n backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vitb8')\n return backbone, 768\n\n\nIMAGE_CLASSIFIER_BACKBONES(dino_deits16)\nIMAGE_CLASSIFIER_BACKBONES(dino_deits8)\nIMAGE_CLASSIFIER_BACKBONES(dino_vitb16)\nIMAGE_CLASSIFIER_BACKBONES(dino_vitb8)\n", "path": "flash/image/backbones.py"}]}
| 3,275 | 168 |
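The fix in the row above works because `timm.create_model` only receives keyword arguments the caller actually passes. The sketch below is an editorial illustration of that forwarding pattern rather than part of the dataset row; the model names are examples only, and whether a given architecture rejects `global_pool` depends on the installed timm version.

```python
# Sketch of the kwargs-forwarding pattern from the patch above; assumes timm is installed.
import timm


def create_backbone(model_name: str, pretrained: bool = False, num_classes: int = 0, **kwargs):
    # Only forward what the caller supplied, so architectures that do not
    # accept `global_pool` (e.g. some ViT variants) are never handed it.
    backbone = timm.create_model(model_name, pretrained=pretrained, num_classes=num_classes, **kwargs)
    return backbone, backbone.num_features


# CNN-style models can still opt in to an un-pooled feature map:
resnet, num_features = create_backbone("resnet18", global_pool="")
# ViT-style models simply omit the argument instead of risking a TypeError:
vit, num_features = create_backbone("vit_small_patch16_224")
```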
gh_patches_debug_18981 | rasdani/github-patches | git_diff | librosa__librosa-1602 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Running changelog, updated contributor guidelines
**Is your feature request related to a problem? Please describe.**
Historically, I've always managed release notes / changelog updates as one of the last steps in a release. This was basically fine when the documentation site was also only updated on release. However, now that we have automatic doc builds for every merge to main, I think we should try to keep this up to date at all times.
**Describe the solution you'd like**
1. We should revise the contributing guidelines to include instructions for updating the release notes.
2. We should fill in the changelog to date for the 0.10 cycle.
Retire the repet-sim example?
Our doc site has a [reimplementation](https://librosa.org/doc/latest/auto_examples/plot_vocal_separation.html#sphx-glr-auto-examples-plot-vocal-separation-py) of the repet-sim method for foreground/background separation, which gives a nice demo of a few features:
- audio source separation and resynthesis
- non-local means
- soft masking
Repet-sim is a pretty old method at this point, but its foreground (vocals) estimates are quite glitchy in general and very far from what's possible via more modern methods (open-umx, spleeter, demucs, etc). So while it's fine for demonstrative purposes, I worry that it misleads novice users about how to do vocal separation generally. As a result, I think we should retire this example from the gallery in the next major release (0.10).
If we do this, should we consider replacing it with something else to demonstrate some kind of source separation (eg nmf)? The docstring examples go pretty far with this, but I think there's added value to having a self-contained notebook with playable audio.
</issue>
<code>
[start of docs/examples/plot_vocal_separation.py]
1 # -*- coding: utf-8 -*-
2 """
3 ================
4 Vocal separation
5 ================
6
7 This notebook demonstrates a simple technique for separating vocals (and
8 other sporadic foreground signals) from accompanying instrumentation.
9
10 This is based on the "REPET-SIM" method of `Rafii and Pardo, 2012
11 <http://www.cs.northwestern.edu/~zra446/doc/Rafii-Pardo%20-%20Music-Voice%20Separation%20using%20the%20Similarity%20Matrix%20-%20ISMIR%202012.pdf>`_, but includes a couple of modifications and extensions:
12
13 - FFT windows overlap by 1/4, instead of 1/2
14 - Non-local filtering is converted into a soft mask by Wiener filtering.
15 This is similar in spirit to the soft-masking method used by `Fitzgerald, 2012
16 <http://arrow.dit.ie/cgi/viewcontent.cgi?article=1086&context=argcon>`_,
17 but is a bit more numerically stable in practice.
18 """
19
20 # Code source: Brian McFee
21 # License: ISC
22
23 ##################
24 # Standard imports
25 import numpy as np
26 import matplotlib.pyplot as plt
27 from IPython.display import Audio
28
29 import librosa
30
31 import librosa.display
32
33 #############################################
34 # Load an example with vocals.
35 y, sr = librosa.load(librosa.ex('fishin'), duration=120)
36
37
38 # And compute the spectrogram magnitude and phase
39 S_full, phase = librosa.magphase(librosa.stft(y))
40
41 # Play back a 5-second excerpt with vocals
42 Audio(data=y[10*sr:15*sr], rate=sr)
43
44 #######################################
45 # Plot a 5-second slice of the spectrum
46 idx = slice(*librosa.time_to_frames([10, 15], sr=sr))
47 fig, ax = plt.subplots()
48 img = librosa.display.specshow(librosa.amplitude_to_db(S_full[:, idx], ref=np.max),
49 y_axis='log', x_axis='time', sr=sr, ax=ax)
50 fig.colorbar(img, ax=ax)
51
52 ###########################################################
53 # The wiggly lines above are due to the vocal component.
54 # Our goal is to separate them from the accompanying
55 # instrumentation.
56 #
57
58 # We'll compare frames using cosine similarity, and aggregate similar frames
59 # by taking their (per-frequency) median value.
60 #
61 # To avoid being biased by local continuity, we constrain similar frames to be
62 # separated by at least 2 seconds.
63 #
64 # This suppresses sparse/non-repetetitive deviations from the average spectrum,
65 # and works well to discard vocal elements.
66
67 S_filter = librosa.decompose.nn_filter(S_full,
68 aggregate=np.median,
69 metric='cosine',
70 width=int(librosa.time_to_frames(2, sr=sr)))
71
72 # The output of the filter shouldn't be greater than the input
73 # if we assume signals are additive. Taking the pointwise minimum
74 # with the input spectrum forces this.
75 S_filter = np.minimum(S_full, S_filter)
76
77
78 ##############################################
79 # The raw filter output can be used as a mask,
80 # but it sounds better if we use soft-masking.
81
82 # We can also use a margin to reduce bleed between the vocals and instrumentation masks.
83 # Note: the margins need not be equal for foreground and background separation
84 margin_i, margin_v = 2, 10
85 power = 2
86
87 mask_i = librosa.util.softmask(S_filter,
88 margin_i * (S_full - S_filter),
89 power=power)
90
91 mask_v = librosa.util.softmask(S_full - S_filter,
92 margin_v * S_filter,
93 power=power)
94
95 # Once we have the masks, simply multiply them with the input spectrum
96 # to separate the components
97
98 S_foreground = mask_v * S_full
99 S_background = mask_i * S_full
100
101
102 ##########################################
103 # Plot the same slice, but separated into its foreground and background
104
105 # sphinx_gallery_thumbnail_number = 2
106
107 fig, ax = plt.subplots(nrows=3, sharex=True, sharey=True)
108 img = librosa.display.specshow(librosa.amplitude_to_db(S_full[:, idx], ref=np.max),
109 y_axis='log', x_axis='time', sr=sr, ax=ax[0])
110 ax[0].set(title='Full spectrum')
111 ax[0].label_outer()
112
113 librosa.display.specshow(librosa.amplitude_to_db(S_background[:, idx], ref=np.max),
114 y_axis='log', x_axis='time', sr=sr, ax=ax[1])
115 ax[1].set(title='Background')
116 ax[1].label_outer()
117
118 librosa.display.specshow(librosa.amplitude_to_db(S_foreground[:, idx], ref=np.max),
119 y_axis='log', x_axis='time', sr=sr, ax=ax[2])
120 ax[2].set(title='Foreground')
121 fig.colorbar(img, ax=ax)
122
123
124 ###########################################
125 # Recover the foreground audio from the masked spectrogram.
126 # To do this, we'll need to re-introduce the phase information
127 # that we had previously set aside.
128
129 y_foreground = librosa.istft(S_foreground * phase)
130 # Play back a 5-second excerpt with vocals
131 Audio(data=y_foreground[10*sr:15*sr], rate=sr)
132
[end of docs/examples/plot_vocal_separation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/examples/plot_vocal_separation.py b/docs/examples/plot_vocal_separation.py
--- a/docs/examples/plot_vocal_separation.py
+++ b/docs/examples/plot_vocal_separation.py
@@ -7,6 +7,13 @@
This notebook demonstrates a simple technique for separating vocals (and
other sporadic foreground signals) from accompanying instrumentation.
+.. warning::
+ This example is primarily of historical interest, and we do not recommend
+ this as a competitive method for vocal source separation.
+ For a more recent treatment of vocal and music source separation, please
+ refer to `Open Source Tools & Data for Music Source Separation <https://source-separation.github.io/tutorial/landing.html>`_
+ (Manilow, Seetharaman, and Salamon 2020).
+
This is based on the "REPET-SIM" method of `Rafii and Pardo, 2012
<http://www.cs.northwestern.edu/~zra446/doc/Rafii-Pardo%20-%20Music-Voice%20Separation%20using%20the%20Similarity%20Matrix%20-%20ISMIR%202012.pdf>`_, but includes a couple of modifications and extensions:
|
{"golden_diff": "diff --git a/docs/examples/plot_vocal_separation.py b/docs/examples/plot_vocal_separation.py\n--- a/docs/examples/plot_vocal_separation.py\n+++ b/docs/examples/plot_vocal_separation.py\n@@ -7,6 +7,13 @@\n This notebook demonstrates a simple technique for separating vocals (and\n other sporadic foreground signals) from accompanying instrumentation.\n \n+.. warning::\n+ This example is primarily of historical interest, and we do not recommend\n+ this as a competitive method for vocal source separation.\n+ For a more recent treatment of vocal and music source separation, please\n+ refer to `Open Source Tools & Data for Music Source Separation <https://source-separation.github.io/tutorial/landing.html>`_\n+ (Manilow, Seetharaman, and Salamon 2020).\n+\n This is based on the \"REPET-SIM\" method of `Rafii and Pardo, 2012\n <http://www.cs.northwestern.edu/~zra446/doc/Rafii-Pardo%20-%20Music-Voice%20Separation%20using%20the%20Similarity%20Matrix%20-%20ISMIR%202012.pdf>`_, but includes a couple of modifications and extensions:\n", "issue": "Running changelog, updated contributor guidelines\n**Is your feature request related to a problem? Please describe.**\r\n\r\nHistorically, I've always managed release notes / changelog updates as one of the last steps in a release. This was basically fine when the documentation site was also only updated on release. However, now that we have automatic doc builds for every merge to main, I think we should try to keep this up to date at all times.\r\n\r\n**Describe the solution you'd like**\r\n\r\n1. We should revise the contributing guidelines to include instructions for updating the release notes.\r\n2. We should fill in the changelog to date for the 0.10 cycle.\r\n\nRetire the repet-sim example?\nOur doc site has a [reimplementation](https://librosa.org/doc/latest/auto_examples/plot_vocal_separation.html#sphx-glr-auto-examples-plot-vocal-separation-py) of the repet-sim method for foreground/background separation, which gives a nice demo of a few features:\r\n\r\n- audio source separation and resynthesis\r\n- non-local means\r\n- soft masking\r\n\r\nRepet-sim is a pretty old method at this point, but its foreground (vocals) estimates are quite glitchy in general and very far from what's possible via more modern methods (open-umx, spleeter, demucs, etc). So while it's fine for demonstrative purposes, I worry that it misleads novice users about how to do vocal separation generally. As a result, I think we should retire this example from the gallery in the next major release (0.10).\r\n\r\nIf we do this, should we consider replacing it with something else to demonstrate some kind of source separation (eg nmf)? 
The docstring examples go pretty far with this, but I think there's added value to having a self-contained notebook with playable audio.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n================\nVocal separation\n================\n\nThis notebook demonstrates a simple technique for separating vocals (and\nother sporadic foreground signals) from accompanying instrumentation.\n\nThis is based on the \"REPET-SIM\" method of `Rafii and Pardo, 2012\n<http://www.cs.northwestern.edu/~zra446/doc/Rafii-Pardo%20-%20Music-Voice%20Separation%20using%20the%20Similarity%20Matrix%20-%20ISMIR%202012.pdf>`_, but includes a couple of modifications and extensions:\n\n - FFT windows overlap by 1/4, instead of 1/2\n - Non-local filtering is converted into a soft mask by Wiener filtering.\n This is similar in spirit to the soft-masking method used by `Fitzgerald, 2012\n <http://arrow.dit.ie/cgi/viewcontent.cgi?article=1086&context=argcon>`_,\n but is a bit more numerically stable in practice.\n\"\"\"\n\n# Code source: Brian McFee\n# License: ISC\n\n##################\n# Standard imports\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import Audio\n\nimport librosa\n\nimport librosa.display\n\n#############################################\n# Load an example with vocals.\ny, sr = librosa.load(librosa.ex('fishin'), duration=120)\n\n\n# And compute the spectrogram magnitude and phase\nS_full, phase = librosa.magphase(librosa.stft(y))\n\n# Play back a 5-second excerpt with vocals\nAudio(data=y[10*sr:15*sr], rate=sr)\n\n#######################################\n# Plot a 5-second slice of the spectrum\nidx = slice(*librosa.time_to_frames([10, 15], sr=sr))\nfig, ax = plt.subplots()\nimg = librosa.display.specshow(librosa.amplitude_to_db(S_full[:, idx], ref=np.max),\n y_axis='log', x_axis='time', sr=sr, ax=ax)\nfig.colorbar(img, ax=ax)\n\n###########################################################\n# The wiggly lines above are due to the vocal component.\n# Our goal is to separate them from the accompanying\n# instrumentation.\n#\n\n# We'll compare frames using cosine similarity, and aggregate similar frames\n# by taking their (per-frequency) median value.\n#\n# To avoid being biased by local continuity, we constrain similar frames to be\n# separated by at least 2 seconds.\n#\n# This suppresses sparse/non-repetetitive deviations from the average spectrum,\n# and works well to discard vocal elements.\n\nS_filter = librosa.decompose.nn_filter(S_full,\n aggregate=np.median,\n metric='cosine',\n width=int(librosa.time_to_frames(2, sr=sr)))\n\n# The output of the filter shouldn't be greater than the input\n# if we assume signals are additive. 
Taking the pointwise minimum\n# with the input spectrum forces this.\nS_filter = np.minimum(S_full, S_filter)\n\n\n##############################################\n# The raw filter output can be used as a mask,\n# but it sounds better if we use soft-masking.\n\n# We can also use a margin to reduce bleed between the vocals and instrumentation masks.\n# Note: the margins need not be equal for foreground and background separation\nmargin_i, margin_v = 2, 10\npower = 2\n\nmask_i = librosa.util.softmask(S_filter,\n margin_i * (S_full - S_filter),\n power=power)\n\nmask_v = librosa.util.softmask(S_full - S_filter,\n margin_v * S_filter,\n power=power)\n\n# Once we have the masks, simply multiply them with the input spectrum\n# to separate the components\n\nS_foreground = mask_v * S_full\nS_background = mask_i * S_full\n\n\n##########################################\n# Plot the same slice, but separated into its foreground and background\n\n# sphinx_gallery_thumbnail_number = 2\n\nfig, ax = plt.subplots(nrows=3, sharex=True, sharey=True)\nimg = librosa.display.specshow(librosa.amplitude_to_db(S_full[:, idx], ref=np.max),\n y_axis='log', x_axis='time', sr=sr, ax=ax[0])\nax[0].set(title='Full spectrum')\nax[0].label_outer()\n\nlibrosa.display.specshow(librosa.amplitude_to_db(S_background[:, idx], ref=np.max),\n y_axis='log', x_axis='time', sr=sr, ax=ax[1])\nax[1].set(title='Background')\nax[1].label_outer()\n\nlibrosa.display.specshow(librosa.amplitude_to_db(S_foreground[:, idx], ref=np.max),\n y_axis='log', x_axis='time', sr=sr, ax=ax[2])\nax[2].set(title='Foreground')\nfig.colorbar(img, ax=ax)\n\n\n###########################################\n# Recover the foreground audio from the masked spectrogram.\n# To do this, we'll need to re-introduce the phase information\n# that we had previously set aside.\n\ny_foreground = librosa.istft(S_foreground * phase)\n# Play back a 5-second excerpt with vocals\nAudio(data=y_foreground[10*sr:15*sr], rate=sr)\n", "path": "docs/examples/plot_vocal_separation.py"}]}
| 2,388 | 287 |
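For the replacement idea floated in the issue above (an NMF-based gallery example instead of REPET-SIM), a rough sketch built on `librosa.decompose.decompose` might look like the following. This is not part of the dataset row; the component count and reconstruction split are illustrative guesses, not a worked-out tutorial.

```python
# Illustrative NMF decomposition/resynthesis sketch; parameters are not tuned.
import librosa

y, sr = librosa.load(librosa.ex('fishin'), duration=30)
S, phase = librosa.magphase(librosa.stft(y))

# Factor the magnitude spectrogram into spectral templates and activations.
comps, acts = librosa.decompose.decompose(S, n_components=8, sort=True)

# Rebuild an approximation from a subset of components and resynthesize audio.
S_approx = comps[:, :4] @ acts[:4]
y_approx = librosa.istft(S_approx * phase)
```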
gh_patches_debug_38918 | rasdani/github-patches | git_diff | scrapy__scrapy-1933 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scrapy 1.1.0 RC3 - exception thrown with invalid ssl certificate
Hello,
I am crawling sometimes websites with an invalid ssl certificate. For example, Scrapy 1.1.0 RC3 fails to open when I do:
> scrapy shell https://www.directoriosanitario.com/directorio
> or
> scrapy shell https://saobinv.5go.cc/top/
and throws the following exception:
> twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure service_identity.exceptions.VerificationError: VerificationError(errors=[DNSMismatch(mismatched_id=DNS_ID(hostname=b'www.directoriosanitario.com'))])>]
I tried it with Scrapy 1.0.5 on python 2.7 and the spider opens but warns with:
> AttributeError: 'NoneType' object has no attribute 'failVerification'
Is there a way to force the spider to open with Scrapy 1.1.0 RC3?
</issue>
<code>
[start of scrapy/core/downloader/tls.py]
1 from OpenSSL import SSL
2
3
4 METHOD_SSLv3 = 'SSLv3'
5 METHOD_TLS = 'TLS'
6 METHOD_TLSv10 = 'TLSv1.0'
7 METHOD_TLSv11 = 'TLSv1.1'
8 METHOD_TLSv12 = 'TLSv1.2'
9
10 openssl_methods = {
11 METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)
12 METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)
13 METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only
14 METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only
15 METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only
16 }
17
[end of scrapy/core/downloader/tls.py]
[start of scrapy/core/downloader/contextfactory.py]
1 from OpenSSL import SSL
2 from twisted.internet.ssl import ClientContextFactory
3
4 try:
5
6 from zope.interface.declarations import implementer
7
8 # the following should be available from Twisted 14.0.0
9 from twisted.internet.ssl import optionsForClientTLS, CertificateOptions, platformTrust
10 from twisted.internet._sslverify import ClientTLSOptions
11 from twisted.web.client import BrowserLikePolicyForHTTPS
12 from twisted.web.iweb import IPolicyForHTTPS
13
14 @implementer(IPolicyForHTTPS)
15 class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):
16 """
17 Non-peer-certificate verifying HTTPS context factory
18
19 Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)
20 which allows TLS protocol negotiation
21
22 'A TLS/SSL connection established with [this method] may
23 understand the SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.'
24 """
25
26 def __init__(self, method=SSL.SSLv23_METHOD, *args, **kwargs):
27 super(ScrapyClientContextFactory, self).__init__(*args, **kwargs)
28 self._ssl_method = method
29
30 def getCertificateOptions(self):
31 # setting verify=True will require you to provide CAs
32 # to verify against; in other words: it's not that simple
33
34 # backward-compatible SSL/TLS method:
35 #
36 # * this will respect `method` attribute in often recommended
37 # `ScrapyClientContextFactory` subclass
38 # (https://github.com/scrapy/scrapy/issues/1429#issuecomment-131782133)
39 #
40 # * getattr() for `_ssl_method` attribute for context factories
41 # not calling super(..., self).__init__
42 return CertificateOptions(verify=False,
43 method=getattr(self, 'method',
44 getattr(self, '_ssl_method', None)))
45
46 # kept for old-style HTTP/1.0 downloader context twisted calls,
47 # e.g. connectSSL()
48 def getContext(self, hostname=None, port=None):
49 return self.getCertificateOptions().getContext()
50
51 def creatorForNetloc(self, hostname, port):
52 return ClientTLSOptions(hostname.decode("ascii"), self.getContext())
53
54
55 @implementer(IPolicyForHTTPS)
56 class BrowserLikeContextFactory(ScrapyClientContextFactory):
57 """
58 Twisted-recommended context factory for web clients.
59
60 Quoting http://twistedmatrix.com/documents/current/api/twisted.web.client.Agent.html:
61 "The default is to use a BrowserLikePolicyForHTTPS,
62 so unless you have special requirements you can leave this as-is."
63
64 creatorForNetloc() is the same as BrowserLikePolicyForHTTPS
65 except this context factory allows setting the TLS/SSL method to use.
66
67 Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)
68 which allows TLS protocol negotiation.
69 """
70 def creatorForNetloc(self, hostname, port):
71
72 # trustRoot set to platformTrust() will use the platform's root CAs.
73 #
74 # This means that a website like https://www.cacert.org will be rejected
75 # by default, since CAcert.org CA certificate is seldom shipped.
76 return optionsForClientTLS(hostname.decode("ascii"),
77 trustRoot=platformTrust(),
78 extraCertificateOptions={
79 'method': self._ssl_method,
80 })
81
82 except ImportError:
83
84 class ScrapyClientContextFactory(ClientContextFactory):
85 "A SSL context factory which is more permissive against SSL bugs."
86 # see https://github.com/scrapy/scrapy/issues/82
87 # and https://github.com/scrapy/scrapy/issues/26
88 # and https://github.com/scrapy/scrapy/issues/981
89
90 def __init__(self, method=SSL.SSLv23_METHOD):
91 self.method = method
92
93 def getContext(self, hostname=None, port=None):
94 ctx = ClientContextFactory.getContext(self)
95 # Enable all workarounds to SSL bugs as documented by
96 # http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html
97 ctx.set_options(SSL.OP_ALL)
98 return ctx
99
[end of scrapy/core/downloader/contextfactory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/core/downloader/contextfactory.py b/scrapy/core/downloader/contextfactory.py
--- a/scrapy/core/downloader/contextfactory.py
+++ b/scrapy/core/downloader/contextfactory.py
@@ -6,11 +6,16 @@
from zope.interface.declarations import implementer
# the following should be available from Twisted 14.0.0
- from twisted.internet.ssl import optionsForClientTLS, CertificateOptions, platformTrust
- from twisted.internet._sslverify import ClientTLSOptions
+ from twisted.internet.ssl import (optionsForClientTLS,
+ CertificateOptions,
+ platformTrust)
+
from twisted.web.client import BrowserLikePolicyForHTTPS
from twisted.web.iweb import IPolicyForHTTPS
+ from scrapy.core.downloader.tls import ScrapyClientTLSOptions
+
+
@implementer(IPolicyForHTTPS)
class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):
"""
@@ -49,7 +54,7 @@
return self.getCertificateOptions().getContext()
def creatorForNetloc(self, hostname, port):
- return ClientTLSOptions(hostname.decode("ascii"), self.getContext())
+ return ScrapyClientTLSOptions(hostname.decode("ascii"), self.getContext())
@implementer(IPolicyForHTTPS)
diff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py
--- a/scrapy/core/downloader/tls.py
+++ b/scrapy/core/downloader/tls.py
@@ -1,6 +1,9 @@
+import logging
from OpenSSL import SSL
+logger = logging.getLogger(__name__)
+
METHOD_SSLv3 = 'SSLv3'
METHOD_TLS = 'TLS'
METHOD_TLSv10 = 'TLSv1.0'
@@ -14,3 +17,36 @@
METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only
METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only
}
+
+# ClientTLSOptions requires a recent-enough version of Twisted
+try:
+
+ # taken from twisted/twisted/internet/_sslverify.py
+ try:
+ from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START
+ except ImportError:
+ SSL_CB_HANDSHAKE_START = 0x10
+ SSL_CB_HANDSHAKE_DONE = 0x20
+
+ from twisted.internet._sslverify import (ClientTLSOptions,
+ _maybeSetHostNameIndication,
+ verifyHostname,
+ VerificationError)
+
+ class ScrapyClientTLSOptions(ClientTLSOptions):
+ # same as Twisted's ClientTLSOptions,
+ # except that VerificationError is caught
+ # and doesn't close the connection
+ def _identityVerifyingInfoCallback(self, connection, where, ret):
+ if where & SSL_CB_HANDSHAKE_START:
+ _maybeSetHostNameIndication(connection, self._hostnameBytes)
+ elif where & SSL_CB_HANDSHAKE_DONE:
+ try:
+ verifyHostname(connection, self._hostnameASCII)
+ except VerificationError as e:
+ logger.warning(e)
+
+except ImportError:
+ # ImportError should not matter for older Twisted versions
+ # as the above is not used in the fallback ScrapyClientContextFactory
+ pass
|
{"golden_diff": "diff --git a/scrapy/core/downloader/contextfactory.py b/scrapy/core/downloader/contextfactory.py\n--- a/scrapy/core/downloader/contextfactory.py\n+++ b/scrapy/core/downloader/contextfactory.py\n@@ -6,11 +6,16 @@\n from zope.interface.declarations import implementer\n \n # the following should be available from Twisted 14.0.0\n- from twisted.internet.ssl import optionsForClientTLS, CertificateOptions, platformTrust\n- from twisted.internet._sslverify import ClientTLSOptions\n+ from twisted.internet.ssl import (optionsForClientTLS,\n+ CertificateOptions,\n+ platformTrust)\n+\n from twisted.web.client import BrowserLikePolicyForHTTPS\n from twisted.web.iweb import IPolicyForHTTPS\n \n+ from scrapy.core.downloader.tls import ScrapyClientTLSOptions\n+\n+\n @implementer(IPolicyForHTTPS)\n class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):\n \"\"\"\n@@ -49,7 +54,7 @@\n return self.getCertificateOptions().getContext()\n \n def creatorForNetloc(self, hostname, port):\n- return ClientTLSOptions(hostname.decode(\"ascii\"), self.getContext())\n+ return ScrapyClientTLSOptions(hostname.decode(\"ascii\"), self.getContext())\n \n \n @implementer(IPolicyForHTTPS)\ndiff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py\n--- a/scrapy/core/downloader/tls.py\n+++ b/scrapy/core/downloader/tls.py\n@@ -1,6 +1,9 @@\n+import logging\n from OpenSSL import SSL\n \n \n+logger = logging.getLogger(__name__)\n+\n METHOD_SSLv3 = 'SSLv3'\n METHOD_TLS = 'TLS'\n METHOD_TLSv10 = 'TLSv1.0'\n@@ -14,3 +17,36 @@\n METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only\n METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only\n }\n+\n+# ClientTLSOptions requires a recent-enough version of Twisted\n+try:\n+\n+ # taken from twisted/twisted/internet/_sslverify.py\n+ try:\n+ from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START\n+ except ImportError:\n+ SSL_CB_HANDSHAKE_START = 0x10\n+ SSL_CB_HANDSHAKE_DONE = 0x20\n+\n+ from twisted.internet._sslverify import (ClientTLSOptions,\n+ _maybeSetHostNameIndication,\n+ verifyHostname,\n+ VerificationError)\n+\n+ class ScrapyClientTLSOptions(ClientTLSOptions):\n+ # same as Twisted's ClientTLSOptions,\n+ # except that VerificationError is caught\n+ # and doesn't close the connection\n+ def _identityVerifyingInfoCallback(self, connection, where, ret):\n+ if where & SSL_CB_HANDSHAKE_START:\n+ _maybeSetHostNameIndication(connection, self._hostnameBytes)\n+ elif where & SSL_CB_HANDSHAKE_DONE:\n+ try:\n+ verifyHostname(connection, self._hostnameASCII)\n+ except VerificationError as e:\n+ logger.warning(e)\n+\n+except ImportError:\n+ # ImportError should not matter for older Twisted versions\n+ # as the above is not used in the fallback ScrapyClientContextFactory\n+ pass\n", "issue": "Scrapy 1.1.0 RC3 - exception thrown with invalid ssl certificate\nHello,\n\nI am crawling sometimes websites with an invalid ssl certificate. 
For example, Scrapy 1.1.0 RC3 fails to open when I do:\n\n> scrapy shell https://www.directoriosanitario.com/directorio\n> or\n> scrapy shell https://saobinv.5go.cc/top/\n\nand throws the following exception:\n\n> twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure service_identity.exceptions.VerificationError: VerificationError(errors=[DNSMismatch(mismatched_id=DNS_ID(hostname=b'www.directoriosanitario.com'))])>]\n\nI tried it with Scrapy 1.0.5 on python 2.7 and the spider opens but warns with: \n\n> AttributeError: 'NoneType' object has no attribute 'failVerification'\n\nIs there a way to force the spider to open with Scrapy 1.1.0 RC3?\n\n", "before_files": [{"content": "from OpenSSL import SSL\n\n\nMETHOD_SSLv3 = 'SSLv3'\nMETHOD_TLS = 'TLS'\nMETHOD_TLSv10 = 'TLSv1.0'\nMETHOD_TLSv11 = 'TLSv1.1'\nMETHOD_TLSv12 = 'TLSv1.2'\n\nopenssl_methods = {\n METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)\n METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)\n METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only\n METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only\n METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only\n}\n", "path": "scrapy/core/downloader/tls.py"}, {"content": "from OpenSSL import SSL\nfrom twisted.internet.ssl import ClientContextFactory\n\ntry:\n\n from zope.interface.declarations import implementer\n\n # the following should be available from Twisted 14.0.0\n from twisted.internet.ssl import optionsForClientTLS, CertificateOptions, platformTrust\n from twisted.internet._sslverify import ClientTLSOptions\n from twisted.web.client import BrowserLikePolicyForHTTPS\n from twisted.web.iweb import IPolicyForHTTPS\n\n @implementer(IPolicyForHTTPS)\n class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):\n \"\"\"\n Non-peer-certificate verifying HTTPS context factory\n\n Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)\n which allows TLS protocol negotiation\n\n 'A TLS/SSL connection established with [this method] may\n understand the SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.'\n \"\"\"\n\n def __init__(self, method=SSL.SSLv23_METHOD, *args, **kwargs):\n super(ScrapyClientContextFactory, self).__init__(*args, **kwargs)\n self._ssl_method = method\n\n def getCertificateOptions(self):\n # setting verify=True will require you to provide CAs\n # to verify against; in other words: it's not that simple\n\n # backward-compatible SSL/TLS method:\n #\n # * this will respect `method` attribute in often recommended\n # `ScrapyClientContextFactory` subclass\n # (https://github.com/scrapy/scrapy/issues/1429#issuecomment-131782133)\n #\n # * getattr() for `_ssl_method` attribute for context factories\n # not calling super(..., self).__init__\n return CertificateOptions(verify=False,\n method=getattr(self, 'method',\n getattr(self, '_ssl_method', None)))\n\n # kept for old-style HTTP/1.0 downloader context twisted calls,\n # e.g. 
connectSSL()\n def getContext(self, hostname=None, port=None):\n return self.getCertificateOptions().getContext()\n\n def creatorForNetloc(self, hostname, port):\n return ClientTLSOptions(hostname.decode(\"ascii\"), self.getContext())\n\n\n @implementer(IPolicyForHTTPS)\n class BrowserLikeContextFactory(ScrapyClientContextFactory):\n \"\"\"\n Twisted-recommended context factory for web clients.\n\n Quoting http://twistedmatrix.com/documents/current/api/twisted.web.client.Agent.html:\n \"The default is to use a BrowserLikePolicyForHTTPS,\n so unless you have special requirements you can leave this as-is.\"\n\n creatorForNetloc() is the same as BrowserLikePolicyForHTTPS\n except this context factory allows setting the TLS/SSL method to use.\n\n Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)\n which allows TLS protocol negotiation.\n \"\"\"\n def creatorForNetloc(self, hostname, port):\n\n # trustRoot set to platformTrust() will use the platform's root CAs.\n #\n # This means that a website like https://www.cacert.org will be rejected\n # by default, since CAcert.org CA certificate is seldom shipped.\n return optionsForClientTLS(hostname.decode(\"ascii\"),\n trustRoot=platformTrust(),\n extraCertificateOptions={\n 'method': self._ssl_method,\n })\n\nexcept ImportError:\n\n class ScrapyClientContextFactory(ClientContextFactory):\n \"A SSL context factory which is more permissive against SSL bugs.\"\n # see https://github.com/scrapy/scrapy/issues/82\n # and https://github.com/scrapy/scrapy/issues/26\n # and https://github.com/scrapy/scrapy/issues/981\n\n def __init__(self, method=SSL.SSLv23_METHOD):\n self.method = method\n\n def getContext(self, hostname=None, port=None):\n ctx = ClientContextFactory.getContext(self)\n # Enable all workarounds to SSL bugs as documented by\n # http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html\n ctx.set_options(SSL.OP_ALL)\n return ctx\n", "path": "scrapy/core/downloader/contextfactory.py"}]}
| 2,078 | 758 |
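Until a release containing the patch above, a project could point Scrapy at its own context factory through the `DOWNLOADER_CLIENTCONTEXTFACTORY` setting. The sketch below only shows the wiring and is not taken from the dataset row; the module path and class name are hypothetical, and the subclass body is left as a stub.

```python
# settings.py -- hypothetical project configuration
DOWNLOADER_CLIENTCONTEXTFACTORY = 'myproject.contextfactory.LenientContextFactory'

# myproject/contextfactory.py -- reuse Scrapy's non-verifying certificate options
from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory


class LenientContextFactory(ScrapyClientContextFactory):
    """Stub: override creatorForNetloc() here if hostname-mismatch errors
    need to be tolerated instead of aborting the connection."""
```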
gh_patches_debug_6961 | rasdani/github-patches | git_diff | nextcloud__appstore-186 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Email confirmation email should be improved
I couldn't find the text for the email, so I suspect it's from some library.
Here is the content:
---
Hello from **apps.nextcloud.com**!
You're receiving this e-mail because user oparoz at apps.nextcloud.com has given **yours as an e-mail address to connect their account**.
To confirm this is correct, go to https://apps.nextcloud.com/confirm-email/Mzc:1bZksL:Y8YI3zMQ0fOllevi3VhZ-dmiSMU/
Thank you from **apps.nextcloud.com**!
**apps.nextcloud.com**
---
I've highlighted what should be altered.
</issue>
<code>
[start of nextcloudappstore/core/management/commands/setupsocial.py]
1 from allauth.socialaccount.models import SocialApp
2 from django.contrib.sites.models import Site
3 from django.core.management import BaseCommand
4
5
6 class Command(BaseCommand):
7 help = ('Updates the first site with the given domain and creates or '
8 'updates the GitHub social login application')
9
10 def add_arguments(self, parser):
11 social_meta = SocialApp._meta
12 parser.add_argument('--github-secret', required=True,
13 help=social_meta.get_field('secret').help_text)
14 parser.add_argument('--github-client-id', required=True,
15 help=social_meta.get_field('client_id').help_text)
16 site_meta = Site._meta
17 parser.add_argument('--domain', required=True,
18 help=site_meta.get_field('domain').help_text)
19
20 def handle(self, *args, **options):
21 # set up site which is required for social login
22 site = Site.objects.all()[0]
23 site.domain = options['domain']
24 site.name = options['domain']
25 site.save()
26 # set up github
27 app, created = SocialApp.objects.get_or_create(provider='github')
28 app.name = 'GitHub'
29 app.secret = options['github_secret']
30 app.client_id = options['github_client_id']
31 app.sites.add(site)
32 app.save()
33
34 msg = 'Successfully initialized social accounts'
35 self.stdout.write(self.style.SUCCESS(msg))
36
[end of nextcloudappstore/core/management/commands/setupsocial.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nextcloudappstore/core/management/commands/setupsocial.py b/nextcloudappstore/core/management/commands/setupsocial.py
--- a/nextcloudappstore/core/management/commands/setupsocial.py
+++ b/nextcloudappstore/core/management/commands/setupsocial.py
@@ -21,7 +21,7 @@
# set up site which is required for social login
site = Site.objects.all()[0]
site.domain = options['domain']
- site.name = options['domain']
+ site.name = 'Nextcloud App Store'
site.save()
# set up github
app, created = SocialApp.objects.get_or_create(provider='github')
|
{"golden_diff": "diff --git a/nextcloudappstore/core/management/commands/setupsocial.py b/nextcloudappstore/core/management/commands/setupsocial.py\n--- a/nextcloudappstore/core/management/commands/setupsocial.py\n+++ b/nextcloudappstore/core/management/commands/setupsocial.py\n@@ -21,7 +21,7 @@\n # set up site which is required for social login\n site = Site.objects.all()[0]\n site.domain = options['domain']\n- site.name = options['domain']\n+ site.name = 'Nextcloud App Store'\n site.save()\n # set up github\n app, created = SocialApp.objects.get_or_create(provider='github')\n", "issue": "Email confirmation email should be improved\nI couldn't find the text for the email, so I suspect it's from some library.\n\nHere is the content:\n\n---\n\nHello from **apps.nextcloud.com**!\n\nYou're receiving this e-mail because user oparoz at apps.nextcloud.com has given **yours as an e-mail address to connect their account**.\n\nTo confirm this is correct, go to https://apps.nextcloud.com/confirm-email/Mzc:1bZksL:Y8YI3zMQ0fOllevi3VhZ-dmiSMU/\n\nThank you from **apps.nextcloud.com**!\n**apps.nextcloud.com**\n\n---\n\nI've highlighted what should be altered.\n\n", "before_files": [{"content": "from allauth.socialaccount.models import SocialApp\nfrom django.contrib.sites.models import Site\nfrom django.core.management import BaseCommand\n\n\nclass Command(BaseCommand):\n help = ('Updates the first site with the given domain and creates or '\n 'updates the GitHub social login application')\n\n def add_arguments(self, parser):\n social_meta = SocialApp._meta\n parser.add_argument('--github-secret', required=True,\n help=social_meta.get_field('secret').help_text)\n parser.add_argument('--github-client-id', required=True,\n help=social_meta.get_field('client_id').help_text)\n site_meta = Site._meta\n parser.add_argument('--domain', required=True,\n help=site_meta.get_field('domain').help_text)\n\n def handle(self, *args, **options):\n # set up site which is required for social login\n site = Site.objects.all()[0]\n site.domain = options['domain']\n site.name = options['domain']\n site.save()\n # set up github\n app, created = SocialApp.objects.get_or_create(provider='github')\n app.name = 'GitHub'\n app.secret = options['github_secret']\n app.client_id = options['github_client_id']\n app.sites.add(site)\n app.save()\n\n msg = 'Successfully initialized social accounts'\n self.stdout.write(self.style.SUCCESS(msg))\n", "path": "nextcloudappstore/core/management/commands/setupsocial.py"}]}
| 1,048 | 154 |
gh_patches_debug_23246 | rasdani/github-patches | git_diff | docarray__docarray-85 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fix(hashing): ignore casting to float
Fixes #83
```python
>>> x = Document(text="float test 2.56")
>>> x.get_vocabulary()
Counter({'float': 1, 'test': 1, '2': 1, '56': 1})
```
</issue>
<code>
[start of docarray/document/mixins/featurehash.py]
1 import hashlib
2 import json
3 from typing import Tuple, TYPE_CHECKING
4
5 import numpy as np
6
7 if TYPE_CHECKING:
8 from ...types import T
9
10
11 class FeatureHashMixin:
12 """Provide helper functions for feature hashing."""
13
14 def embed_feature_hashing(
15 self: 'T',
16 n_dim: int = 256,
17 sparse: bool = False,
18 fields: Tuple[str, ...] = ('text', 'tags'),
19 max_value: int = 1_000_000,
20 ) -> 'T':
21 """Convert an arbitrary set of attributes into a fixed-dimensional matrix using the hashing trick.
22
23 :param n_dim: the dimensionality of each document in the output embedding.
24 Small numbers of features are likely to cause hash collisions,
25 but large numbers will cause larger overall parameter dimensions.
26 :param sparse: whether the resulting feature matrix should be a sparse csr_matrix or dense ndarray.
27 Note that this feature requires ``scipy``
28 :param fields: which attributes to be considered as for feature hashing.
29 """
30 if sparse:
31 from scipy.sparse import csr_matrix
32
33 idxs, data = [], [] # sparse
34 table = np.zeros(n_dim) # dense
35
36 for f in fields:
37 if 'text' in fields:
38 all_tokens = self.get_vocabulary(('text',))
39 for f_id, val in all_tokens.items():
40 _hash_column(f_id, val, n_dim, max_value, idxs, data, table)
41
42 if 'tags' in fields:
43 for k, v in self.tags.items():
44 _hash_column(k, v, n_dim, max_value, idxs, data, table)
45
46 v = getattr(self, f, None)
47 if v:
48 _hash_column(f, v, n_dim, max_value, idxs, data, table)
49
50 if sparse:
51 self.embedding = csr_matrix((data, zip(*idxs)), shape=(1, n_dim))
52 else:
53 self.embedding = table
54 return self
55
56
57 def _hash_column(col_name, col_val, n_dim, max_value, idxs, data, table):
58 h = _any_hash(col_name)
59 col_val = _any_hash(col_val) % max_value
60 col = h % n_dim
61 idxs.append((0, col))
62 data.append(np.sign(h) * col_val)
63 table[col] += np.sign(h) * col_val
64
65
66 def _any_hash(v):
67 try:
68 return int(v) # parse int parameter
69 except ValueError:
70 try:
71 return float(v) # parse float parameter
72 except ValueError:
73 if not v:
74 # ignore it when the parameter is empty
75 return 0
76 if isinstance(v, str):
77 v = v.strip()
78 if v.lower() in {'true', 'yes'}: # parse boolean parameter
79 return 1
80 if v.lower() in {'false', 'no'}:
81 return 0
82 if isinstance(v, (tuple, dict, list)):
83 v = json.dumps(v, sort_keys=True)
84
85 return int(hashlib.md5(str(v).encode('utf-8')).hexdigest(), base=16)
86
[end of docarray/document/mixins/featurehash.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docarray/document/mixins/featurehash.py b/docarray/document/mixins/featurehash.py
--- a/docarray/document/mixins/featurehash.py
+++ b/docarray/document/mixins/featurehash.py
@@ -64,22 +64,24 @@
def _any_hash(v):
- try:
- return int(v) # parse int parameter
- except ValueError:
+ if not v:
+ # ignore it when the parameter is empty
+ return 0
+ elif isinstance(v, (tuple, dict, list, str)):
+ if isinstance(v, str):
+ v = v.strip()
+ if v.lower() in {'true', 'yes'}: # parse boolean parameter
+ return 1
+ if v.lower() in {'false', 'no'}:
+ return 0
+ else:
+ v = json.dumps(v, sort_keys=True)
+ return int(hashlib.md5(str(v).encode('utf-8')).hexdigest(), base=16)
+ else:
try:
- return float(v) # parse float parameter
+ return int(v) # parse int parameter
except ValueError:
- if not v:
- # ignore it when the parameter is empty
- return 0
- if isinstance(v, str):
- v = v.strip()
- if v.lower() in {'true', 'yes'}: # parse boolean parameter
- return 1
- if v.lower() in {'false', 'no'}:
- return 0
- if isinstance(v, (tuple, dict, list)):
- v = json.dumps(v, sort_keys=True)
-
- return int(hashlib.md5(str(v).encode('utf-8')).hexdigest(), base=16)
+ try:
+ return float(v) # parse float parameter
+ except ValueError:
+ return 0 # unable to hash
|
{"golden_diff": "diff --git a/docarray/document/mixins/featurehash.py b/docarray/document/mixins/featurehash.py\n--- a/docarray/document/mixins/featurehash.py\n+++ b/docarray/document/mixins/featurehash.py\n@@ -64,22 +64,24 @@\n \n \n def _any_hash(v):\n- try:\n- return int(v) # parse int parameter\n- except ValueError:\n+ if not v:\n+ # ignore it when the parameter is empty\n+ return 0\n+ elif isinstance(v, (tuple, dict, list, str)):\n+ if isinstance(v, str):\n+ v = v.strip()\n+ if v.lower() in {'true', 'yes'}: # parse boolean parameter\n+ return 1\n+ if v.lower() in {'false', 'no'}:\n+ return 0\n+ else:\n+ v = json.dumps(v, sort_keys=True)\n+ return int(hashlib.md5(str(v).encode('utf-8')).hexdigest(), base=16)\n+ else:\n try:\n- return float(v) # parse float parameter\n+ return int(v) # parse int parameter\n except ValueError:\n- if not v:\n- # ignore it when the parameter is empty\n- return 0\n- if isinstance(v, str):\n- v = v.strip()\n- if v.lower() in {'true', 'yes'}: # parse boolean parameter\n- return 1\n- if v.lower() in {'false', 'no'}:\n- return 0\n- if isinstance(v, (tuple, dict, list)):\n- v = json.dumps(v, sort_keys=True)\n-\n- return int(hashlib.md5(str(v).encode('utf-8')).hexdigest(), base=16)\n+ try:\n+ return float(v) # parse float parameter\n+ except ValueError:\n+ return 0 # unable to hash\n", "issue": "fix(hashing): ignore casting to float\nFixes #83 \r\n\r\n```python\r\n>>> x = Document(text=\"float test 2.56\")\r\n>>> x.get_vocabulary()\r\nCounter({'float': 1, 'test': 1, '2': 1, '56': 1})\r\n```\n", "before_files": [{"content": "import hashlib\nimport json\nfrom typing import Tuple, TYPE_CHECKING\n\nimport numpy as np\n\nif TYPE_CHECKING:\n from ...types import T\n\n\nclass FeatureHashMixin:\n \"\"\"Provide helper functions for feature hashing.\"\"\"\n\n def embed_feature_hashing(\n self: 'T',\n n_dim: int = 256,\n sparse: bool = False,\n fields: Tuple[str, ...] 
= ('text', 'tags'),\n max_value: int = 1_000_000,\n ) -> 'T':\n \"\"\"Convert an arbitrary set of attributes into a fixed-dimensional matrix using the hashing trick.\n\n :param n_dim: the dimensionality of each document in the output embedding.\n Small numbers of features are likely to cause hash collisions,\n but large numbers will cause larger overall parameter dimensions.\n :param sparse: whether the resulting feature matrix should be a sparse csr_matrix or dense ndarray.\n Note that this feature requires ``scipy``\n :param fields: which attributes to be considered as for feature hashing.\n \"\"\"\n if sparse:\n from scipy.sparse import csr_matrix\n\n idxs, data = [], [] # sparse\n table = np.zeros(n_dim) # dense\n\n for f in fields:\n if 'text' in fields:\n all_tokens = self.get_vocabulary(('text',))\n for f_id, val in all_tokens.items():\n _hash_column(f_id, val, n_dim, max_value, idxs, data, table)\n\n if 'tags' in fields:\n for k, v in self.tags.items():\n _hash_column(k, v, n_dim, max_value, idxs, data, table)\n\n v = getattr(self, f, None)\n if v:\n _hash_column(f, v, n_dim, max_value, idxs, data, table)\n\n if sparse:\n self.embedding = csr_matrix((data, zip(*idxs)), shape=(1, n_dim))\n else:\n self.embedding = table\n return self\n\n\ndef _hash_column(col_name, col_val, n_dim, max_value, idxs, data, table):\n h = _any_hash(col_name)\n col_val = _any_hash(col_val) % max_value\n col = h % n_dim\n idxs.append((0, col))\n data.append(np.sign(h) * col_val)\n table[col] += np.sign(h) * col_val\n\n\ndef _any_hash(v):\n try:\n return int(v) # parse int parameter\n except ValueError:\n try:\n return float(v) # parse float parameter\n except ValueError:\n if not v:\n # ignore it when the parameter is empty\n return 0\n if isinstance(v, str):\n v = v.strip()\n if v.lower() in {'true', 'yes'}: # parse boolean parameter\n return 1\n if v.lower() in {'false', 'no'}:\n return 0\n if isinstance(v, (tuple, dict, list)):\n v = json.dumps(v, sort_keys=True)\n\n return int(hashlib.md5(str(v).encode('utf-8')).hexdigest(), base=16)\n", "path": "docarray/document/mixins/featurehash.py"}]}
| 1,467 | 434 |
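
The patch above boils down to reordering the type checks so that strings and containers are hashed rather than parsed as numbers. The standalone sketch below mirrors that patched `_any_hash` logic for quick experimentation; the function name and the demo values are illustrative only and are not part of the repository.

```python
import hashlib
import json


def any_hash(v):
    """Map an arbitrary parameter value to an integer feature id (sketch)."""
    if not v:
        # Empty values are ignored.
        return 0
    if isinstance(v, (tuple, dict, list, str)):
        if isinstance(v, str):
            v = v.strip()
            if v.lower() in {'true', 'yes'}:    # boolean-looking strings
                return 1
            if v.lower() in {'false', 'no'}:
                return 0
        else:
            # Serialise containers deterministically before hashing.
            v = json.dumps(v, sort_keys=True)
        return int(hashlib.md5(str(v).encode('utf-8')).hexdigest(), base=16)
    try:
        return int(v)       # numeric parameters are used directly
    except ValueError:
        try:
            return float(v)
        except ValueError:
            return 0        # unable to hash


if __name__ == '__main__':
    print(any_hash(''))      # 0: empty values are ignored
    print(any_hash('yes'))   # 1: boolean-looking string
    print(any_hash(42))      # 42: numeric parameters pass through int()
    print(any_hash('2.56'))  # md5-based: strings are no longer parsed as numbers
```

The last call is the behaviour the issue asks for: a string such as "2.56" is hashed as text instead of being coerced to a float.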
gh_patches_debug_37795
|
rasdani/github-patches
|
git_diff
|
python-pillow__Pillow-3327
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sanitize TGA Image.info
Currently, the TGA image plugin sets `Image.info` "compression" to "tga_rle" on `load()`, while `save()` takes "rle" (bool) option. Both methods should use the same option for consistency. It probably makes sense to keep "rle" as TGA format doesn't support other compression methods, but "compression" may be more consistent with BMP and TIFF plugins. Neither of the options is documented, so there is no danger of breaking backward compatibility.
Also, it's not very clear whether `save()` method should "inherit" info like TIFF and PNG plugins do:
https://github.com/python-pillow/Pillow/blob/4407cb65079a7d1150277e3b9a144996f56357c9/src/PIL/TiffImagePlugin.py#L1399-L1400
Currently, TGA plugin only inherits "orientation" but doesn't allow to specify it as a keyword to `save()`, and "id_section" is ignored altogether.
</issue>
<code>
[start of src/PIL/TgaImagePlugin.py]
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # TGA file handling
6 #
7 # History:
8 # 95-09-01 fl created (reads 24-bit files only)
9 # 97-01-04 fl support more TGA versions, including compressed images
10 # 98-07-04 fl fixed orientation and alpha layer bugs
11 # 98-09-11 fl fixed orientation for runlength decoder
12 #
13 # Copyright (c) Secret Labs AB 1997-98.
14 # Copyright (c) Fredrik Lundh 1995-97.
15 #
16 # See the README file for information on usage and redistribution.
17 #
18
19
20 from . import Image, ImageFile, ImagePalette
21 from ._binary import i8, i16le as i16, o8, o16le as o16
22
23 __version__ = "0.3"
24
25
26 #
27 # --------------------------------------------------------------------
28 # Read RGA file
29
30
31 MODES = {
32 # map imagetype/depth to rawmode
33 (1, 8): "P",
34 (3, 1): "1",
35 (3, 8): "L",
36 (3, 16): "LA",
37 (2, 16): "BGR;5",
38 (2, 24): "BGR",
39 (2, 32): "BGRA",
40 }
41
42
43 ##
44 # Image plugin for Targa files.
45
46 class TgaImageFile(ImageFile.ImageFile):
47
48 format = "TGA"
49 format_description = "Targa"
50
51 def _open(self):
52
53 # process header
54 s = self.fp.read(18)
55
56 idlen = i8(s[0])
57
58 colormaptype = i8(s[1])
59 imagetype = i8(s[2])
60
61 depth = i8(s[16])
62
63 flags = i8(s[17])
64
65 self.size = i16(s[12:]), i16(s[14:])
66
67 # validate header fields
68 if colormaptype not in (0, 1) or\
69 self.size[0] <= 0 or self.size[1] <= 0 or\
70 depth not in (1, 8, 16, 24, 32):
71 raise SyntaxError("not a TGA file")
72
73 # image mode
74 if imagetype in (3, 11):
75 self.mode = "L"
76 if depth == 1:
77 self.mode = "1" # ???
78 elif depth == 16:
79 self.mode = "LA"
80 elif imagetype in (1, 9):
81 self.mode = "P"
82 elif imagetype in (2, 10):
83 self.mode = "RGB"
84 if depth == 32:
85 self.mode = "RGBA"
86 else:
87 raise SyntaxError("unknown TGA mode")
88
89 # orientation
90 orientation = flags & 0x30
91 if orientation == 0x20:
92 orientation = 1
93 elif not orientation:
94 orientation = -1
95 else:
96 raise SyntaxError("unknown TGA orientation")
97
98 self.info["orientation"] = orientation
99
100 if imagetype & 8:
101 self.info["compression"] = "tga_rle"
102
103 if idlen:
104 self.info["id_section"] = self.fp.read(idlen)
105
106 if colormaptype:
107 # read palette
108 start, size, mapdepth = i16(s[3:]), i16(s[5:]), i16(s[7:])
109 if mapdepth == 16:
110 self.palette = ImagePalette.raw(
111 "BGR;16", b"\0"*2*start + self.fp.read(2*size))
112 elif mapdepth == 24:
113 self.palette = ImagePalette.raw(
114 "BGR", b"\0"*3*start + self.fp.read(3*size))
115 elif mapdepth == 32:
116 self.palette = ImagePalette.raw(
117 "BGRA", b"\0"*4*start + self.fp.read(4*size))
118
119 # setup tile descriptor
120 try:
121 rawmode = MODES[(imagetype & 7, depth)]
122 if imagetype & 8:
123 # compressed
124 self.tile = [("tga_rle", (0, 0)+self.size,
125 self.fp.tell(), (rawmode, orientation, depth))]
126 else:
127 self.tile = [("raw", (0, 0)+self.size,
128 self.fp.tell(), (rawmode, 0, orientation))]
129 except KeyError:
130 pass # cannot decode
131
132 #
133 # --------------------------------------------------------------------
134 # Write TGA file
135
136
137 SAVE = {
138 "1": ("1", 1, 0, 3),
139 "L": ("L", 8, 0, 3),
140 "LA": ("LA", 16, 0, 3),
141 "P": ("P", 8, 1, 1),
142 "RGB": ("BGR", 24, 0, 2),
143 "RGBA": ("BGRA", 32, 0, 2),
144 }
145
146
147 def _save(im, fp, filename):
148
149 try:
150 rawmode, bits, colormaptype, imagetype = SAVE[im.mode]
151 except KeyError:
152 raise IOError("cannot write mode %s as TGA" % im.mode)
153
154 rle = im.encoderinfo.get("rle", False)
155
156 if rle:
157 imagetype += 8
158
159 if colormaptype:
160 colormapfirst, colormaplength, colormapentry = 0, 256, 24
161 else:
162 colormapfirst, colormaplength, colormapentry = 0, 0, 0
163
164 if im.mode in ("LA", "RGBA"):
165 flags = 8
166 else:
167 flags = 0
168
169 orientation = im.info.get("orientation", -1)
170 if orientation > 0:
171 flags = flags | 0x20
172
173 fp.write(b"\000" +
174 o8(colormaptype) +
175 o8(imagetype) +
176 o16(colormapfirst) +
177 o16(colormaplength) +
178 o8(colormapentry) +
179 o16(0) +
180 o16(0) +
181 o16(im.size[0]) +
182 o16(im.size[1]) +
183 o8(bits) +
184 o8(flags))
185
186 if colormaptype:
187 fp.write(im.im.getpalette("RGB", "BGR"))
188
189 if rle:
190 ImageFile._save(
191 im,
192 fp,
193 [("tga_rle", (0, 0) + im.size, 0, (rawmode, orientation))])
194 else:
195 ImageFile._save(
196 im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, orientation))])
197
198 # write targa version 2 footer
199 fp.write(b"\000" * 8 + b"TRUEVISION-XFILE." + b"\000")
200
201 #
202 # --------------------------------------------------------------------
203 # Registry
204
205
206 Image.register_open(TgaImageFile.format, TgaImageFile)
207 Image.register_save(TgaImageFile.format, _save)
208
209 Image.register_extension(TgaImageFile.format, ".tga")
210
[end of src/PIL/TgaImagePlugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/PIL/TgaImagePlugin.py b/src/PIL/TgaImagePlugin.py
--- a/src/PIL/TgaImagePlugin.py
+++ b/src/PIL/TgaImagePlugin.py
@@ -20,6 +20,8 @@
from . import Image, ImageFile, ImagePalette
from ._binary import i8, i16le as i16, o8, o16le as o16
+import warnings
+
__version__ = "0.3"
@@ -53,7 +55,7 @@
# process header
s = self.fp.read(18)
- idlen = i8(s[0])
+ id_len = i8(s[0])
colormaptype = i8(s[1])
imagetype = i8(s[2])
@@ -100,8 +102,8 @@
if imagetype & 8:
self.info["compression"] = "tga_rle"
- if idlen:
- self.info["id_section"] = self.fp.read(idlen)
+ if id_len:
+ self.info["id_section"] = self.fp.read(id_len)
if colormaptype:
# read palette
@@ -151,11 +153,23 @@
except KeyError:
raise IOError("cannot write mode %s as TGA" % im.mode)
- rle = im.encoderinfo.get("rle", False)
-
+ if "rle" in im.encoderinfo:
+ rle = im.encoderinfo["rle"]
+ else:
+ compression = im.encoderinfo.get("compression",
+ im.info.get("compression"))
+ rle = compression == "tga_rle"
if rle:
imagetype += 8
+ id_section = im.encoderinfo.get("id_section",
+ im.info.get("id_section", ""))
+ id_len = len(id_section)
+ if id_len > 255:
+ id_len = 255
+ id_section = id_section[:255]
+ warnings.warn("id_section has been trimmed to 255 characters")
+
if colormaptype:
colormapfirst, colormaplength, colormapentry = 0, 256, 24
else:
@@ -166,11 +180,12 @@
else:
flags = 0
- orientation = im.info.get("orientation", -1)
+ orientation = im.encoderinfo.get("orientation",
+ im.info.get("orientation", -1))
if orientation > 0:
flags = flags | 0x20
- fp.write(b"\000" +
+ fp.write(o8(id_len) +
o8(colormaptype) +
o8(imagetype) +
o16(colormapfirst) +
@@ -183,6 +198,9 @@
o8(bits) +
o8(flags))
+ if id_section:
+ fp.write(id_section)
+
if colormaptype:
fp.write(im.im.getpalette("RGB", "BGR"))
|
{"golden_diff": "diff --git a/src/PIL/TgaImagePlugin.py b/src/PIL/TgaImagePlugin.py\n--- a/src/PIL/TgaImagePlugin.py\n+++ b/src/PIL/TgaImagePlugin.py\n@@ -20,6 +20,8 @@\n from . import Image, ImageFile, ImagePalette\n from ._binary import i8, i16le as i16, o8, o16le as o16\n \n+import warnings\n+\n __version__ = \"0.3\"\n \n \n@@ -53,7 +55,7 @@\n # process header\n s = self.fp.read(18)\n \n- idlen = i8(s[0])\n+ id_len = i8(s[0])\n \n colormaptype = i8(s[1])\n imagetype = i8(s[2])\n@@ -100,8 +102,8 @@\n if imagetype & 8:\n self.info[\"compression\"] = \"tga_rle\"\n \n- if idlen:\n- self.info[\"id_section\"] = self.fp.read(idlen)\n+ if id_len:\n+ self.info[\"id_section\"] = self.fp.read(id_len)\n \n if colormaptype:\n # read palette\n@@ -151,11 +153,23 @@\n except KeyError:\n raise IOError(\"cannot write mode %s as TGA\" % im.mode)\n \n- rle = im.encoderinfo.get(\"rle\", False)\n-\n+ if \"rle\" in im.encoderinfo:\n+ rle = im.encoderinfo[\"rle\"]\n+ else:\n+ compression = im.encoderinfo.get(\"compression\",\n+ im.info.get(\"compression\"))\n+ rle = compression == \"tga_rle\"\n if rle:\n imagetype += 8\n \n+ id_section = im.encoderinfo.get(\"id_section\",\n+ im.info.get(\"id_section\", \"\"))\n+ id_len = len(id_section)\n+ if id_len > 255:\n+ id_len = 255\n+ id_section = id_section[:255]\n+ warnings.warn(\"id_section has been trimmed to 255 characters\")\n+\n if colormaptype:\n colormapfirst, colormaplength, colormapentry = 0, 256, 24\n else:\n@@ -166,11 +180,12 @@\n else:\n flags = 0\n \n- orientation = im.info.get(\"orientation\", -1)\n+ orientation = im.encoderinfo.get(\"orientation\",\n+ im.info.get(\"orientation\", -1))\n if orientation > 0:\n flags = flags | 0x20\n \n- fp.write(b\"\\000\" +\n+ fp.write(o8(id_len) +\n o8(colormaptype) +\n o8(imagetype) +\n o16(colormapfirst) +\n@@ -183,6 +198,9 @@\n o8(bits) +\n o8(flags))\n \n+ if id_section:\n+ fp.write(id_section)\n+\n if colormaptype:\n fp.write(im.im.getpalette(\"RGB\", \"BGR\"))\n", "issue": "Sanitize TGA Image.info\nCurrently, the TGA image plugin sets `Image.info` \"compression\" to \"tga_rle\" on `load()`, while `save()` takes \"rle\" (bool) option. Both methods should use the same option for consistency. It probably makes sense to keep \"rle\" as TGA format doesn't support other compression methods, but \"compression\" may be more consistent with BMP and TIFF plugins. Neither of the options is documented, so there is no danger of breaking backward compatibility.\r\n\r\nAlso, it's not very clear whether `save()` method should \"inherit\" info like TIFF and PNG plugins do:\r\n\r\nhttps://github.com/python-pillow/Pillow/blob/4407cb65079a7d1150277e3b9a144996f56357c9/src/PIL/TiffImagePlugin.py#L1399-L1400\r\n\r\nCurrently, TGA plugin only inherits \"orientation\" but doesn't allow to specify it as a keyword to `save()`, and \"id_section\" is ignored altogether.\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# TGA file handling\n#\n# History:\n# 95-09-01 fl created (reads 24-bit files only)\n# 97-01-04 fl support more TGA versions, including compressed images\n# 98-07-04 fl fixed orientation and alpha layer bugs\n# 98-09-11 fl fixed orientation for runlength decoder\n#\n# Copyright (c) Secret Labs AB 1997-98.\n# Copyright (c) Fredrik Lundh 1995-97.\n#\n# See the README file for information on usage and redistribution.\n#\n\n\nfrom . 
import Image, ImageFile, ImagePalette\nfrom ._binary import i8, i16le as i16, o8, o16le as o16\n\n__version__ = \"0.3\"\n\n\n#\n# --------------------------------------------------------------------\n# Read RGA file\n\n\nMODES = {\n # map imagetype/depth to rawmode\n (1, 8): \"P\",\n (3, 1): \"1\",\n (3, 8): \"L\",\n (3, 16): \"LA\",\n (2, 16): \"BGR;5\",\n (2, 24): \"BGR\",\n (2, 32): \"BGRA\",\n}\n\n\n##\n# Image plugin for Targa files.\n\nclass TgaImageFile(ImageFile.ImageFile):\n\n format = \"TGA\"\n format_description = \"Targa\"\n\n def _open(self):\n\n # process header\n s = self.fp.read(18)\n\n idlen = i8(s[0])\n\n colormaptype = i8(s[1])\n imagetype = i8(s[2])\n\n depth = i8(s[16])\n\n flags = i8(s[17])\n\n self.size = i16(s[12:]), i16(s[14:])\n\n # validate header fields\n if colormaptype not in (0, 1) or\\\n self.size[0] <= 0 or self.size[1] <= 0 or\\\n depth not in (1, 8, 16, 24, 32):\n raise SyntaxError(\"not a TGA file\")\n\n # image mode\n if imagetype in (3, 11):\n self.mode = \"L\"\n if depth == 1:\n self.mode = \"1\" # ???\n elif depth == 16:\n self.mode = \"LA\"\n elif imagetype in (1, 9):\n self.mode = \"P\"\n elif imagetype in (2, 10):\n self.mode = \"RGB\"\n if depth == 32:\n self.mode = \"RGBA\"\n else:\n raise SyntaxError(\"unknown TGA mode\")\n\n # orientation\n orientation = flags & 0x30\n if orientation == 0x20:\n orientation = 1\n elif not orientation:\n orientation = -1\n else:\n raise SyntaxError(\"unknown TGA orientation\")\n\n self.info[\"orientation\"] = orientation\n\n if imagetype & 8:\n self.info[\"compression\"] = \"tga_rle\"\n\n if idlen:\n self.info[\"id_section\"] = self.fp.read(idlen)\n\n if colormaptype:\n # read palette\n start, size, mapdepth = i16(s[3:]), i16(s[5:]), i16(s[7:])\n if mapdepth == 16:\n self.palette = ImagePalette.raw(\n \"BGR;16\", b\"\\0\"*2*start + self.fp.read(2*size))\n elif mapdepth == 24:\n self.palette = ImagePalette.raw(\n \"BGR\", b\"\\0\"*3*start + self.fp.read(3*size))\n elif mapdepth == 32:\n self.palette = ImagePalette.raw(\n \"BGRA\", b\"\\0\"*4*start + self.fp.read(4*size))\n\n # setup tile descriptor\n try:\n rawmode = MODES[(imagetype & 7, depth)]\n if imagetype & 8:\n # compressed\n self.tile = [(\"tga_rle\", (0, 0)+self.size,\n self.fp.tell(), (rawmode, orientation, depth))]\n else:\n self.tile = [(\"raw\", (0, 0)+self.size,\n self.fp.tell(), (rawmode, 0, orientation))]\n except KeyError:\n pass # cannot decode\n\n#\n# --------------------------------------------------------------------\n# Write TGA file\n\n\nSAVE = {\n \"1\": (\"1\", 1, 0, 3),\n \"L\": (\"L\", 8, 0, 3),\n \"LA\": (\"LA\", 16, 0, 3),\n \"P\": (\"P\", 8, 1, 1),\n \"RGB\": (\"BGR\", 24, 0, 2),\n \"RGBA\": (\"BGRA\", 32, 0, 2),\n}\n\n\ndef _save(im, fp, filename):\n\n try:\n rawmode, bits, colormaptype, imagetype = SAVE[im.mode]\n except KeyError:\n raise IOError(\"cannot write mode %s as TGA\" % im.mode)\n\n rle = im.encoderinfo.get(\"rle\", False)\n\n if rle:\n imagetype += 8\n\n if colormaptype:\n colormapfirst, colormaplength, colormapentry = 0, 256, 24\n else:\n colormapfirst, colormaplength, colormapentry = 0, 0, 0\n\n if im.mode in (\"LA\", \"RGBA\"):\n flags = 8\n else:\n flags = 0\n\n orientation = im.info.get(\"orientation\", -1)\n if orientation > 0:\n flags = flags | 0x20\n\n fp.write(b\"\\000\" +\n o8(colormaptype) +\n o8(imagetype) +\n o16(colormapfirst) +\n o16(colormaplength) +\n o8(colormapentry) +\n o16(0) +\n o16(0) +\n o16(im.size[0]) +\n o16(im.size[1]) +\n o8(bits) +\n o8(flags))\n\n if colormaptype:\n 
fp.write(im.im.getpalette(\"RGB\", \"BGR\"))\n\n if rle:\n ImageFile._save(\n im,\n fp,\n [(\"tga_rle\", (0, 0) + im.size, 0, (rawmode, orientation))])\n else:\n ImageFile._save(\n im, fp, [(\"raw\", (0, 0) + im.size, 0, (rawmode, 0, orientation))])\n\n # write targa version 2 footer\n fp.write(b\"\\000\" * 8 + b\"TRUEVISION-XFILE.\" + b\"\\000\")\n\n#\n# --------------------------------------------------------------------\n# Registry\n\n\nImage.register_open(TgaImageFile.format, TgaImageFile)\nImage.register_save(TgaImageFile.format, _save)\n\nImage.register_extension(TgaImageFile.format, \".tga\")\n", "path": "src/PIL/TgaImagePlugin.py"}]}
| 2,930 | 692 |
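
The patch above resolves each `save()` option by preferring an explicit keyword argument and falling back to `Image.info`, and trims `id_section` to the 255 bytes a TGA header can describe. The sketch below isolates that precedence pattern in plain Python; the helper names and demo values are hypothetical and are not Pillow API.

```python
import warnings


def resolve_save_option(encoderinfo, info, key, default=None):
    """Return a save() option, preferring an explicit keyword over Image.info."""
    if key in encoderinfo:
        return encoderinfo[key]
    return info.get(key, default)


def prepare_tga_save_fields(encoderinfo, info):
    # An explicit rle= keyword wins; otherwise fall back to the "compression"
    # value that load() stores in info ("tga_rle" means run-length encoding).
    if 'rle' in encoderinfo:
        rle = encoderinfo['rle']
    else:
        rle = resolve_save_option(encoderinfo, info, 'compression') == 'tga_rle'

    # The TGA id field length is a single byte, so longer sections are trimmed.
    id_section = resolve_save_option(encoderinfo, info, 'id_section', b'')
    if len(id_section) > 255:
        warnings.warn('id_section has been trimmed to 255 characters')
        id_section = id_section[:255]

    orientation = resolve_save_option(encoderinfo, info, 'orientation', -1)
    return rle, id_section, orientation


if __name__ == '__main__':
    print(prepare_tga_save_fields({'rle': True}, {}))
    print(prepare_tga_save_fields({}, {'id_section': b'x' * 300, 'orientation': 1}))
```

The same lookup order keeps round-tripping consistent: options written by `load()` into `info` survive a plain `save()`, while explicit keywords still override them.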
gh_patches_debug_29060
|
rasdani/github-patches
|
git_diff
|
falconry__falcon-1399
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tox (including Travis) not testing cythonized variants
As described in ``README.rst``, Falcon can be cythonized for ~20% performance gain (or actually even more). Installing Falcon from *sdist* into an environment with Cython does the trick:
```python
>>> import falcon
>>> falcon.api # As we can see, falcon.api is coming from the dynamically-linked (cythonized) library api.so
<module 'falcon.api' from '/home/vytas/.virtualenvs/fresh/local/lib/python2.7/site-packages/falcon/api.so'>
```
However, this does not hold under Tox ``py27_cython`` and ``py36_cython`` environments, including runs in Travis, as the properly cythonized Falcon is shadowed by the local source directory. This could potentially be worked around by changing dir in Tox, but apparently pytest is even more stubborn as it is correctly determining the root dir of tests, and changing to it.
See also discussions here:
* https://github.com/tox-dev/tox/issues/54
* https://github.com/tox-dev/tox/issues/514
The last comment on the latter also explains the possible patterns to work this around: https://github.com/tox-dev/tox/issues/514#issuecomment-327779367 (links to the useful https://docs.pytest.org/en/latest/goodpractices.html#choosing-a-test-layout-import-rules ).
</issue>
<code>
[start of falcon/cmd/print_routes.py]
1 #!/usr/bin/env python
2 # Copyright 2013 by Rackspace Hosting, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """
16 Script that prints out the routes of an API instance.
17 """
18
19 from __future__ import print_function
20
21 from functools import partial
22 import inspect
23
24 import falcon
25
26
27 def print_routes(api, verbose=False): # pragma: no cover
28 """
29 Initial call.
30
31 :param api: The falcon.API or callable that returns an instance to look at.
32 :type api: falcon.API or callable
33 :param verbose: If the output should be verbose.
34 :type verbose: bool
35 """
36 traverse(api._router._roots, verbose=verbose)
37
38
39 def traverse(roots, parent='', verbose=False):
40 """
41 Recursive call which also handles printing output.
42
43 :param api: The falcon.API or callable that returns an instance to look at.
44 :type api: falcon.API or callable
45 :param parent: The parent uri path to the current iteration.
46 :type parent: str
47 :param verbose: If the output should be verbose.
48 :type verbose: bool
49 """
50 for root in roots:
51 if root.method_map:
52 print('->', parent + '/' + root.raw_segment)
53 if verbose:
54 for method, func in root.method_map.items():
55 if func.__name__ != 'method_not_allowed':
56 if isinstance(func, partial):
57 real_func = func.func
58 else:
59 real_func = func
60
61 source_file = inspect.getsourcefile(real_func)
62
63 print('-->{0} {1}:{2}'.format(
64 method,
65 source_file,
66 source_file[1]
67 ))
68
69 if root.children:
70 traverse(root.children, parent + '/' + root.raw_segment, verbose)
71
72
73 def main():
74 """
75 Main entrypoint.
76 """
77 import argparse
78
79 parser = argparse.ArgumentParser(
80 description='Example: print-api-routes myprogram:app')
81 parser.add_argument(
82 '-v', '--verbose', action='store_true',
83 help='Prints out information for each method.')
84 parser.add_argument(
85 'api_module',
86 help='The module and api to inspect. Example: myapp.somemodule:api',
87 )
88 args = parser.parse_args()
89
90 try:
91 module, instance = args.api_module.split(':', 1)
92 except ValueError:
93 parser.error(
94 'The api_module must include a colon between '
95 'the module and instnace')
96 api = getattr(__import__(module, fromlist=[True]), instance)
97 if not isinstance(api, falcon.API):
98 if callable(api):
99 api = api()
100 if not isinstance(api, falcon.API):
101 parser.error(
102 '{0} did not return a falcon.API instance'.format(
103 args.api_module))
104 else:
105 parser.error(
106 'The instance must be of falcon.API or be '
107 'a callable without args that returns falcon.API')
108 print_routes(api, verbose=args.verbose)
109
110
111 if __name__ == '__main__':
112 main()
113
[end of falcon/cmd/print_routes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/falcon/cmd/print_routes.py b/falcon/cmd/print_routes.py
--- a/falcon/cmd/print_routes.py
+++ b/falcon/cmd/print_routes.py
@@ -58,13 +58,19 @@
else:
real_func = func
- source_file = inspect.getsourcefile(real_func)
-
- print('-->{0} {1}:{2}'.format(
- method,
- source_file,
- source_file[1]
- ))
+ try:
+ source_file = inspect.getsourcefile(real_func)
+ source_lines = inspect.getsourcelines(real_func)
+ source_info = '{}:{}'.format(source_file,
+ source_lines[1])
+ except TypeError:
+ # NOTE(vytas): If Falcon is cythonized, all default
+ # responders coming from cythonized modules will
+ # appear as built-in functions, and raise a
+ # TypeError when trying to locate the source file.
+ source_info = '[unknown file]'
+
+ print('-->' + method, source_info)
if root.children:
traverse(root.children, parent + '/' + root.raw_segment, verbose)
@@ -92,7 +98,7 @@
except ValueError:
parser.error(
'The api_module must include a colon between '
- 'the module and instnace')
+ 'the module and instance')
api = getattr(__import__(module, fromlist=[True]), instance)
if not isinstance(api, falcon.API):
if callable(api):
|
{"golden_diff": "diff --git a/falcon/cmd/print_routes.py b/falcon/cmd/print_routes.py\n--- a/falcon/cmd/print_routes.py\n+++ b/falcon/cmd/print_routes.py\n@@ -58,13 +58,19 @@\n else:\n real_func = func\n \n- source_file = inspect.getsourcefile(real_func)\n-\n- print('-->{0} {1}:{2}'.format(\n- method,\n- source_file,\n- source_file[1]\n- ))\n+ try:\n+ source_file = inspect.getsourcefile(real_func)\n+ source_lines = inspect.getsourcelines(real_func)\n+ source_info = '{}:{}'.format(source_file,\n+ source_lines[1])\n+ except TypeError:\n+ # NOTE(vytas): If Falcon is cythonized, all default\n+ # responders coming from cythonized modules will\n+ # appear as built-in functions, and raise a\n+ # TypeError when trying to locate the source file.\n+ source_info = '[unknown file]'\n+\n+ print('-->' + method, source_info)\n \n if root.children:\n traverse(root.children, parent + '/' + root.raw_segment, verbose)\n@@ -92,7 +98,7 @@\n except ValueError:\n parser.error(\n 'The api_module must include a colon between '\n- 'the module and instnace')\n+ 'the module and instance')\n api = getattr(__import__(module, fromlist=[True]), instance)\n if not isinstance(api, falcon.API):\n if callable(api):\n", "issue": "Tox (including Travis) not testing cythonized variants\nAs described in ``README.rst``, Falcon can be cythonized for ~20% performance gain (or actually even more). Installing Falcon from *sdist* into an environment with Cython does the trick:\r\n\r\n```python\r\n>>> import falcon\r\n>>> falcon.api # As we can see, falcon.api is coming from the dynamically-linked (cythonized) library api.so\r\n<module 'falcon.api' from '/home/vytas/.virtualenvs/fresh/local/lib/python2.7/site-packages/falcon/api.so'>\r\n```\r\n\r\nHowever, this does not hold under Tox ``py27_cython`` and ``py36_cython`` environments, including runs in Travis, as the properly cythonized Falcon is shadowed by the local source directory. 
This could potentially be worked around by changing dir in Tox, but apparently pytest is even more stubborn as it is correctly determining the root dir of tests, and changing to it.\r\n\r\nSee also discussions here:\r\n* https://github.com/tox-dev/tox/issues/54\r\n* https://github.com/tox-dev/tox/issues/514\r\n\r\nThe last comment on the latter also explains the possible patterns to work this around: https://github.com/tox-dev/tox/issues/514#issuecomment-327779367 (links to the useful https://docs.pytest.org/en/latest/goodpractices.html#choosing-a-test-layout-import-rules ).\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nScript that prints out the routes of an API instance.\n\"\"\"\n\nfrom __future__ import print_function\n\nfrom functools import partial\nimport inspect\n\nimport falcon\n\n\ndef print_routes(api, verbose=False): # pragma: no cover\n \"\"\"\n Initial call.\n\n :param api: The falcon.API or callable that returns an instance to look at.\n :type api: falcon.API or callable\n :param verbose: If the output should be verbose.\n :type verbose: bool\n \"\"\"\n traverse(api._router._roots, verbose=verbose)\n\n\ndef traverse(roots, parent='', verbose=False):\n \"\"\"\n Recursive call which also handles printing output.\n\n :param api: The falcon.API or callable that returns an instance to look at.\n :type api: falcon.API or callable\n :param parent: The parent uri path to the current iteration.\n :type parent: str\n :param verbose: If the output should be verbose.\n :type verbose: bool\n \"\"\"\n for root in roots:\n if root.method_map:\n print('->', parent + '/' + root.raw_segment)\n if verbose:\n for method, func in root.method_map.items():\n if func.__name__ != 'method_not_allowed':\n if isinstance(func, partial):\n real_func = func.func\n else:\n real_func = func\n\n source_file = inspect.getsourcefile(real_func)\n\n print('-->{0} {1}:{2}'.format(\n method,\n source_file,\n source_file[1]\n ))\n\n if root.children:\n traverse(root.children, parent + '/' + root.raw_segment, verbose)\n\n\ndef main():\n \"\"\"\n Main entrypoint.\n \"\"\"\n import argparse\n\n parser = argparse.ArgumentParser(\n description='Example: print-api-routes myprogram:app')\n parser.add_argument(\n '-v', '--verbose', action='store_true',\n help='Prints out information for each method.')\n parser.add_argument(\n 'api_module',\n help='The module and api to inspect. 
Example: myapp.somemodule:api',\n )\n args = parser.parse_args()\n\n try:\n module, instance = args.api_module.split(':', 1)\n except ValueError:\n parser.error(\n 'The api_module must include a colon between '\n 'the module and instnace')\n api = getattr(__import__(module, fromlist=[True]), instance)\n if not isinstance(api, falcon.API):\n if callable(api):\n api = api()\n if not isinstance(api, falcon.API):\n parser.error(\n '{0} did not return a falcon.API instance'.format(\n args.api_module))\n else:\n parser.error(\n 'The instance must be of falcon.API or be '\n 'a callable without args that returns falcon.API')\n print_routes(api, verbose=args.verbose)\n\n\nif __name__ == '__main__':\n main()\n", "path": "falcon/cmd/print_routes.py"}]}
| 1,862 | 336 |
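
The patch above guards the source lookup because responders coming from cythonized modules appear as built-in functions, for which `inspect` cannot resolve a source file and raises `TypeError`. A minimal standalone sketch of that guarded lookup, with an illustrative function name:

```python
import inspect


def describe_responder(func):
    """Best-effort 'file:line' label for a responder (sketch of the patched logic)."""
    try:
        source_file = inspect.getsourcefile(func)
        _, first_line = inspect.getsourcelines(func)
        return '{}:{}'.format(source_file, first_line)
    except TypeError:
        # Cythonized modules expose their functions as built-ins, and
        # inspect cannot locate a source file for those.
        return '[unknown file]'


if __name__ == '__main__':
    print(describe_responder(describe_responder))  # a plain Python function
    print(describe_responder(len))                 # a built-in raises TypeError
```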
gh_patches_debug_10000
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-5290
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] KeyValueStore fails to check the cached data with new data
Typo bug.
```
def store_value_bytes(self, key: str, data: bytes) -> str:
...
if cache_value_file_path.exists():
old_data = cache_value_file_path.write_bytes()
...
```
should be:
```
def store_value_bytes(self, key: str, data: bytes) -> str:
...
if cache_value_file_path.exists():
old_data = cache_value_file_path.read_bytes()
...
```
</issue>
<code>
[start of sdk/python/kfp/components/_key_value_store.py]
1 import hashlib
2 from pathlib import Path
3
4
5 class KeyValueStore:
6 KEY_FILE_SUFFIX = '.key'
7 VALUE_FILE_SUFFIX = '.value'
8
9 def __init__(
10 self,
11 cache_dir: str,
12 ):
13 cache_dir = Path(cache_dir)
14 hash_func = (lambda text: hashlib.sha256(text.encode('utf-8')).hexdigest())
15 self.cache_dir = cache_dir
16 self.hash_func = hash_func
17
18 def store_value_text(self, key: str, text: str) -> str:
19 return self.store_value_bytes(key, text.encode('utf-8'))
20
21 def store_value_bytes(self, key: str, data: bytes) -> str:
22 cache_id = self.hash_func(key)
23 self.cache_dir.mkdir(parents=True, exist_ok=True)
24 cache_key_file_path = self.cache_dir / (cache_id + KeyValueStore.KEY_FILE_SUFFIX)
25 cache_value_file_path = self.cache_dir / (cache_id + KeyValueStore.VALUE_FILE_SUFFIX)
26 if cache_key_file_path.exists():
27 old_key = cache_key_file_path.read_text()
28 if key != old_key:
29 raise RuntimeError(
30 'Cache is corrupted: File "{}" contains existing key '
31 '"{}" != new key "{}"'.format(cache_key_file_path, old_key, key)
32 )
33 if cache_value_file_path.exists():
34 old_data = cache_value_file_path.write_bytes()
35 if data != old_data:
36 # TODO: Add options to raise error when overwriting the value.
37 pass
38 cache_value_file_path.write_bytes(data)
39 cache_key_file_path.write_text(key)
40 return cache_id
41
42 def try_get_value_text(self, key: str) -> str:
43 result = self.try_get_value_bytes(key)
44 if result is None:
45 return None
46 return result.decode('utf-8')
47
48 def try_get_value_bytes(self, key: str) -> bytes:
49 cache_id = self.hash_func(key)
50 cache_value_file_path = self.cache_dir / (cache_id + KeyValueStore.VALUE_FILE_SUFFIX)
51 if cache_value_file_path.exists():
52 return cache_value_file_path.read_bytes()
53 return None
54
55 def exists(self, key: str) -> bool:
56 cache_id = self.hash_func(key)
57 cache_key_file_path = self.cache_dir / (cache_id + KeyValueStore.KEY_FILE_SUFFIX)
58 return cache_key_file_path.exists()
59
60 def keys(self):
61 for cache_key_file_path in self.cache_dir.glob('*' + KeyValueStore.KEY_FILE_SUFFIX):
62 yield Path(cache_key_file_path).read_text()
63
[end of sdk/python/kfp/components/_key_value_store.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sdk/python/kfp/components/_key_value_store.py b/sdk/python/kfp/components/_key_value_store.py
--- a/sdk/python/kfp/components/_key_value_store.py
+++ b/sdk/python/kfp/components/_key_value_store.py
@@ -31,7 +31,7 @@
'"{}" != new key "{}"'.format(cache_key_file_path, old_key, key)
)
if cache_value_file_path.exists():
- old_data = cache_value_file_path.write_bytes()
+ old_data = cache_value_file_path.read_bytes()
if data != old_data:
# TODO: Add options to raise error when overwriting the value.
pass
|
{"golden_diff": "diff --git a/sdk/python/kfp/components/_key_value_store.py b/sdk/python/kfp/components/_key_value_store.py\n--- a/sdk/python/kfp/components/_key_value_store.py\n+++ b/sdk/python/kfp/components/_key_value_store.py\n@@ -31,7 +31,7 @@\n '\"{}\" != new key \"{}\"'.format(cache_key_file_path, old_key, key)\n )\n if cache_value_file_path.exists():\n- old_data = cache_value_file_path.write_bytes()\n+ old_data = cache_value_file_path.read_bytes()\n if data != old_data:\n # TODO: Add options to raise error when overwriting the value.\n pass\n", "issue": "[Bug] KeyValueStore fails to check the cached data with new data\nTypo bug.\r\n\r\n```\r\ndef store_value_bytes(self, key: str, data: bytes) -> str:\r\n ... \r\n if cache_value_file_path.exists():\r\n old_data = cache_value_file_path.write_bytes()\r\n ... \r\n```\r\nshould be:\r\n```\r\ndef store_value_bytes(self, key: str, data: bytes) -> str:\r\n ... \r\n if cache_value_file_path.exists():\r\n old_data = cache_value_file_path.read_bytes()\r\n ... \r\n```\n", "before_files": [{"content": "import hashlib\nfrom pathlib import Path\n\n\nclass KeyValueStore:\n KEY_FILE_SUFFIX = '.key'\n VALUE_FILE_SUFFIX = '.value'\n\n def __init__(\n self,\n cache_dir: str,\n ):\n cache_dir = Path(cache_dir)\n hash_func = (lambda text: hashlib.sha256(text.encode('utf-8')).hexdigest())\n self.cache_dir = cache_dir\n self.hash_func = hash_func\n\n def store_value_text(self, key: str, text: str) -> str:\n return self.store_value_bytes(key, text.encode('utf-8'))\n\n def store_value_bytes(self, key: str, data: bytes) -> str:\n cache_id = self.hash_func(key)\n self.cache_dir.mkdir(parents=True, exist_ok=True)\n cache_key_file_path = self.cache_dir / (cache_id + KeyValueStore.KEY_FILE_SUFFIX)\n cache_value_file_path = self.cache_dir / (cache_id + KeyValueStore.VALUE_FILE_SUFFIX)\n if cache_key_file_path.exists():\n old_key = cache_key_file_path.read_text()\n if key != old_key:\n raise RuntimeError(\n 'Cache is corrupted: File \"{}\" contains existing key '\n '\"{}\" != new key \"{}\"'.format(cache_key_file_path, old_key, key)\n )\n if cache_value_file_path.exists():\n old_data = cache_value_file_path.write_bytes()\n if data != old_data:\n # TODO: Add options to raise error when overwriting the value.\n pass\n cache_value_file_path.write_bytes(data)\n cache_key_file_path.write_text(key)\n return cache_id\n\n def try_get_value_text(self, key: str) -> str:\n result = self.try_get_value_bytes(key)\n if result is None:\n return None\n return result.decode('utf-8')\n\n def try_get_value_bytes(self, key: str) -> bytes:\n cache_id = self.hash_func(key)\n cache_value_file_path = self.cache_dir / (cache_id + KeyValueStore.VALUE_FILE_SUFFIX)\n if cache_value_file_path.exists():\n return cache_value_file_path.read_bytes()\n return None\n\n def exists(self, key: str) -> bool:\n cache_id = self.hash_func(key)\n cache_key_file_path = self.cache_dir / (cache_id + KeyValueStore.KEY_FILE_SUFFIX)\n return cache_key_file_path.exists()\n\n def keys(self):\n for cache_key_file_path in self.cache_dir.glob('*' + KeyValueStore.KEY_FILE_SUFFIX):\n yield Path(cache_key_file_path).read_text()\n", "path": "sdk/python/kfp/components/_key_value_store.py"}]}
| 1,315 | 144 |
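
The fix above is a one-word change from `write_bytes()` to `read_bytes()`. The sketch below shows the intended compare-then-write behaviour with `pathlib` in isolation; the helper name and the temporary-directory demo are illustrative only.

```python
import tempfile
from pathlib import Path


def store_bytes(path: Path, data: bytes) -> bool:
    """Write data, reporting whether an existing file held different bytes."""
    changed = False
    if path.exists():
        # read_bytes() returns the previously cached payload; the buggy code
        # called write_bytes() here, which does not read anything back.
        old_data = path.read_bytes()
        changed = old_data != data
    path.write_bytes(data)
    return changed


if __name__ == '__main__':
    with tempfile.TemporaryDirectory() as tmp:
        target = Path(tmp) / 'cache.value'
        print(store_bytes(target, b'first'))   # False: nothing cached yet
        print(store_bytes(target, b'first'))   # False: identical payload
        print(store_bytes(target, b'second'))  # True: cached bytes differ
```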
gh_patches_debug_40521
|
rasdani/github-patches
|
git_diff
|
web2py__web2py-2059
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Extend RedisCache() constructor to allow sharing cache between applications
Sometimes I have a couple of web2py applications that share the models through symlinks. I use Redis for caching, and in that scenario, I also need to share the cache between those web2py applications.
Currently the RedisCache() constructor doesn't have an argument to specify the application name. Instead, it always uses request.application.
I think the solution could be to extend the RedisCache() constructor to be able to optionally provide an application name. I have already made this fix in my local environment and tests did ok. I'll make a pull request in the next minutes so you can check it.
Extend RedisCache() constructor to allow sharing cache between applications
Sometimes I have a couple of web2py applications that share the models through symlinks. I use Redis for caching, and in that scenario, I also need to share the cache between those web2py applications.
Currently the RedisCache() constructor doesn't have an argument to specify the application name. Instead, it always uses request.application.
I think the solution could be to extend the RedisCache() constructor to be able to optionally provide an application name. I have already made this fix in my local environment and tests did ok. I'll make a pull request in the next minutes so you can check it.
</issue>
<code>
[start of gluon/contrib/redis_cache.py]
1 """
2 Developed by [email protected]
3 Released under web2py license because includes gluon/cache.py source code
4 """
5
6 try:
7 import cPickle as pickle
8 except:
9 import pickle
10 import time
11 import re
12 import logging
13 from threading import Lock
14 import random
15 from gluon import current
16 from gluon.cache import CacheAbstract
17 from gluon.contrib.redis_utils import acquire_lock, release_lock
18 from gluon.contrib.redis_utils import register_release_lock, RConnectionError
19
20 logger = logging.getLogger("web2py.cache.redis")
21
22 locker = Lock()
23
24
25 def RedisCache(redis_conn=None, debug=False, with_lock=False, fail_gracefully=False, db=None, application=None):
26 """
27 Usage example: put in models::
28
29 First of all install Redis
30 Ubuntu :
31 sudo apt-get install redis-server
32 sudo pip install redis
33
34 Then
35
36 from gluon.contrib.redis_utils import RConn
37 rconn = RConn()
38 from gluon.contrib.redis_cache import RedisCache
39 cache.redis = RedisCache(redis_conn=rconn, debug=True, with_lock=True)
40
41 Args:
42 redis_conn: a redis-like connection object
43 debug: if True adds to stats() the total_hits and misses
44 with_lock: sets the default locking mode for creating new keys.
45 By default is False (usualy when you choose Redis you do it
46 for performances reason)
47 When True, only one thread/process can set a value concurrently
48 fail_gracefully: if redis is unavailable, returns the value computing it
49 instead of raising an exception
50
51 It can be used pretty much the same as cache.ram()
52 When you use cache.redis directly you can use :
53
54 redis_key_and_var_name = cache.redis('redis_key_and_var_name', lambda or function,
55 time_expire=time.time(), with_lock=True)
56
57 to enforce locking. The with_lock parameter overrides the one set in the
58 cache.redis instance creation
59
60 cache.redis.stats()
61 returns a dictionary with statistics of Redis server
62 with one additional key ('w2p_keys') showing all keys currently set
63 from web2py with their TTL
64
65 A little wording on how keys are stored (and why the cache_it() function
66 and the clear() one look a little bit convoluted): there are a lot of
67 libraries that just store values and then use the KEYS command to delete it.
68 Until recent releases of this module, that technique was used here too.
69 In the need of deleting specific keys in a database with zillions keys in it
70 (other web2py apps, other applications in the need of a Redis stack) the
71 KEYS command is slow (it needs to scan every key in the database).
72 So, we use Redis 'sets' to store keys in "buckets"...
73 - every key created gets "indexed" in a bucket
74 - all buckets are indexed in a fixed key that never expires
75 - all keys generated within the same minute go in the same bucket
76 - every bucket is then set to expire when every key within it is expired
77 When we need to clear() cached keys:
78 - we tell Redis to SUNION all buckets
79 - gives us just the keys that are not expired yet
80 - buckets that are expired are removed from the fixed set
81 - we scan the keys and then delete them
82 """
83
84 locker.acquire()
85 try:
86 instance_name = 'redis_instance_' + (application or current.request.application)
87 if not hasattr(RedisCache, instance_name):
88 setattr(RedisCache, instance_name,
89 RedisClient(redis_conn=redis_conn, debug=debug,
90 with_lock=with_lock, fail_gracefully=fail_gracefully))
91 return getattr(RedisCache, instance_name)
92 finally:
93 locker.release()
94
95
96 class RedisClient(object):
97
98 meta_storage = {}
99 MAX_RETRIES = 5
100 RETRIES = 0
101
102 def __init__(self, redis_conn=None, debug=False,
103 with_lock=False, fail_gracefully=False):
104 self.request = current.request
105 self.debug = debug
106 self.with_lock = with_lock
107 self.fail_gracefully = fail_gracefully
108 self.prefix = "w2p:cache:%s:" % self.request.application
109 if self.request:
110 app = self.request.application
111 else:
112 app = ''
113
114 if app not in self.meta_storage:
115 self.storage = self.meta_storage[app] = {
116 CacheAbstract.cache_stats_name: {
117 'hit_total': 0,
118 'misses': 0,
119 }}
120 else:
121 self.storage = self.meta_storage[app]
122
123 self.cache_set_key = 'w2p:%s:___cache_set' % self.request.application
124
125 self.r_server = redis_conn
126 self._release_script = register_release_lock(self.r_server)
127
128 def initialize(self):
129 pass
130
131 def __call__(self, key, f, time_expire=300, with_lock=None):
132 if with_lock is None:
133 with_lock = self.with_lock
134 if time_expire is None:
135 time_expire = 24 * 60 * 60
136 newKey = self.__keyFormat__(key)
137 value = None
138 ttl = 0
139 try:
140 if f is None:
141 # delete and never look back
142 self.r_server.delete(newKey)
143 return None
144 # is there a value
145 obj = self.r_server.get(newKey)
146 # what's its ttl
147 if obj:
148 ttl = self.r_server.ttl(newKey)
149 if ttl > time_expire:
150 obj = None
151 if obj:
152 # was cached
153 if self.debug:
154 self.r_server.incr('web2py_cache_statistics:hit_total')
155 value = pickle.loads(obj)
156 else:
157 # naive distributed locking
158 if with_lock:
159 lock_key = '%s:__lock' % newKey
160 randomvalue = time.time()
161 al = acquire_lock(self.r_server, lock_key, randomvalue)
162 try:
163 # someone may have computed it
164 obj = self.r_server.get(newKey)
165 if obj is None:
166 value = self.cache_it(newKey, f, time_expire)
167 else:
168 value = pickle.loads(obj)
169 finally:
170 release_lock(self, lock_key, al)
171 else:
172 # without distributed locking
173 value = self.cache_it(newKey, f, time_expire)
174 return value
175 except RConnectionError:
176 return self.retry_call(key, f, time_expire, with_lock)
177
178 def cache_it(self, key, f, time_expire):
179 if self.debug:
180 self.r_server.incr('web2py_cache_statistics:misses')
181 cache_set_key = self.cache_set_key
182 expire_at = int(time.time() + time_expire) + 120
183 bucket_key = "%s:%s" % (cache_set_key, expire_at // 60)
184 value = f()
185 value_ = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)
186 if time_expire == 0:
187 time_expire = 1
188 self.r_server.setex(key, time_expire, value_)
189 # print '%s will expire on %s: it goes in bucket %s' % (key, time.ctime(expire_at))
190 # print 'that will expire on %s' % (bucket_key, time.ctime(((expire_at / 60) + 1) * 60))
191 p = self.r_server.pipeline()
192 # add bucket to the fixed set
193 p.sadd(cache_set_key, bucket_key)
194 # sets the key
195 p.setex(key, time_expire, value_)
196 # add the key to the bucket
197 p.sadd(bucket_key, key)
198 # expire the bucket properly
199 p.expireat(bucket_key, ((expire_at // 60) + 1) * 60)
200 p.execute()
201 return value
202
203 def retry_call(self, key, f, time_expire, with_lock):
204 self.RETRIES += 1
205 if self.RETRIES <= self.MAX_RETRIES:
206 logger.error("sleeping %s seconds before reconnecting" % (2 * self.RETRIES))
207 time.sleep(2 * self.RETRIES)
208 if self.fail_gracefully:
209 self.RETRIES = 0
210 return f()
211 return self.__call__(key, f, time_expire, with_lock)
212 else:
213 self.RETRIES = 0
214 if self.fail_gracefully:
215 return f
216 raise RConnectionError('Redis instance is unavailable')
217
218 def increment(self, key, value=1):
219 try:
220 newKey = self.__keyFormat__(key)
221 return self.r_server.incr(newKey, value)
222 except RConnectionError:
223 return self.retry_increment(key, value)
224
225 def retry_increment(self, key, value):
226 self.RETRIES += 1
227 if self.RETRIES <= self.MAX_RETRIES:
228 logger.error("sleeping some seconds before reconnecting")
229 time.sleep(2 * self.RETRIES)
230 return self.increment(key, value)
231 else:
232 self.RETRIES = 0
233 raise RConnectionError('Redis instance is unavailable')
234
235 def clear(self, regex):
236 """
237 Auxiliary function called by `clear` to search and
238 clear cache entries
239 """
240 r = re.compile(regex)
241 # get all buckets
242 buckets = self.r_server.smembers(self.cache_set_key)
243 # get all keys in buckets
244 if buckets:
245 keys = self.r_server.sunion(buckets)
246 else:
247 return
248 prefix = self.prefix
249 pipe = self.r_server.pipeline()
250 for a in keys:
251 if r.match(str(a).replace(prefix, '', 1)):
252 pipe.delete(a)
253 if random.randrange(0, 100) < 10:
254 # do this just once in a while (10% chance)
255 self.clear_buckets(buckets)
256 pipe.execute()
257
258 def clear_buckets(self, buckets):
259 p = self.r_server.pipeline()
260 for b in buckets:
261 if not self.r_server.exists(b):
262 p.srem(self.cache_set_key, b)
263 p.execute()
264
265 def delete(self, key):
266 newKey = self.__keyFormat__(key)
267 return self.r_server.delete(newKey)
268
269 def stats(self):
270 stats_collector = self.r_server.info()
271 if self.debug:
272 stats_collector['w2p_stats'] = dict(
273 hit_total=self.r_server.get(
274 'web2py_cache_statistics:hit_total'),
275 misses=self.r_server.get('web2py_cache_statistics:misses')
276 )
277 stats_collector['w2p_keys'] = dict()
278
279 for a in self.r_server.keys("w2p:%s:*" % (
280 self.request.application)):
281 stats_collector['w2p_keys']["%s_expire_in_sec" % a] = self.r_server.ttl(a)
282 return stats_collector
283
284 def __keyFormat__(self, key):
285 return '%s%s' % (self.prefix, key.replace(' ', '_'))
286
[end of gluon/contrib/redis_cache.py]
</code>
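
The bucket scheme described in the module docstring above can be summarised in isolation. The sketch below assumes a redis-py style connection object passed in as `r` and uses hypothetical helper names; it is not part of the web2py source.

```python
import time

# Assumes the redis-py package and a reachable Redis server; `r` would be
# something like redis.StrictRedis() in a real setup.


def cache_with_bucket(r, cache_set_key, key, value, time_expire):
    """Set a key and index it in a per-minute bucket, as the docstring describes."""
    expire_at = int(time.time() + time_expire) + 120
    bucket_key = '%s:%s' % (cache_set_key, expire_at // 60)
    p = r.pipeline()
    p.sadd(cache_set_key, bucket_key)      # the fixed set of buckets never expires
    p.setex(key, time_expire, value)       # the cached value itself
    p.sadd(bucket_key, key)                # index the key in its bucket
    p.expireat(bucket_key, ((expire_at // 60) + 1) * 60)  # bucket outlives its keys
    p.execute()


def live_cached_keys(r, cache_set_key):
    """SUNION of the buckets returns only keys that can still be alive."""
    buckets = r.smembers(cache_set_key)
    return r.sunion(buckets) if buckets else set()
```

Clearing the cache then only has to scan the keys returned by `live_cached_keys()` instead of issuing a database-wide KEYS command, which is the design choice the docstring motivates.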
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gluon/contrib/redis_cache.py b/gluon/contrib/redis_cache.py
--- a/gluon/contrib/redis_cache.py
+++ b/gluon/contrib/redis_cache.py
@@ -47,6 +47,8 @@
When True, only one thread/process can set a value concurrently
fail_gracefully: if redis is unavailable, returns the value computing it
instead of raising an exception
+ application: if provided, it is used to construct the instance_name,
+ allowing to share cache between different applications if needed
It can be used pretty much the same as cache.ram()
When you use cache.redis directly you can use :
@@ -83,11 +85,14 @@
locker.acquire()
try:
- instance_name = 'redis_instance_' + (application or current.request.application)
+ if not application:
+ application = current.request.application
+ instance_name = 'redis_instance_' + application
if not hasattr(RedisCache, instance_name):
setattr(RedisCache, instance_name,
RedisClient(redis_conn=redis_conn, debug=debug,
- with_lock=with_lock, fail_gracefully=fail_gracefully))
+ with_lock=with_lock, fail_gracefully=fail_gracefully,
+ application=application))
return getattr(RedisCache, instance_name)
finally:
locker.release()
@@ -100,14 +105,15 @@
RETRIES = 0
def __init__(self, redis_conn=None, debug=False,
- with_lock=False, fail_gracefully=False):
+ with_lock=False, fail_gracefully=False, application=None):
self.request = current.request
self.debug = debug
self.with_lock = with_lock
self.fail_gracefully = fail_gracefully
- self.prefix = "w2p:cache:%s:" % self.request.application
+ self.application = application
+ self.prefix = "w2p:cache:%s:" % application
if self.request:
- app = self.request.application
+ app = application
else:
app = ''
@@ -120,7 +126,7 @@
else:
self.storage = self.meta_storage[app]
- self.cache_set_key = 'w2p:%s:___cache_set' % self.request.application
+ self.cache_set_key = 'w2p:%s:___cache_set' % application
self.r_server = redis_conn
self._release_script = register_release_lock(self.r_server)
@@ -276,8 +282,7 @@
)
stats_collector['w2p_keys'] = dict()
- for a in self.r_server.keys("w2p:%s:*" % (
- self.request.application)):
+ for a in self.r_server.keys("w2p:%s:*" % self.application):
stats_collector['w2p_keys']["%s_expire_in_sec" % a] = self.r_server.ttl(a)
return stats_collector
|
{"golden_diff": "diff --git a/gluon/contrib/redis_cache.py b/gluon/contrib/redis_cache.py\n--- a/gluon/contrib/redis_cache.py\n+++ b/gluon/contrib/redis_cache.py\n@@ -47,6 +47,8 @@\n When True, only one thread/process can set a value concurrently\n fail_gracefully: if redis is unavailable, returns the value computing it\n instead of raising an exception\n+ application: if provided, it is used to construct the instance_name,\n+ allowing to share cache between different applications if needed\n \n It can be used pretty much the same as cache.ram()\n When you use cache.redis directly you can use :\n@@ -83,11 +85,14 @@\n \n locker.acquire()\n try:\n- instance_name = 'redis_instance_' + (application or current.request.application)\n+ if not application:\n+ application = current.request.application\n+ instance_name = 'redis_instance_' + application\n if not hasattr(RedisCache, instance_name):\n setattr(RedisCache, instance_name,\n RedisClient(redis_conn=redis_conn, debug=debug,\n- with_lock=with_lock, fail_gracefully=fail_gracefully))\n+ with_lock=with_lock, fail_gracefully=fail_gracefully,\n+ application=application))\n return getattr(RedisCache, instance_name)\n finally:\n locker.release()\n@@ -100,14 +105,15 @@\n RETRIES = 0\n \n def __init__(self, redis_conn=None, debug=False,\n- with_lock=False, fail_gracefully=False):\n+ with_lock=False, fail_gracefully=False, application=None):\n self.request = current.request\n self.debug = debug\n self.with_lock = with_lock\n self.fail_gracefully = fail_gracefully\n- self.prefix = \"w2p:cache:%s:\" % self.request.application\n+ self.application = application\n+ self.prefix = \"w2p:cache:%s:\" % application\n if self.request:\n- app = self.request.application\n+ app = application\n else:\n app = ''\n \n@@ -120,7 +126,7 @@\n else:\n self.storage = self.meta_storage[app]\n \n- self.cache_set_key = 'w2p:%s:___cache_set' % self.request.application\n+ self.cache_set_key = 'w2p:%s:___cache_set' % application\n \n self.r_server = redis_conn\n self._release_script = register_release_lock(self.r_server)\n@@ -276,8 +282,7 @@\n )\n stats_collector['w2p_keys'] = dict()\n \n- for a in self.r_server.keys(\"w2p:%s:*\" % (\n- self.request.application)):\n+ for a in self.r_server.keys(\"w2p:%s:*\" % self.application):\n stats_collector['w2p_keys'][\"%s_expire_in_sec\" % a] = self.r_server.ttl(a)\n return stats_collector\n", "issue": "Extend RedisCache() constructor to allow sharing cache between applications\nSometimes I have a couple of web2py applications that share the models through symlinks. I use Redis for caching, and in that scenario, I also need to share the cache between those web2py applications.\r\n\r\nCurrently the RedisCache() constructor doesn't have an argument to specify the application name. Instead, it always uses request.application.\r\n\r\nI think the solution could be to extend the RedisCache() constructor to be able to optionally provide an application name. I have already made this fix in my local environment and tests did ok. I'll make a pull request in the next minutes so you can check it.\r\n\nExtend RedisCache() constructor to allow sharing cache between applications\nSometimes I have a couple of web2py applications that share the models through symlinks. I use Redis for caching, and in that scenario, I also need to share the cache between those web2py applications.\r\n\r\nCurrently the RedisCache() constructor doesn't have an argument to specify the application name. 
Instead, it always uses request.application.\r\n\r\nI think the solution could be to extend the RedisCache() constructor to be able to optionally provide an application name. I have already made this fix in my local environment and tests did ok. I'll make a pull request in the next minutes so you can check it.\r\n\n", "before_files": [{"content": "\"\"\"\nDeveloped by [email protected]\nReleased under web2py license because includes gluon/cache.py source code\n\"\"\"\n\ntry:\n import cPickle as pickle\nexcept:\n import pickle\nimport time\nimport re\nimport logging\nfrom threading import Lock\nimport random\nfrom gluon import current\nfrom gluon.cache import CacheAbstract\nfrom gluon.contrib.redis_utils import acquire_lock, release_lock\nfrom gluon.contrib.redis_utils import register_release_lock, RConnectionError\n\nlogger = logging.getLogger(\"web2py.cache.redis\")\n\nlocker = Lock()\n\n\ndef RedisCache(redis_conn=None, debug=False, with_lock=False, fail_gracefully=False, db=None, application=None):\n \"\"\"\n Usage example: put in models::\n\n First of all install Redis\n Ubuntu :\n sudo apt-get install redis-server\n sudo pip install redis\n\n Then\n\n from gluon.contrib.redis_utils import RConn\n rconn = RConn()\n from gluon.contrib.redis_cache import RedisCache\n cache.redis = RedisCache(redis_conn=rconn, debug=True, with_lock=True)\n\n Args:\n redis_conn: a redis-like connection object\n debug: if True adds to stats() the total_hits and misses\n with_lock: sets the default locking mode for creating new keys.\n By default is False (usualy when you choose Redis you do it\n for performances reason)\n When True, only one thread/process can set a value concurrently\n fail_gracefully: if redis is unavailable, returns the value computing it\n instead of raising an exception\n\n It can be used pretty much the same as cache.ram()\n When you use cache.redis directly you can use :\n\n redis_key_and_var_name = cache.redis('redis_key_and_var_name', lambda or function,\n time_expire=time.time(), with_lock=True)\n\n to enforce locking. 
The with_lock parameter overrides the one set in the\n cache.redis instance creation\n\n cache.redis.stats()\n returns a dictionary with statistics of Redis server\n with one additional key ('w2p_keys') showing all keys currently set\n from web2py with their TTL\n\n A little wording on how keys are stored (and why the cache_it() function\n and the clear() one look a little bit convoluted): there are a lot of\n libraries that just store values and then use the KEYS command to delete it.\n Until recent releases of this module, that technique was used here too.\n In the need of deleting specific keys in a database with zillions keys in it\n (other web2py apps, other applications in the need of a Redis stack) the\n KEYS command is slow (it needs to scan every key in the database).\n So, we use Redis 'sets' to store keys in \"buckets\"...\n - every key created gets \"indexed\" in a bucket\n - all buckets are indexed in a fixed key that never expires\n - all keys generated within the same minute go in the same bucket\n - every bucket is then set to expire when every key within it is expired\n When we need to clear() cached keys:\n - we tell Redis to SUNION all buckets\n - gives us just the keys that are not expired yet\n - buckets that are expired are removed from the fixed set\n - we scan the keys and then delete them\n \"\"\"\n\n locker.acquire()\n try:\n instance_name = 'redis_instance_' + (application or current.request.application)\n if not hasattr(RedisCache, instance_name):\n setattr(RedisCache, instance_name,\n RedisClient(redis_conn=redis_conn, debug=debug,\n with_lock=with_lock, fail_gracefully=fail_gracefully))\n return getattr(RedisCache, instance_name)\n finally:\n locker.release()\n\n\nclass RedisClient(object):\n\n meta_storage = {}\n MAX_RETRIES = 5\n RETRIES = 0\n\n def __init__(self, redis_conn=None, debug=False,\n with_lock=False, fail_gracefully=False):\n self.request = current.request\n self.debug = debug\n self.with_lock = with_lock\n self.fail_gracefully = fail_gracefully\n self.prefix = \"w2p:cache:%s:\" % self.request.application\n if self.request:\n app = self.request.application\n else:\n app = ''\n\n if app not in self.meta_storage:\n self.storage = self.meta_storage[app] = {\n CacheAbstract.cache_stats_name: {\n 'hit_total': 0,\n 'misses': 0,\n }}\n else:\n self.storage = self.meta_storage[app]\n\n self.cache_set_key = 'w2p:%s:___cache_set' % self.request.application\n\n self.r_server = redis_conn\n self._release_script = register_release_lock(self.r_server)\n\n def initialize(self):\n pass\n\n def __call__(self, key, f, time_expire=300, with_lock=None):\n if with_lock is None:\n with_lock = self.with_lock\n if time_expire is None:\n time_expire = 24 * 60 * 60\n newKey = self.__keyFormat__(key)\n value = None\n ttl = 0\n try:\n if f is None:\n # delete and never look back\n self.r_server.delete(newKey)\n return None\n # is there a value\n obj = self.r_server.get(newKey)\n # what's its ttl\n if obj:\n ttl = self.r_server.ttl(newKey)\n if ttl > time_expire:\n obj = None\n if obj:\n # was cached\n if self.debug:\n self.r_server.incr('web2py_cache_statistics:hit_total')\n value = pickle.loads(obj)\n else:\n # naive distributed locking\n if with_lock:\n lock_key = '%s:__lock' % newKey\n randomvalue = time.time()\n al = acquire_lock(self.r_server, lock_key, randomvalue)\n try:\n # someone may have computed it\n obj = self.r_server.get(newKey)\n if obj is None:\n value = self.cache_it(newKey, f, time_expire)\n else:\n value = pickle.loads(obj)\n finally:\n 
release_lock(self, lock_key, al)\n else:\n # without distributed locking\n value = self.cache_it(newKey, f, time_expire)\n return value\n except RConnectionError:\n return self.retry_call(key, f, time_expire, with_lock)\n\n def cache_it(self, key, f, time_expire):\n if self.debug:\n self.r_server.incr('web2py_cache_statistics:misses')\n cache_set_key = self.cache_set_key\n expire_at = int(time.time() + time_expire) + 120\n bucket_key = \"%s:%s\" % (cache_set_key, expire_at // 60)\n value = f()\n value_ = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)\n if time_expire == 0:\n time_expire = 1\n self.r_server.setex(key, time_expire, value_)\n # print '%s will expire on %s: it goes in bucket %s' % (key, time.ctime(expire_at))\n # print 'that will expire on %s' % (bucket_key, time.ctime(((expire_at / 60) + 1) * 60))\n p = self.r_server.pipeline()\n # add bucket to the fixed set\n p.sadd(cache_set_key, bucket_key)\n # sets the key\n p.setex(key, time_expire, value_)\n # add the key to the bucket\n p.sadd(bucket_key, key)\n # expire the bucket properly\n p.expireat(bucket_key, ((expire_at // 60) + 1) * 60)\n p.execute()\n return value\n\n def retry_call(self, key, f, time_expire, with_lock):\n self.RETRIES += 1\n if self.RETRIES <= self.MAX_RETRIES:\n logger.error(\"sleeping %s seconds before reconnecting\" % (2 * self.RETRIES))\n time.sleep(2 * self.RETRIES)\n if self.fail_gracefully:\n self.RETRIES = 0\n return f()\n return self.__call__(key, f, time_expire, with_lock)\n else:\n self.RETRIES = 0\n if self.fail_gracefully:\n return f\n raise RConnectionError('Redis instance is unavailable')\n\n def increment(self, key, value=1):\n try:\n newKey = self.__keyFormat__(key)\n return self.r_server.incr(newKey, value)\n except RConnectionError:\n return self.retry_increment(key, value)\n\n def retry_increment(self, key, value):\n self.RETRIES += 1\n if self.RETRIES <= self.MAX_RETRIES:\n logger.error(\"sleeping some seconds before reconnecting\")\n time.sleep(2 * self.RETRIES)\n return self.increment(key, value)\n else:\n self.RETRIES = 0\n raise RConnectionError('Redis instance is unavailable')\n\n def clear(self, regex):\n \"\"\"\n Auxiliary function called by `clear` to search and\n clear cache entries\n \"\"\"\n r = re.compile(regex)\n # get all buckets\n buckets = self.r_server.smembers(self.cache_set_key)\n # get all keys in buckets\n if buckets:\n keys = self.r_server.sunion(buckets)\n else:\n return\n prefix = self.prefix\n pipe = self.r_server.pipeline()\n for a in keys:\n if r.match(str(a).replace(prefix, '', 1)):\n pipe.delete(a)\n if random.randrange(0, 100) < 10:\n # do this just once in a while (10% chance)\n self.clear_buckets(buckets)\n pipe.execute()\n\n def clear_buckets(self, buckets):\n p = self.r_server.pipeline()\n for b in buckets:\n if not self.r_server.exists(b):\n p.srem(self.cache_set_key, b)\n p.execute()\n\n def delete(self, key):\n newKey = self.__keyFormat__(key)\n return self.r_server.delete(newKey)\n\n def stats(self):\n stats_collector = self.r_server.info()\n if self.debug:\n stats_collector['w2p_stats'] = dict(\n hit_total=self.r_server.get(\n 'web2py_cache_statistics:hit_total'),\n misses=self.r_server.get('web2py_cache_statistics:misses')\n )\n stats_collector['w2p_keys'] = dict()\n\n for a in self.r_server.keys(\"w2p:%s:*\" % (\n self.request.application)):\n stats_collector['w2p_keys'][\"%s_expire_in_sec\" % a] = self.r_server.ttl(a)\n return stats_collector\n\n def __keyFormat__(self, key):\n return '%s%s' % (self.prefix, key.replace(' ', '_'))\n", "path": 
"gluon/contrib/redis_cache.py"}]}
| 3,958 | 678 |
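A minimal, framework-free sketch of the key-prefixing idea behind the record above: when the cache prefix is built from an explicit application name rather than the current request, two web2py apps that pass the same name read and write the same keys. The `PrefixedCache` class and its dict backend below are illustrative stand-ins, not web2py or redis-py APIs.

```python
# Sketch of the "shared prefix" idea; PrefixedCache and the dict backend are
# illustrative only and do not exist in web2py or redis-py.

class PrefixedCache:
    def __init__(self, backend, application):
        # Keys are namespaced per application, e.g. "w2p:cache:shop:user:42".
        self.backend = backend
        self.prefix = "w2p:cache:%s:" % application

    def set(self, key, value):
        self.backend[self.prefix + key] = value

    def get(self, key):
        return self.backend.get(self.prefix + key)


shared_store = {}                                          # stands in for one Redis database
app_a = PrefixedCache(shared_store, application="shop")
app_b = PrefixedCache(shared_store, application="shop")    # same name -> shared keys
app_c = PrefixedCache(shared_store, application="admin")   # different namespace

app_a.set("user:42", {"name": "Ada"})
print(app_b.get("user:42"))  # {'name': 'Ada'} - visible to the second app
print(app_c.get("user:42"))  # None - isolated by its own prefix
```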
gh_patches_debug_28123
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-903
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Send email from user management screen
<!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**:
- CTFd Version/Commit: 2.0.4
- Operating System: Ubuntu
- Web Browser and Version: Same in any (chrome, FF)
**What happened?**
In users management screen, I click on a user and click on the "Send message" button. It does nothing.
**What did you expect to happen?**
Send an email to that user.
**How to reproduce your issue**
**Any associated stack traces or error logs**
</issue>
<code>
[start of CTFd/api/v1/users.py]
1 from flask import session, request, abort
2 from flask_restplus import Namespace, Resource
3 from CTFd.models import db, Users, Solves, Awards, Fails, Tracking, Unlocks, Submissions, Notifications
4 from CTFd.utils.decorators import (
5 authed_only,
6 admins_only,
7 authed
8 )
9 from CTFd.cache import cache, clear_standings
10 from CTFd.utils.user import get_current_user, is_admin
11 from CTFd.utils.decorators.visibility import check_account_visibility, check_score_visibility
12
13 from CTFd.utils.config.visibility import (
14 accounts_visible,
15 challenges_visible,
16 registration_visible,
17 scores_visible
18 )
19
20 from CTFd.schemas.submissions import SubmissionSchema
21 from CTFd.schemas.awards import AwardSchema
22 from CTFd.schemas.users import UserSchema
23
24
25 users_namespace = Namespace('users', description="Endpoint to retrieve Users")
26
27
28 @users_namespace.route('')
29 class UserList(Resource):
30 @check_account_visibility
31 def get(self):
32 users = Users.query.filter_by(banned=False)
33 response = UserSchema(view='user', many=True).dump(users)
34
35 if response.errors:
36 return {
37 'success': False,
38 'errors': response.errors
39 }, 400
40
41 return {
42 'success': True,
43 'data': response.data
44 }
45
46 @admins_only
47 def post(self):
48 req = request.get_json()
49 schema = UserSchema('admin')
50 response = schema.load(req)
51
52 if response.errors:
53 return {
54 'success': False,
55 'errors': response.errors
56 }, 400
57
58 db.session.add(response.data)
59 db.session.commit()
60
61 clear_standings()
62
63 response = schema.dump(response.data)
64
65 return {
66 'success': True,
67 'data': response.data
68 }
69
70
71 @users_namespace.route('/<int:user_id>')
72 @users_namespace.param('user_id', "User ID")
73 class UserPublic(Resource):
74 @check_account_visibility
75 def get(self, user_id):
76 user = Users.query.filter_by(id=user_id).first_or_404()
77
78 response = UserSchema(
79 view=session.get('type', 'user')
80 ).dump(user)
81
82 if response.errors:
83 return {
84 'success': False,
85 'errors': response.errors
86 }, 400
87
88 response.data['place'] = user.place
89 response.data['score'] = user.score
90
91 return {
92 'success': True,
93 'data': response.data
94 }
95
96 @admins_only
97 def patch(self, user_id):
98 user = Users.query.filter_by(id=user_id).first_or_404()
99 data = request.get_json()
100 data['id'] = user_id
101 schema = UserSchema(view='admin', instance=user, partial=True)
102 response = schema.load(data)
103 if response.errors:
104 return {
105 'success': False,
106 'errors': response.errors
107 }, 400
108
109 db.session.commit()
110
111 response = schema.dump(response.data)
112
113 db.session.close()
114
115 clear_standings()
116
117 return {
118 'success': True,
119 'data': response
120 }
121
122 @admins_only
123 def delete(self, user_id):
124 Notifications.query.filter_by(user_id=user_id).delete()
125 Awards.query.filter_by(user_id=user_id).delete()
126 Unlocks.query.filter_by(user_id=user_id).delete()
127 Submissions.query.filter_by(user_id=user_id).delete()
128 Solves.query.filter_by(user_id=user_id).delete()
129 Tracking.query.filter_by(user_id=user_id).delete()
130 Users.query.filter_by(id=user_id).delete()
131 db.session.commit()
132 db.session.close()
133
134 clear_standings()
135
136 return {
137 'success': True
138 }
139
140
141 @users_namespace.route('/me')
142 class UserPrivate(Resource):
143 @authed_only
144 def get(self):
145 user = get_current_user()
146 response = UserSchema('self').dump(user).data
147 response['place'] = user.place
148 response['score'] = user.score
149 return {
150 'success': True,
151 'data': response
152 }
153
154 @authed_only
155 def patch(self):
156 user = get_current_user()
157 data = request.get_json()
158 schema = UserSchema(view='self', instance=user, partial=True)
159 response = schema.load(data)
160 if response.errors:
161 return {
162 'success': False,
163 'errors': response.errors
164 }, 400
165
166 db.session.commit()
167
168 response = schema.dump(response.data)
169 db.session.close()
170
171 clear_standings()
172
173 return {
174 'success': True,
175 'data': response.data
176 }
177
178
179 @users_namespace.route('/<user_id>/solves')
180 @users_namespace.param('user_id', "User ID or 'me'")
181 class UserSolves(Resource):
182 def get(self, user_id):
183 if user_id == 'me':
184 if not authed():
185 abort(403)
186 user = get_current_user()
187 else:
188 if accounts_visible() is False or scores_visible() is False:
189 abort(404)
190 user = Users.query.filter_by(id=user_id).first_or_404()
191
192 solves = user.get_solves(
193 admin=is_admin()
194 )
195 for solve in solves:
196 setattr(solve, 'value', 100)
197
198 view = 'user' if not is_admin() else 'admin'
199 response = SubmissionSchema(view=view, many=True).dump(solves)
200
201 if response.errors:
202 return {
203 'success': False,
204 'errors': response.errors
205 }, 400
206
207 return {
208 'success': True,
209 'data': response.data
210 }
211
212
213 @users_namespace.route('/<user_id>/fails')
214 @users_namespace.param('user_id', "User ID or 'me'")
215 class UserFails(Resource):
216 def get(self, user_id):
217 if user_id == 'me':
218 if not authed():
219 abort(403)
220 user = get_current_user()
221 else:
222 if accounts_visible() is False or scores_visible() is False:
223 abort(404)
224 user = Users.query.filter_by(id=user_id).first_or_404()
225
226 fails = user.get_fails(
227 admin=is_admin()
228 )
229
230 view = 'user' if not is_admin() else 'admin'
231 response = SubmissionSchema(view=view, many=True).dump(fails)
232 if response.errors:
233 return {
234 'success': False,
235 'errors': response.errors
236 }, 400
237
238 if is_admin():
239 data = response.data
240 else:
241 data = []
242 count = len(response.data)
243
244 return {
245 'success': True,
246 'data': data,
247 'meta': {
248 'count': count
249 }
250 }
251
252
253 @users_namespace.route('/<user_id>/awards')
254 @users_namespace.param('user_id', "User ID or 'me'")
255 class UserAwards(Resource):
256 def get(self, user_id):
257 if user_id == 'me':
258 if not authed():
259 abort(403)
260 user = get_current_user()
261 else:
262 if accounts_visible() is False or scores_visible() is False:
263 abort(404)
264 user = Users.query.filter_by(id=user_id).first_or_404()
265
266 awards = user.get_awards(
267 admin=is_admin()
268 )
269
270 view = 'user' if not is_admin() else 'admin'
271 response = AwardSchema(view=view, many=True).dump(awards)
272
273 if response.errors:
274 return {
275 'success': False,
276 'errors': response.errors
277 }, 400
278
279 return {
280 'success': True,
281 'data': response.data
282 }
283
[end of CTFd/api/v1/users.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/CTFd/api/v1/users.py b/CTFd/api/v1/users.py
--- a/CTFd/api/v1/users.py
+++ b/CTFd/api/v1/users.py
@@ -4,9 +4,12 @@
from CTFd.utils.decorators import (
authed_only,
admins_only,
- authed
+ authed,
+ ratelimit
)
from CTFd.cache import cache, clear_standings
+from CTFd.utils.config import get_mail_provider
+from CTFd.utils.email import sendmail
from CTFd.utils.user import get_current_user, is_admin
from CTFd.utils.decorators.visibility import check_account_visibility, check_score_visibility
@@ -280,3 +283,44 @@
'success': True,
'data': response.data
}
+
+
+@users_namespace.route('/<int:user_id>/email')
+@users_namespace.param('user_id', "User ID")
+class UserEmails(Resource):
+ @admins_only
+ @ratelimit(method="POST", limit=10, interval=60)
+ def post(self, user_id):
+ req = request.get_json()
+ text = req.get('text', '').strip()
+ user = Users.query.filter_by(id=user_id).first_or_404()
+
+ if get_mail_provider() is None:
+ return {
+ 'success': False,
+ 'errors': {
+ "": [
+ "Email settings not configured"
+ ]
+ }
+ }, 400
+
+ if not text:
+ return {
+ 'success': False,
+ 'errors': {
+ "text": [
+ "Email text cannot be empty"
+ ]
+ }
+ }, 400
+
+ result, response = sendmail(
+ addr=user.email,
+ text=text
+ )
+
+ return {
+ 'success': result,
+ 'data': {}
+ }
|
{"golden_diff": "diff --git a/CTFd/api/v1/users.py b/CTFd/api/v1/users.py\n--- a/CTFd/api/v1/users.py\n+++ b/CTFd/api/v1/users.py\n@@ -4,9 +4,12 @@\n from CTFd.utils.decorators import (\n authed_only,\n admins_only,\n- authed\n+ authed,\n+ ratelimit\n )\n from CTFd.cache import cache, clear_standings\n+from CTFd.utils.config import get_mail_provider\n+from CTFd.utils.email import sendmail\n from CTFd.utils.user import get_current_user, is_admin\n from CTFd.utils.decorators.visibility import check_account_visibility, check_score_visibility\n \n@@ -280,3 +283,44 @@\n 'success': True,\n 'data': response.data\n }\n+\n+\n+@users_namespace.route('/<int:user_id>/email')\n+@users_namespace.param('user_id', \"User ID\")\n+class UserEmails(Resource):\n+ @admins_only\n+ @ratelimit(method=\"POST\", limit=10, interval=60)\n+ def post(self, user_id):\n+ req = request.get_json()\n+ text = req.get('text', '').strip()\n+ user = Users.query.filter_by(id=user_id).first_or_404()\n+\n+ if get_mail_provider() is None:\n+ return {\n+ 'success': False,\n+ 'errors': {\n+ \"\": [\n+ \"Email settings not configured\"\n+ ]\n+ }\n+ }, 400\n+\n+ if not text:\n+ return {\n+ 'success': False,\n+ 'errors': {\n+ \"text\": [\n+ \"Email text cannot be empty\"\n+ ]\n+ }\n+ }, 400\n+\n+ result, response = sendmail(\n+ addr=user.email,\n+ text=text\n+ )\n+\n+ return {\n+ 'success': result,\n+ 'data': {}\n+ }\n", "issue": "Send email from user management screen\n<!--\r\nIf this is a bug report please fill out the template below.\r\n\r\nIf this is a feature request please describe the behavior that you'd like to see.\r\n-->\r\n\r\n**Environment**:\r\n\r\n - CTFd Version/Commit: 2.0.4\r\n - Operating System: Ubuntu\r\n - Web Browser and Version: Same in any (chrome, FF)\r\n\r\n**What happened?**\r\n\r\nIn users management screen, I click on a user and click on the \"Send message\" button. 
It does nothing.\r\n\r\n**What did you expect to happen?**\r\n\r\nSend an email to that user.\r\n\r\n**How to reproduce your issue**\r\n\r\n**Any associated stack traces or error logs**\r\n\r\n\n", "before_files": [{"content": "from flask import session, request, abort\nfrom flask_restplus import Namespace, Resource\nfrom CTFd.models import db, Users, Solves, Awards, Fails, Tracking, Unlocks, Submissions, Notifications\nfrom CTFd.utils.decorators import (\n authed_only,\n admins_only,\n authed\n)\nfrom CTFd.cache import cache, clear_standings\nfrom CTFd.utils.user import get_current_user, is_admin\nfrom CTFd.utils.decorators.visibility import check_account_visibility, check_score_visibility\n\nfrom CTFd.utils.config.visibility import (\n accounts_visible,\n challenges_visible,\n registration_visible,\n scores_visible\n)\n\nfrom CTFd.schemas.submissions import SubmissionSchema\nfrom CTFd.schemas.awards import AwardSchema\nfrom CTFd.schemas.users import UserSchema\n\n\nusers_namespace = Namespace('users', description=\"Endpoint to retrieve Users\")\n\n\n@users_namespace.route('')\nclass UserList(Resource):\n @check_account_visibility\n def get(self):\n users = Users.query.filter_by(banned=False)\n response = UserSchema(view='user', many=True).dump(users)\n\n if response.errors:\n return {\n 'success': False,\n 'errors': response.errors\n }, 400\n\n return {\n 'success': True,\n 'data': response.data\n }\n\n @admins_only\n def post(self):\n req = request.get_json()\n schema = UserSchema('admin')\n response = schema.load(req)\n\n if response.errors:\n return {\n 'success': False,\n 'errors': response.errors\n }, 400\n\n db.session.add(response.data)\n db.session.commit()\n\n clear_standings()\n\n response = schema.dump(response.data)\n\n return {\n 'success': True,\n 'data': response.data\n }\n\n\n@users_namespace.route('/<int:user_id>')\n@users_namespace.param('user_id', \"User ID\")\nclass UserPublic(Resource):\n @check_account_visibility\n def get(self, user_id):\n user = Users.query.filter_by(id=user_id).first_or_404()\n\n response = UserSchema(\n view=session.get('type', 'user')\n ).dump(user)\n\n if response.errors:\n return {\n 'success': False,\n 'errors': response.errors\n }, 400\n\n response.data['place'] = user.place\n response.data['score'] = user.score\n\n return {\n 'success': True,\n 'data': response.data\n }\n\n @admins_only\n def patch(self, user_id):\n user = Users.query.filter_by(id=user_id).first_or_404()\n data = request.get_json()\n data['id'] = user_id\n schema = UserSchema(view='admin', instance=user, partial=True)\n response = schema.load(data)\n if response.errors:\n return {\n 'success': False,\n 'errors': response.errors\n }, 400\n\n db.session.commit()\n\n response = schema.dump(response.data)\n\n db.session.close()\n\n clear_standings()\n\n return {\n 'success': True,\n 'data': response\n }\n\n @admins_only\n def delete(self, user_id):\n Notifications.query.filter_by(user_id=user_id).delete()\n Awards.query.filter_by(user_id=user_id).delete()\n Unlocks.query.filter_by(user_id=user_id).delete()\n Submissions.query.filter_by(user_id=user_id).delete()\n Solves.query.filter_by(user_id=user_id).delete()\n Tracking.query.filter_by(user_id=user_id).delete()\n Users.query.filter_by(id=user_id).delete()\n db.session.commit()\n db.session.close()\n\n clear_standings()\n\n return {\n 'success': True\n }\n\n\n@users_namespace.route('/me')\nclass UserPrivate(Resource):\n @authed_only\n def get(self):\n user = get_current_user()\n response = UserSchema('self').dump(user).data\n 
response['place'] = user.place\n response['score'] = user.score\n return {\n 'success': True,\n 'data': response\n }\n\n @authed_only\n def patch(self):\n user = get_current_user()\n data = request.get_json()\n schema = UserSchema(view='self', instance=user, partial=True)\n response = schema.load(data)\n if response.errors:\n return {\n 'success': False,\n 'errors': response.errors\n }, 400\n\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_standings()\n\n return {\n 'success': True,\n 'data': response.data\n }\n\n\n@users_namespace.route('/<user_id>/solves')\n@users_namespace.param('user_id', \"User ID or 'me'\")\nclass UserSolves(Resource):\n def get(self, user_id):\n if user_id == 'me':\n if not authed():\n abort(403)\n user = get_current_user()\n else:\n if accounts_visible() is False or scores_visible() is False:\n abort(404)\n user = Users.query.filter_by(id=user_id).first_or_404()\n\n solves = user.get_solves(\n admin=is_admin()\n )\n for solve in solves:\n setattr(solve, 'value', 100)\n\n view = 'user' if not is_admin() else 'admin'\n response = SubmissionSchema(view=view, many=True).dump(solves)\n\n if response.errors:\n return {\n 'success': False,\n 'errors': response.errors\n }, 400\n\n return {\n 'success': True,\n 'data': response.data\n }\n\n\n@users_namespace.route('/<user_id>/fails')\n@users_namespace.param('user_id', \"User ID or 'me'\")\nclass UserFails(Resource):\n def get(self, user_id):\n if user_id == 'me':\n if not authed():\n abort(403)\n user = get_current_user()\n else:\n if accounts_visible() is False or scores_visible() is False:\n abort(404)\n user = Users.query.filter_by(id=user_id).first_or_404()\n\n fails = user.get_fails(\n admin=is_admin()\n )\n\n view = 'user' if not is_admin() else 'admin'\n response = SubmissionSchema(view=view, many=True).dump(fails)\n if response.errors:\n return {\n 'success': False,\n 'errors': response.errors\n }, 400\n\n if is_admin():\n data = response.data\n else:\n data = []\n count = len(response.data)\n\n return {\n 'success': True,\n 'data': data,\n 'meta': {\n 'count': count\n }\n }\n\n\n@users_namespace.route('/<user_id>/awards')\n@users_namespace.param('user_id', \"User ID or 'me'\")\nclass UserAwards(Resource):\n def get(self, user_id):\n if user_id == 'me':\n if not authed():\n abort(403)\n user = get_current_user()\n else:\n if accounts_visible() is False or scores_visible() is False:\n abort(404)\n user = Users.query.filter_by(id=user_id).first_or_404()\n\n awards = user.get_awards(\n admin=is_admin()\n )\n\n view = 'user' if not is_admin() else 'admin'\n response = AwardSchema(view=view, many=True).dump(awards)\n\n if response.errors:\n return {\n 'success': False,\n 'errors': response.errors\n }, 400\n\n return {\n 'success': True,\n 'data': response.data\n }\n", "path": "CTFd/api/v1/users.py"}]}
| 3,130 | 446 |
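A sketch of how the endpoint added by the golden diff above could be exercised from a client, assuming a locally running CTFd instance and an already authenticated admin session; real CTFd admin requests also need a CSRF token, which is omitted here.

```python
# Hedged client-side example for the new admin-only email endpoint.
# The base URL, user id, and pre-authenticated session are assumptions.
import requests

BASE = "http://localhost:8000"          # assumed local CTFd instance
session = requests.Session()            # assumed to already carry an admin login cookie

resp = session.post(
    f"{BASE}/api/v1/users/2/email",     # user_id=2 is arbitrary
    json={"text": "Welcome to the CTF. Check the rules page before starting."},
)
print(resp.status_code)                 # 400 if mail is unconfigured or the text is empty
print(resp.json())                      # {"success": true, "data": {}} when the mail is sent
```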
gh_patches_debug_56902
|
rasdani/github-patches
|
git_diff
|
NVIDIA__NVFlare-172
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in aggregating models
I'm using NVFlare 2.0.6 and torchvision ResNet-50 model for training. The FL system (consisting 4 clients) had completed the training for the first round but when aggregating into the global model I faced this error. Please help me resolve this problem, thank you.
```
2022-01-27 18:08:14,731 - InTimeAccumulateWeightedAggregator - INFO - [run=1, wf=scatter_gather_ctl]: aggregating 4 update(s) at round 0
Traceback (most recent call last):
File "/home/jupyter-test/.conda/envs/fl/lib/python3.8/site-packages/nvflare/app_common/workflows/scatter_and_gather.py", line 202, in control_flow
self._global_weights = self.shareable_gen.shareable_to_learnable(aggr_result, fl_ctx)
File "/home/jupyter-test/.conda/envs/fl/lib/python3.8/site-packages/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py", line 60, in shareable_to_learnable
weights[v_name] += v_value
numpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
2022-01-27 18:08:14,813 - ScatterAndGather - ERROR - [run=1, wf=scatter_gather_ctl]: Exception in ScatterAndGather control_flow: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
Traceback (most recent call last):
File "/home/jupyter-test/.conda/envs/fl/lib/python3.8/site-packages/nvflare/app_common/workflows/scatter_and_gather.py", line 202, in control_flow
self._global_weights = self.shareable_gen.shareable_to_learnable(aggr_result, fl_ctx)
File "/home/jupyter-test/.conda/envs/fl/lib/python3.8/site-packages/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py", line 60, in shareable_to_learnable
weights[v_name] += v_value
numpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
2022-01-27 18:08:14,813 - ServerRunner - ERROR - [run=1, wf=scatter_gather_ctl]: Aborting current RUN due to FATAL_SYSTEM_ERROR received: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
2022-01-27 18:08:14,813 - ServerRunner - INFO - [run=1, wf=scatter_gather_ctl]: asked to abort - triggered abort_signal to stop the RUN
2022-01-27 18:08:14,813 - ServerRunner - INFO - [run=1, wf=scatter_gather_ctl]: Workflow: scatter_gather_ctl finalizing ...
```
</issue>
<code>
[start of nvflare/app_common/shareablegenerators/full_model_shareable_generator.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from nvflare.apis.dxo import DataKind, from_shareable
16 from nvflare.apis.fl_context import FLContext
17 from nvflare.apis.shareable import Shareable
18 from nvflare.app_common.abstract.model import ModelLearnable, ModelLearnableKey, model_learnable_to_dxo
19 from nvflare.app_common.abstract.shareable_generator import ShareableGenerator
20 from nvflare.app_common.app_constant import AppConstants
21
22
23 class FullModelShareableGenerator(ShareableGenerator):
24 def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:
25 """Convert Learnable to Shareable
26
27 Args:
28 model (Learnable): model to be converted
29 fl_ctx (FLContext): FL context
30
31 Returns:
32 Shareable: a shareable containing a DXO object,
33 """
34 dxo = model_learnable_to_dxo(ml)
35 return dxo.to_shareable()
36
37 def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:
38 """Convert Shareable to Learnable
39
40 Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS
41
42 Args:
43 shareable (Shareable): Shareable that contains a DXO object
44 fl_ctx (FLContext): FL context
45
46 Returns: a ModelLearnable object
47 """
48 base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)
49 if not base_model:
50 self.system_panic(reason="No global base model!", fl_ctx=fl_ctx)
51 return base_model
52
53 weights = base_model[ModelLearnableKey.WEIGHTS]
54 dxo = from_shareable(shareable)
55
56 if dxo.data_kind == DataKind.WEIGHT_DIFF:
57 if dxo.data is not None:
58 model_diff = dxo.data
59 for v_name, v_value in model_diff.items():
60 weights[v_name] += v_value
61 elif dxo.data_kind == DataKind.WEIGHTS:
62 weights = dxo.data
63 if not weights:
64 self.log_info(fl_ctx, "No model weights found. Model will not be updated.")
65 else:
66 base_model[ModelLearnableKey.WEIGHTS] = weights
67
68 base_model[ModelLearnableKey.META] = dxo.get_meta_props()
69 return base_model
70
[end of nvflare/app_common/shareablegenerators/full_model_shareable_generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
--- a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
+++ b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py
@@ -57,7 +57,7 @@
if dxo.data is not None:
model_diff = dxo.data
for v_name, v_value in model_diff.items():
- weights[v_name] += v_value
+ weights[v_name] = weights[v_name] + v_value
elif dxo.data_kind == DataKind.WEIGHTS:
weights = dxo.data
if not weights:
|
{"golden_diff": "diff --git a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n--- a/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n+++ b/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\n@@ -57,7 +57,7 @@\n if dxo.data is not None:\n model_diff = dxo.data\n for v_name, v_value in model_diff.items():\n- weights[v_name] += v_value\n+ weights[v_name] = weights[v_name] + v_value\n elif dxo.data_kind == DataKind.WEIGHTS:\n weights = dxo.data\n if not weights:\n", "issue": "Error in aggregating models\nI'm using NVFlare 2.0.6 and torchvision ResNet-50 model for training. The FL system (consisting 4 clients) had completed the training for the first round but when aggregating into the global model I faced this error. Please help me resolve this problem, thank you. \r\n```\r\n2022-01-27 18:08:14,731 - InTimeAccumulateWeightedAggregator - INFO - [run=1, wf=scatter_gather_ctl]: aggregating 4 update(s) at round 0\r\nTraceback (most recent call last):\r\n File \"/home/jupyter-test/.conda/envs/fl/lib/python3.8/site-packages/nvflare/app_common/workflows/scatter_and_gather.py\", line 202, in control_flow\r\n self._global_weights = self.shareable_gen.shareable_to_learnable(aggr_result, fl_ctx)\r\n File \"/home/jupyter-test/.conda/envs/fl/lib/python3.8/site-packages/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\", line 60, in shareable_to_learnable\r\n weights[v_name] += v_value\r\nnumpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'\r\n2022-01-27 18:08:14,813 - ScatterAndGather - ERROR - [run=1, wf=scatter_gather_ctl]: Exception in ScatterAndGather control_flow: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'\r\nTraceback (most recent call last):\r\n File \"/home/jupyter-test/.conda/envs/fl/lib/python3.8/site-packages/nvflare/app_common/workflows/scatter_and_gather.py\", line 202, in control_flow\r\n self._global_weights = self.shareable_gen.shareable_to_learnable(aggr_result, fl_ctx)\r\n File \"/home/jupyter-test/.conda/envs/fl/lib/python3.8/site-packages/nvflare/app_common/shareablegenerators/full_model_shareable_generator.py\", line 60, in shareable_to_learnable\r\n weights[v_name] += v_value\r\nnumpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'\r\n2022-01-27 18:08:14,813 - ServerRunner - ERROR - [run=1, wf=scatter_gather_ctl]: Aborting current RUN due to FATAL_SYSTEM_ERROR received: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'\r\n2022-01-27 18:08:14,813 - ServerRunner - INFO - [run=1, wf=scatter_gather_ctl]: asked to abort - triggered abort_signal to stop the RUN\r\n2022-01-27 18:08:14,813 - ServerRunner - INFO - [run=1, wf=scatter_gather_ctl]: Workflow: scatter_gather_ctl finalizing ...\r\n```\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom nvflare.apis.dxo import DataKind, from_shareable\nfrom nvflare.apis.fl_context import FLContext\nfrom nvflare.apis.shareable import Shareable\nfrom nvflare.app_common.abstract.model import ModelLearnable, ModelLearnableKey, model_learnable_to_dxo\nfrom nvflare.app_common.abstract.shareable_generator import ShareableGenerator\nfrom nvflare.app_common.app_constant import AppConstants\n\n\nclass FullModelShareableGenerator(ShareableGenerator):\n def learnable_to_shareable(self, ml: ModelLearnable, fl_ctx: FLContext) -> Shareable:\n \"\"\"Convert Learnable to Shareable\n\n Args:\n model (Learnable): model to be converted\n fl_ctx (FLContext): FL context\n\n Returns:\n Shareable: a shareable containing a DXO object,\n \"\"\"\n dxo = model_learnable_to_dxo(ml)\n return dxo.to_shareable()\n\n def shareable_to_learnable(self, shareable: Shareable, fl_ctx: FLContext) -> ModelLearnable:\n \"\"\"Convert Shareable to Learnable\n\n Supporting TYPE == TYPE_WEIGHT_DIFF or TYPE_WEIGHTS\n\n Args:\n shareable (Shareable): Shareable that contains a DXO object\n fl_ctx (FLContext): FL context\n\n Returns: a ModelLearnable object\n \"\"\"\n base_model = fl_ctx.get_prop(AppConstants.GLOBAL_MODEL)\n if not base_model:\n self.system_panic(reason=\"No global base model!\", fl_ctx=fl_ctx)\n return base_model\n\n weights = base_model[ModelLearnableKey.WEIGHTS]\n dxo = from_shareable(shareable)\n\n if dxo.data_kind == DataKind.WEIGHT_DIFF:\n if dxo.data is not None:\n model_diff = dxo.data\n for v_name, v_value in model_diff.items():\n weights[v_name] += v_value\n elif dxo.data_kind == DataKind.WEIGHTS:\n weights = dxo.data\n if not weights:\n self.log_info(fl_ctx, \"No model weights found. Model will not be updated.\")\n else:\n base_model[ModelLearnableKey.WEIGHTS] = weights\n\n base_model[ModelLearnableKey.META] = dxo.get_meta_props()\n return base_model\n", "path": "nvflare/app_common/shareablegenerators/full_model_shareable_generator.py"}]}
| 2,069 | 164 |
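A small NumPy reproduction of the failure in the record above, using arbitrary shapes and values: in-place `+=` cannot cast a float64 update into an int64 buffer (the BatchNorm `num_batches_tracked` counter in ResNet-50 is int64), while the out-of-place form used in the fix promotes the result to float64 instead of raising.

```python
# Reproduces the casting error from the traceback above, then shows the
# out-of-place addition that the golden diff switches to. Values are arbitrary.
import numpy as np

weights = {"num_batches_tracked": np.array([53], dtype=np.int64)}   # int64 buffer, as in BatchNorm
diff = {"num_batches_tracked": np.array([4.0], dtype=np.float64)}   # aggregated float delta

try:
    weights["num_batches_tracked"] += diff["num_batches_tracked"]   # in-place add raises UFuncOutputCastingError
except TypeError as exc:                                            # numpy's casting error subclasses TypeError
    print("in-place add failed:", exc)

# Out-of-place addition promotes int64 + float64 to float64 instead of raising:
weights["num_batches_tracked"] = weights["num_batches_tracked"] + diff["num_batches_tracked"]
print(weights["num_batches_tracked"].dtype)  # float64
```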
gh_patches_debug_29566
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-2361
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Saving existing explorations is broken: UIQuery names need to be unique
## Description
https://github.com/centerofci/mathesar/pull/2315 modified query names to be unique per schema.
It does not ignore the current name of the query while checking the condition.
To reproduce: Try saving an existing query after making changes.
</issue>
<code>
[start of mathesar/api/serializers/queries.py]
1 from django.core.exceptions import ValidationError
2 from django.urls import reverse
3 from rest_access_policy import PermittedPkRelatedField
4 from rest_framework import serializers
5
6 from mathesar.api.db.permissions.query_table import QueryTableAccessPolicy
7 from mathesar.api.exceptions.mixins import MathesarErrorMessageMixin
8 from mathesar.api.exceptions.validation_exceptions.exceptions import DuplicateUIQueryInSchemaAPIException
9 from mathesar.models.base import Table
10 from mathesar.models.query import UIQuery
11
12
13 class BaseQuerySerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
14 schema = serializers.SerializerMethodField('get_schema')
15 base_table = PermittedPkRelatedField(
16 access_policy=QueryTableAccessPolicy,
17 queryset=Table.current_objects.all()
18 )
19
20 class Meta:
21 model = UIQuery
22 fields = ['schema', 'initial_columns', 'transformations', 'base_table', 'display_names']
23
24 def get_schema(self, uiquery):
25 base_table = uiquery.base_table
26 if base_table:
27 return base_table.schema.id
28
29 def validate(self, attrs):
30 unexpected_fields = set(self.initial_data) - set(self.fields)
31 if unexpected_fields:
32 raise ValidationError(f"Unexpected field(s): {unexpected_fields}")
33 self._validate_uniqueness(attrs)
34 return attrs
35
36 def _validate_uniqueness(self, attrs):
37 """
38 Uniqueness is only defined when both name and base_table are defined.
39
40 Would be nice to define this in terms of Django's UniqueConstraint, but that doesn't seem
41 possible, due to schema being a child property of base_table.
42 """
43 name = attrs.get('name')
44 if name:
45 base_table = attrs.get('base_table')
46 if base_table:
47 schema = base_table.schema
48 queries_with_same_name = UIQuery.objects.filter(name=name)
49 duplicate_in_schema_exists = \
50 queries_with_same_name\
51 .filter(base_table__schema=schema)\
52 .exists()
53 if duplicate_in_schema_exists:
54 raise DuplicateUIQueryInSchemaAPIException(field='name')
55
56
57 class QuerySerializer(BaseQuerySerializer):
58 results_url = serializers.SerializerMethodField('get_results_url')
59 records_url = serializers.SerializerMethodField('get_records_url')
60 columns_url = serializers.SerializerMethodField('get_columns_url')
61
62 class Meta:
63 model = UIQuery
64 fields = '__all__'
65
66 def get_records_url(self, obj):
67 if isinstance(obj, UIQuery) and obj.pk is not None:
68 # Only get records_url if we are serializing an existing persisted UIQuery
69 request = self.context['request']
70 return request.build_absolute_uri(reverse('query-records', kwargs={'pk': obj.pk}))
71 else:
72 return None
73
74 def get_columns_url(self, obj):
75 if isinstance(obj, UIQuery) and obj.pk is not None:
76 # Only get columns_url if we are serializing an existing persisted UIQuery
77 request = self.context['request']
78 return request.build_absolute_uri(reverse('query-columns', kwargs={'pk': obj.pk}))
79 else:
80 return None
81
82 def get_results_url(self, obj):
83 if isinstance(obj, UIQuery) and obj.pk is not None:
84 # Only get records_url if we are serializing an existing persisted UIQuery
85 request = self.context['request']
86 return request.build_absolute_uri(reverse('query-results', kwargs={'pk': obj.pk}))
87 else:
88 return None
89
[end of mathesar/api/serializers/queries.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mathesar/api/serializers/queries.py b/mathesar/api/serializers/queries.py
--- a/mathesar/api/serializers/queries.py
+++ b/mathesar/api/serializers/queries.py
@@ -1,5 +1,7 @@
from django.core.exceptions import ValidationError
from django.urls import reverse
+from django.db.models import Q
+
from rest_access_policy import PermittedPkRelatedField
from rest_framework import serializers
@@ -45,14 +47,23 @@
base_table = attrs.get('base_table')
if base_table:
schema = base_table.schema
- queries_with_same_name = UIQuery.objects.filter(name=name)
- duplicate_in_schema_exists = \
- queries_with_same_name\
- .filter(base_table__schema=schema)\
- .exists()
- if duplicate_in_schema_exists:
+ is_duplicate_q = self._get_is_duplicate_q(name, schema)
+ duplicates = UIQuery.objects.filter(is_duplicate_q)
+ if duplicates.exists():
raise DuplicateUIQueryInSchemaAPIException(field='name')
+ def _get_is_duplicate_q(self, name, schema):
+ has_same_name_q = Q(name=name)
+ has_same_schema_q = Q(base_table__schema=schema)
+ is_duplicate_q = has_same_name_q & has_same_schema_q
+ is_update = self.instance is not None
+ if is_update:
+ # If this is an update, filter self out of found duplicates
+ id = self.instance.id
+ is_not_this_instance_q = ~Q(id=id)
+ is_duplicate_q = is_duplicate_q & is_not_this_instance_q
+ return is_duplicate_q
+
class QuerySerializer(BaseQuerySerializer):
results_url = serializers.SerializerMethodField('get_results_url')
|
{"golden_diff": "diff --git a/mathesar/api/serializers/queries.py b/mathesar/api/serializers/queries.py\n--- a/mathesar/api/serializers/queries.py\n+++ b/mathesar/api/serializers/queries.py\n@@ -1,5 +1,7 @@\n from django.core.exceptions import ValidationError\n from django.urls import reverse\n+from django.db.models import Q\n+\n from rest_access_policy import PermittedPkRelatedField\n from rest_framework import serializers\n \n@@ -45,14 +47,23 @@\n base_table = attrs.get('base_table')\n if base_table:\n schema = base_table.schema\n- queries_with_same_name = UIQuery.objects.filter(name=name)\n- duplicate_in_schema_exists = \\\n- queries_with_same_name\\\n- .filter(base_table__schema=schema)\\\n- .exists()\n- if duplicate_in_schema_exists:\n+ is_duplicate_q = self._get_is_duplicate_q(name, schema)\n+ duplicates = UIQuery.objects.filter(is_duplicate_q)\n+ if duplicates.exists():\n raise DuplicateUIQueryInSchemaAPIException(field='name')\n \n+ def _get_is_duplicate_q(self, name, schema):\n+ has_same_name_q = Q(name=name)\n+ has_same_schema_q = Q(base_table__schema=schema)\n+ is_duplicate_q = has_same_name_q & has_same_schema_q\n+ is_update = self.instance is not None\n+ if is_update:\n+ # If this is an update, filter self out of found duplicates\n+ id = self.instance.id\n+ is_not_this_instance_q = ~Q(id=id)\n+ is_duplicate_q = is_duplicate_q & is_not_this_instance_q\n+ return is_duplicate_q\n+\n \n class QuerySerializer(BaseQuerySerializer):\n results_url = serializers.SerializerMethodField('get_results_url')\n", "issue": "Saving existing explorations is broken: UIQuery names need to be unique\n## Description\r\nhttps://github.com/centerofci/mathesar/pull/2315 modified query names to be unique per schema.\r\nIt does not ignore the current name of the query while checking the condition.\r\n\r\nTo reproduce: Try saving an existing query after making changes.\r\n\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.urls import reverse\nfrom rest_access_policy import PermittedPkRelatedField\nfrom rest_framework import serializers\n\nfrom mathesar.api.db.permissions.query_table import QueryTableAccessPolicy\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.api.exceptions.validation_exceptions.exceptions import DuplicateUIQueryInSchemaAPIException\nfrom mathesar.models.base import Table\nfrom mathesar.models.query import UIQuery\n\n\nclass BaseQuerySerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n schema = serializers.SerializerMethodField('get_schema')\n base_table = PermittedPkRelatedField(\n access_policy=QueryTableAccessPolicy,\n queryset=Table.current_objects.all()\n )\n\n class Meta:\n model = UIQuery\n fields = ['schema', 'initial_columns', 'transformations', 'base_table', 'display_names']\n\n def get_schema(self, uiquery):\n base_table = uiquery.base_table\n if base_table:\n return base_table.schema.id\n\n def validate(self, attrs):\n unexpected_fields = set(self.initial_data) - set(self.fields)\n if unexpected_fields:\n raise ValidationError(f\"Unexpected field(s): {unexpected_fields}\")\n self._validate_uniqueness(attrs)\n return attrs\n\n def _validate_uniqueness(self, attrs):\n \"\"\"\n Uniqueness is only defined when both name and base_table are defined.\n\n Would be nice to define this in terms of Django's UniqueConstraint, but that doesn't seem\n possible, due to schema being a child property of base_table.\n \"\"\"\n name = attrs.get('name')\n if name:\n base_table = 
attrs.get('base_table')\n if base_table:\n schema = base_table.schema\n queries_with_same_name = UIQuery.objects.filter(name=name)\n duplicate_in_schema_exists = \\\n queries_with_same_name\\\n .filter(base_table__schema=schema)\\\n .exists()\n if duplicate_in_schema_exists:\n raise DuplicateUIQueryInSchemaAPIException(field='name')\n\n\nclass QuerySerializer(BaseQuerySerializer):\n results_url = serializers.SerializerMethodField('get_results_url')\n records_url = serializers.SerializerMethodField('get_records_url')\n columns_url = serializers.SerializerMethodField('get_columns_url')\n\n class Meta:\n model = UIQuery\n fields = '__all__'\n\n def get_records_url(self, obj):\n if isinstance(obj, UIQuery) and obj.pk is not None:\n # Only get records_url if we are serializing an existing persisted UIQuery\n request = self.context['request']\n return request.build_absolute_uri(reverse('query-records', kwargs={'pk': obj.pk}))\n else:\n return None\n\n def get_columns_url(self, obj):\n if isinstance(obj, UIQuery) and obj.pk is not None:\n # Only get columns_url if we are serializing an existing persisted UIQuery\n request = self.context['request']\n return request.build_absolute_uri(reverse('query-columns', kwargs={'pk': obj.pk}))\n else:\n return None\n\n def get_results_url(self, obj):\n if isinstance(obj, UIQuery) and obj.pk is not None:\n # Only get records_url if we are serializing an existing persisted UIQuery\n request = self.context['request']\n return request.build_absolute_uri(reverse('query-results', kwargs={'pk': obj.pk}))\n else:\n return None\n", "path": "mathesar/api/serializers/queries.py"}]}
| 1,502 | 389 |
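A plain-Python sketch of the duplicate check fixed above, using list-of-dict records in place of the Django ORM: on update the current row's id is excluded, which is what the `~Q(id=id)` clause in the golden diff does. The records and helper below are illustrative stand-ins, not Mathesar code.

```python
# "Duplicate name in the same schema, but ignore myself on update" check.
def has_duplicate(existing, name, schema, current_id=None):
    return any(
        q["name"] == name
        and q["schema"] == schema
        and q["id"] != current_id       # on create current_id is None, so nothing is excluded
        for q in existing
    )

queries = [
    {"id": 1, "name": "monthly revenue", "schema": "sales"},
    {"id": 2, "name": "monthly revenue", "schema": "hr"},
]

print(has_duplicate(queries, "monthly revenue", "sales"))                 # True  (creating a clash)
print(has_duplicate(queries, "monthly revenue", "sales", current_id=1))   # False (saving query 1 itself)
print(has_duplicate(queries, "monthly revenue", "sales", current_id=2))   # True  (renaming query 2 into a clash)
```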
gh_patches_debug_4230
|
rasdani/github-patches
|
git_diff
|
numba__numba-873
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PR #856 introduced regression in macro expansion of more than one block
PR #856 caused macro expansion to effectively cease after performing macro expansion in one block, due to the logic in `numba/macro.py`:
``` python
for blk in blocks.values():
module_getattr_folding(constants, blk)
expanded = expanded or expand_macros_in_block(constants, blk)
```
</issue>
<code>
[start of numba/macro.py]
1 """
2 Macro handling passes
3
4 Macros are expanded on block-by-block
5 """
6 from __future__ import absolute_import, print_function, division
7 from numba import ir
8
9
10 class MacroError(Exception):
11 '''
12 An exception thrown during macro expansion
13 '''
14 pass
15
16
17 def expand_macros(blocks):
18 '''
19 Performs macro expansion on blocks
20
21 Args
22 ----
23 blocks: list
24 the blocks to macro-expand
25 return: bool
26 True if any macros were expanded
27 '''
28 constants = {}
29 expanded = False
30 for blk in blocks.values():
31 module_getattr_folding(constants, blk)
32 expanded = expanded or expand_macros_in_block(constants, blk)
33 return expanded
34
35 def module_getattr_folding(constants, block):
36 '''
37 Performs constant-folding of getattr instructions within a block. Any
38 constants defined within the block are also added to the constant pool.
39
40 Args
41 ----
42 constants: dict
43 The pool of constants to use, which will be updated with any new
44 constants in this block
45 block: ir.Block
46 The block to perform constant folding on
47 '''
48 for inst in block.body:
49 if isinstance(inst, ir.Assign):
50 rhs = inst.value
51
52 if isinstance(rhs, ir.Global):
53 constants[inst.target.name] = rhs.value
54
55 elif isinstance(rhs, ir.Expr) and rhs.op == 'getattr':
56 if rhs.value.name in constants:
57 base = constants[rhs.value.name]
58 constants[inst.target.name] = getattr(base, rhs.attr)
59
60 elif isinstance(rhs, ir.Const):
61 constants[inst.target.name] = rhs.value
62
63 elif isinstance(rhs, ir.Var) and rhs.name in constants:
64 constants[inst.target.name] = constants[rhs.name]
65
66 elif isinstance(rhs, ir.FreeVar):
67 constants[inst.target.name] = rhs.value
68
69 def expand_macros_in_block(constants, block):
70 '''
71 Performs macro expansion on a block.
72
73 Args
74 ----
75 constants: dict
76 The pool of constants which contains the values which contains mappings
77 from variable names to callee names
78 block: ir.Block
79 The block to perform macro expansion on
80 return: bool
81 True if any macros were expanded
82 '''
83 expanded = False
84 for inst in block.body:
85 if isinstance(inst, ir.Assign):
86 rhs = inst.value
87 if isinstance(rhs, ir.Expr) and rhs.op == 'call':
88 callee = rhs.func
89 macro = constants.get(callee.name)
90 if isinstance(macro, Macro):
91 # Rewrite calling macro
92 assert macro.callable
93 args = [constants[arg.name] for arg in rhs.args]
94 kws = dict((k, constants[v.name]) for k, v in rhs.kws)
95 try:
96 result = macro.func(*args, **kws)
97 except BaseException as e:
98 msg = str(e)
99 headfmt = "Macro expansion failed at {line}"
100 head = headfmt.format(line=inst.loc)
101 newmsg = "{0}:\n{1}".format(head, msg)
102 raise MacroError(newmsg)
103 if result:
104 # Insert a new function
105 result.loc = rhs.loc
106 inst.value = ir.Expr.call(func=result, args=rhs.args,
107 kws=rhs.kws, loc=rhs.loc)
108 expanded = True
109 elif isinstance(rhs, ir.Expr) and rhs.op == 'getattr':
110 # Rewrite get attribute to macro call
111 # Non-calling macro must be triggered by get attribute
112 base = constants.get(rhs.value.name)
113 if base is not None:
114 value = getattr(base, rhs.attr)
115 if isinstance(value, Macro):
116 macro = value
117 if not macro.callable:
118 intr = ir.Intrinsic(macro.name, macro.func, args=())
119 inst.value = ir.Expr.call(func=intr, args=(),
120 kws=(), loc=rhs.loc)
121 expanded = True
122 return expanded
123
124
125 class Macro(object):
126 '''
127 A macro object is expanded to a function call
128
129 Args
130 ----
131 name: str
132 Name of this Macro
133 func: function
134 Function that evaluates the macro expansion
135 callable: bool
136 True if the Macro represents a callable function.
137 False if it is represents some other type.
138 argnames: list
139 If ``callable`` is True, this holds a list of the names of arguments
140 to the function.
141 '''
142
143 __slots__ = 'name', 'func', 'callable', 'argnames'
144
145 def __init__(self, name, func, callable=False, argnames=None):
146 self.name = name
147 self.func = func
148 self.callable = callable
149 self.argnames = argnames
150
151 def __repr__(self):
152 return '<macro %s -> %s>' % (self.name, self.func)
153
154
[end of numba/macro.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/numba/macro.py b/numba/macro.py
--- a/numba/macro.py
+++ b/numba/macro.py
@@ -29,7 +29,8 @@
expanded = False
for blk in blocks.values():
module_getattr_folding(constants, blk)
- expanded = expanded or expand_macros_in_block(constants, blk)
+ block_expanded = expand_macros_in_block(constants, blk)
+ expanded = expanded or block_expanded
return expanded
def module_getattr_folding(constants, block):
|
{"golden_diff": "diff --git a/numba/macro.py b/numba/macro.py\n--- a/numba/macro.py\n+++ b/numba/macro.py\n@@ -29,7 +29,8 @@\n expanded = False\n for blk in blocks.values():\n module_getattr_folding(constants, blk)\n- expanded = expanded or expand_macros_in_block(constants, blk)\n+ block_expanded = expand_macros_in_block(constants, blk)\n+ expanded = expanded or block_expanded\n return expanded\n \n def module_getattr_folding(constants, block):\n", "issue": "PR #856 introduced regression in macro expansion of more than one block\nPR #856 caused macro expansion to effectively cease after performing macro expansion in one block, due to the logic in `numba/macro.py`:\n\n``` python\nfor blk in blocks.values():\n module_getattr_folding(constants, blk)\n expanded = expanded or expand_macros_in_block(constants, blk)\n```\n\n", "before_files": [{"content": "\"\"\"\nMacro handling passes\n\nMacros are expanded on block-by-block\n\"\"\"\nfrom __future__ import absolute_import, print_function, division\nfrom numba import ir\n\n\nclass MacroError(Exception):\n '''\n An exception thrown during macro expansion\n '''\n pass\n\n\ndef expand_macros(blocks):\n '''\n Performs macro expansion on blocks\n\n Args\n ----\n blocks: list\n the blocks to macro-expand\n return: bool\n True if any macros were expanded\n '''\n constants = {}\n expanded = False\n for blk in blocks.values():\n module_getattr_folding(constants, blk)\n expanded = expanded or expand_macros_in_block(constants, blk)\n return expanded\n\ndef module_getattr_folding(constants, block):\n '''\n Performs constant-folding of getattr instructions within a block. Any\n constants defined within the block are also added to the constant pool.\n\n Args\n ----\n constants: dict\n The pool of constants to use, which will be updated with any new\n constants in this block\n block: ir.Block\n The block to perform constant folding on\n '''\n for inst in block.body:\n if isinstance(inst, ir.Assign):\n rhs = inst.value\n\n if isinstance(rhs, ir.Global):\n constants[inst.target.name] = rhs.value\n\n elif isinstance(rhs, ir.Expr) and rhs.op == 'getattr':\n if rhs.value.name in constants:\n base = constants[rhs.value.name]\n constants[inst.target.name] = getattr(base, rhs.attr)\n\n elif isinstance(rhs, ir.Const):\n constants[inst.target.name] = rhs.value\n\n elif isinstance(rhs, ir.Var) and rhs.name in constants:\n constants[inst.target.name] = constants[rhs.name]\n\n elif isinstance(rhs, ir.FreeVar):\n constants[inst.target.name] = rhs.value\n\ndef expand_macros_in_block(constants, block):\n '''\n Performs macro expansion on a block.\n\n Args\n ----\n constants: dict\n The pool of constants which contains the values which contains mappings\n from variable names to callee names\n block: ir.Block\n The block to perform macro expansion on\n return: bool\n True if any macros were expanded\n '''\n expanded = False\n for inst in block.body:\n if isinstance(inst, ir.Assign):\n rhs = inst.value\n if isinstance(rhs, ir.Expr) and rhs.op == 'call':\n callee = rhs.func\n macro = constants.get(callee.name)\n if isinstance(macro, Macro):\n # Rewrite calling macro\n assert macro.callable\n args = [constants[arg.name] for arg in rhs.args]\n kws = dict((k, constants[v.name]) for k, v in rhs.kws)\n try:\n result = macro.func(*args, **kws)\n except BaseException as e:\n msg = str(e)\n headfmt = \"Macro expansion failed at {line}\"\n head = headfmt.format(line=inst.loc)\n newmsg = \"{0}:\\n{1}\".format(head, msg)\n raise MacroError(newmsg)\n if result:\n # Insert a new function\n 
result.loc = rhs.loc\n inst.value = ir.Expr.call(func=result, args=rhs.args,\n kws=rhs.kws, loc=rhs.loc)\n expanded = True\n elif isinstance(rhs, ir.Expr) and rhs.op == 'getattr':\n # Rewrite get attribute to macro call\n # Non-calling macro must be triggered by get attribute\n base = constants.get(rhs.value.name)\n if base is not None:\n value = getattr(base, rhs.attr)\n if isinstance(value, Macro):\n macro = value\n if not macro.callable:\n intr = ir.Intrinsic(macro.name, macro.func, args=())\n inst.value = ir.Expr.call(func=intr, args=(),\n kws=(), loc=rhs.loc)\n expanded = True\n return expanded\n\n\nclass Macro(object):\n '''\n A macro object is expanded to a function call\n\n Args\n ----\n name: str\n Name of this Macro\n func: function\n Function that evaluates the macro expansion\n callable: bool\n True if the Macro represents a callable function.\n False if it is represents some other type.\n argnames: list\n If ``callable`` is True, this holds a list of the names of arguments\n to the function.\n '''\n\n __slots__ = 'name', 'func', 'callable', 'argnames'\n\n def __init__(self, name, func, callable=False, argnames=None):\n self.name = name\n self.func = func\n self.callable = callable\n self.argnames = argnames\n\n def __repr__(self):\n return '<macro %s -> %s>' % (self.name, self.func)\n\n", "path": "numba/macro.py"}]}
| 2,018 | 122 |
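The fix in the numba row above hinges on Python's short-circuit `or`: once `expanded` is `True`, the right-hand call is never evaluated, so only the first block ever gets macro-expanded. A minimal, self-contained sketch of the corrected pattern (function and variable names here are illustrative, not taken from numba):

```python
def expand_all(blocks, expand_block):
    """Apply expand_block to every block; report whether any expansion happened."""
    expanded = False
    for blk in blocks:
        # Evaluate the side-effecting call first so `or` cannot skip it
        # once `expanded` has already become True.
        block_expanded = expand_block(blk)
        expanded = expanded or block_expanded
    return expanded

# Quick check: every block is still visited after the first expansion.
visited = []
assert expand_all([1, 2, 3], lambda b: visited.append(b) or True) is True
assert visited == [1, 2, 3]
```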
gh_patches_debug_7749
|
rasdani/github-patches
|
git_diff
|
cloudtools__troposphere-1696
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
implement AWS::Synthetics changes from May 14, 2020 update
</issue>
<code>
[start of troposphere/synthetics.py]
1 # Copyright (c) 2020, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, Tags
7 from .validators import (integer, boolean, canary_runtime_version)
8
9
10 class VPCConfig(AWSProperty):
11 props = {
12 'SecurityGroupIds': ([basestring], True),
13 'SubnetIds': ([basestring], True),
14 'VpcId': (basestring, False),
15 }
16
17
18 class Schedule(AWSProperty):
19 props = {
20 'DurationInSeconds': (basestring, True),
21 'Expression': (basestring, True),
22 }
23
24
25 class RunConfig(AWSProperty):
26 props = {
27 'TimeoutInSeconds': (integer, True),
28 }
29
30
31 class Code(AWSProperty):
32 props = {
33 'Handler': (basestring, False),
34 'S3Bucket': (basestring, False),
35 'S3Key': (basestring, False),
36 'S3ObjectVersion': (basestring, False),
37 'Script': (basestring, False),
38 }
39
40
41 class Canary(AWSObject):
42 resource_type = "AWS::Synthetics::Canary"
43
44 props = {
45 'ArtifactS3Location': (basestring, True),
46 'Code': (Code, True),
47 'ExecutionRoleArn': (basestring, True),
48 'FailureRetentionPeriod': (integer, False),
49 'Name': (basestring, True),
50 'RunConfig': (RunConfig, False),
51 'RuntimeVersion': (canary_runtime_version, True),
52 'Schedule': (Schedule, True),
53 'StartCanaryAfterCreation': (boolean, True),
54 'SuccessRetentionPeriod': (integer, False),
55 'Tags': (Tags, False),
56 'VPCConfig': (VPCConfig, False),
57 }
58
[end of troposphere/synthetics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/troposphere/synthetics.py b/troposphere/synthetics.py
--- a/troposphere/synthetics.py
+++ b/troposphere/synthetics.py
@@ -47,7 +47,7 @@
'ExecutionRoleArn': (basestring, True),
'FailureRetentionPeriod': (integer, False),
'Name': (basestring, True),
- 'RunConfig': (RunConfig, False),
+ 'RunConfig': (RunConfig, True),
'RuntimeVersion': (canary_runtime_version, True),
'Schedule': (Schedule, True),
'StartCanaryAfterCreation': (boolean, True),
|
{"golden_diff": "diff --git a/troposphere/synthetics.py b/troposphere/synthetics.py\n--- a/troposphere/synthetics.py\n+++ b/troposphere/synthetics.py\n@@ -47,7 +47,7 @@\n 'ExecutionRoleArn': (basestring, True),\n 'FailureRetentionPeriod': (integer, False),\n 'Name': (basestring, True),\n- 'RunConfig': (RunConfig, False),\n+ 'RunConfig': (RunConfig, True),\n 'RuntimeVersion': (canary_runtime_version, True),\n 'Schedule': (Schedule, True),\n 'StartCanaryAfterCreation': (boolean, True),\n", "issue": "implement AWS::Synthetics changes from May 14, 2020 update\n\n", "before_files": [{"content": "# Copyright (c) 2020, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, Tags\nfrom .validators import (integer, boolean, canary_runtime_version)\n\n\nclass VPCConfig(AWSProperty):\n props = {\n 'SecurityGroupIds': ([basestring], True),\n 'SubnetIds': ([basestring], True),\n 'VpcId': (basestring, False),\n }\n\n\nclass Schedule(AWSProperty):\n props = {\n 'DurationInSeconds': (basestring, True),\n 'Expression': (basestring, True),\n }\n\n\nclass RunConfig(AWSProperty):\n props = {\n 'TimeoutInSeconds': (integer, True),\n }\n\n\nclass Code(AWSProperty):\n props = {\n 'Handler': (basestring, False),\n 'S3Bucket': (basestring, False),\n 'S3Key': (basestring, False),\n 'S3ObjectVersion': (basestring, False),\n 'Script': (basestring, False),\n }\n\n\nclass Canary(AWSObject):\n resource_type = \"AWS::Synthetics::Canary\"\n\n props = {\n 'ArtifactS3Location': (basestring, True),\n 'Code': (Code, True),\n 'ExecutionRoleArn': (basestring, True),\n 'FailureRetentionPeriod': (integer, False),\n 'Name': (basestring, True),\n 'RunConfig': (RunConfig, False),\n 'RuntimeVersion': (canary_runtime_version, True),\n 'Schedule': (Schedule, True),\n 'StartCanaryAfterCreation': (boolean, True),\n 'SuccessRetentionPeriod': (integer, False),\n 'Tags': (Tags, False),\n 'VPCConfig': (VPCConfig, False),\n }\n", "path": "troposphere/synthetics.py"}]}
| 1,069 | 144 |
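For context on the row above: the patch makes `RunConfig` a required property of `AWS::Synthetics::Canary`. A sketch of how a canary might then be declared with troposphere — the bucket, role ARN, and runtime version strings below are placeholders, not values from the source:

```python
from troposphere import Template
from troposphere.synthetics import Canary, Code, RunConfig, Schedule

t = Template()
t.add_resource(Canary(
    "MyCanary",
    Name="my-canary",
    ArtifactS3Location="s3://my-bucket/canary-artifacts",           # placeholder
    Code=Code(Handler="index.handler", Script="// canary script"),
    ExecutionRoleArn="arn:aws:iam::123456789012:role/canary-role",  # placeholder
    RuntimeVersion="syn-1.0",                                       # assumed valid value
    Schedule=Schedule(DurationInSeconds="3600",
                      Expression="rate(5 minutes)"),
    RunConfig=RunConfig(TimeoutInSeconds=60),   # required after the patch
    StartCanaryAfterCreation=True,
))
print(t.to_json())
```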
gh_patches_debug_31969
|
rasdani/github-patches
|
git_diff
|
jazzband__pip-tools-299
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip-sync fails to uninstall if find-links are set
```
Usage:
pip uninstall [options] <package> ...
pip uninstall [options] -r <requirements file> ...
no such option: -f
Traceback (most recent call last):
File "/tmp/foo/bin/pip-sync", line 11, in <module>
sys.exit(cli())
File "/tmp/foo/local/lib/python2.7/site-packages/click/core.py", line 716, in __call__
return self.main(*args, **kwargs)
File "/tmp/foo/local/lib/python2.7/site-packages/click/core.py", line 696, in main
rv = self.invoke(ctx)
File "/tmp/foo/local/lib/python2.7/site-packages/click/core.py", line 889, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmp/foo/local/lib/python2.7/site-packages/click/core.py", line 534, in invoke
return callback(*args, **kwargs)
File "/tmp/foo/local/lib/python2.7/site-packages/piptools/scripts/sync.py", line 75, in cli
pip_flags=pip_flags))
File "/tmp/foo/local/lib/python2.7/site-packages/piptools/sync.py", line 137, in sync
check_call(['pip', 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['pip', 'uninstall', '-y', u'-f', u'https://ternaris.com/pypi', 'bagbunker', 'marv']' returned non-zero exit status 2
```
</issue>
<code>
[start of piptools/scripts/sync.py]
1 # coding: utf-8
2 from __future__ import (absolute_import, division, print_function,
3 unicode_literals)
4
5 import sys
6
7 import pip
8
9 # Make sure we're using a reasonably modern version of pip
10 if not tuple(int(digit) for digit in pip.__version__.split('.')[:2]) >= (6, 1):
11 print('pip-compile requires at least version 6.1 of pip ({} found), '
12 'perhaps run `pip install --upgrade pip`?'.format(pip.__version__))
13 sys.exit(4)
14
15 import os # noqa
16 from .. import click # noqa
17 from .. import sync # noqa
18 from ..exceptions import PipToolsError # noqa
19 from ..logging import log # noqa
20 from ..utils import flat_map # noqa
21
22 DEFAULT_REQUIREMENTS_FILE = 'requirements.txt'
23
24
25 @click.command()
26 @click.option('-n', '--dry-run', is_flag=True, help="Only show what would happen, don't change anything")
27 @click.option('--force', is_flag=True, help="Proceed even if conflicts are found")
28 @click.option('-f', '--find-links', multiple=True, help="Look for archives in this directory or on this HTML page", envvar='PIP_FIND_LINKS') # noqa
29 @click.option('-i', '--index-url', help="Change index URL (defaults to PyPI)", envvar='PIP_INDEX_URL')
30 @click.option('--extra-index-url', multiple=True, help="Add additional index URL to search", envvar='PIP_EXTRA_INDEX_URL') # noqa
31 @click.option('--no-index', is_flag=True, help="Ignore package index (only looking at --find-links URLs instead)")
32 @click.argument('src_files', required=False, type=click.Path(exists=True), nargs=-1)
33 def cli(dry_run, force, find_links, index_url, extra_index_url, no_index, src_files):
34 if not src_files:
35 if os.path.exists(DEFAULT_REQUIREMENTS_FILE):
36 src_files = (DEFAULT_REQUIREMENTS_FILE,)
37 else:
38 msg = 'No requirement files given and no {} found in the current directory'
39 log.error(msg.format(DEFAULT_REQUIREMENTS_FILE))
40 sys.exit(2)
41
42 if any(src_file.endswith('.in') for src_file in src_files):
43 msg = ('Some input files have the .in extension, which is most likely an error and can '
44 'cause weird behaviour. You probably meant to use the corresponding *.txt file?')
45 if force:
46 log.warning('WARNING: ' + msg)
47 else:
48 log.error('ERROR: ' + msg)
49 sys.exit(2)
50
51 requirements = flat_map(lambda src: pip.req.parse_requirements(src, session=True),
52 src_files)
53
54 try:
55 requirements = sync.merge(requirements, ignore_conflicts=force)
56 except PipToolsError as e:
57 log.error(str(e))
58 sys.exit(2)
59
60 installed_dists = pip.get_installed_distributions(skip=[])
61 to_install, to_uninstall = sync.diff(requirements, installed_dists)
62
63 pip_flags = []
64 for link in find_links or []:
65 pip_flags.extend(['-f', link])
66 if no_index:
67 pip_flags.append('--no-index')
68 if index_url:
69 pip_flags.extend(['-i', index_url])
70 if extra_index_url:
71 for extra_index in extra_index_url:
72 pip_flags.extend(['--extra-index-url', extra_index])
73
74 sys.exit(sync.sync(to_install, to_uninstall, verbose=True, dry_run=dry_run,
75 pip_flags=pip_flags))
76
[end of piptools/scripts/sync.py]
[start of piptools/sync.py]
1 import collections
2 from subprocess import check_call
3
4 from . import click
5 from .exceptions import IncompatibleRequirements, UnsupportedConstraint
6 from .utils import flat_map
7
8 PACKAGES_TO_IGNORE = [
9 'pip',
10 'pip-tools',
11 'pip-review',
12 'setuptools',
13 'wheel',
14 ]
15
16
17 def dependency_tree(installed_keys, root_key):
18 """
19 Calculate the dependency tree for the package `root_key` and return
20 a collection of all its dependencies. Uses a DFS traversal algorithm.
21
22 `installed_keys` should be a {key: requirement} mapping, e.g.
23 {'django': from_line('django==1.8')}
24 `root_key` should be the key to return the dependency tree for.
25 """
26 dependencies = set()
27 queue = collections.deque()
28
29 if root_key in installed_keys:
30 dep = installed_keys[root_key]
31 queue.append(dep)
32
33 while queue:
34 v = queue.popleft()
35 if v.key in dependencies:
36 continue
37
38 dependencies.add(v.key)
39
40 for dep_specifier in v.requires():
41 dep_name = dep_specifier.key
42 if dep_name in installed_keys:
43 dep = installed_keys[dep_name]
44
45 if dep_specifier.specifier.contains(dep.version):
46 queue.append(dep)
47
48 return dependencies
49
50
51 def get_dists_to_ignore(installed):
52 """
53 Returns a collection of package names to ignore when performing pip-sync,
54 based on the currently installed environment. For example, when pip-tools
55 is installed in the local environment, it should be ignored, including all
56 of its dependencies (e.g. click). When pip-tools is not installed
57 locally, click should also be installed/uninstalled depending on the given
58 requirements.
59 """
60 installed_keys = {r.key: r for r in installed}
61 return list(flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE))
62
63
64 def merge(requirements, ignore_conflicts):
65 by_key = {}
66
67 for ireq in requirements:
68 if ireq.link is not None and not ireq.editable:
69 msg = ('pip-compile does not support URLs as packages, unless they are editable. '
70 'Perhaps add -e option?')
71 raise UnsupportedConstraint(msg, ireq)
72
73 key = ireq.link or ireq.req.key
74
75 if not ignore_conflicts:
76 existing_ireq = by_key.get(key)
77 if existing_ireq:
78 # NOTE: We check equality here since we can assume that the
79 # requirements are all pinned
80 if ireq.specifier != existing_ireq.specifier:
81 raise IncompatibleRequirements(ireq, existing_ireq)
82
83 # TODO: Always pick the largest specifier in case of a conflict
84 by_key[key] = ireq
85
86 return by_key.values()
87
88
89 def diff(compiled_requirements, installed_dists):
90 """
91 Calculate which packages should be installed or uninstalled, given a set
92 of compiled requirements and a list of currently installed modules.
93 """
94 requirements_lut = {r.link or r.req.key: r for r in compiled_requirements}
95
96 satisfied = set() # holds keys
97 to_install = set() # holds keys-and-versions
98 to_uninstall = set() # holds keys
99
100 pkgs_to_ignore = get_dists_to_ignore(installed_dists)
101 for dist in installed_dists:
102 key = dist.key
103 if key not in requirements_lut:
104 to_uninstall.add(dist.key)
105 elif requirements_lut[key].specifier.contains(dist.version):
106 satisfied.add(key)
107
108 for key, requirement in requirements_lut.items():
109 if key not in satisfied:
110 to_install.add(str(requirement.link or requirement.req))
111
112 # Make sure to not uninstall any packages that should be ignored
113 to_uninstall -= set(pkgs_to_ignore)
114
115 return (to_install, to_uninstall)
116
117
118 def sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None):
119 """
120 Install and uninstalls the given sets of modules.
121 """
122 if not to_uninstall and not to_install:
123 click.echo("Everything up-to-date")
124
125 if pip_flags is None:
126 pip_flags = []
127
128 if not verbose:
129 pip_flags += ['-q']
130
131 if to_uninstall:
132 if dry_run:
133 click.echo("Would uninstall:")
134 for pkg in to_uninstall:
135 click.echo(" {}".format(pkg))
136 else:
137 check_call(['pip', 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))
138
139 if to_install:
140 if dry_run:
141 click.echo("Would install:")
142 for pkg in to_install:
143 click.echo(" {}".format(pkg))
144 else:
145 check_call(['pip', 'install'] + pip_flags + sorted(to_install))
146 return 0
147
[end of piptools/sync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/piptools/scripts/sync.py b/piptools/scripts/sync.py
--- a/piptools/scripts/sync.py
+++ b/piptools/scripts/sync.py
@@ -61,8 +61,9 @@
to_install, to_uninstall = sync.diff(requirements, installed_dists)
pip_flags = []
+ install_flags = []
for link in find_links or []:
- pip_flags.extend(['-f', link])
+ install_flags.extend(['-f', link])
if no_index:
pip_flags.append('--no-index')
if index_url:
@@ -72,4 +73,4 @@
pip_flags.extend(['--extra-index-url', extra_index])
sys.exit(sync.sync(to_install, to_uninstall, verbose=True, dry_run=dry_run,
- pip_flags=pip_flags))
+ pip_flags=pip_flags, install_flags=install_flags))
diff --git a/piptools/sync.py b/piptools/sync.py
--- a/piptools/sync.py
+++ b/piptools/sync.py
@@ -115,7 +115,7 @@
return (to_install, to_uninstall)
-def sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None):
+def sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None, install_flags=None):
"""
Install and uninstalls the given sets of modules.
"""
@@ -137,10 +137,12 @@
check_call(['pip', 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))
if to_install:
+ if install_flags is None:
+ install_flags = []
if dry_run:
click.echo("Would install:")
for pkg in to_install:
click.echo(" {}".format(pkg))
else:
- check_call(['pip', 'install'] + pip_flags + sorted(to_install))
+ check_call(['pip', 'install'] + pip_flags + install_flags + sorted(to_install))
return 0
|
{"golden_diff": "diff --git a/piptools/scripts/sync.py b/piptools/scripts/sync.py\n--- a/piptools/scripts/sync.py\n+++ b/piptools/scripts/sync.py\n@@ -61,8 +61,9 @@\n to_install, to_uninstall = sync.diff(requirements, installed_dists)\n \n pip_flags = []\n+ install_flags = []\n for link in find_links or []:\n- pip_flags.extend(['-f', link])\n+ install_flags.extend(['-f', link])\n if no_index:\n pip_flags.append('--no-index')\n if index_url:\n@@ -72,4 +73,4 @@\n pip_flags.extend(['--extra-index-url', extra_index])\n \n sys.exit(sync.sync(to_install, to_uninstall, verbose=True, dry_run=dry_run,\n- pip_flags=pip_flags))\n+ pip_flags=pip_flags, install_flags=install_flags))\ndiff --git a/piptools/sync.py b/piptools/sync.py\n--- a/piptools/sync.py\n+++ b/piptools/sync.py\n@@ -115,7 +115,7 @@\n return (to_install, to_uninstall)\n \n \n-def sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None):\n+def sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None, install_flags=None):\n \"\"\"\n Install and uninstalls the given sets of modules.\n \"\"\"\n@@ -137,10 +137,12 @@\n check_call(['pip', 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))\n \n if to_install:\n+ if install_flags is None:\n+ install_flags = []\n if dry_run:\n click.echo(\"Would install:\")\n for pkg in to_install:\n click.echo(\" {}\".format(pkg))\n else:\n- check_call(['pip', 'install'] + pip_flags + sorted(to_install))\n+ check_call(['pip', 'install'] + pip_flags + install_flags + sorted(to_install))\n return 0\n", "issue": "pip-sync fails to uninstall if find-links are set\n```\n\nUsage: \n pip uninstall [options] <package> ...\n pip uninstall [options] -r <requirements file> ...\n\nno such option: -f\nTraceback (most recent call last):\n File \"/tmp/foo/bin/pip-sync\", line 11, in <module>\n sys.exit(cli())\n File \"/tmp/foo/local/lib/python2.7/site-packages/click/core.py\", line 716, in __call__\n return self.main(*args, **kwargs)\n File \"/tmp/foo/local/lib/python2.7/site-packages/click/core.py\", line 696, in main\n rv = self.invoke(ctx)\n File \"/tmp/foo/local/lib/python2.7/site-packages/click/core.py\", line 889, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File \"/tmp/foo/local/lib/python2.7/site-packages/click/core.py\", line 534, in invoke\n return callback(*args, **kwargs)\n File \"/tmp/foo/local/lib/python2.7/site-packages/piptools/scripts/sync.py\", line 75, in cli\n pip_flags=pip_flags))\n File \"/tmp/foo/local/lib/python2.7/site-packages/piptools/sync.py\", line 137, in sync\n check_call(['pip', 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))\n File \"/usr/lib/python2.7/subprocess.py\", line 540, in check_call\n raise CalledProcessError(retcode, cmd)\nsubprocess.CalledProcessError: Command '['pip', 'uninstall', '-y', u'-f', u'https://ternaris.com/pypi', 'bagbunker', 'marv']' returned non-zero exit status 2\n```\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\n\nimport sys\n\nimport pip\n\n# Make sure we're using a reasonably modern version of pip\nif not tuple(int(digit) for digit in pip.__version__.split('.')[:2]) >= (6, 1):\n print('pip-compile requires at least version 6.1 of pip ({} found), '\n 'perhaps run `pip install --upgrade pip`?'.format(pip.__version__))\n sys.exit(4)\n\nimport os # noqa\nfrom .. import click # noqa\nfrom .. 
import sync # noqa\nfrom ..exceptions import PipToolsError # noqa\nfrom ..logging import log # noqa\nfrom ..utils import flat_map # noqa\n\nDEFAULT_REQUIREMENTS_FILE = 'requirements.txt'\n\n\[email protected]()\[email protected]('-n', '--dry-run', is_flag=True, help=\"Only show what would happen, don't change anything\")\[email protected]('--force', is_flag=True, help=\"Proceed even if conflicts are found\")\[email protected]('-f', '--find-links', multiple=True, help=\"Look for archives in this directory or on this HTML page\", envvar='PIP_FIND_LINKS') # noqa\[email protected]('-i', '--index-url', help=\"Change index URL (defaults to PyPI)\", envvar='PIP_INDEX_URL')\[email protected]('--extra-index-url', multiple=True, help=\"Add additional index URL to search\", envvar='PIP_EXTRA_INDEX_URL') # noqa\[email protected]('--no-index', is_flag=True, help=\"Ignore package index (only looking at --find-links URLs instead)\")\[email protected]('src_files', required=False, type=click.Path(exists=True), nargs=-1)\ndef cli(dry_run, force, find_links, index_url, extra_index_url, no_index, src_files):\n if not src_files:\n if os.path.exists(DEFAULT_REQUIREMENTS_FILE):\n src_files = (DEFAULT_REQUIREMENTS_FILE,)\n else:\n msg = 'No requirement files given and no {} found in the current directory'\n log.error(msg.format(DEFAULT_REQUIREMENTS_FILE))\n sys.exit(2)\n\n if any(src_file.endswith('.in') for src_file in src_files):\n msg = ('Some input files have the .in extension, which is most likely an error and can '\n 'cause weird behaviour. You probably meant to use the corresponding *.txt file?')\n if force:\n log.warning('WARNING: ' + msg)\n else:\n log.error('ERROR: ' + msg)\n sys.exit(2)\n\n requirements = flat_map(lambda src: pip.req.parse_requirements(src, session=True),\n src_files)\n\n try:\n requirements = sync.merge(requirements, ignore_conflicts=force)\n except PipToolsError as e:\n log.error(str(e))\n sys.exit(2)\n\n installed_dists = pip.get_installed_distributions(skip=[])\n to_install, to_uninstall = sync.diff(requirements, installed_dists)\n\n pip_flags = []\n for link in find_links or []:\n pip_flags.extend(['-f', link])\n if no_index:\n pip_flags.append('--no-index')\n if index_url:\n pip_flags.extend(['-i', index_url])\n if extra_index_url:\n for extra_index in extra_index_url:\n pip_flags.extend(['--extra-index-url', extra_index])\n\n sys.exit(sync.sync(to_install, to_uninstall, verbose=True, dry_run=dry_run,\n pip_flags=pip_flags))\n", "path": "piptools/scripts/sync.py"}, {"content": "import collections\nfrom subprocess import check_call\n\nfrom . import click\nfrom .exceptions import IncompatibleRequirements, UnsupportedConstraint\nfrom .utils import flat_map\n\nPACKAGES_TO_IGNORE = [\n 'pip',\n 'pip-tools',\n 'pip-review',\n 'setuptools',\n 'wheel',\n]\n\n\ndef dependency_tree(installed_keys, root_key):\n \"\"\"\n Calculate the dependency tree for the package `root_key` and return\n a collection of all its dependencies. 
Uses a DFS traversal algorithm.\n\n `installed_keys` should be a {key: requirement} mapping, e.g.\n {'django': from_line('django==1.8')}\n `root_key` should be the key to return the dependency tree for.\n \"\"\"\n dependencies = set()\n queue = collections.deque()\n\n if root_key in installed_keys:\n dep = installed_keys[root_key]\n queue.append(dep)\n\n while queue:\n v = queue.popleft()\n if v.key in dependencies:\n continue\n\n dependencies.add(v.key)\n\n for dep_specifier in v.requires():\n dep_name = dep_specifier.key\n if dep_name in installed_keys:\n dep = installed_keys[dep_name]\n\n if dep_specifier.specifier.contains(dep.version):\n queue.append(dep)\n\n return dependencies\n\n\ndef get_dists_to_ignore(installed):\n \"\"\"\n Returns a collection of package names to ignore when performing pip-sync,\n based on the currently installed environment. For example, when pip-tools\n is installed in the local environment, it should be ignored, including all\n of its dependencies (e.g. click). When pip-tools is not installed\n locally, click should also be installed/uninstalled depending on the given\n requirements.\n \"\"\"\n installed_keys = {r.key: r for r in installed}\n return list(flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE))\n\n\ndef merge(requirements, ignore_conflicts):\n by_key = {}\n\n for ireq in requirements:\n if ireq.link is not None and not ireq.editable:\n msg = ('pip-compile does not support URLs as packages, unless they are editable. '\n 'Perhaps add -e option?')\n raise UnsupportedConstraint(msg, ireq)\n\n key = ireq.link or ireq.req.key\n\n if not ignore_conflicts:\n existing_ireq = by_key.get(key)\n if existing_ireq:\n # NOTE: We check equality here since we can assume that the\n # requirements are all pinned\n if ireq.specifier != existing_ireq.specifier:\n raise IncompatibleRequirements(ireq, existing_ireq)\n\n # TODO: Always pick the largest specifier in case of a conflict\n by_key[key] = ireq\n\n return by_key.values()\n\n\ndef diff(compiled_requirements, installed_dists):\n \"\"\"\n Calculate which packages should be installed or uninstalled, given a set\n of compiled requirements and a list of currently installed modules.\n \"\"\"\n requirements_lut = {r.link or r.req.key: r for r in compiled_requirements}\n\n satisfied = set() # holds keys\n to_install = set() # holds keys-and-versions\n to_uninstall = set() # holds keys\n\n pkgs_to_ignore = get_dists_to_ignore(installed_dists)\n for dist in installed_dists:\n key = dist.key\n if key not in requirements_lut:\n to_uninstall.add(dist.key)\n elif requirements_lut[key].specifier.contains(dist.version):\n satisfied.add(key)\n\n for key, requirement in requirements_lut.items():\n if key not in satisfied:\n to_install.add(str(requirement.link or requirement.req))\n\n # Make sure to not uninstall any packages that should be ignored\n to_uninstall -= set(pkgs_to_ignore)\n\n return (to_install, to_uninstall)\n\n\ndef sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None):\n \"\"\"\n Install and uninstalls the given sets of modules.\n \"\"\"\n if not to_uninstall and not to_install:\n click.echo(\"Everything up-to-date\")\n\n if pip_flags is None:\n pip_flags = []\n\n if not verbose:\n pip_flags += ['-q']\n\n if to_uninstall:\n if dry_run:\n click.echo(\"Would uninstall:\")\n for pkg in to_uninstall:\n click.echo(\" {}\".format(pkg))\n else:\n check_call(['pip', 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))\n\n if to_install:\n if dry_run:\n 
click.echo(\"Would install:\")\n for pkg in to_install:\n click.echo(\" {}\".format(pkg))\n else:\n check_call(['pip', 'install'] + pip_flags + sorted(to_install))\n return 0\n", "path": "piptools/sync.py"}]}
| 3,235 | 453 |
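The pip-tools patch above exists because `pip uninstall` rejects `-f/--find-links`; such options are only meaningful for `pip install`, so the fix passes them separately. A simplified sketch of that flag split (condensed from the diff, not a verbatim copy):

```python
from subprocess import check_call

def sync(to_install, to_uninstall, pip_flags=None, install_flags=None):
    pip_flags = pip_flags or []          # flags passed to both commands, e.g. ['-q']
    install_flags = install_flags or []  # install-only flags, e.g. ['-f', 'https://my.index/simple']
    if to_uninstall:
        # `pip uninstall` would error out on -f/--find-links, so it never sees them.
        check_call(['pip', 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))
    if to_install:
        check_call(['pip', 'install'] + pip_flags + install_flags + sorted(to_install))
```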
gh_patches_debug_38141
|
rasdani/github-patches
|
git_diff
|
apache__tvm-2921
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[TEST][TENSORFLOW] Cache the Downloaded File
So far, the model files used in the TF end-to-end tests are re-downloaded on every test run.
This makes test execution slow, and the test server can eventually be blocked by the site that hosts the data source.
We need to change the implementation to cache the files locally and only re-download them when necessary.
cc @srkreddy1238 @icemelon9
</issue>
<code>
[start of python/tvm/contrib/download.py]
1 """Helper utility for downloading"""
2 from __future__ import print_function
3 from __future__ import absolute_import as _abs
4
5 import os
6 import sys
7 import time
8
9 def download(url, path, overwrite=False, size_compare=False, verbose=1):
10 """Downloads the file from the internet.
11 Set the input options correctly to overwrite or do the size comparison
12
13 Parameters
14 ----------
15 url : str
16 Download url.
17
18 path : str
19 Local file path to save downloaded file
20
21 overwrite : bool, optional
22 Whether to overwrite existing file
23
24 size_compare : bool, optional
25 Whether to do size compare to check downloaded file.
26
27 verbose: int, optional
28 Verbose level
29 """
30 if sys.version_info >= (3,):
31 import urllib.request as urllib2
32 else:
33 import urllib2
34
35 if os.path.isfile(path) and not overwrite:
36 if size_compare:
37 import requests
38 file_size = os.path.getsize(path)
39 res_head = requests.head(url)
40 res_get = requests.get(url, stream=True)
41 if 'Content-Length' not in res_head.headers:
42 res_get = urllib2.urlopen(url)
43 url_file_size = int(res_get.headers['Content-Length'])
44 if url_file_size != file_size:
45 print("exist file got corrupted, downloading %s file freshly..." % path)
46 download(url, path, True, False)
47 return
48 print('File {} exists, skip.'.format(path))
49 return
50
51 if verbose >= 1:
52 print('Downloading from url {} to {}'.format(url, path))
53
54 # Stateful start time
55 start_time = time.time()
56
57 def _download_progress(count, block_size, total_size):
58 #pylint: disable=unused-argument
59 """Show the download progress.
60 """
61 if count == 0:
62 return
63 duration = time.time() - start_time
64 progress_size = int(count * block_size)
65 speed = int(progress_size / (1024 * duration))
66 percent = min(int(count * block_size * 100 / total_size), 100)
67 sys.stdout.write("\r...%d%%, %.2f MB, %d KB/s, %d seconds passed" %
68 (percent, progress_size / (1024.0 * 1024), speed, duration))
69 sys.stdout.flush()
70
71 if sys.version_info >= (3,):
72 urllib2.urlretrieve(url, path, reporthook=_download_progress)
73 print("")
74 else:
75 f = urllib2.urlopen(url)
76 data = f.read()
77 with open(path, "wb") as code:
78 code.write(data)
79
[end of python/tvm/contrib/download.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/tvm/contrib/download.py b/python/tvm/contrib/download.py
--- a/python/tvm/contrib/download.py
+++ b/python/tvm/contrib/download.py
@@ -5,8 +5,10 @@
import os
import sys
import time
+import uuid
+import shutil
-def download(url, path, overwrite=False, size_compare=False, verbose=1):
+def download(url, path, overwrite=False, size_compare=False, verbose=1, retries=3):
"""Downloads the file from the internet.
Set the input options correctly to overwrite or do the size comparison
@@ -53,6 +55,11 @@
# Stateful start time
start_time = time.time()
+ dirpath = os.path.dirname(path)
+ if not os.path.isdir(dirpath):
+ os.makedirs(dirpath)
+ random_uuid = str(uuid.uuid4())
+ tempfile = os.path.join(dirpath, random_uuid)
def _download_progress(count, block_size, total_size):
#pylint: disable=unused-argument
@@ -68,11 +75,62 @@
(percent, progress_size / (1024.0 * 1024), speed, duration))
sys.stdout.flush()
- if sys.version_info >= (3,):
- urllib2.urlretrieve(url, path, reporthook=_download_progress)
- print("")
+ while retries >= 0:
+ # Disable pyling too broad Exception
+ # pylint: disable=W0703
+ try:
+ if sys.version_info >= (3,):
+ urllib2.urlretrieve(url, tempfile, reporthook=_download_progress)
+ print("")
+ else:
+ f = urllib2.urlopen(url)
+ data = f.read()
+ with open(tempfile, "wb") as code:
+ code.write(data)
+ shutil.move(tempfile, path)
+ break
+ except Exception as err:
+ retries -= 1
+ if retries == 0:
+ os.remove(tempfile)
+ raise err
+ else:
+ print("download failed due to {}, retrying, {} attempt{} left"
+ .format(repr(err), retries, 's' if retries > 1 else ''))
+
+
+TEST_DATA_ROOT_PATH = os.path.join(os.path.expanduser('~'), '.tvm_test_data')
+if not os.path.exists(TEST_DATA_ROOT_PATH):
+ os.mkdir(TEST_DATA_ROOT_PATH)
+
+def download_testdata(url, relpath, module=None):
+ """Downloads the test data from the internet.
+
+ Parameters
+ ----------
+ url : str
+ Download url.
+
+ relpath : str
+ Relative file path.
+
+ module : Union[str, list, tuple], optional
+ Subdirectory paths under test data folder.
+
+ Returns
+ -------
+ abspath : str
+ Absolute file path of downloaded file
+ """
+ global TEST_DATA_ROOT_PATH
+ if module is None:
+ module_path = ''
+ elif isinstance(module, str):
+ module_path = module
+ elif isinstance(module, (list, tuple)):
+ module_path = os.path.join(*module)
else:
- f = urllib2.urlopen(url)
- data = f.read()
- with open(path, "wb") as code:
- code.write(data)
+ raise ValueError("Unsupported module: " + module)
+ abspath = os.path.join(TEST_DATA_ROOT_PATH, module_path, relpath)
+ download(url, abspath, overwrite=False, size_compare=True)
+ return abspath
|
{"golden_diff": "diff --git a/python/tvm/contrib/download.py b/python/tvm/contrib/download.py\n--- a/python/tvm/contrib/download.py\n+++ b/python/tvm/contrib/download.py\n@@ -5,8 +5,10 @@\n import os\n import sys\n import time\n+import uuid\n+import shutil\n \n-def download(url, path, overwrite=False, size_compare=False, verbose=1):\n+def download(url, path, overwrite=False, size_compare=False, verbose=1, retries=3):\n \"\"\"Downloads the file from the internet.\n Set the input options correctly to overwrite or do the size comparison\n \n@@ -53,6 +55,11 @@\n \n # Stateful start time\n start_time = time.time()\n+ dirpath = os.path.dirname(path)\n+ if not os.path.isdir(dirpath):\n+ os.makedirs(dirpath)\n+ random_uuid = str(uuid.uuid4())\n+ tempfile = os.path.join(dirpath, random_uuid)\n \n def _download_progress(count, block_size, total_size):\n #pylint: disable=unused-argument\n@@ -68,11 +75,62 @@\n (percent, progress_size / (1024.0 * 1024), speed, duration))\n sys.stdout.flush()\n \n- if sys.version_info >= (3,):\n- urllib2.urlretrieve(url, path, reporthook=_download_progress)\n- print(\"\")\n+ while retries >= 0:\n+ # Disable pyling too broad Exception\n+ # pylint: disable=W0703\n+ try:\n+ if sys.version_info >= (3,):\n+ urllib2.urlretrieve(url, tempfile, reporthook=_download_progress)\n+ print(\"\")\n+ else:\n+ f = urllib2.urlopen(url)\n+ data = f.read()\n+ with open(tempfile, \"wb\") as code:\n+ code.write(data)\n+ shutil.move(tempfile, path)\n+ break\n+ except Exception as err:\n+ retries -= 1\n+ if retries == 0:\n+ os.remove(tempfile)\n+ raise err\n+ else:\n+ print(\"download failed due to {}, retrying, {} attempt{} left\"\n+ .format(repr(err), retries, 's' if retries > 1 else ''))\n+\n+\n+TEST_DATA_ROOT_PATH = os.path.join(os.path.expanduser('~'), '.tvm_test_data')\n+if not os.path.exists(TEST_DATA_ROOT_PATH):\n+ os.mkdir(TEST_DATA_ROOT_PATH)\n+\n+def download_testdata(url, relpath, module=None):\n+ \"\"\"Downloads the test data from the internet.\n+\n+ Parameters\n+ ----------\n+ url : str\n+ Download url.\n+\n+ relpath : str\n+ Relative file path.\n+\n+ module : Union[str, list, tuple], optional\n+ Subdirectory paths under test data folder.\n+\n+ Returns\n+ -------\n+ abspath : str\n+ Absolute file path of downloaded file\n+ \"\"\"\n+ global TEST_DATA_ROOT_PATH\n+ if module is None:\n+ module_path = ''\n+ elif isinstance(module, str):\n+ module_path = module\n+ elif isinstance(module, (list, tuple)):\n+ module_path = os.path.join(*module)\n else:\n- f = urllib2.urlopen(url)\n- data = f.read()\n- with open(path, \"wb\") as code:\n- code.write(data)\n+ raise ValueError(\"Unsupported module: \" + module)\n+ abspath = os.path.join(TEST_DATA_ROOT_PATH, module_path, relpath)\n+ download(url, abspath, overwrite=False, size_compare=True)\n+ return abspath\n", "issue": "[TEST][TENSORFLOW] Cache the Downloaded File\nSo far the model files used in the TF end to end tests re-download the file in each test run. \r\nThis causes the test execution to be slow. 
Eventually, the test server can be blocked by the place that hosts the data-source.\r\n\r\nWe need to change the implementation to cache to local and only re-download the file if necessary.\r\n\r\ncc @srkreddy1238 @icemelon9 \n", "before_files": [{"content": "\"\"\"Helper utility for downloading\"\"\"\nfrom __future__ import print_function\nfrom __future__ import absolute_import as _abs\n\nimport os\nimport sys\nimport time\n\ndef download(url, path, overwrite=False, size_compare=False, verbose=1):\n \"\"\"Downloads the file from the internet.\n Set the input options correctly to overwrite or do the size comparison\n\n Parameters\n ----------\n url : str\n Download url.\n\n path : str\n Local file path to save downloaded file\n\n overwrite : bool, optional\n Whether to overwrite existing file\n\n size_compare : bool, optional\n Whether to do size compare to check downloaded file.\n\n verbose: int, optional\n Verbose level\n \"\"\"\n if sys.version_info >= (3,):\n import urllib.request as urllib2\n else:\n import urllib2\n\n if os.path.isfile(path) and not overwrite:\n if size_compare:\n import requests\n file_size = os.path.getsize(path)\n res_head = requests.head(url)\n res_get = requests.get(url, stream=True)\n if 'Content-Length' not in res_head.headers:\n res_get = urllib2.urlopen(url)\n url_file_size = int(res_get.headers['Content-Length'])\n if url_file_size != file_size:\n print(\"exist file got corrupted, downloading %s file freshly...\" % path)\n download(url, path, True, False)\n return\n print('File {} exists, skip.'.format(path))\n return\n\n if verbose >= 1:\n print('Downloading from url {} to {}'.format(url, path))\n\n # Stateful start time\n start_time = time.time()\n\n def _download_progress(count, block_size, total_size):\n #pylint: disable=unused-argument\n \"\"\"Show the download progress.\n \"\"\"\n if count == 0:\n return\n duration = time.time() - start_time\n progress_size = int(count * block_size)\n speed = int(progress_size / (1024 * duration))\n percent = min(int(count * block_size * 100 / total_size), 100)\n sys.stdout.write(\"\\r...%d%%, %.2f MB, %d KB/s, %d seconds passed\" %\n (percent, progress_size / (1024.0 * 1024), speed, duration))\n sys.stdout.flush()\n\n if sys.version_info >= (3,):\n urllib2.urlretrieve(url, path, reporthook=_download_progress)\n print(\"\")\n else:\n f = urllib2.urlopen(url)\n data = f.read()\n with open(path, \"wb\") as code:\n code.write(data)\n", "path": "python/tvm/contrib/download.py"}]}
| 1,364 | 803 |
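With the `download_testdata` helper introduced in the row above, test files land under `~/.tvm_test_data` and are reused across runs. A hedged sketch of how a TensorFlow test might call it — the URL, file name, and module path below are placeholders:

```python
from tvm.contrib.download import download_testdata

model_url = "https://example.com/models/mobilenet_v1.pb"   # placeholder URL
# Downloads once, caches under ~/.tvm_test_data/tf/official/, and only
# re-downloads if the size check against the remote file fails.
model_path = download_testdata(model_url, "mobilenet_v1.pb",
                               module=["tf", "official"])
with open(model_path, "rb") as f:
    graph_def_bytes = f.read()
```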
gh_patches_debug_25671
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-949
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug when reading arrays in configfile
Arrays (longer than a few elements) can be written out using configfile.writeConfigFile, but then cannot be read back in using configfile.readConfigFile.
This seems to be because `repr(array)` produces a string with line breaks in it. While `eval(repr(array))` works just fine, configfile chokes while reading it in because it parses each line separately, and we end up with a SyntaxError about unexpected EOF.
To reproduce:
```
>>> from numpy import *
>>> from pyqtgraph import configfile
>>> arr = arange(20)
>>> arr
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19])
>>> configfile.writeConfigFile({'arr':arr}, 'arraytest.cfg')
>>> configfile.readConfigFile('arraytest.cfg')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pyqtgraph/configfile.py", line 64, in readConfigFile
data = parseString(s)[1]
File "pyqtgraph/configfile.py", line 168, in parseString
raise ParseError("Error evaluating expression '%s': [%s: %s]" % (v, ex.__class__.__name__, str(ex)), (ln+1), l)
pyqtgraph.configfile.ParseError: Error parsing config file '/Users/mbkratz/Code/pyqtgraph/arraytest.cfg' at line 1:
arr: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
Error evaluating expression 'array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,': [SyntaxError: unexpected EOF while parsing (<string>, line 1)]
>>>
```
</issue>
<code>
[start of pyqtgraph/configfile.py]
1 # -*- coding: utf-8 -*-
2 """
3 configfile.py - Human-readable text configuration file library
4 Copyright 2010 Luke Campagnola
5 Distributed under MIT/X11 license. See license.txt for more information.
6
7 Used for reading and writing dictionary objects to a python-like configuration
8 file format. Data structures may be nested and contain any data type as long
9 as it can be converted to/from a string using repr and eval.
10 """
11
12 import re, os, sys, datetime
13 import numpy
14 from .pgcollections import OrderedDict
15 from . import units
16 from .python2_3 import asUnicode, basestring
17 from .Qt import QtCore
18 from .Point import Point
19 from .colormap import ColorMap
20 GLOBAL_PATH = None # so not thread safe.
21
22
23 class ParseError(Exception):
24 def __init__(self, message, lineNum, line, fileName=None):
25 self.lineNum = lineNum
26 self.line = line
27 #self.message = message
28 self.fileName = fileName
29 Exception.__init__(self, message)
30
31 def __str__(self):
32 if self.fileName is None:
33 msg = "Error parsing string at line %d:\n" % self.lineNum
34 else:
35 msg = "Error parsing config file '%s' at line %d:\n" % (self.fileName, self.lineNum)
36 msg += "%s\n%s" % (self.line, self.message)
37 return msg
38 #raise Exception()
39
40
41 def writeConfigFile(data, fname):
42 s = genString(data)
43 fd = open(fname, 'w')
44 fd.write(s)
45 fd.close()
46
47 def readConfigFile(fname):
48 #cwd = os.getcwd()
49 global GLOBAL_PATH
50 if GLOBAL_PATH is not None:
51 fname2 = os.path.join(GLOBAL_PATH, fname)
52 if os.path.exists(fname2):
53 fname = fname2
54
55 GLOBAL_PATH = os.path.dirname(os.path.abspath(fname))
56
57 try:
58 #os.chdir(newDir) ## bad.
59 fd = open(fname)
60 s = asUnicode(fd.read())
61 fd.close()
62 s = s.replace("\r\n", "\n")
63 s = s.replace("\r", "\n")
64 data = parseString(s)[1]
65 except ParseError:
66 sys.exc_info()[1].fileName = fname
67 raise
68 except:
69 print("Error while reading config file %s:"% fname)
70 raise
71 #finally:
72 #os.chdir(cwd)
73 return data
74
75 def appendConfigFile(data, fname):
76 s = genString(data)
77 fd = open(fname, 'a')
78 fd.write(s)
79 fd.close()
80
81
82 def genString(data, indent=''):
83 s = ''
84 for k in data:
85 sk = str(k)
86 if len(sk) == 0:
87 print(data)
88 raise Exception('blank dict keys not allowed (see data above)')
89 if sk[0] == ' ' or ':' in sk:
90 print(data)
91 raise Exception('dict keys must not contain ":" or start with spaces [offending key is "%s"]' % sk)
92 if isinstance(data[k], dict):
93 s += indent + sk + ':\n'
94 s += genString(data[k], indent + ' ')
95 else:
96 s += indent + sk + ': ' + repr(data[k]) + '\n'
97 return s
98
99 def parseString(lines, start=0):
100
101 data = OrderedDict()
102 if isinstance(lines, basestring):
103 lines = lines.split('\n')
104 lines = [l for l in lines if re.search(r'\S', l) and not re.match(r'\s*#', l)] ## remove empty lines
105
106 indent = measureIndent(lines[start])
107 ln = start - 1
108
109 try:
110 while True:
111 ln += 1
112 #print ln
113 if ln >= len(lines):
114 break
115
116 l = lines[ln]
117
118 ## Skip blank lines or lines starting with #
119 if re.match(r'\s*#', l) or not re.search(r'\S', l):
120 continue
121
122 ## Measure line indentation, make sure it is correct for this level
123 lineInd = measureIndent(l)
124 if lineInd < indent:
125 ln -= 1
126 break
127 if lineInd > indent:
128 #print lineInd, indent
129 raise ParseError('Indentation is incorrect. Expected %d, got %d' % (indent, lineInd), ln+1, l)
130
131
132 if ':' not in l:
133 raise ParseError('Missing colon', ln+1, l)
134
135 (k, p, v) = l.partition(':')
136 k = k.strip()
137 v = v.strip()
138
139 ## set up local variables to use for eval
140 local = units.allUnits.copy()
141 local['OrderedDict'] = OrderedDict
142 local['readConfigFile'] = readConfigFile
143 local['Point'] = Point
144 local['QtCore'] = QtCore
145 local['ColorMap'] = ColorMap
146 local['datetime'] = datetime
147 # Needed for reconstructing numpy arrays
148 local['array'] = numpy.array
149 for dtype in ['int8', 'uint8',
150 'int16', 'uint16', 'float16',
151 'int32', 'uint32', 'float32',
152 'int64', 'uint64', 'float64']:
153 local[dtype] = getattr(numpy, dtype)
154
155 if len(k) < 1:
156 raise ParseError('Missing name preceding colon', ln+1, l)
157 if k[0] == '(' and k[-1] == ')': ## If the key looks like a tuple, try evaluating it.
158 try:
159 k1 = eval(k, local)
160 if type(k1) is tuple:
161 k = k1
162 except:
163 pass
164 if re.search(r'\S', v) and v[0] != '#': ## eval the value
165 try:
166 val = eval(v, local)
167 except:
168 ex = sys.exc_info()[1]
169 raise ParseError("Error evaluating expression '%s': [%s: %s]" % (v, ex.__class__.__name__, str(ex)), (ln+1), l)
170 else:
171 if ln+1 >= len(lines) or measureIndent(lines[ln+1]) <= indent:
172 #print "blank dict"
173 val = {}
174 else:
175 #print "Going deeper..", ln+1
176 (ln, val) = parseString(lines, start=ln+1)
177 data[k] = val
178 #print k, repr(val)
179 except ParseError:
180 raise
181 except:
182 ex = sys.exc_info()[1]
183 raise ParseError("%s: %s" % (ex.__class__.__name__, str(ex)), ln+1, l)
184 #print "Returning shallower..", ln+1
185 return (ln, data)
186
187 def measureIndent(s):
188 n = 0
189 while n < len(s) and s[n] == ' ':
190 n += 1
191 return n
192
193
194
195 if __name__ == '__main__':
196 import tempfile
197 fn = tempfile.mktemp()
198 tf = open(fn, 'w')
199 cf = """
200 key: 'value'
201 key2: ##comment
202 ##comment
203 key21: 'value' ## comment
204 ##comment
205 key22: [1,2,3]
206 key23: 234 #comment
207 """
208 tf.write(cf)
209 tf.close()
210 print("=== Test:===")
211 num = 1
212 for line in cf.split('\n'):
213 print("%02d %s" % (num, line))
214 num += 1
215 print(cf)
216 print("============")
217 data = readConfigFile(fn)
218 print(data)
219 os.remove(fn)
220
[end of pyqtgraph/configfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/configfile.py b/pyqtgraph/configfile.py
--- a/pyqtgraph/configfile.py
+++ b/pyqtgraph/configfile.py
@@ -33,9 +33,8 @@
msg = "Error parsing string at line %d:\n" % self.lineNum
else:
msg = "Error parsing config file '%s' at line %d:\n" % (self.fileName, self.lineNum)
- msg += "%s\n%s" % (self.line, self.message)
+ msg += "%s\n%s" % (self.line, Exception.__str__(self))
return msg
- #raise Exception()
def writeConfigFile(data, fname):
@@ -93,13 +92,14 @@
s += indent + sk + ':\n'
s += genString(data[k], indent + ' ')
else:
- s += indent + sk + ': ' + repr(data[k]) + '\n'
+ s += indent + sk + ': ' + repr(data[k]).replace("\n", "\\\n") + '\n'
return s
def parseString(lines, start=0):
data = OrderedDict()
if isinstance(lines, basestring):
+ lines = lines.replace("\\\n", "")
lines = lines.split('\n')
lines = [l for l in lines if re.search(r'\S', l) and not re.match(r'\s*#', l)] ## remove empty lines
|
{"golden_diff": "diff --git a/pyqtgraph/configfile.py b/pyqtgraph/configfile.py\n--- a/pyqtgraph/configfile.py\n+++ b/pyqtgraph/configfile.py\n@@ -33,9 +33,8 @@\n msg = \"Error parsing string at line %d:\\n\" % self.lineNum\n else:\n msg = \"Error parsing config file '%s' at line %d:\\n\" % (self.fileName, self.lineNum)\n- msg += \"%s\\n%s\" % (self.line, self.message)\n+ msg += \"%s\\n%s\" % (self.line, Exception.__str__(self))\n return msg\n- #raise Exception()\n \n \n def writeConfigFile(data, fname):\n@@ -93,13 +92,14 @@\n s += indent + sk + ':\\n'\n s += genString(data[k], indent + ' ')\n else:\n- s += indent + sk + ': ' + repr(data[k]) + '\\n'\n+ s += indent + sk + ': ' + repr(data[k]).replace(\"\\n\", \"\\\\\\n\") + '\\n'\n return s\n \n def parseString(lines, start=0):\n \n data = OrderedDict()\n if isinstance(lines, basestring):\n+ lines = lines.replace(\"\\\\\\n\", \"\")\n lines = lines.split('\\n')\n lines = [l for l in lines if re.search(r'\\S', l) and not re.match(r'\\s*#', l)] ## remove empty lines\n", "issue": "Bug when reading arrays in configfile\nArrays (longer than a few elements) can be written out using configfile.writeConfigFile, but then cannot be read back in using configfile.readConfigFile.\r\n\r\nThis seems to be because `repr(array)` produces a string with line breaks in it. While `eval(repr(array))` works just fine, configfile chokes while reading it in because it parses each line separately, and we end up with a SyntaxError about unexpected EOF. \r\n\r\nTo reproduce:\r\n```\r\n>>> from numpy import *\r\n>>> from pyqtgraph import configfile\r\n>>> arr = arange(20)\r\n>>> arr\r\narray([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,\r\n 17, 18, 19])\r\n>>> configfile.writeConfigFile({'arr':arr}, 'arraytest.cfg')\r\n>>> configfile.readConfigFile('arraytest.cfg')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"pyqtgraph/configfile.py\", line 64, in readConfigFile\r\n data = parseString(s)[1]\r\n File \"pyqtgraph/configfile.py\", line 168, in parseString\r\n raise ParseError(\"Error evaluating expression '%s': [%s: %s]\" % (v, ex.__class__.__name__, str(ex)), (ln+1), l)\r\npyqtgraph.configfile.ParseError: Error parsing config file '/Users/mbkratz/Code/pyqtgraph/arraytest.cfg' at line 1:\r\narr: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,\r\nError evaluating expression 'array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,': [SyntaxError: unexpected EOF while parsing (<string>, line 1)]\r\n>>> \r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nconfigfile.py - Human-readable text configuration file library \nCopyright 2010 Luke Campagnola\nDistributed under MIT/X11 license. See license.txt for more infomation.\n\nUsed for reading and writing dictionary objects to a python-like configuration\nfile format. Data structures may be nested and contain any data type as long\nas it can be converted to/from a string using repr and eval.\n\"\"\"\n\nimport re, os, sys, datetime\nimport numpy\nfrom .pgcollections import OrderedDict\nfrom . 
import units\nfrom .python2_3 import asUnicode, basestring\nfrom .Qt import QtCore\nfrom .Point import Point\nfrom .colormap import ColorMap\nGLOBAL_PATH = None # so not thread safe.\n\n\nclass ParseError(Exception):\n def __init__(self, message, lineNum, line, fileName=None):\n self.lineNum = lineNum\n self.line = line\n #self.message = message\n self.fileName = fileName\n Exception.__init__(self, message)\n \n def __str__(self):\n if self.fileName is None:\n msg = \"Error parsing string at line %d:\\n\" % self.lineNum\n else:\n msg = \"Error parsing config file '%s' at line %d:\\n\" % (self.fileName, self.lineNum)\n msg += \"%s\\n%s\" % (self.line, self.message)\n return msg\n #raise Exception()\n \n\ndef writeConfigFile(data, fname):\n s = genString(data)\n fd = open(fname, 'w')\n fd.write(s)\n fd.close()\n \ndef readConfigFile(fname):\n #cwd = os.getcwd()\n global GLOBAL_PATH\n if GLOBAL_PATH is not None:\n fname2 = os.path.join(GLOBAL_PATH, fname)\n if os.path.exists(fname2):\n fname = fname2\n\n GLOBAL_PATH = os.path.dirname(os.path.abspath(fname))\n \n try:\n #os.chdir(newDir) ## bad.\n fd = open(fname)\n s = asUnicode(fd.read())\n fd.close()\n s = s.replace(\"\\r\\n\", \"\\n\")\n s = s.replace(\"\\r\", \"\\n\")\n data = parseString(s)[1]\n except ParseError:\n sys.exc_info()[1].fileName = fname\n raise\n except:\n print(\"Error while reading config file %s:\"% fname)\n raise\n #finally:\n #os.chdir(cwd)\n return data\n\ndef appendConfigFile(data, fname):\n s = genString(data)\n fd = open(fname, 'a')\n fd.write(s)\n fd.close()\n\n\ndef genString(data, indent=''):\n s = ''\n for k in data:\n sk = str(k)\n if len(sk) == 0:\n print(data)\n raise Exception('blank dict keys not allowed (see data above)')\n if sk[0] == ' ' or ':' in sk:\n print(data)\n raise Exception('dict keys must not contain \":\" or start with spaces [offending key is \"%s\"]' % sk)\n if isinstance(data[k], dict):\n s += indent + sk + ':\\n'\n s += genString(data[k], indent + ' ')\n else:\n s += indent + sk + ': ' + repr(data[k]) + '\\n'\n return s\n \ndef parseString(lines, start=0):\n \n data = OrderedDict()\n if isinstance(lines, basestring):\n lines = lines.split('\\n')\n lines = [l for l in lines if re.search(r'\\S', l) and not re.match(r'\\s*#', l)] ## remove empty lines\n \n indent = measureIndent(lines[start])\n ln = start - 1\n \n try:\n while True:\n ln += 1\n #print ln\n if ln >= len(lines):\n break\n \n l = lines[ln]\n \n ## Skip blank lines or lines starting with #\n if re.match(r'\\s*#', l) or not re.search(r'\\S', l):\n continue\n \n ## Measure line indentation, make sure it is correct for this level\n lineInd = measureIndent(l)\n if lineInd < indent:\n ln -= 1\n break\n if lineInd > indent:\n #print lineInd, indent\n raise ParseError('Indentation is incorrect. 
Expected %d, got %d' % (indent, lineInd), ln+1, l)\n \n \n if ':' not in l:\n raise ParseError('Missing colon', ln+1, l)\n \n (k, p, v) = l.partition(':')\n k = k.strip()\n v = v.strip()\n \n ## set up local variables to use for eval\n local = units.allUnits.copy()\n local['OrderedDict'] = OrderedDict\n local['readConfigFile'] = readConfigFile\n local['Point'] = Point\n local['QtCore'] = QtCore\n local['ColorMap'] = ColorMap\n local['datetime'] = datetime\n # Needed for reconstructing numpy arrays\n local['array'] = numpy.array\n for dtype in ['int8', 'uint8', \n 'int16', 'uint16', 'float16',\n 'int32', 'uint32', 'float32',\n 'int64', 'uint64', 'float64']:\n local[dtype] = getattr(numpy, dtype)\n \n if len(k) < 1:\n raise ParseError('Missing name preceding colon', ln+1, l)\n if k[0] == '(' and k[-1] == ')': ## If the key looks like a tuple, try evaluating it.\n try:\n k1 = eval(k, local)\n if type(k1) is tuple:\n k = k1\n except:\n pass\n if re.search(r'\\S', v) and v[0] != '#': ## eval the value\n try:\n val = eval(v, local)\n except:\n ex = sys.exc_info()[1]\n raise ParseError(\"Error evaluating expression '%s': [%s: %s]\" % (v, ex.__class__.__name__, str(ex)), (ln+1), l)\n else:\n if ln+1 >= len(lines) or measureIndent(lines[ln+1]) <= indent:\n #print \"blank dict\"\n val = {}\n else:\n #print \"Going deeper..\", ln+1\n (ln, val) = parseString(lines, start=ln+1)\n data[k] = val\n #print k, repr(val)\n except ParseError:\n raise\n except:\n ex = sys.exc_info()[1]\n raise ParseError(\"%s: %s\" % (ex.__class__.__name__, str(ex)), ln+1, l)\n #print \"Returning shallower..\", ln+1\n return (ln, data)\n \ndef measureIndent(s):\n n = 0\n while n < len(s) and s[n] == ' ':\n n += 1\n return n\n \n \n \nif __name__ == '__main__':\n import tempfile\n fn = tempfile.mktemp()\n tf = open(fn, 'w')\n cf = \"\"\"\nkey: 'value'\nkey2: ##comment\n ##comment\n key21: 'value' ## comment\n ##comment\n key22: [1,2,3]\n key23: 234 #comment\n \"\"\"\n tf.write(cf)\n tf.close()\n print(\"=== Test:===\")\n num = 1\n for line in cf.split('\\n'):\n print(\"%02d %s\" % (num, line))\n num += 1\n print(cf)\n print(\"============\")\n data = readConfigFile(fn)\n print(data)\n os.remove(fn)\n", "path": "pyqtgraph/configfile.py"}]}
| 3,342 | 323 |
gh_patches_debug_1585
|
rasdani/github-patches
|
git_diff
|
scipy__scipy-10447
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Build adds the user folder inside the code base
After building the SciPy on Windows 10, the base folder has a copy of the following folder
```
C:\Users\<user>\Documents\GitHub\scipy\Users\<user>\AppData\Local\Temp\tmpuvtg6i4i\main.obj
```
From the look of the structure, it seems like a relative folder is used instead of an absolute one hence it recreates the temp folder within the codebase.
<strike>I think this might be related to the pocketfft development as I recently started to see it but might also be another C++ source change.</strike> Happens at the `cluster._optimal_leaf_ordering` compilation
</issue>
<code>
[start of scipy/fft/_pocketfft/setup.py]
1
2 def try_compile(compiler, code=None, flags=[], ext='.cpp'):
3 """Returns True if the compiler is able to compile the given code"""
4 import tempfile
5 from distutils.errors import CompileError
6 import os
7
8 code = code or 'int main (int argc, char **argv) { return 0; }'
9
10 with tempfile.TemporaryDirectory() as temp_dir:
11 fname = os.path.join(temp_dir, 'main'+ext)
12 with open(fname, 'w') as f:
13 f.write(code)
14
15 try:
16 compiler.compile([fname], extra_postargs=flags)
17 except CompileError:
18 return False
19 return True
20
21
22 def has_flag(compiler, flag):
23 return try_compile(compiler, flags=[flag])
24
25
26 def get_std_flag(compiler):
27 # Test the compiler for the highest available c++ standard flag
28 gnu_flags = ['--std=c++14', '--std=c++11']
29 flags_by_cc = {
30 'msvc': ['/std:c++14', None],
31 'intelw': ['/Qstd=c++14', '/Qstd=c++11']
32 }
33 flags = flags_by_cc.get(compiler.compiler_type, gnu_flags)
34
35 for flag in flags:
36 if flag is None:
37 return None
38
39 if has_flag(compiler, flag):
40 return flag
41
42 from numpy.distutils import log
43 log.warn('Could not detect c++ standard flag')
44 return None
45
46
47 def try_add_flag(args, compiler, flag):
48 """Appends flag to the list of arguments if supported by the compiler"""
49 if try_compile(compiler, flags=args+[flag]):
50 args.append(flag)
51
52
53 def pre_build_hook(build_ext, ext):
54 cc = build_ext._cxx_compiler
55 args = ext.extra_compile_args
56
57 std_flag = get_std_flag(build_ext._cxx_compiler)
58 if std_flag is not None:
59 args.append(std_flag)
60
61 if cc.compiler_type == 'msvc':
62 args.append('/EHsc')
63 else:
64 try_add_flag(args, cc, '-fvisibility=hidden')
65
66 import sys
67 if sys.platform == 'darwin':
68 args.append('-mmacosx-version-min=10.7')
69 try_add_flag(args, cc, '-stdlib=libc++')
70
71
72 def configuration(parent_package='', top_path=None):
73 from numpy.distutils.misc_util import Configuration
74 import pybind11
75 include_dirs = [pybind11.get_include(True), pybind11.get_include(False)]
76
77 config = Configuration('_pocketfft', parent_package, top_path)
78 ext = config.add_extension('pypocketfft',
79 sources=['pypocketfft.cxx'],
80 depends=['pocketfft_hdronly.h'],
81 include_dirs=include_dirs,
82 language='c++')
83 ext._pre_build_hook = pre_build_hook
84
85 config.add_data_files('LICENSE.md')
86 config.add_data_dir('tests')
87 return config
88
89
90 if __name__ == '__main__':
91 from numpy.distutils.core import setup
92 setup(**configuration(top_path='').todict())
93
[end of scipy/fft/_pocketfft/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scipy/fft/_pocketfft/setup.py b/scipy/fft/_pocketfft/setup.py
--- a/scipy/fft/_pocketfft/setup.py
+++ b/scipy/fft/_pocketfft/setup.py
@@ -13,7 +13,7 @@
f.write(code)
try:
- compiler.compile([fname], extra_postargs=flags)
+ compiler.compile([fname], output_dir=temp_dir, extra_postargs=flags)
except CompileError:
return False
return True
|
{"golden_diff": "diff --git a/scipy/fft/_pocketfft/setup.py b/scipy/fft/_pocketfft/setup.py\n--- a/scipy/fft/_pocketfft/setup.py\n+++ b/scipy/fft/_pocketfft/setup.py\n@@ -13,7 +13,7 @@\n f.write(code)\n \n try:\n- compiler.compile([fname], extra_postargs=flags)\n+ compiler.compile([fname], output_dir=temp_dir, extra_postargs=flags)\n except CompileError:\n return False\n return True\n", "issue": "Build adds the user folder inside the code base\nAfter building the SciPy on Windows 10, the base folder has a copy of the following folder\r\n\r\n```\r\nC:\\Users\\<user>\\Documents\\GitHub\\scipy\\Users\\<user>\\AppData\\Local\\Temp\\tmpuvtg6i4i\\main.obj\r\n```\r\nFrom the look of the structure, it seems like a relative folder is used instead of an absolute one hence it recreates the temp folder within the codebase. \r\n\r\n<strike>I think this might be related to the pocketfft development as I recently started to see it but might also be another C++ source change.</strike> Happens at the `cluster._optimal_leaf_ordering` compilation\r\n\r\n\n", "before_files": [{"content": "\ndef try_compile(compiler, code=None, flags=[], ext='.cpp'):\n \"\"\"Returns True if the compiler is able to compile the given code\"\"\"\n import tempfile\n from distutils.errors import CompileError\n import os\n\n code = code or 'int main (int argc, char **argv) { return 0; }'\n\n with tempfile.TemporaryDirectory() as temp_dir:\n fname = os.path.join(temp_dir, 'main'+ext)\n with open(fname, 'w') as f:\n f.write(code)\n\n try:\n compiler.compile([fname], extra_postargs=flags)\n except CompileError:\n return False\n return True\n\n\ndef has_flag(compiler, flag):\n return try_compile(compiler, flags=[flag])\n\n\ndef get_std_flag(compiler):\n # Test the compiler for the highest available c++ standard flag\n gnu_flags = ['--std=c++14', '--std=c++11']\n flags_by_cc = {\n 'msvc': ['/std:c++14', None],\n 'intelw': ['/Qstd=c++14', '/Qstd=c++11']\n }\n flags = flags_by_cc.get(compiler.compiler_type, gnu_flags)\n\n for flag in flags:\n if flag is None:\n return None\n\n if has_flag(compiler, flag):\n return flag\n\n from numpy.distutils import log\n log.warn('Could not detect c++ standard flag')\n return None\n\n\ndef try_add_flag(args, compiler, flag):\n \"\"\"Appends flag to the list of arguments if supported by the compiler\"\"\"\n if try_compile(compiler, flags=args+[flag]):\n args.append(flag)\n\n\ndef pre_build_hook(build_ext, ext):\n cc = build_ext._cxx_compiler\n args = ext.extra_compile_args\n\n std_flag = get_std_flag(build_ext._cxx_compiler)\n if std_flag is not None:\n args.append(std_flag)\n\n if cc.compiler_type == 'msvc':\n args.append('/EHsc')\n else:\n try_add_flag(args, cc, '-fvisibility=hidden')\n\n import sys\n if sys.platform == 'darwin':\n args.append('-mmacosx-version-min=10.7')\n try_add_flag(args, cc, '-stdlib=libc++')\n\n\ndef configuration(parent_package='', top_path=None):\n from numpy.distutils.misc_util import Configuration\n import pybind11\n include_dirs = [pybind11.get_include(True), pybind11.get_include(False)]\n\n config = Configuration('_pocketfft', parent_package, top_path)\n ext = config.add_extension('pypocketfft',\n sources=['pypocketfft.cxx'],\n depends=['pocketfft_hdronly.h'],\n include_dirs=include_dirs,\n language='c++')\n ext._pre_build_hook = pre_build_hook\n\n config.add_data_files('LICENSE.md')\n config.add_data_dir('tests')\n return config\n\n\nif __name__ == '__main__':\n from numpy.distutils.core import setup\n setup(**configuration(top_path='').todict())\n", "path": 
"scipy/fft/_pocketfft/setup.py"}]}
| 1,548 | 117 |
gh_patches_debug_15992
|
rasdani/github-patches
|
git_diff
|
pymodbus-dev__pymodbus-1339
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't run pymodbus.simulator --help
<!--
Before opening a new issue, make sure you do the following:
* check that your issue isn't already filed: https://github.com/pymodbus-dev/pymodbus/issues
* check the discussions forum https://github.com/pymodbus-dev/pymodbus/discussions
* prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus
Before opening a new issue, make sure you do the following
-->
### Versions
* Python: 3.10.6
* OS: Linux
* Pymodbus: 3.1.3
* Modbus Hardware (if used):
### Description
Trying to run `pymodbus.simulator --help` fails:
```
<coroutine object main at 0x7efcc073cf90>
sys:1: RuntimeWarning: coroutine 'main' was never awaited
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
The `main` function used as entry point for the simulator is an async function: https://github.com/pymodbus-dev/pymodbus/blob/12859d0b82cc215a18ac757fe9319cdf1f9ec890/pymodbus/server/simulator/main.py#L113
It can't be used directly as an entry point. The entry point should be a function using `asyncio.run`.
</issue>
<code>
[start of pymodbus/server/simulator/main.py]
1 #!/usr/bin/env python3
2 """HTTP server for modbus simulator.
3
4 The modbus simulator contain 3 distint parts:
5
6 - Datastore simulator, to define registers and their behaviour including actions: (simulator)(../../datastore/simulator.py)
7 - Modbus server: (server)(./http_server.py)
8 - HTTP server with REST API and web pages providing an online console in your browser
9
10 Multiple setups for different server types and/or devices are prepared in a (json file)(./setup.json), the detailed configuration is explained in (doc)(README.md)
11
12 The command line parameters are kept to a minimum:
13
14 usage: main.py [-h] [--modbus_server MODBUS_SERVER]
15 [--modbus_device MODBUS_DEVICE] [--http_host HTTP_HOST]
16 [--http_port HTTP_PORT]
17 [--log {critical,error,warning,info,debug}]
18 [--json_file JSON_FILE]
19 [--custom_actions_module CUSTOM_ACTIONS_MODULE]
20
21 Modbus server with REST-API and web server
22
23 options:
24 -h, --help show this help message and exit
25 --modbus_server MODBUS_SERVER
26 use <modbus_server> from server_list in json file
27 --modbus_device MODBUS_DEVICE
28 use <modbus_device> from device_list in json file
29 --http_host HTTP_HOST
30 use <http_host> as host to bind http listen
31 --http_port HTTP_PORT
32 use <http_port> as port to bind http listen
33 --log {critical,error,warning,info,debug}
34 set log level, default is info
35 --log_file LOG_FILE
36 name of server log file, default is "server.log"
37 --json_file JSON_FILE
38 name of json_file, default is "setup.json"
39 --custom_actions_module CUSTOM_ACTIONS_MODULE
40 python file with custom actions, default is none
41 """
42 import argparse
43 import asyncio
44
45 from pymodbus import pymodbus_apply_logging_config
46 from pymodbus.logging import Log
47 from pymodbus.server.simulator.http_server import ModbusSimulatorServer
48
49
50 async def run():
51 """Run simulator."""
52
53
54 def get_commandline():
55 """Get command line arguments."""
56 parser = argparse.ArgumentParser(
57 description="Modbus server with REST-API and web server"
58 )
59 parser.add_argument(
60 "--modbus_server",
61 help="use <modbus_server> from server_list in json file",
62 type=str,
63 )
64 parser.add_argument(
65 "--modbus_device",
66 help="use <modbus_device> from device_list in json file",
67 type=str,
68 )
69 parser.add_argument(
70 "--http_host",
71 help="use <http_host> as host to bind http listen",
72 type=str,
73 )
74 parser.add_argument(
75 "--http_port",
76 help="use <http_port> as port to bind http listen",
77 type=str,
78 )
79 parser.add_argument(
80 "--log",
81 choices=["critical", "error", "warning", "info", "debug"],
82 help="set log level, default is info",
83 default="info",
84 type=str,
85 )
86 parser.add_argument(
87 "--json_file",
88 help='name of json file, default is "setup.json"',
89 type=str,
90 )
91 parser.add_argument(
92 "--log_file",
93 help='name of server log file, default is "server.log"',
94 type=str,
95 )
96 parser.add_argument(
97 "--custom_actions_module",
98 help="python file with custom actions, default is none",
99 type=str,
100 )
101 args = parser.parse_args()
102 pymodbus_apply_logging_config(args.log.upper())
103 Log.info("Start simulator")
104 cmd_args = {}
105 for argument in args.__dict__:
106 if argument == "log":
107 continue
108 if args.__dict__[argument] is not None:
109 cmd_args[argument] = args.__dict__[argument]
110 return cmd_args
111
112
113 async def main():
114 """Run server."""
115 cmd_args = get_commandline()
116 task = ModbusSimulatorServer(**cmd_args)
117
118 await task.run_forever()
119
120
121 if __name__ == "__main__":
122 asyncio.run(main(), debug=True)
123
[end of pymodbus/server/simulator/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pymodbus/server/simulator/main.py b/pymodbus/server/simulator/main.py
--- a/pymodbus/server/simulator/main.py
+++ b/pymodbus/server/simulator/main.py
@@ -47,10 +47,6 @@
from pymodbus.server.simulator.http_server import ModbusSimulatorServer
-async def run():
- """Run simulator."""
-
-
def get_commandline():
"""Get command line arguments."""
parser = argparse.ArgumentParser(
@@ -110,13 +106,12 @@
return cmd_args
-async def main():
+def main():
"""Run server."""
cmd_args = get_commandline()
task = ModbusSimulatorServer(**cmd_args)
-
- await task.run_forever()
+ asyncio.run(task.run_forever(), debug=True)
if __name__ == "__main__":
- asyncio.run(main(), debug=True)
+ main()
|
{"golden_diff": "diff --git a/pymodbus/server/simulator/main.py b/pymodbus/server/simulator/main.py\n--- a/pymodbus/server/simulator/main.py\n+++ b/pymodbus/server/simulator/main.py\n@@ -47,10 +47,6 @@\n from pymodbus.server.simulator.http_server import ModbusSimulatorServer\n \n \n-async def run():\n- \"\"\"Run simulator.\"\"\"\n-\n-\n def get_commandline():\n \"\"\"Get command line arguments.\"\"\"\n parser = argparse.ArgumentParser(\n@@ -110,13 +106,12 @@\n return cmd_args\n \n \n-async def main():\n+def main():\n \"\"\"Run server.\"\"\"\n cmd_args = get_commandline()\n task = ModbusSimulatorServer(**cmd_args)\n-\n- await task.run_forever()\n+ asyncio.run(task.run_forever(), debug=True)\n \n \n if __name__ == \"__main__\":\n- asyncio.run(main(), debug=True)\n+ main()\n", "issue": "Can't run pymodbus.simulator --help\n<!--\r\n\r\nBefore opening a new issue, make sure you do the following:\r\n * check that your issue isn't already filed: https://github.com/pymodbus-dev/pymodbus/issues\r\n * check the discussions forum https://github.com/pymodbus-dev/pymodbus/discussions\r\n * prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus\r\n\r\n Before opening a new issue, make sure you do the following\r\n-->\r\n\r\n### Versions\r\n\r\n* Python: 3.10.6\r\n* OS: Linux\r\n* Pymodbus: 3.1.3\r\n* Modbus Hardware (if used):\r\n\r\n### Description\r\n\r\nTrying to run `pymodbus.simulator --help` fails:\r\n\r\n```\r\n<coroutine object main at 0x7efcc073cf90>\r\nsys:1: RuntimeWarning: coroutine 'main' was never awaited\r\nRuntimeWarning: Enable tracemalloc to get the object allocation traceback\r\n```\r\n\r\nThe `main` function used as entry point for the simulator is an async function: https://github.com/pymodbus-dev/pymodbus/blob/12859d0b82cc215a18ac757fe9319cdf1f9ec890/pymodbus/server/simulator/main.py#L113\r\n\r\nIt can't be used directly as an entry point. 
The entry point should be a function using `asyncio.run`.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\"\"\"HTTP server for modbus simulator.\n\nThe modbus simulator contain 3 distint parts:\n\n- Datastore simulator, to define registers and their behaviour including actions: (simulator)(../../datastore/simulator.py)\n- Modbus server: (server)(./http_server.py)\n- HTTP server with REST API and web pages providing an online console in your browser\n\nMultiple setups for different server types and/or devices are prepared in a (json file)(./setup.json), the detailed configuration is explained in (doc)(README.md)\n\nThe command line parameters are kept to a minimum:\n\nusage: main.py [-h] [--modbus_server MODBUS_SERVER]\n [--modbus_device MODBUS_DEVICE] [--http_host HTTP_HOST]\n [--http_port HTTP_PORT]\n [--log {critical,error,warning,info,debug}]\n [--json_file JSON_FILE]\n [--custom_actions_module CUSTOM_ACTIONS_MODULE]\n\nModbus server with REST-API and web server\n\noptions:\n -h, --help show this help message and exit\n --modbus_server MODBUS_SERVER\n use <modbus_server> from server_list in json file\n --modbus_device MODBUS_DEVICE\n use <modbus_device> from device_list in json file\n --http_host HTTP_HOST\n use <http_host> as host to bind http listen\n --http_port HTTP_PORT\n use <http_port> as port to bind http listen\n --log {critical,error,warning,info,debug}\n set log level, default is info\n --log_file LOG_FILE\n name of server log file, default is \"server.log\"\n --json_file JSON_FILE\n name of json_file, default is \"setup.json\"\n --custom_actions_module CUSTOM_ACTIONS_MODULE\n python file with custom actions, default is none\n\"\"\"\nimport argparse\nimport asyncio\n\nfrom pymodbus import pymodbus_apply_logging_config\nfrom pymodbus.logging import Log\nfrom pymodbus.server.simulator.http_server import ModbusSimulatorServer\n\n\nasync def run():\n \"\"\"Run simulator.\"\"\"\n\n\ndef get_commandline():\n \"\"\"Get command line arguments.\"\"\"\n parser = argparse.ArgumentParser(\n description=\"Modbus server with REST-API and web server\"\n )\n parser.add_argument(\n \"--modbus_server\",\n help=\"use <modbus_server> from server_list in json file\",\n type=str,\n )\n parser.add_argument(\n \"--modbus_device\",\n help=\"use <modbus_device> from device_list in json file\",\n type=str,\n )\n parser.add_argument(\n \"--http_host\",\n help=\"use <http_host> as host to bind http listen\",\n type=str,\n )\n parser.add_argument(\n \"--http_port\",\n help=\"use <http_port> as port to bind http listen\",\n type=str,\n )\n parser.add_argument(\n \"--log\",\n choices=[\"critical\", \"error\", \"warning\", \"info\", \"debug\"],\n help=\"set log level, default is info\",\n default=\"info\",\n type=str,\n )\n parser.add_argument(\n \"--json_file\",\n help='name of json file, default is \"setup.json\"',\n type=str,\n )\n parser.add_argument(\n \"--log_file\",\n help='name of server log file, default is \"server.log\"',\n type=str,\n )\n parser.add_argument(\n \"--custom_actions_module\",\n help=\"python file with custom actions, default is none\",\n type=str,\n )\n args = parser.parse_args()\n pymodbus_apply_logging_config(args.log.upper())\n Log.info(\"Start simulator\")\n cmd_args = {}\n for argument in args.__dict__:\n if argument == \"log\":\n continue\n if args.__dict__[argument] is not None:\n cmd_args[argument] = args.__dict__[argument]\n return cmd_args\n\n\nasync def main():\n \"\"\"Run server.\"\"\"\n cmd_args = get_commandline()\n task = 
ModbusSimulatorServer(**cmd_args)\n\n await task.run_forever()\n\n\nif __name__ == \"__main__\":\n asyncio.run(main(), debug=True)\n", "path": "pymodbus/server/simulator/main.py"}]}
| 2,016 | 208 |
gh_patches_debug_31074
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1957
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ERROR: Cannot locate specified Dockerfile
I'm not sure if this is a Docker Compose bug or docker-py bug, but this used to work:
`docker-compose.yml`:
```yaml
version: '3.5'
services:
php:
build:
context: .
dockerfile: ./docker/php.Dockerfile
```
but now the `./` prefix is causing:
```
ERROR: Cannot locate specified Dockerfile: ./docker/php.Dockerfile
```
I have to change it to `dockerfile: docker/php.Dockerfile` to get it to work.
--
docker-py version: 3.1.1
Python 3.6.4
`docker version`:
```
Client:
Version: 18.02.0-ce
API version: 1.36
Go version: go1.9.4
Git commit: fc4de447b5
Built: Tue Feb 13 15:28:01 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.02.0-ce
API version: 1.36 (minimum version 1.12)
Go version: go1.9.4
Git commit: fc4de447b5
Built: Tue Feb 13 15:28:34 2018
OS/Arch: linux/amd64
Experimental: false
```
OS: Manjaro Linux 17.1.6
</issue>
<code>
[start of docker/utils/build.py]
1 import os
2 import re
3
4 from ..constants import IS_WINDOWS_PLATFORM
5 from fnmatch import fnmatch
6 from itertools import chain
7 from .utils import create_archive
8
9
10 def tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):
11 root = os.path.abspath(path)
12 exclude = exclude or []
13 return create_archive(
14 files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),
15 root=root, fileobj=fileobj, gzip=gzip
16 )
17
18
19 _SEP = re.compile('/|\\\\') if IS_WINDOWS_PLATFORM else re.compile('/')
20
21
22 def exclude_paths(root, patterns, dockerfile=None):
23 """
24 Given a root directory path and a list of .dockerignore patterns, return
25 an iterator of all paths (both regular files and directories) in the root
26 directory that do *not* match any of the patterns.
27
28 All paths returned are relative to the root.
29 """
30
31 if dockerfile is None:
32 dockerfile = 'Dockerfile'
33
34 def normalize(p):
35 # Leading and trailing slashes are not relevant. Yes,
36 # "foo.py/" must exclude the "foo.py" regular file. "."
37 # components are not relevant either, even if the whole
38 # pattern is only ".", as the Docker reference states: "For
39 # historical reasons, the pattern . is ignored."
40 split = [pt for pt in re.split(_SEP, p) if pt and pt != '.']
41 # ".." component must be cleared with the potential previous
42 # component, regardless of whether it exists: "A preprocessing
43 # step [...] eliminates . and .. elements using Go's
44 # filepath.".
45 i = 0
46 while i < len(split):
47 if split[i] == '..':
48 del split[i]
49 if i > 0:
50 del split[i - 1]
51 i -= 1
52 else:
53 i += 1
54 return split
55
56 patterns = (
57 (True, normalize(p[1:]))
58 if p.startswith('!') else
59 (False, normalize(p))
60 for p in patterns)
61 patterns = list(reversed(list(chain(
62 # Exclude empty patterns such as "." or the empty string.
63 filter(lambda p: p[1], patterns),
64 # Always include the Dockerfile and .dockerignore
65 [(True, dockerfile.split('/')), (True, ['.dockerignore'])]))))
66 return set(walk(root, patterns))
67
68
69 def walk(root, patterns, default=True):
70 """
71 A collection of file lying below root that should be included according to
72 patterns.
73 """
74
75 def match(p):
76 if p[1][0] == '**':
77 rec = (p[0], p[1][1:])
78 return [p] + (match(rec) if rec[1] else [rec])
79 elif fnmatch(f, p[1][0]):
80 return [(p[0], p[1][1:])]
81 else:
82 return []
83
84 for f in os.listdir(root):
85 cur = os.path.join(root, f)
86 # The patterns if recursing in that directory.
87 sub = list(chain(*(match(p) for p in patterns)))
88 # Whether this file is explicitely included / excluded.
89 hit = next((p[0] for p in sub if not p[1]), None)
90 # Whether this file is implicitely included / excluded.
91 matched = default if hit is None else hit
92 sub = list(filter(lambda p: p[1], sub))
93 if os.path.isdir(cur):
94 # Entirely skip directories if there are no chance any subfile will
95 # be included.
96 if all(not p[0] for p in sub) and not matched:
97 continue
98 # I think this would greatly speed up dockerignore handling by not
99 # recursing into directories we are sure would be entirely
100 # included, and only yielding the directory itself, which will be
101 # recursively archived anyway. However the current unit test expect
102 # the full list of subfiles and I'm not 100% sure it would make no
103 # difference yet.
104 # if all(p[0] for p in sub) and matched:
105 # yield f
106 # continue
107 children = False
108 for r in (os.path.join(f, p) for p in walk(cur, sub, matched)):
109 yield r
110 children = True
111 # The current unit tests expect directories only under those
112 # conditions. It might be simplifiable though.
113 if (not sub or not children) and hit or hit is None and default:
114 yield f
115 elif matched:
116 yield f
117
[end of docker/utils/build.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docker/utils/build.py b/docker/utils/build.py
--- a/docker/utils/build.py
+++ b/docker/utils/build.py
@@ -31,18 +31,21 @@
if dockerfile is None:
dockerfile = 'Dockerfile'
+ def split_path(p):
+ return [pt for pt in re.split(_SEP, p) if pt and pt != '.']
+
def normalize(p):
# Leading and trailing slashes are not relevant. Yes,
# "foo.py/" must exclude the "foo.py" regular file. "."
# components are not relevant either, even if the whole
# pattern is only ".", as the Docker reference states: "For
# historical reasons, the pattern . is ignored."
- split = [pt for pt in re.split(_SEP, p) if pt and pt != '.']
# ".." component must be cleared with the potential previous
# component, regardless of whether it exists: "A preprocessing
# step [...] eliminates . and .. elements using Go's
# filepath.".
i = 0
+ split = split_path(p)
while i < len(split):
if split[i] == '..':
del split[i]
@@ -62,7 +65,7 @@
# Exclude empty patterns such as "." or the empty string.
filter(lambda p: p[1], patterns),
# Always include the Dockerfile and .dockerignore
- [(True, dockerfile.split('/')), (True, ['.dockerignore'])]))))
+ [(True, split_path(dockerfile)), (True, ['.dockerignore'])]))))
return set(walk(root, patterns))
|
{"golden_diff": "diff --git a/docker/utils/build.py b/docker/utils/build.py\n--- a/docker/utils/build.py\n+++ b/docker/utils/build.py\n@@ -31,18 +31,21 @@\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n \n+ def split_path(p):\n+ return [pt for pt in re.split(_SEP, p) if pt and pt != '.']\n+\n def normalize(p):\n # Leading and trailing slashes are not relevant. Yes,\n # \"foo.py/\" must exclude the \"foo.py\" regular file. \".\"\n # components are not relevant either, even if the whole\n # pattern is only \".\", as the Docker reference states: \"For\n # historical reasons, the pattern . is ignored.\"\n- split = [pt for pt in re.split(_SEP, p) if pt and pt != '.']\n # \"..\" component must be cleared with the potential previous\n # component, regardless of whether it exists: \"A preprocessing\n # step [...] eliminates . and .. elements using Go's\n # filepath.\".\n i = 0\n+ split = split_path(p)\n while i < len(split):\n if split[i] == '..':\n del split[i]\n@@ -62,7 +65,7 @@\n # Exclude empty patterns such as \".\" or the empty string.\n filter(lambda p: p[1], patterns),\n # Always include the Dockerfile and .dockerignore\n- [(True, dockerfile.split('/')), (True, ['.dockerignore'])]))))\n+ [(True, split_path(dockerfile)), (True, ['.dockerignore'])]))))\n return set(walk(root, patterns))\n", "issue": "ERROR: Cannot locate specified Dockerfile\nI'm not sure if this is a Docker Compose bug or docker-py bug, but this used to work:\r\n\r\n`docker-compose.yml`:\r\n```yaml\r\nversion: '3.5'\r\n\r\nservices:\r\n php:\r\n build:\r\n context: .\r\n dockerfile: ./docker/php.Dockerfile\r\n```\r\n\r\nbut now the `./` prefix is causing:\r\n```\r\nERROR: Cannot locate specified Dockerfile: ./docker/php.Dockerfile\r\n```\r\n\r\nI have to change it to `dockerfile: docker/php.Dockerfile` to get it to work.\r\n\r\n--\r\n\r\ndocker-py version: 3.1.1\r\n\r\nPython 3.6.4\r\n\r\n`docker version`:\r\n```\r\nClient:\r\n Version:\t18.02.0-ce\r\n API version:\t1.36\r\n Go version:\tgo1.9.4\r\n Git commit:\tfc4de447b5\r\n Built:\tTue Feb 13 15:28:01 2018\r\n OS/Arch:\tlinux/amd64\r\n Experimental:\tfalse\r\n Orchestrator:\tswarm\r\n\r\nServer:\r\n Engine:\r\n Version:\t18.02.0-ce\r\n API version:\t1.36 (minimum version 1.12)\r\n Go version:\tgo1.9.4\r\n Git commit:\tfc4de447b5\r\n Built:\tTue Feb 13 15:28:34 2018\r\n OS/Arch:\tlinux/amd64\r\n Experimental:\tfalse\r\n```\r\n\r\nOS: Manjaro Linux 17.1.6\n", "before_files": [{"content": "import os\nimport re\n\nfrom ..constants import IS_WINDOWS_PLATFORM\nfrom fnmatch import fnmatch\nfrom itertools import chain\nfrom .utils import create_archive\n\n\ndef tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):\n root = os.path.abspath(path)\n exclude = exclude or []\n return create_archive(\n files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),\n root=root, fileobj=fileobj, gzip=gzip\n )\n\n\n_SEP = re.compile('/|\\\\\\\\') if IS_WINDOWS_PLATFORM else re.compile('/')\n\n\ndef exclude_paths(root, patterns, dockerfile=None):\n \"\"\"\n Given a root directory path and a list of .dockerignore patterns, return\n an iterator of all paths (both regular files and directories) in the root\n directory that do *not* match any of the patterns.\n\n All paths returned are relative to the root.\n \"\"\"\n\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n\n def normalize(p):\n # Leading and trailing slashes are not relevant. Yes,\n # \"foo.py/\" must exclude the \"foo.py\" regular file. 
\".\"\n # components are not relevant either, even if the whole\n # pattern is only \".\", as the Docker reference states: \"For\n # historical reasons, the pattern . is ignored.\"\n split = [pt for pt in re.split(_SEP, p) if pt and pt != '.']\n # \"..\" component must be cleared with the potential previous\n # component, regardless of whether it exists: \"A preprocessing\n # step [...] eliminates . and .. elements using Go's\n # filepath.\".\n i = 0\n while i < len(split):\n if split[i] == '..':\n del split[i]\n if i > 0:\n del split[i - 1]\n i -= 1\n else:\n i += 1\n return split\n\n patterns = (\n (True, normalize(p[1:]))\n if p.startswith('!') else\n (False, normalize(p))\n for p in patterns)\n patterns = list(reversed(list(chain(\n # Exclude empty patterns such as \".\" or the empty string.\n filter(lambda p: p[1], patterns),\n # Always include the Dockerfile and .dockerignore\n [(True, dockerfile.split('/')), (True, ['.dockerignore'])]))))\n return set(walk(root, patterns))\n\n\ndef walk(root, patterns, default=True):\n \"\"\"\n A collection of file lying below root that should be included according to\n patterns.\n \"\"\"\n\n def match(p):\n if p[1][0] == '**':\n rec = (p[0], p[1][1:])\n return [p] + (match(rec) if rec[1] else [rec])\n elif fnmatch(f, p[1][0]):\n return [(p[0], p[1][1:])]\n else:\n return []\n\n for f in os.listdir(root):\n cur = os.path.join(root, f)\n # The patterns if recursing in that directory.\n sub = list(chain(*(match(p) for p in patterns)))\n # Whether this file is explicitely included / excluded.\n hit = next((p[0] for p in sub if not p[1]), None)\n # Whether this file is implicitely included / excluded.\n matched = default if hit is None else hit\n sub = list(filter(lambda p: p[1], sub))\n if os.path.isdir(cur):\n # Entirely skip directories if there are no chance any subfile will\n # be included.\n if all(not p[0] for p in sub) and not matched:\n continue\n # I think this would greatly speed up dockerignore handling by not\n # recursing into directories we are sure would be entirely\n # included, and only yielding the directory itself, which will be\n # recursively archived anyway. However the current unit test expect\n # the full list of subfiles and I'm not 100% sure it would make no\n # difference yet.\n # if all(p[0] for p in sub) and matched:\n # yield f\n # continue\n children = False\n for r in (os.path.join(f, p) for p in walk(cur, sub, matched)):\n yield r\n children = True\n # The current unit tests expect directories only under those\n # conditions. It might be simplifiable though.\n if (not sub or not children) and hit or hit is None and default:\n yield f\n elif matched:\n yield f\n", "path": "docker/utils/build.py"}]}
| 2,142 | 362 |
gh_patches_debug_2161
|
rasdani/github-patches
|
git_diff
|
ckan__ckan-3631
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problem adding datasets to a group on package_create, the user has 'Editor' capacity
### CKAN Version if known (or site URL)
Found in 2.2.2 and later
### Please describe the expected behaviour
I manage a customized CKAN for a client. The create dataset page is changed in a way it is possible to add all metadata to a dataset on 'package_create'. Also it should be possible to add the dataset direktly to groups. The user has the capacity 'Editor' on the group.
### Please describe the actual behaviour
The auth function 'package_create' always does the `check2 = _check_group_auth(context,data_dict)`, which is a different approach than in 'package_update' auth function.
That leads to using the call to `authz.has_user_permission_for_group_or_org(group.id, user, 'update')`.
Later this leads to a comparison of permission '**update**' with the permissions of 'Editor' role ('editor', ['read', 'delete_dataset', 'create_dataset', 'update_dataset', 'manage_group']).
`if 'admin' in perms or permission in perms:
return True`
In my opinion this can never be true and thus is bug.
Could you please check this?
Regards,
Daniel
</issue>
<code>
[start of ckan/logic/auth/create.py]
1 # encoding: utf-8
2
3 import ckan.logic as logic
4 import ckan.authz as authz
5 import ckan.logic.auth as logic_auth
6
7 from ckan.common import _
8
9 @logic.auth_allow_anonymous_access
10 def package_create(context, data_dict=None):
11 user = context['user']
12
13 if authz.auth_is_anon_user(context):
14 check1 = all(authz.check_config_permission(p) for p in (
15 'anon_create_dataset',
16 'create_dataset_if_not_in_organization',
17 'create_unowned_dataset',
18 ))
19 else:
20 check1 = all(authz.check_config_permission(p) for p in (
21 'create_dataset_if_not_in_organization',
22 'create_unowned_dataset',
23 )) or authz.has_user_permission_for_some_org(
24 user, 'create_dataset')
25
26 if not check1:
27 return {'success': False, 'msg': _('User %s not authorized to create packages') % user}
28
29 check2 = _check_group_auth(context,data_dict)
30 if not check2:
31 return {'success': False, 'msg': _('User %s not authorized to edit these groups') % user}
32
33 # If an organization is given are we able to add a dataset to it?
34 data_dict = data_dict or {}
35 org_id = data_dict.get('owner_org')
36 if org_id and not authz.has_user_permission_for_group_or_org(
37 org_id, user, 'create_dataset'):
38 return {'success': False, 'msg': _('User %s not authorized to add dataset to this organization') % user}
39 return {'success': True}
40
41
42 def file_upload(context, data_dict=None):
43 user = context['user']
44 if authz.auth_is_anon_user(context):
45 return {'success': False, 'msg': _('User %s not authorized to create packages') % user}
46 return {'success': True}
47
48
49 def resource_create(context, data_dict):
50 model = context['model']
51 user = context.get('user')
52
53 package_id = data_dict.get('package_id')
54 if not package_id and data_dict.get('id'):
55 # This can happen when auth is deferred, eg from `resource_view_create`
56 resource = logic_auth.get_resource_object(context, data_dict)
57 package_id = resource.package_id
58
59 if not package_id:
60 raise logic.NotFound(
61 _('No dataset id provided, cannot check auth.')
62 )
63
64 # check authentication against package
65 pkg = model.Package.get(package_id)
66 if not pkg:
67 raise logic.NotFound(
68 _('No package found for this resource, cannot check auth.')
69 )
70
71 pkg_dict = {'id': pkg.id}
72 authorized = authz.is_authorized('package_update', context, pkg_dict).get('success')
73
74 if not authorized:
75 return {'success': False,
76 'msg': _('User %s not authorized to create resources on dataset %s') %
77 (str(user), package_id)}
78 else:
79 return {'success': True}
80
81
82 def resource_view_create(context, data_dict):
83 return authz.is_authorized('resource_create', context, {'id': data_dict['resource_id']})
84
85
86 def resource_create_default_resource_views(context, data_dict):
87 return authz.is_authorized('resource_create', context, {'id': data_dict['resource']['id']})
88
89
90 def package_create_default_resource_views(context, data_dict):
91 return authz.is_authorized('package_update', context,
92 data_dict['package'])
93
94
95 def package_relationship_create(context, data_dict):
96 user = context['user']
97
98 id = data_dict['subject']
99 id2 = data_dict['object']
100
101 # If we can update each package we can see the relationships
102 authorized1 = authz.is_authorized_boolean(
103 'package_update', context, {'id': id})
104 authorized2 = authz.is_authorized_boolean(
105 'package_update', context, {'id': id2})
106
107 if not authorized1 and authorized2:
108 return {'success': False, 'msg': _('User %s not authorized to edit these packages') % user}
109 else:
110 return {'success': True}
111
112 def group_create(context, data_dict=None):
113 user = context['user']
114 user = authz.get_user_id_for_username(user, allow_none=True)
115
116 if user and authz.check_config_permission('user_create_groups'):
117 return {'success': True}
118 return {'success': False,
119 'msg': _('User %s not authorized to create groups') % user}
120
121
122 def organization_create(context, data_dict=None):
123 user = context['user']
124 user = authz.get_user_id_for_username(user, allow_none=True)
125
126 if user and authz.check_config_permission('user_create_organizations'):
127 return {'success': True}
128 return {'success': False,
129 'msg': _('User %s not authorized to create organizations') % user}
130
131 def rating_create(context, data_dict):
132 # No authz check in the logic function
133 return {'success': True}
134
135
136 @logic.auth_allow_anonymous_access
137 def user_create(context, data_dict=None):
138 using_api = 'api_version' in context
139 create_user_via_api = authz.check_config_permission(
140 'create_user_via_api')
141 create_user_via_web = authz.check_config_permission(
142 'create_user_via_web')
143
144 if using_api and not create_user_via_api:
145 return {'success': False, 'msg': _('User {user} not authorized to '
146 'create users via the API').format(user=context.get('user'))}
147 if not using_api and not create_user_via_web:
148 return {'success': False, 'msg': _('Not authorized to '
149 'create users')}
150 return {'success': True}
151
152 def user_invite(context, data_dict):
153 data_dict['id'] = data_dict['group_id']
154 return group_member_create(context, data_dict)
155
156 def _check_group_auth(context, data_dict):
157 '''Has this user got update permission for all of the given groups?
158 If there is a package in the context then ignore that package's groups.
159 (owner_org is checked elsewhere.)
160 :returns: False if not allowed to update one (or more) of the given groups.
161 True otherwise. i.e. True is the default. A blank data_dict
162 mentions no groups, so it returns True.
163
164 '''
165 # FIXME This code is shared amoung other logic.auth files and should be
166 # somewhere better
167 if not data_dict:
168 return True
169
170 model = context['model']
171 user = context['user']
172 pkg = context.get("package")
173
174 api_version = context.get('api_version') or '1'
175
176 group_blobs = data_dict.get('groups', [])
177 groups = set()
178 for group_blob in group_blobs:
179 # group_blob might be a dict or a group_ref
180 if isinstance(group_blob, dict):
181 # use group id by default, but we can accept name as well
182 id = group_blob.get('id') or group_blob.get('name')
183 if not id:
184 continue
185 else:
186 id = group_blob
187 grp = model.Group.get(id)
188 if grp is None:
189 raise logic.NotFound(_('Group was not found.'))
190 groups.add(grp)
191
192 if pkg:
193 pkg_groups = pkg.get_groups()
194
195 groups = groups - set(pkg_groups)
196
197 for group in groups:
198 if not authz.has_user_permission_for_group_or_org(group.id, user, 'update'):
199 return False
200
201 return True
202
203 ## Modifications for rest api
204
205 def package_create_rest(context, data_dict):
206 model = context['model']
207 user = context['user']
208 if not user:
209 return {'success': False, 'msg': _('Valid API key needed to create a package')}
210
211 return authz.is_authorized('package_create', context, data_dict)
212
213 def group_create_rest(context, data_dict):
214 model = context['model']
215 user = context['user']
216 if not user:
217 return {'success': False, 'msg': _('Valid API key needed to create a group')}
218
219 return authz.is_authorized('group_create', context, data_dict)
220
221 def vocabulary_create(context, data_dict):
222 # sysadmins only
223 return {'success': False}
224
225 def activity_create(context, data_dict):
226 # sysadmins only
227 return {'success': False}
228
229 def tag_create(context, data_dict):
230 # sysadmins only
231 return {'success': False}
232
233 def _group_or_org_member_create(context, data_dict):
234 user = context['user']
235 group_id = data_dict['id']
236 if not authz.has_user_permission_for_group_or_org(group_id, user, 'membership'):
237 return {'success': False, 'msg': _('User %s not authorized to add members') % user}
238 return {'success': True}
239
240 def organization_member_create(context, data_dict):
241 return _group_or_org_member_create(context, data_dict)
242
243 def group_member_create(context, data_dict):
244 return _group_or_org_member_create(context, data_dict)
245
246 def member_create(context, data_dict):
247 group = logic_auth.get_group_object(context, data_dict)
248 user = context['user']
249
250 # User must be able to update the group to add a member to it
251 permission = 'update'
252 # However if the user is member of group then they can add/remove datasets
253 if not group.is_organization and data_dict.get('object_type') == 'package':
254 permission = 'manage_group'
255
256 authorized = authz.has_user_permission_for_group_or_org(group.id,
257 user,
258 permission)
259 if not authorized:
260 return {'success': False,
261 'msg': _('User %s not authorized to edit group %s') %
262 (str(user), group.id)}
263 else:
264 return {'success': True}
265
[end of ckan/logic/auth/create.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ckan/logic/auth/create.py b/ckan/logic/auth/create.py
--- a/ckan/logic/auth/create.py
+++ b/ckan/logic/auth/create.py
@@ -195,7 +195,7 @@
groups = groups - set(pkg_groups)
for group in groups:
- if not authz.has_user_permission_for_group_or_org(group.id, user, 'update'):
+ if not authz.has_user_permission_for_group_or_org(group.id, user, 'manage_group'):
return False
return True
|
{"golden_diff": "diff --git a/ckan/logic/auth/create.py b/ckan/logic/auth/create.py\n--- a/ckan/logic/auth/create.py\n+++ b/ckan/logic/auth/create.py\n@@ -195,7 +195,7 @@\n groups = groups - set(pkg_groups)\n \n for group in groups:\n- if not authz.has_user_permission_for_group_or_org(group.id, user, 'update'):\n+ if not authz.has_user_permission_for_group_or_org(group.id, user, 'manage_group'):\n return False\n \n return True\n", "issue": "Problem adding datasets to a group on package_create, the user has 'Editor' capacity\n### CKAN Version if known (or site URL)\r\nFound in 2.2.2 and later\r\n\r\n### Please describe the expected behaviour\r\nI manage a customized CKAN for a client. The create dataset page is changed in a way it is possible to add all metadata to a dataset on 'package_create'. Also it should be possible to add the dataset direktly to groups. The user has the capacity 'Editor' on the group.\r\n\r\n### Please describe the actual behaviour\r\nThe auth function 'package_create' always does the `check2 = _check_group_auth(context,data_dict)`, which is a different approach than in 'package_update' auth function.\r\nThat leads to using the call to `authz.has_user_permission_for_group_or_org(group.id, user, 'update')`.\r\nLater this leads to a comparison of permission '**update**' with the permissions of 'Editor' role ('editor', ['read', 'delete_dataset', 'create_dataset', 'update_dataset', 'manage_group']). \r\n`if 'admin' in perms or permission in perms:\r\n return True`\r\nIn my opinion this can never be true and thus is bug.\r\n\r\nCould you please check this?\r\n\r\nRegards,\r\nDaniel\r\n\r\n\n", "before_files": [{"content": "# encoding: utf-8\n\nimport ckan.logic as logic\nimport ckan.authz as authz\nimport ckan.logic.auth as logic_auth\n\nfrom ckan.common import _\n\[email protected]_allow_anonymous_access\ndef package_create(context, data_dict=None):\n user = context['user']\n\n if authz.auth_is_anon_user(context):\n check1 = all(authz.check_config_permission(p) for p in (\n 'anon_create_dataset',\n 'create_dataset_if_not_in_organization',\n 'create_unowned_dataset',\n ))\n else:\n check1 = all(authz.check_config_permission(p) for p in (\n 'create_dataset_if_not_in_organization',\n 'create_unowned_dataset',\n )) or authz.has_user_permission_for_some_org(\n user, 'create_dataset')\n\n if not check1:\n return {'success': False, 'msg': _('User %s not authorized to create packages') % user}\n\n check2 = _check_group_auth(context,data_dict)\n if not check2:\n return {'success': False, 'msg': _('User %s not authorized to edit these groups') % user}\n\n # If an organization is given are we able to add a dataset to it?\n data_dict = data_dict or {}\n org_id = data_dict.get('owner_org')\n if org_id and not authz.has_user_permission_for_group_or_org(\n org_id, user, 'create_dataset'):\n return {'success': False, 'msg': _('User %s not authorized to add dataset to this organization') % user}\n return {'success': True}\n\n\ndef file_upload(context, data_dict=None):\n user = context['user']\n if authz.auth_is_anon_user(context):\n return {'success': False, 'msg': _('User %s not authorized to create packages') % user}\n return {'success': True}\n\n\ndef resource_create(context, data_dict):\n model = context['model']\n user = context.get('user')\n\n package_id = data_dict.get('package_id')\n if not package_id and data_dict.get('id'):\n # This can happen when auth is deferred, eg from `resource_view_create`\n resource = logic_auth.get_resource_object(context, data_dict)\n package_id = 
resource.package_id\n\n if not package_id:\n raise logic.NotFound(\n _('No dataset id provided, cannot check auth.')\n )\n\n # check authentication against package\n pkg = model.Package.get(package_id)\n if not pkg:\n raise logic.NotFound(\n _('No package found for this resource, cannot check auth.')\n )\n\n pkg_dict = {'id': pkg.id}\n authorized = authz.is_authorized('package_update', context, pkg_dict).get('success')\n\n if not authorized:\n return {'success': False,\n 'msg': _('User %s not authorized to create resources on dataset %s') %\n (str(user), package_id)}\n else:\n return {'success': True}\n\n\ndef resource_view_create(context, data_dict):\n return authz.is_authorized('resource_create', context, {'id': data_dict['resource_id']})\n\n\ndef resource_create_default_resource_views(context, data_dict):\n return authz.is_authorized('resource_create', context, {'id': data_dict['resource']['id']})\n\n\ndef package_create_default_resource_views(context, data_dict):\n return authz.is_authorized('package_update', context,\n data_dict['package'])\n\n\ndef package_relationship_create(context, data_dict):\n user = context['user']\n\n id = data_dict['subject']\n id2 = data_dict['object']\n\n # If we can update each package we can see the relationships\n authorized1 = authz.is_authorized_boolean(\n 'package_update', context, {'id': id})\n authorized2 = authz.is_authorized_boolean(\n 'package_update', context, {'id': id2})\n\n if not authorized1 and authorized2:\n return {'success': False, 'msg': _('User %s not authorized to edit these packages') % user}\n else:\n return {'success': True}\n\ndef group_create(context, data_dict=None):\n user = context['user']\n user = authz.get_user_id_for_username(user, allow_none=True)\n\n if user and authz.check_config_permission('user_create_groups'):\n return {'success': True}\n return {'success': False,\n 'msg': _('User %s not authorized to create groups') % user}\n\n\ndef organization_create(context, data_dict=None):\n user = context['user']\n user = authz.get_user_id_for_username(user, allow_none=True)\n\n if user and authz.check_config_permission('user_create_organizations'):\n return {'success': True}\n return {'success': False,\n 'msg': _('User %s not authorized to create organizations') % user}\n\ndef rating_create(context, data_dict):\n # No authz check in the logic function\n return {'success': True}\n\n\[email protected]_allow_anonymous_access\ndef user_create(context, data_dict=None):\n using_api = 'api_version' in context\n create_user_via_api = authz.check_config_permission(\n 'create_user_via_api')\n create_user_via_web = authz.check_config_permission(\n 'create_user_via_web')\n\n if using_api and not create_user_via_api:\n return {'success': False, 'msg': _('User {user} not authorized to '\n 'create users via the API').format(user=context.get('user'))}\n if not using_api and not create_user_via_web:\n return {'success': False, 'msg': _('Not authorized to '\n 'create users')}\n return {'success': True}\n\ndef user_invite(context, data_dict):\n data_dict['id'] = data_dict['group_id']\n return group_member_create(context, data_dict)\n\ndef _check_group_auth(context, data_dict):\n '''Has this user got update permission for all of the given groups?\n If there is a package in the context then ignore that package's groups.\n (owner_org is checked elsewhere.)\n :returns: False if not allowed to update one (or more) of the given groups.\n True otherwise. i.e. True is the default. 
A blank data_dict\n mentions no groups, so it returns True.\n\n '''\n # FIXME This code is shared amoung other logic.auth files and should be\n # somewhere better\n if not data_dict:\n return True\n\n model = context['model']\n user = context['user']\n pkg = context.get(\"package\")\n\n api_version = context.get('api_version') or '1'\n\n group_blobs = data_dict.get('groups', [])\n groups = set()\n for group_blob in group_blobs:\n # group_blob might be a dict or a group_ref\n if isinstance(group_blob, dict):\n # use group id by default, but we can accept name as well\n id = group_blob.get('id') or group_blob.get('name')\n if not id:\n continue\n else:\n id = group_blob\n grp = model.Group.get(id)\n if grp is None:\n raise logic.NotFound(_('Group was not found.'))\n groups.add(grp)\n\n if pkg:\n pkg_groups = pkg.get_groups()\n\n groups = groups - set(pkg_groups)\n\n for group in groups:\n if not authz.has_user_permission_for_group_or_org(group.id, user, 'update'):\n return False\n\n return True\n\n## Modifications for rest api\n\ndef package_create_rest(context, data_dict):\n model = context['model']\n user = context['user']\n if not user:\n return {'success': False, 'msg': _('Valid API key needed to create a package')}\n\n return authz.is_authorized('package_create', context, data_dict)\n\ndef group_create_rest(context, data_dict):\n model = context['model']\n user = context['user']\n if not user:\n return {'success': False, 'msg': _('Valid API key needed to create a group')}\n\n return authz.is_authorized('group_create', context, data_dict)\n\ndef vocabulary_create(context, data_dict):\n # sysadmins only\n return {'success': False}\n\ndef activity_create(context, data_dict):\n # sysadmins only\n return {'success': False}\n\ndef tag_create(context, data_dict):\n # sysadmins only\n return {'success': False}\n\ndef _group_or_org_member_create(context, data_dict):\n user = context['user']\n group_id = data_dict['id']\n if not authz.has_user_permission_for_group_or_org(group_id, user, 'membership'):\n return {'success': False, 'msg': _('User %s not authorized to add members') % user}\n return {'success': True}\n\ndef organization_member_create(context, data_dict):\n return _group_or_org_member_create(context, data_dict)\n\ndef group_member_create(context, data_dict):\n return _group_or_org_member_create(context, data_dict)\n\ndef member_create(context, data_dict):\n group = logic_auth.get_group_object(context, data_dict)\n user = context['user']\n\n # User must be able to update the group to add a member to it\n permission = 'update'\n # However if the user is member of group then they can add/remove datasets\n if not group.is_organization and data_dict.get('object_type') == 'package':\n permission = 'manage_group'\n\n authorized = authz.has_user_permission_for_group_or_org(group.id,\n user,\n permission)\n if not authorized:\n return {'success': False,\n 'msg': _('User %s not authorized to edit group %s') %\n (str(user), group.id)}\n else:\n return {'success': True}\n", "path": "ckan/logic/auth/create.py"}]}
| 3,596 | 125 |
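The CKAN record that closes above hinges on a permission-name mismatch: `_check_group_auth` asks for `'update'`, while the Editor role only grants `('read', 'delete_dataset', 'create_dataset', 'update_dataset', 'manage_group')`, so the check can never succeed for an editor. A minimal sketch of that comparison is below; the role tuple and helper are illustrative assumptions, not CKAN's actual authz tables.

```python
# Hypothetical reduction of the check described in the CKAN issue above; the
# role/permission table is an assumption for illustration, not CKAN's real config.
EDITOR_PERMS = ('read', 'delete_dataset', 'create_dataset',
                'update_dataset', 'manage_group')

def has_permission(perms, permission):
    # Mirrors the comparison quoted in the issue: admins pass, otherwise the
    # requested permission must appear literally in the role's permission list.
    return 'admin' in perms or permission in perms

print(has_permission(EDITOR_PERMS, 'update'))        # False -> the reported bug
print(has_permission(EDITOR_PERMS, 'manage_group'))  # True  -> the patched check
```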
gh_patches_debug_3677
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-2275
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding a space after typing a username will not show that user in the search results
**Describe the bug**
When I search for "@[email protected]" it won't work if the input has a space at the end ("@[email protected] ")
It may work if the user has already been searched for before though.
**To Reproduce**
Steps to reproduce the behavior:
1. search a user you don't currently follow
2. add a space at the end
3. the user won't be found
**Expected behavior**
spaces should be ignored when looking for usernames
**Instance**
On which BookWyrm instance did you encounter this problem.
**Additional context**
Bookrastinating.com
---
**Desktop**
- OS: Fedora
- Browser Firefox
- Version 102
</issue>
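Before the code listing, here is a small illustration of why a trailing space defeats the username lookup: the webfinger branch only fires when the raw query matches a full-username regex, so `"@[email protected] "` falls through. The regex below is a simplified stand-in for `bookwyrm.utils.regex.FULL_USERNAME`, used purely for demonstration.

```python
import re

# Simplified stand-in for bookwyrm.utils.regex.FULL_USERNAME (assumption).
FULL_USERNAME = r"@?[\w\-.]+@[\w\-.]+\.[a-z]{2,}$"

def should_webfinger(query: str) -> bool:
    # The view only triggers a remote webfinger lookup when this matches.
    return bool(re.match(FULL_USERNAME, query))

raw = "@[email protected] "          # note the trailing space
print(should_webfinger(raw))          # False -> remote lookup is skipped
print(should_webfinger(raw.strip()))  # True  -> stripping the query fixes it
```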
<code>
[start of bookwyrm/views/search.py]
1 """ search views"""
2 import re
3
4 from django.contrib.postgres.search import TrigramSimilarity
5 from django.core.paginator import Paginator
6 from django.db.models.functions import Greatest
7 from django.http import JsonResponse
8 from django.template.response import TemplateResponse
9 from django.views import View
10
11 from bookwyrm import models
12 from bookwyrm.connectors import connector_manager
13 from bookwyrm.book_search import search, format_search_result
14 from bookwyrm.settings import PAGE_LENGTH
15 from bookwyrm.utils import regex
16 from .helpers import is_api_request
17 from .helpers import handle_remote_webfinger
18
19
20 # pylint: disable= no-self-use
21 class Search(View):
22 """search users or books"""
23
24 def get(self, request):
25 """that search bar up top"""
26 if is_api_request(request):
27 return api_book_search(request)
28
29 query = request.GET.get("q")
30 if not query:
31 return TemplateResponse(request, "search/book.html")
32
33 search_type = request.GET.get("type")
34 if query and not search_type:
35 search_type = "user" if "@" in query else "book"
36
37 endpoints = {
38 "book": book_search,
39 "user": user_search,
40 "list": list_search,
41 }
42 if not search_type in endpoints:
43 search_type = "book"
44
45 return endpoints[search_type](request)
46
47
48 def api_book_search(request):
49 """Return books via API response"""
50 query = request.GET.get("q")
51 query = isbn_check(query)
52 min_confidence = request.GET.get("min_confidence", 0)
53 # only return local book results via json so we don't cascade
54 book_results = search(query, min_confidence=min_confidence)
55 return JsonResponse(
56 [format_search_result(r) for r in book_results[:10]], safe=False
57 )
58
59
60 def book_search(request):
61 """the real business is elsewhere"""
62 query = request.GET.get("q")
63 # check if query is isbn
64 query = isbn_check(query)
65 min_confidence = request.GET.get("min_confidence", 0)
66 search_remote = request.GET.get("remote", False) and request.user.is_authenticated
67
68 # try a local-only search
69 local_results = search(query, min_confidence=min_confidence)
70 paginated = Paginator(local_results, PAGE_LENGTH)
71 page = paginated.get_page(request.GET.get("page"))
72 data = {
73 "query": query,
74 "results": page,
75 "type": "book",
76 "remote": search_remote,
77 "page_range": paginated.get_elided_page_range(
78 page.number, on_each_side=2, on_ends=1
79 ),
80 }
81 # if a logged in user requested remote results or got no local results, try remote
82 if request.user.is_authenticated and (not local_results or search_remote):
83 data["remote_results"] = connector_manager.search(
84 query, min_confidence=min_confidence
85 )
86 return TemplateResponse(request, "search/book.html", data)
87
88
89 def user_search(request):
90 """cool kids members only user search"""
91 viewer = request.user
92 query = request.GET.get("q")
93 data = {"type": "user", "query": query}
94 # logged out viewers can't search users
95 if not viewer.is_authenticated:
96 return TemplateResponse(request, "search/user.html", data)
97
98 # use webfinger for mastodon style [email protected] username to load the user if
99 # they don't exist locally (handle_remote_webfinger will check the db)
100 if re.match(regex.FULL_USERNAME, query):
101 handle_remote_webfinger(query)
102
103 results = (
104 models.User.viewer_aware_objects(viewer)
105 .annotate(
106 similarity=Greatest(
107 TrigramSimilarity("username", query),
108 TrigramSimilarity("localname", query),
109 )
110 )
111 .filter(
112 similarity__gt=0.5,
113 )
114 .order_by("-similarity")
115 )
116 paginated = Paginator(results, PAGE_LENGTH)
117 page = paginated.get_page(request.GET.get("page"))
118 data["results"] = page
119 data["page_range"] = paginated.get_elided_page_range(
120 page.number, on_each_side=2, on_ends=1
121 )
122 return TemplateResponse(request, "search/user.html", data)
123
124
125 def list_search(request):
126 """any relevent lists?"""
127 query = request.GET.get("q")
128 data = {"query": query, "type": "list"}
129 results = (
130 models.List.privacy_filter(
131 request.user,
132 privacy_levels=["public", "followers"],
133 )
134 .annotate(
135 similarity=Greatest(
136 TrigramSimilarity("name", query),
137 TrigramSimilarity("description", query),
138 )
139 )
140 .filter(
141 similarity__gt=0.1,
142 )
143 .order_by("-similarity")
144 )
145 paginated = Paginator(results, PAGE_LENGTH)
146 page = paginated.get_page(request.GET.get("page"))
147 data["results"] = page
148 data["page_range"] = paginated.get_elided_page_range(
149 page.number, on_each_side=2, on_ends=1
150 )
151 return TemplateResponse(request, "search/list.html", data)
152
153
154 def isbn_check(query):
155 """isbn10 or isbn13 check, if so remove separators"""
156 if query:
157 su_num = re.sub(r"(?<=\d)\D(?=\d|[xX])", "", query)
158 if len(su_num) == 13 and su_num.isdecimal():
159 # Multiply every other digit by 3
160 # Add these numbers and the other digits
161 product = sum(int(ch) for ch in su_num[::2]) + sum(
162 int(ch) * 3 for ch in su_num[1::2]
163 )
164 if product % 10 == 0:
165 return su_num
166 elif (
167 len(su_num) == 10
168 and su_num[:-1].isdecimal()
169 and (su_num[-1].isdecimal() or su_num[-1].lower() == "x")
170 ):
171 product = 0
172 # Iterate through code_string
173 for i in range(9):
174 # for each character, multiply by a different decreasing number: 10 - x
175 product = product + int(su_num[i]) * (10 - i)
176 # Handle last character
177 if su_num[9].lower() == "x":
178 product += 10
179 else:
180 product += int(su_num[9])
181 if product % 11 == 0:
182 return su_num
183 return query
184
[end of bookwyrm/views/search.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bookwyrm/views/search.py b/bookwyrm/views/search.py
--- a/bookwyrm/views/search.py
+++ b/bookwyrm/views/search.py
@@ -90,6 +90,7 @@
"""cool kids members only user search"""
viewer = request.user
query = request.GET.get("q")
+ query = query.strip()
data = {"type": "user", "query": query}
# logged out viewers can't search users
if not viewer.is_authenticated:
|
{"golden_diff": "diff --git a/bookwyrm/views/search.py b/bookwyrm/views/search.py\n--- a/bookwyrm/views/search.py\n+++ b/bookwyrm/views/search.py\n@@ -90,6 +90,7 @@\n \"\"\"cool kids members only user search\"\"\"\n viewer = request.user\n query = request.GET.get(\"q\")\n+ query = query.strip()\n data = {\"type\": \"user\", \"query\": query}\n # logged out viewers can't search users\n if not viewer.is_authenticated:\n", "issue": "Adding a space after typing a username will not show that user in the search results\n**Describe the bug**\r\nWhen I search for \"@[email protected]\" it won't work if the input has a space at the end (\"@[email protected] \")\r\nIt may work if the user has already been searched for before though. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. search a user you don't currently follow\r\n2. add a space at the end\r\n3. the user won't be found\r\n\r\n**Expected behavior**\r\nspaces should be ignored when looking for usernames\r\n\r\n**Instance**\r\nOn which BookWyrm instance did you encounter this problem.\r\n\r\n**Additional context**\r\nBookrastinating.com\r\n\r\n---\r\n\r\n**Desktop**\r\n - OS: Fedora\r\n - Browser Firefox\r\n - Version 102\r\n\r\n\n", "before_files": [{"content": "\"\"\" search views\"\"\"\nimport re\n\nfrom django.contrib.postgres.search import TrigramSimilarity\nfrom django.core.paginator import Paginator\nfrom django.db.models.functions import Greatest\nfrom django.http import JsonResponse\nfrom django.template.response import TemplateResponse\nfrom django.views import View\n\nfrom bookwyrm import models\nfrom bookwyrm.connectors import connector_manager\nfrom bookwyrm.book_search import search, format_search_result\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom bookwyrm.utils import regex\nfrom .helpers import is_api_request\nfrom .helpers import handle_remote_webfinger\n\n\n# pylint: disable= no-self-use\nclass Search(View):\n \"\"\"search users or books\"\"\"\n\n def get(self, request):\n \"\"\"that search bar up top\"\"\"\n if is_api_request(request):\n return api_book_search(request)\n\n query = request.GET.get(\"q\")\n if not query:\n return TemplateResponse(request, \"search/book.html\")\n\n search_type = request.GET.get(\"type\")\n if query and not search_type:\n search_type = \"user\" if \"@\" in query else \"book\"\n\n endpoints = {\n \"book\": book_search,\n \"user\": user_search,\n \"list\": list_search,\n }\n if not search_type in endpoints:\n search_type = \"book\"\n\n return endpoints[search_type](request)\n\n\ndef api_book_search(request):\n \"\"\"Return books via API response\"\"\"\n query = request.GET.get(\"q\")\n query = isbn_check(query)\n min_confidence = request.GET.get(\"min_confidence\", 0)\n # only return local book results via json so we don't cascade\n book_results = search(query, min_confidence=min_confidence)\n return JsonResponse(\n [format_search_result(r) for r in book_results[:10]], safe=False\n )\n\n\ndef book_search(request):\n \"\"\"the real business is elsewhere\"\"\"\n query = request.GET.get(\"q\")\n # check if query is isbn\n query = isbn_check(query)\n min_confidence = request.GET.get(\"min_confidence\", 0)\n search_remote = request.GET.get(\"remote\", False) and request.user.is_authenticated\n\n # try a local-only search\n local_results = search(query, min_confidence=min_confidence)\n paginated = Paginator(local_results, PAGE_LENGTH)\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"query\": query,\n \"results\": page,\n \"type\": \"book\",\n \"remote\": 
search_remote,\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n }\n # if a logged in user requested remote results or got no local results, try remote\n if request.user.is_authenticated and (not local_results or search_remote):\n data[\"remote_results\"] = connector_manager.search(\n query, min_confidence=min_confidence\n )\n return TemplateResponse(request, \"search/book.html\", data)\n\n\ndef user_search(request):\n \"\"\"cool kids members only user search\"\"\"\n viewer = request.user\n query = request.GET.get(\"q\")\n data = {\"type\": \"user\", \"query\": query}\n # logged out viewers can't search users\n if not viewer.is_authenticated:\n return TemplateResponse(request, \"search/user.html\", data)\n\n # use webfinger for mastodon style [email protected] username to load the user if\n # they don't exist locally (handle_remote_webfinger will check the db)\n if re.match(regex.FULL_USERNAME, query):\n handle_remote_webfinger(query)\n\n results = (\n models.User.viewer_aware_objects(viewer)\n .annotate(\n similarity=Greatest(\n TrigramSimilarity(\"username\", query),\n TrigramSimilarity(\"localname\", query),\n )\n )\n .filter(\n similarity__gt=0.5,\n )\n .order_by(\"-similarity\")\n )\n paginated = Paginator(results, PAGE_LENGTH)\n page = paginated.get_page(request.GET.get(\"page\"))\n data[\"results\"] = page\n data[\"page_range\"] = paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n )\n return TemplateResponse(request, \"search/user.html\", data)\n\n\ndef list_search(request):\n \"\"\"any relevent lists?\"\"\"\n query = request.GET.get(\"q\")\n data = {\"query\": query, \"type\": \"list\"}\n results = (\n models.List.privacy_filter(\n request.user,\n privacy_levels=[\"public\", \"followers\"],\n )\n .annotate(\n similarity=Greatest(\n TrigramSimilarity(\"name\", query),\n TrigramSimilarity(\"description\", query),\n )\n )\n .filter(\n similarity__gt=0.1,\n )\n .order_by(\"-similarity\")\n )\n paginated = Paginator(results, PAGE_LENGTH)\n page = paginated.get_page(request.GET.get(\"page\"))\n data[\"results\"] = page\n data[\"page_range\"] = paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n )\n return TemplateResponse(request, \"search/list.html\", data)\n\n\ndef isbn_check(query):\n \"\"\"isbn10 or isbn13 check, if so remove separators\"\"\"\n if query:\n su_num = re.sub(r\"(?<=\\d)\\D(?=\\d|[xX])\", \"\", query)\n if len(su_num) == 13 and su_num.isdecimal():\n # Multiply every other digit by 3\n # Add these numbers and the other digits\n product = sum(int(ch) for ch in su_num[::2]) + sum(\n int(ch) * 3 for ch in su_num[1::2]\n )\n if product % 10 == 0:\n return su_num\n elif (\n len(su_num) == 10\n and su_num[:-1].isdecimal()\n and (su_num[-1].isdecimal() or su_num[-1].lower() == \"x\")\n ):\n product = 0\n # Iterate through code_string\n for i in range(9):\n # for each character, multiply by a different decreasing number: 10 - x\n product = product + int(su_num[i]) * (10 - i)\n # Handle last character\n if su_num[9].lower() == \"x\":\n product += 10\n else:\n product += int(su_num[9])\n if product % 11 == 0:\n return su_num\n return query\n", "path": "bookwyrm/views/search.py"}]}
| 2,578 | 110 |
gh_patches_debug_2638
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-1850
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Build failed on Mac
I cannot build Chainer on mac after #1775 is merged.
```
% python setup.py develop
Options: {'profile': False, 'annotate': False, 'linetrace': False, 'no_cuda': False}
Traceback (most recent call last):
File "setup.py", line 17, in <module>
ext_modules = chainer_setup_build.get_ext_modules()
File "/Users/unno/git/chainer/chainer_setup_build.py", line 219, in get_ext_modules
sysconfig.customize_compiler(compiler)
File "/Users/unno/.pyenv/versions/2.7.8/lib/python2.7/distutils/sysconfig.py", line 168, in customize_compiler
if not _config_vars.get('CUSTOMIZED_OSX_COMPILER', ''):
AttributeError: 'NoneType' object has no attribute 'get'
```
I investigated distutils. It initializes `_config_var` variables as `None` and when a user calls `get_config_vars` this variable is initialized.
BTW, I think we need to use ccompiler in `build_ext` because this option has many command line options for compiler flags.
</issue>
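The distutils behaviour described above can be reproduced in a few lines: `customize_compiler` dereferences the module-level `_config_vars`, which stays `None` until something calls `get_config_vars()`. A sketch of the workaround, priming that cache before customizing a fresh compiler, is below; it assumes the legacy `distutils` layout referenced in the traceback.

```python
# Sketch of the workaround, assuming the legacy distutils API from the traceback.
from distutils import ccompiler, sysconfig

# Priming call: populates distutils.sysconfig._config_vars so that
# customize_compiler() no longer dereferences None on affected platforms.
sysconfig.get_config_vars()

compiler = ccompiler.new_compiler()
sysconfig.customize_compiler(compiler)  # raised AttributeError on Mac without the priming call
```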
<code>
[start of chainer_setup_build.py]
1 from __future__ import print_function
2 from distutils import ccompiler
3 from distutils import sysconfig
4 import os
5 from os import path
6 import sys
7
8 import pkg_resources
9 import setuptools
10
11 from install import build
12 from install import utils
13
14
15 require_cython_version = pkg_resources.parse_version('0.24.0')
16
17 MODULES = [
18 {
19 'name': 'cuda',
20 'file': [
21 'cupy.core.core',
22 'cupy.core.flags',
23 'cupy.core.internal',
24 'cupy.cuda.cublas',
25 'cupy.cuda.curand',
26 'cupy.cuda.device',
27 'cupy.cuda.driver',
28 'cupy.cuda.memory',
29 'cupy.cuda.pinned_memory',
30 'cupy.cuda.profiler',
31 'cupy.cuda.nvtx',
32 'cupy.cuda.function',
33 'cupy.cuda.runtime',
34 'cupy.util',
35 ],
36 'include': [
37 'cublas_v2.h',
38 'cuda.h',
39 'cuda_profiler_api.h',
40 'cuda_runtime.h',
41 'curand.h',
42 'nvToolsExt.h',
43 ],
44 'libraries': [
45 'cublas',
46 'cuda',
47 'cudart',
48 'curand',
49 'nvToolsExt',
50 ],
51 'check_method': build.check_cuda_version,
52 },
53 {
54 'name': 'cudnn',
55 'file': [
56 'cupy.cuda.cudnn',
57 ],
58 'include': [
59 'cudnn.h',
60 ],
61 'libraries': [
62 'cudnn',
63 ],
64 'check_method': build.check_cudnn_version,
65 }
66 ]
67
68 if sys.platform == 'win32':
69 mod_cuda = MODULES[0]
70 mod_cuda['file'].remove('cupy.cuda.nvtx')
71 mod_cuda['include'].remove('nvToolsExt.h')
72 mod_cuda['libraries'].remove('nvToolsExt')
73
74
75 def check_readthedocs_environment():
76 return os.environ.get('READTHEDOCS', None) == 'True'
77
78
79 def check_library(compiler, includes=(), libraries=(),
80 include_dirs=(), library_dirs=()):
81
82 source = ''.join(['#include <%s>\n' % header for header in includes])
83 source += 'int main(int argc, char* argv[]) {return 0;}'
84 try:
85 build.build_and_run(compiler, source, libraries,
86 include_dirs, library_dirs)
87 except Exception:
88 return False
89 return True
90
91
92 def make_extensions(options, compiler, use_cython):
93 """Produce a list of Extension instances which passed to cythonize()."""
94
95 no_cuda = options['no_cuda']
96 settings = build.get_compiler_setting()
97
98 include_dirs = settings['include_dirs']
99
100 settings['include_dirs'] = [
101 x for x in include_dirs if path.exists(x)]
102 settings['library_dirs'] = [
103 x for x in settings['library_dirs'] if path.exists(x)]
104 if sys.platform != 'win32':
105 settings['runtime_library_dirs'] = settings['library_dirs']
106 if sys.platform == 'darwin':
107 args = settings.setdefault('extra_link_args', [])
108 args.append(
109 '-Wl,' + ','.join('-rpath,' + p
110 for p in settings['library_dirs']))
111 # -rpath is only supported when targetting Mac OS X 10.5 or later
112 args.append('-mmacosx-version-min=10.5')
113
114 if options['linetrace']:
115 settings['define_macros'].append(('CYTHON_TRACE', '1'))
116 settings['define_macros'].append(('CYTHON_TRACE_NOGIL', '1'))
117 if no_cuda:
118 settings['define_macros'].append(('CUPY_NO_CUDA', '1'))
119
120 ret = []
121 ext = '.pyx' if use_cython else '.cpp'
122 for module in MODULES:
123 print('Include directories:', settings['include_dirs'])
124 print('Library directories:', settings['library_dirs'])
125
126 if not no_cuda:
127 if not check_library(compiler,
128 includes=module['include'],
129 include_dirs=settings['include_dirs']):
130 utils.print_warning(
131 'Include files not found: %s' % module['include'],
132 'Skip installing %s support' % module['name'],
133 'Check your CFLAGS environment variable')
134 continue
135
136 if not check_library(compiler,
137 libraries=module['libraries'],
138 library_dirs=settings['library_dirs']):
139 utils.print_warning(
140 'Cannot link libraries: %s' % module['libraries'],
141 'Skip installing %s support' % module['name'],
142 'Check your LDFLAGS environment variable')
143 continue
144
145 if 'check_method' in module and \
146 not module['check_method'](compiler, settings):
147 continue
148
149 s = settings.copy()
150 if not no_cuda:
151 s['libraries'] = module['libraries']
152
153 ret.extend([
154 setuptools.Extension(f, [path.join(*f.split('.')) + ext], **s)
155 for f in module['file']])
156 return ret
157
158
159 def parse_args():
160 arg_options = dict()
161 arg_options['profile'] = '--cupy-profile' in sys.argv
162 if arg_options['profile']:
163 sys.argv.remove('--cupy-profile')
164
165 cupy_coverage = '--cupy-coverage' in sys.argv
166 if cupy_coverage:
167 sys.argv.remove('--cupy-coverage')
168 arg_options['linetrace'] = cupy_coverage
169 arg_options['annotate'] = cupy_coverage
170
171 arg_options['no_cuda'] = '--cupy-no-cuda' in sys.argv
172 if arg_options['no_cuda']:
173 sys.argv.remove('--cupy-no-cuda')
174 if check_readthedocs_environment():
175 arg_options['no_cuda'] = True
176 return arg_options
177
178
179 def check_cython_version():
180 try:
181 import Cython
182 cython_version = pkg_resources.parse_version(Cython.__version__)
183 return cython_version >= require_cython_version
184 except ImportError:
185 return False
186
187
188 def cythonize(extensions, arg_options):
189 import Cython.Build
190
191 directive_keys = ('linetrace', 'profile')
192 directives = {key: arg_options[key] for key in directive_keys}
193
194 cythonize_option_keys = ('annotate',)
195 cythonize_options = {key: arg_options[key]
196 for key in cythonize_option_keys}
197
198 return Cython.Build.cythonize(
199 extensions, language="c++", verbose=True,
200 compiler_directives=directives, **cythonize_options)
201
202
203 def check_extensions(extensions):
204 for x in extensions:
205 for f in x.sources:
206 if not path.isfile(f):
207 msg = ('Missing file: %s\n' % f +
208 'Please install Cython.\n' +
209 'See http://docs.chainer.org/en/stable/install.html')
210 raise RuntimeError(msg)
211
212
213 def get_ext_modules():
214 arg_options = parse_args()
215 print('Options:', arg_options)
216
217 compiler = ccompiler.new_compiler()
218 sysconfig.customize_compiler(compiler)
219
220 use_cython = check_cython_version()
221 extensions = make_extensions(arg_options, compiler, use_cython)
222
223 if use_cython:
224 extensions = cythonize(extensions, arg_options)
225
226 check_extensions(extensions)
227 return extensions
228
[end of chainer_setup_build.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer_setup_build.py b/chainer_setup_build.py
--- a/chainer_setup_build.py
+++ b/chainer_setup_build.py
@@ -214,6 +214,9 @@
arg_options = parse_args()
print('Options:', arg_options)
+ # We need to call get_config_vars to initialize _config_vars in distutils
+ # see #1849
+ sysconfig.get_config_vars()
compiler = ccompiler.new_compiler()
sysconfig.customize_compiler(compiler)
|
{"golden_diff": "diff --git a/chainer_setup_build.py b/chainer_setup_build.py\n--- a/chainer_setup_build.py\n+++ b/chainer_setup_build.py\n@@ -214,6 +214,9 @@\n arg_options = parse_args()\n print('Options:', arg_options)\n \n+ # We need to call get_config_vars to initialize _config_vars in distutils\n+ # see #1849\n+ sysconfig.get_config_vars()\n compiler = ccompiler.new_compiler()\n sysconfig.customize_compiler(compiler)\n", "issue": "Build failed on Mac\nI cannot build Chainer on mac after #1775 is merged.\r\n\r\n```\r\n% python setup.py develop\r\nOptions: {'profile': False, 'annotate': False, 'linetrace': False, 'no_cuda': False}\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 17, in <module>\r\n ext_modules = chainer_setup_build.get_ext_modules()\r\n File \"/Users/unno/git/chainer/chainer_setup_build.py\", line 219, in get_ext_modules\r\n sysconfig.customize_compiler(compiler)\r\n File \"/Users/unno/.pyenv/versions/2.7.8/lib/python2.7/distutils/sysconfig.py\", line 168, in customize_compiler\r\n if not _config_vars.get('CUSTOMIZED_OSX_COMPILER', ''):\r\nAttributeError: 'NoneType' object has no attribute 'get'\r\n```\r\n\r\nI investigated distutils. It initializes `_config_var` variables as `None` and when a user calls `get_config_vars` this variable is initialized.\r\nBTW, I think we need to use ccompiler in `build_ext` because this option has many command line options for compiler flags.\n", "before_files": [{"content": "from __future__ import print_function\nfrom distutils import ccompiler\nfrom distutils import sysconfig\nimport os\nfrom os import path\nimport sys\n\nimport pkg_resources\nimport setuptools\n\nfrom install import build\nfrom install import utils\n\n\nrequire_cython_version = pkg_resources.parse_version('0.24.0')\n\nMODULES = [\n {\n 'name': 'cuda',\n 'file': [\n 'cupy.core.core',\n 'cupy.core.flags',\n 'cupy.core.internal',\n 'cupy.cuda.cublas',\n 'cupy.cuda.curand',\n 'cupy.cuda.device',\n 'cupy.cuda.driver',\n 'cupy.cuda.memory',\n 'cupy.cuda.pinned_memory',\n 'cupy.cuda.profiler',\n 'cupy.cuda.nvtx',\n 'cupy.cuda.function',\n 'cupy.cuda.runtime',\n 'cupy.util',\n ],\n 'include': [\n 'cublas_v2.h',\n 'cuda.h',\n 'cuda_profiler_api.h',\n 'cuda_runtime.h',\n 'curand.h',\n 'nvToolsExt.h',\n ],\n 'libraries': [\n 'cublas',\n 'cuda',\n 'cudart',\n 'curand',\n 'nvToolsExt',\n ],\n 'check_method': build.check_cuda_version,\n },\n {\n 'name': 'cudnn',\n 'file': [\n 'cupy.cuda.cudnn',\n ],\n 'include': [\n 'cudnn.h',\n ],\n 'libraries': [\n 'cudnn',\n ],\n 'check_method': build.check_cudnn_version,\n }\n]\n\nif sys.platform == 'win32':\n mod_cuda = MODULES[0]\n mod_cuda['file'].remove('cupy.cuda.nvtx')\n mod_cuda['include'].remove('nvToolsExt.h')\n mod_cuda['libraries'].remove('nvToolsExt')\n\n\ndef check_readthedocs_environment():\n return os.environ.get('READTHEDOCS', None) == 'True'\n\n\ndef check_library(compiler, includes=(), libraries=(),\n include_dirs=(), library_dirs=()):\n\n source = ''.join(['#include <%s>\\n' % header for header in includes])\n source += 'int main(int argc, char* argv[]) {return 0;}'\n try:\n build.build_and_run(compiler, source, libraries,\n include_dirs, library_dirs)\n except Exception:\n return False\n return True\n\n\ndef make_extensions(options, compiler, use_cython):\n \"\"\"Produce a list of Extension instances which passed to cythonize().\"\"\"\n\n no_cuda = options['no_cuda']\n settings = build.get_compiler_setting()\n\n include_dirs = settings['include_dirs']\n\n settings['include_dirs'] = [\n x for x in 
include_dirs if path.exists(x)]\n settings['library_dirs'] = [\n x for x in settings['library_dirs'] if path.exists(x)]\n if sys.platform != 'win32':\n settings['runtime_library_dirs'] = settings['library_dirs']\n if sys.platform == 'darwin':\n args = settings.setdefault('extra_link_args', [])\n args.append(\n '-Wl,' + ','.join('-rpath,' + p\n for p in settings['library_dirs']))\n # -rpath is only supported when targetting Mac OS X 10.5 or later\n args.append('-mmacosx-version-min=10.5')\n\n if options['linetrace']:\n settings['define_macros'].append(('CYTHON_TRACE', '1'))\n settings['define_macros'].append(('CYTHON_TRACE_NOGIL', '1'))\n if no_cuda:\n settings['define_macros'].append(('CUPY_NO_CUDA', '1'))\n\n ret = []\n ext = '.pyx' if use_cython else '.cpp'\n for module in MODULES:\n print('Include directories:', settings['include_dirs'])\n print('Library directories:', settings['library_dirs'])\n\n if not no_cuda:\n if not check_library(compiler,\n includes=module['include'],\n include_dirs=settings['include_dirs']):\n utils.print_warning(\n 'Include files not found: %s' % module['include'],\n 'Skip installing %s support' % module['name'],\n 'Check your CFLAGS environment variable')\n continue\n\n if not check_library(compiler,\n libraries=module['libraries'],\n library_dirs=settings['library_dirs']):\n utils.print_warning(\n 'Cannot link libraries: %s' % module['libraries'],\n 'Skip installing %s support' % module['name'],\n 'Check your LDFLAGS environment variable')\n continue\n\n if 'check_method' in module and \\\n not module['check_method'](compiler, settings):\n continue\n\n s = settings.copy()\n if not no_cuda:\n s['libraries'] = module['libraries']\n\n ret.extend([\n setuptools.Extension(f, [path.join(*f.split('.')) + ext], **s)\n for f in module['file']])\n return ret\n\n\ndef parse_args():\n arg_options = dict()\n arg_options['profile'] = '--cupy-profile' in sys.argv\n if arg_options['profile']:\n sys.argv.remove('--cupy-profile')\n\n cupy_coverage = '--cupy-coverage' in sys.argv\n if cupy_coverage:\n sys.argv.remove('--cupy-coverage')\n arg_options['linetrace'] = cupy_coverage\n arg_options['annotate'] = cupy_coverage\n\n arg_options['no_cuda'] = '--cupy-no-cuda' in sys.argv\n if arg_options['no_cuda']:\n sys.argv.remove('--cupy-no-cuda')\n if check_readthedocs_environment():\n arg_options['no_cuda'] = True\n return arg_options\n\n\ndef check_cython_version():\n try:\n import Cython\n cython_version = pkg_resources.parse_version(Cython.__version__)\n return cython_version >= require_cython_version\n except ImportError:\n return False\n\n\ndef cythonize(extensions, arg_options):\n import Cython.Build\n\n directive_keys = ('linetrace', 'profile')\n directives = {key: arg_options[key] for key in directive_keys}\n\n cythonize_option_keys = ('annotate',)\n cythonize_options = {key: arg_options[key]\n for key in cythonize_option_keys}\n\n return Cython.Build.cythonize(\n extensions, language=\"c++\", verbose=True,\n compiler_directives=directives, **cythonize_options)\n\n\ndef check_extensions(extensions):\n for x in extensions:\n for f in x.sources:\n if not path.isfile(f):\n msg = ('Missing file: %s\\n' % f +\n 'Please install Cython.\\n' +\n 'See http://docs.chainer.org/en/stable/install.html')\n raise RuntimeError(msg)\n\n\ndef get_ext_modules():\n arg_options = parse_args()\n print('Options:', arg_options)\n\n compiler = ccompiler.new_compiler()\n sysconfig.customize_compiler(compiler)\n\n use_cython = check_cython_version()\n extensions = make_extensions(arg_options, 
compiler, use_cython)\n\n if use_cython:\n extensions = cythonize(extensions, arg_options)\n\n check_extensions(extensions)\n return extensions\n", "path": "chainer_setup_build.py"}]}
| 2,951 | 116 |
gh_patches_debug_650
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1942
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.109
On the docket:
+ [x] pex does not support musllinux wheels #1933
+ [x] Empty string PEX_PATH="" env var causes CWD (.) to be added bootstrapped pex_path #1936
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.108"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.108"
+__version__ = "2.1.109"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.108\"\n+__version__ = \"2.1.109\"\n", "issue": "Release 2.1.109\nOn the docket:\r\n+ [x] pex does not support musllinux wheels #1933\r\n+ [x] Empty string PEX_PATH=\"\" env var causes CWD (.) to be added bootstrapped pex_path #1936\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.108\"\n", "path": "pex/version.py"}]}
| 649 | 98 |
gh_patches_debug_25401
|
rasdani/github-patches
|
git_diff
|
GeotrekCE__Geotrek-admin-1307
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Infrastructure list is filtered on "Ouvrage" by default
I created some infrastructure points (8 in total), yet most of them do not show up (neither in the list nor on the map)...

When I open an infrastructure detail sheet and then click the list button again, they all appear, but only on the map.

However, as soon as I change the zoom level, they disappear and I am left with only the three from the start.
</issue>
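The symptom in the screenshots (most points hidden until the page is revisited) comes from the category widget lacking an explicit empty choice, so the first concrete category acts as a default filter. A hedged sketch of the usual remedy, prepending a blank option to a Django select widget after excluding the signage type, is shown below; field names mirror the FilterSet in the listing that follows, everything else is illustrative rather than the exact Geotrek change.

```python
# Illustrative only: prepend an empty "no filter" choice to a Django select
# widget so the list is unfiltered by default. The widget/choices attributes
# follow Django's ChoiceField conventions; the helper itself is an assumption.
def add_blank_choice(field, excluded_value, blank_label="Category"):
    choices = [c for c in field.widget.choices if c[0] != excluded_value]
    field.widget.choices = [("", blank_label)] + choices
```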
<code>
[start of geotrek/infrastructure/filters.py]
1 from django.utils.translation import ugettext_lazy as _
2
3 from geotrek.common.filters import StructureRelatedFilterSet, YearFilter
4 from geotrek.maintenance.filters import InterventionYearSelect
5
6 from .models import INFRASTRUCTURE_TYPES, Infrastructure, Signage
7
8
9 class InfrastructureYearSelect(InterventionYearSelect):
10 label = _(u"Intervention year")
11
12
13 class InfrastructureFilterSet(StructureRelatedFilterSet):
14 intervention_year = YearFilter(name='interventions_set__date',
15 widget=InfrastructureYearSelect,
16 label=_(u"Intervention year"))
17
18 def __init__(self, *args, **kwargs):
19 super(InfrastructureFilterSet, self).__init__(*args, **kwargs)
20 field = self.form.fields['type']
21 field.queryset = field.queryset.exclude(type=INFRASTRUCTURE_TYPES.SIGNAGE)
22
23 class Meta(StructureRelatedFilterSet.Meta):
24 model = Infrastructure
25 fields = StructureRelatedFilterSet.Meta.fields + ['type__type', 'type']
26
27
28 class SignageFilterSet(StructureRelatedFilterSet):
29 intervention_year = YearFilter(name='interventions_set__date',
30 widget=InfrastructureYearSelect)
31
32 def __init__(self, *args, **kwargs):
33 super(SignageFilterSet, self).__init__(*args, **kwargs)
34 field = self.form.fields['type']
35 field.queryset = field.queryset.filter(type=INFRASTRUCTURE_TYPES.SIGNAGE)
36
37 class Meta(StructureRelatedFilterSet.Meta):
38 model = Signage
39 fields = StructureRelatedFilterSet.Meta.fields + ['type']
40
[end of geotrek/infrastructure/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/geotrek/infrastructure/filters.py b/geotrek/infrastructure/filters.py
--- a/geotrek/infrastructure/filters.py
+++ b/geotrek/infrastructure/filters.py
@@ -20,6 +20,11 @@
field = self.form.fields['type']
field.queryset = field.queryset.exclude(type=INFRASTRUCTURE_TYPES.SIGNAGE)
+ field = self.form.fields['type__type']
+ all_choices = field.widget.choices
+ all_choices = [c for c in all_choices if c[0] != INFRASTRUCTURE_TYPES.SIGNAGE]
+ field.widget.choices = [('', _(u"Category"))] + all_choices
+
class Meta(StructureRelatedFilterSet.Meta):
model = Infrastructure
fields = StructureRelatedFilterSet.Meta.fields + ['type__type', 'type']
@@ -29,11 +34,6 @@
intervention_year = YearFilter(name='interventions_set__date',
widget=InfrastructureYearSelect)
- def __init__(self, *args, **kwargs):
- super(SignageFilterSet, self).__init__(*args, **kwargs)
- field = self.form.fields['type']
- field.queryset = field.queryset.filter(type=INFRASTRUCTURE_TYPES.SIGNAGE)
-
class Meta(StructureRelatedFilterSet.Meta):
model = Signage
- fields = StructureRelatedFilterSet.Meta.fields + ['type']
+ fields = StructureRelatedFilterSet.Meta.fields
|
{"golden_diff": "diff --git a/geotrek/infrastructure/filters.py b/geotrek/infrastructure/filters.py\n--- a/geotrek/infrastructure/filters.py\n+++ b/geotrek/infrastructure/filters.py\n@@ -20,6 +20,11 @@\n field = self.form.fields['type']\n field.queryset = field.queryset.exclude(type=INFRASTRUCTURE_TYPES.SIGNAGE)\n \n+ field = self.form.fields['type__type']\n+ all_choices = field.widget.choices\n+ all_choices = [c for c in all_choices if c[0] != INFRASTRUCTURE_TYPES.SIGNAGE]\n+ field.widget.choices = [('', _(u\"Category\"))] + all_choices\n+\n class Meta(StructureRelatedFilterSet.Meta):\n model = Infrastructure\n fields = StructureRelatedFilterSet.Meta.fields + ['type__type', 'type']\n@@ -29,11 +34,6 @@\n intervention_year = YearFilter(name='interventions_set__date',\n widget=InfrastructureYearSelect)\n \n- def __init__(self, *args, **kwargs):\n- super(SignageFilterSet, self).__init__(*args, **kwargs)\n- field = self.form.fields['type']\n- field.queryset = field.queryset.filter(type=INFRASTRUCTURE_TYPES.SIGNAGE)\n-\n class Meta(StructureRelatedFilterSet.Meta):\n model = Signage\n- fields = StructureRelatedFilterSet.Meta.fields + ['type']\n+ fields = StructureRelatedFilterSet.Meta.fields\n", "issue": "Infrastructure list is filtered on \"Ouvrage\" by default\nJ'ai cr\u00e9\u00e9 des points d'am\u00e9nagements (8 au total), cependant la plupart ne s'affichent pas (ni dans la liste, ni sur la carte)...\n\nLorsque je rentre dans une fiche am\u00e9nagement et que je reclique sur le bouton liste, l\u00e0 ils apparaissent tous mais seulement sur la carte.\n\nPar contre, si je touche au zoom, ils disparaissent et je n'ai plus que les trois du d\u00e9but.\n\n", "before_files": [{"content": "from django.utils.translation import ugettext_lazy as _\n\nfrom geotrek.common.filters import StructureRelatedFilterSet, YearFilter\nfrom geotrek.maintenance.filters import InterventionYearSelect\n\nfrom .models import INFRASTRUCTURE_TYPES, Infrastructure, Signage\n\n\nclass InfrastructureYearSelect(InterventionYearSelect):\n label = _(u\"Intervention year\")\n\n\nclass InfrastructureFilterSet(StructureRelatedFilterSet):\n intervention_year = YearFilter(name='interventions_set__date',\n widget=InfrastructureYearSelect,\n label=_(u\"Intervention year\"))\n\n def __init__(self, *args, **kwargs):\n super(InfrastructureFilterSet, self).__init__(*args, **kwargs)\n field = self.form.fields['type']\n field.queryset = field.queryset.exclude(type=INFRASTRUCTURE_TYPES.SIGNAGE)\n\n class Meta(StructureRelatedFilterSet.Meta):\n model = Infrastructure\n fields = StructureRelatedFilterSet.Meta.fields + ['type__type', 'type']\n\n\nclass SignageFilterSet(StructureRelatedFilterSet):\n intervention_year = YearFilter(name='interventions_set__date',\n widget=InfrastructureYearSelect)\n\n def __init__(self, *args, **kwargs):\n super(SignageFilterSet, self).__init__(*args, **kwargs)\n field = self.form.fields['type']\n field.queryset = field.queryset.filter(type=INFRASTRUCTURE_TYPES.SIGNAGE)\n\n class Meta(StructureRelatedFilterSet.Meta):\n model = Signage\n fields = StructureRelatedFilterSet.Meta.fields + ['type']\n", "path": "geotrek/infrastructure/filters.py"}]}
| 1,185 | 332 |
gh_patches_debug_11644
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-954
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CI-fail] GLScatterPlotItem failing on Windows Builds
```
Traceback (most recent call last):
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\latebind.py", line 41, in __call__
return self._finalCall( *args, **named )
TypeError: 'NoneType' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\a\1\s\pyqtgraph\opengl\GLViewWidget.py", line 60, in addItem
item.initializeGL()
File "D:\a\1\s\pyqtgraph\opengl\items\GLScatterPlotItem.py", line 70, in initializeGL
self.pointTexture = glGenTextures(1)
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\latebind.py", line 61, in __call__
return self.wrapperFunction( self.baseFunction, *args, **named )
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\GL\exceptional.py", line 178, in glGenTextures
baseFunction( count, textures)
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\latebind.py", line 45, in __call__
return self._finalCall( *args, **named )
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\wrapper.py", line 664, in wrapperCall
raise err
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\wrapper.py", line 657, in wrapperCall
result = wrappedOperation( *cArguments )
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\platform\baseplatform.py", line 402, in __call__
return self( *args, **named )
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\error.py", line 232, in glCheckError
baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glGenTextures,
pyArgs = (1, c_ulong(0)),
cArgs = (1, <cparam 'P' (00000158BE5A9310)>),
cArguments = (1, <cparam 'P' (00000158BE5A9310)>)
)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 7, in <module>
File "D:\a\1\s\examples\GLScatterPlotItem.py", line 46, in <module>
w.addItem(sp1)
File "D:\a\1\s\pyqtgraph\opengl\GLViewWidget.py", line 62, in addItem
self.checkOpenGLVersion('Error while adding item %s to GLViewWidget.' % str(item))
File "D:\a\1\s\pyqtgraph\opengl\GLViewWidget.py", line 429, in checkOpenGLVersion
ver = glGetString(GL_VERSION).split()[0]
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\platform\baseplatform.py", line 402, in __call__
return self( *args, **named )
File "c:\hostedtoolcache\windows\python\3.7.3\x64\lib\site-packages\OpenGL\error.py", line 232, in glCheckError
baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glGetString,
cArguments = (GL_VERSION,)
)
Failed Scatter Plot Example Test Located in GLScatterPlotItem.py
```
Likely related to #928
</issue>
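Besides the missing GL context on the CI worker, the module below also trips over a Python 3 detail: PyOpenGL's `glGetString` returns `bytes`, so concatenating its result with a `str` fails. A sketch of the defensive pattern is given here; it assumes a current GL context already exists (the traceback above is what happens when it does not), and it is not presented as the exact upstream fix.

```python
# Sketch of a Python-3-safe way to report GL strings; assumes a live GL context.
from OpenGL.GL import GL_VERSION, GL_EXTENSIONS, glGetString

def gl_version_report():
    version = glGetString(GL_VERSION)        # bytes under Python 3 (or None without a context)
    extensions = glGetString(GL_EXTENSIONS)
    # Decode before joining with str to avoid TypeError on Python 3.
    return (
        "GL version: " + version.decode("utf-8"),
        "Extensions:\n" + extensions.decode("utf-8").replace(" ", "\n"),
    )
```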
<code>
[start of pyqtgraph/opengl/glInfo.py]
1 from ..Qt import QtCore, QtGui, QtOpenGL
2 from OpenGL.GL import *
3 app = QtGui.QApplication([])
4
5 class GLTest(QtOpenGL.QGLWidget):
6 def __init__(self):
7 QtOpenGL.QGLWidget.__init__(self)
8 self.makeCurrent()
9 print("GL version:" + glGetString(GL_VERSION))
10 print("MAX_TEXTURE_SIZE: %d" % glGetIntegerv(GL_MAX_TEXTURE_SIZE))
11 print("MAX_3D_TEXTURE_SIZE: %d" % glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))
12 print("Extensions: " + glGetString(GL_EXTENSIONS))
13
14 GLTest()
15
16
17
[end of pyqtgraph/opengl/glInfo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/opengl/glInfo.py b/pyqtgraph/opengl/glInfo.py
--- a/pyqtgraph/opengl/glInfo.py
+++ b/pyqtgraph/opengl/glInfo.py
@@ -6,10 +6,10 @@
def __init__(self):
QtOpenGL.QGLWidget.__init__(self)
self.makeCurrent()
- print("GL version:" + glGetString(GL_VERSION))
+ print("GL version:" + glGetString(GL_VERSION).decode("utf-8"))
print("MAX_TEXTURE_SIZE: %d" % glGetIntegerv(GL_MAX_TEXTURE_SIZE))
print("MAX_3D_TEXTURE_SIZE: %d" % glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))
- print("Extensions: " + glGetString(GL_EXTENSIONS))
+ print("Extensions: " + glGetString(GL_EXTENSIONS).decode("utf-8").replace(" ", "\n"))
GLTest()
|
{"golden_diff": "diff --git a/pyqtgraph/opengl/glInfo.py b/pyqtgraph/opengl/glInfo.py\n--- a/pyqtgraph/opengl/glInfo.py\n+++ b/pyqtgraph/opengl/glInfo.py\n@@ -6,10 +6,10 @@\n def __init__(self):\n QtOpenGL.QGLWidget.__init__(self)\n self.makeCurrent()\n- print(\"GL version:\" + glGetString(GL_VERSION))\n+ print(\"GL version:\" + glGetString(GL_VERSION).decode(\"utf-8\"))\n print(\"MAX_TEXTURE_SIZE: %d\" % glGetIntegerv(GL_MAX_TEXTURE_SIZE))\n print(\"MAX_3D_TEXTURE_SIZE: %d\" % glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))\n- print(\"Extensions: \" + glGetString(GL_EXTENSIONS))\n+ print(\"Extensions: \" + glGetString(GL_EXTENSIONS).decode(\"utf-8\").replace(\" \", \"\\n\"))\n \n GLTest()\n", "issue": "[CI-fail] GLScatterPlotItem failing on Windows Builds\n```\r\nTraceback (most recent call last):\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\latebind.py\", line 41, in __call__\r\n\r\n return self._finalCall( *args, **named )\r\n\r\nTypeError: 'NoneType' object is not callable\r\n\r\n\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\n\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"D:\\a\\1\\s\\pyqtgraph\\opengl\\GLViewWidget.py\", line 60, in addItem\r\n\r\n item.initializeGL()\r\n\r\n File \"D:\\a\\1\\s\\pyqtgraph\\opengl\\items\\GLScatterPlotItem.py\", line 70, in initializeGL\r\n\r\n self.pointTexture = glGenTextures(1)\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\latebind.py\", line 61, in __call__\r\n\r\n return self.wrapperFunction( self.baseFunction, *args, **named )\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\GL\\exceptional.py\", line 178, in glGenTextures\r\n\r\n baseFunction( count, textures)\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\latebind.py\", line 45, in __call__\r\n\r\n return self._finalCall( *args, **named )\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\wrapper.py\", line 664, in wrapperCall\r\n\r\n raise err\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\wrapper.py\", line 657, in wrapperCall\r\n\r\n result = wrappedOperation( *cArguments )\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\platform\\baseplatform.py\", line 402, in __call__\r\n\r\n return self( *args, **named )\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\error.py\", line 232, in glCheckError\r\n\r\n baseOperation = baseOperation,\r\n\r\nOpenGL.error.GLError: GLError(\r\n\r\n\terr = 1282,\r\n\r\n\tdescription = b'invalid operation',\r\n\r\n\tbaseOperation = glGenTextures,\r\n\r\n\tpyArgs = (1, c_ulong(0)),\r\n\r\n\tcArgs = (1, <cparam 'P' (00000158BE5A9310)>),\r\n\r\n\tcArguments = (1, <cparam 'P' (00000158BE5A9310)>)\r\n\r\n)\r\n\r\n\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\n\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"<stdin>\", line 7, in <module>\r\n\r\n File \"D:\\a\\1\\s\\examples\\GLScatterPlotItem.py\", line 46, in <module>\r\n\r\n w.addItem(sp1)\r\n\r\n File \"D:\\a\\1\\s\\pyqtgraph\\opengl\\GLViewWidget.py\", line 62, in addItem\r\n\r\n self.checkOpenGLVersion('Error while adding item %s to GLViewWidget.' 
% str(item))\r\n\r\n File \"D:\\a\\1\\s\\pyqtgraph\\opengl\\GLViewWidget.py\", line 429, in checkOpenGLVersion\r\n\r\n ver = glGetString(GL_VERSION).split()[0]\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\platform\\baseplatform.py\", line 402, in __call__\r\n\r\n return self( *args, **named )\r\n\r\n File \"c:\\hostedtoolcache\\windows\\python\\3.7.3\\x64\\lib\\site-packages\\OpenGL\\error.py\", line 232, in glCheckError\r\n\r\n baseOperation = baseOperation,\r\n\r\nOpenGL.error.GLError: GLError(\r\n\r\n\terr = 1282,\r\n\r\n\tdescription = b'invalid operation',\r\n\r\n\tbaseOperation = glGetString,\r\n\r\n\tcArguments = (GL_VERSION,)\r\n\r\n)\r\n\r\n\r\nFailed Scatter Plot Example Test Located in GLScatterPlotItem.py \r\n```\r\n\r\nLikely related to #928 \n", "before_files": [{"content": "from ..Qt import QtCore, QtGui, QtOpenGL\nfrom OpenGL.GL import *\napp = QtGui.QApplication([])\n\nclass GLTest(QtOpenGL.QGLWidget):\n def __init__(self):\n QtOpenGL.QGLWidget.__init__(self)\n self.makeCurrent()\n print(\"GL version:\" + glGetString(GL_VERSION))\n print(\"MAX_TEXTURE_SIZE: %d\" % glGetIntegerv(GL_MAX_TEXTURE_SIZE))\n print(\"MAX_3D_TEXTURE_SIZE: %d\" % glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))\n print(\"Extensions: \" + glGetString(GL_EXTENSIONS))\n\nGLTest()\n\n\n", "path": "pyqtgraph/opengl/glInfo.py"}]}
| 1,710 | 199 |
gh_patches_debug_10153
|
rasdani/github-patches
|
git_diff
|
PyGithub__PyGithub-963
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PaginatedList.totalCount is broken when there are 0 results
Self-contained test:
```
from github import Github
github = Github()
repos = github.search_repositories('shouldreturn0repos')
assert repos.totalCount == 0
```
The `totalCount` method has this code:
```
def totalCount(self):
if not self.__totalCount:
[...]
if 'link' not in headers:
self.__totalCount = len(data) if data else 0
[...]
```
The response has no `link` header but it has this data:
```
{"total_count":0,"incomplete_results":false,"items":[]}
```
and `totalCount` returns 3 because there are 3 items inside the data dict.
I'm not sure why the `total_count` value in the response is not used directly.
</issue>
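The miscount follows directly from `len(data)` being applied to the search payload, which is a dict with three keys rather than a list of items. A small sketch of that distinction, and of reading the payload's own `total_count` instead, is given below; it is illustrative and not necessarily the exact fix PyGithub later adopted.

```python
# Payload quoted in the issue for a search with no hits.
data = {"total_count": 0, "incomplete_results": False, "items": []}

# What the quoted code effectively does when the 'link' header is absent:
broken_total = len(data) if data else 0  # == 3, because it counts the dict's keys

# What the payload actually says:
correct_total = data.get("total_count", len(data.get("items", [])))  # == 0

print(broken_total, correct_total)
```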
<code>
[start of github/PaginatedList.py]
1 # -*- coding: utf-8 -*-
2
3 ############################ Copyrights and license ############################
4 # #
5 # Copyright 2012 Vincent Jacques <[email protected]> #
6 # Copyright 2012 Zearin <[email protected]> #
7 # Copyright 2013 AKFish <[email protected]> #
8 # Copyright 2013 Bill Mill <[email protected]> #
9 # Copyright 2013 Vincent Jacques <[email protected]> #
10 # Copyright 2013 davidbrai <[email protected]> #
11 # Copyright 2014 Thialfihar <[email protected]> #
12 # Copyright 2014 Vincent Jacques <[email protected]> #
13 # Copyright 2015 Dan Vanderkam <[email protected]> #
14 # Copyright 2015 Eliot Walker <[email protected]> #
15 # Copyright 2016 Peter Buckley <[email protected]> #
16 # Copyright 2017 Jannis Gebauer <[email protected]> #
17 # Copyright 2018 Gilad Shefer <[email protected]> #
18 # Copyright 2018 Joel Koglin <[email protected]> #
19 # Copyright 2018 Wan Liuyang <[email protected]> #
20 # Copyright 2018 sfdye <[email protected]> #
21 # #
22 # This file is part of PyGithub. #
23 # http://pygithub.readthedocs.io/ #
24 # #
25 # PyGithub is free software: you can redistribute it and/or modify it under #
26 # the terms of the GNU Lesser General Public License as published by the Free #
27 # Software Foundation, either version 3 of the License, or (at your option) #
28 # any later version. #
29 # #
30 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
31 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
32 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
33 # details. #
34 # #
35 # You should have received a copy of the GNU Lesser General Public License #
36 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
37 # #
38 ################################################################################
39
40 try:
41 from urllib.parse import parse_qs
42 except ImportError:
43 from urlparse import parse_qs
44
45 import github.GithubObject
46
47
48 class PaginatedListBase:
49 def __init__(self):
50 self.__elements = list()
51
52 def __getitem__(self, index):
53 assert isinstance(index, (int, slice))
54 if isinstance(index, (int, long)):
55 self.__fetchToIndex(index)
56 return self.__elements[index]
57 else:
58 return self._Slice(self, index)
59
60 def __iter__(self):
61 for element in self.__elements:
62 yield element
63 while self._couldGrow():
64 newElements = self._grow()
65 for element in newElements:
66 yield element
67
68 def _isBiggerThan(self, index):
69 return len(self.__elements) > index or self._couldGrow()
70
71 def __fetchToIndex(self, index):
72 while len(self.__elements) <= index and self._couldGrow():
73 self._grow()
74
75 def _grow(self):
76 newElements = self._fetchNextPage()
77 self.__elements += newElements
78 return newElements
79
80 class _Slice:
81 def __init__(self, theList, theSlice):
82 self.__list = theList
83 self.__start = theSlice.start or 0
84 self.__stop = theSlice.stop
85 self.__step = theSlice.step or 1
86
87 def __iter__(self):
88 index = self.__start
89 while not self.__finished(index):
90 if self.__list._isBiggerThan(index):
91 yield self.__list[index]
92 index += self.__step
93 else:
94 return
95
96 def __finished(self, index):
97 return self.__stop is not None and index >= self.__stop
98
99
100 class PaginatedList(PaginatedListBase):
101 """
102 This class abstracts the `pagination of the API <http://developer.github.com/v3/#pagination>`_.
103
104 You can simply enumerate through instances of this class::
105
106 for repo in user.get_repos():
107 print(repo.name)
108
109 If you want to know the total number of items in the list::
110
111 print(user.get_repos().totalCount)
112 print(len(user.get_repos()))
113
114 You can also index them or take slices::
115
116 second_repo = user.get_repos()[1]
117 first_repos = user.get_repos()[:10]
118
119 If you want to iterate in reversed order, just do::
120
121 for repo in user.get_repos().reversed:
122 print(repo.name)
123
124 And if you really need it, you can explicitly access a specific page::
125
126 some_repos = user.get_repos().get_page(0)
127 some_other_repos = user.get_repos().get_page(3)
128 """
129
130 def __init__(self, contentClass, requester, firstUrl, firstParams, headers=None, list_item="items"):
131 PaginatedListBase.__init__(self)
132 self.__requester = requester
133 self.__contentClass = contentClass
134 self.__firstUrl = firstUrl
135 self.__firstParams = firstParams or ()
136 self.__nextUrl = firstUrl
137 self.__nextParams = firstParams or {}
138 self.__headers = headers
139 self.__list_item = list_item
140 if self.__requester.per_page != 30:
141 self.__nextParams["per_page"] = self.__requester.per_page
142 self._reversed = False
143 self.__totalCount = None
144
145 @property
146 def totalCount(self):
147 if not self.__totalCount:
148 params = {} if self.__nextParams is None else self.__nextParams.copy()
149 # set per_page = 1 so the totalCount is just the number of pages
150 params.update({"per_page": 1})
151 headers, data = self.__requester.requestJsonAndCheck(
152 "GET",
153 self.__firstUrl,
154 parameters=params,
155 headers=self.__headers
156 )
157 if 'link' not in headers:
158 self.__totalCount = len(data) if data else 0
159 else:
160 links = self.__parseLinkHeader(headers)
161 lastUrl = links.get("last")
162 self.__totalCount = int(parse_qs(lastUrl)['page'][0])
163 return self.__totalCount
164
165 def _getLastPageUrl(self):
166 headers, data = self.__requester.requestJsonAndCheck(
167 "GET",
168 self.__firstUrl,
169 parameters=self.__nextParams,
170 headers=self.__headers
171 )
172 links = self.__parseLinkHeader(headers)
173 lastUrl = links.get("last")
174 return lastUrl
175
176 @property
177 def reversed(self):
178 r = PaginatedList(self.__contentClass, self.__requester, self.__firstUrl, self.__firstParams, self.__headers, self.__list_item)
179 r.__reverse()
180 return r
181
182 def __reverse(self):
183 self._reversed = True
184 lastUrl = self._getLastPageUrl()
185 if lastUrl:
186 self.__nextUrl = lastUrl
187
188 def _couldGrow(self):
189 return self.__nextUrl is not None
190
191 def _fetchNextPage(self):
192 headers, data = self.__requester.requestJsonAndCheck(
193 "GET",
194 self.__nextUrl,
195 parameters=self.__nextParams,
196 headers=self.__headers
197 )
198 data = data if data else []
199
200 self.__nextUrl = None
201 if len(data) > 0:
202 links = self.__parseLinkHeader(headers)
203 if self._reversed:
204 if "prev" in links:
205 self.__nextUrl = links["prev"]
206 elif "next" in links:
207 self.__nextUrl = links["next"]
208 self.__nextParams = None
209
210 if self.__list_item in data:
211 self.__totalCount = data.get('total_count')
212 data = data[self.__list_item]
213
214 content = [
215 self.__contentClass(self.__requester, headers, element, completed=False)
216 for element in data if element is not None
217 ]
218 if self._reversed:
219 return content[::-1]
220 return content
221
222 def __parseLinkHeader(self, headers):
223 links = {}
224 if "link" in headers:
225 linkHeaders = headers["link"].split(", ")
226 for linkHeader in linkHeaders:
227 (url, rel) = linkHeader.split("; ")
228 url = url[1:-1]
229 rel = rel[5:-1]
230 links[rel] = url
231 return links
232
233 def get_page(self, page):
234 params = dict(self.__firstParams)
235 if page != 0:
236 params["page"] = page + 1
237 if self.__requester.per_page != 30:
238 params["per_page"] = self.__requester.per_page
239 headers, data = self.__requester.requestJsonAndCheck(
240 "GET",
241 self.__firstUrl,
242 parameters=params,
243 headers=self.__headers
244 )
245
246 if self.__list_item in data:
247 self.__totalCount = data.get('total_count')
248 data = data[self.__list_item]
249
250 return [
251 self.__contentClass(self.__requester, headers, element, completed=False)
252 for element in data
253 ]
254
[end of github/PaginatedList.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/github/PaginatedList.py b/github/PaginatedList.py
--- a/github/PaginatedList.py
+++ b/github/PaginatedList.py
@@ -155,7 +155,12 @@
headers=self.__headers
)
if 'link' not in headers:
- self.__totalCount = len(data) if data else 0
+ if data and "total_count" in data:
+ self.__totalCount = data["total_count"]
+ elif data:
+ self.__totalCount = len(data)
+ else:
+ self.__totalCount = 0
else:
links = self.__parseLinkHeader(headers)
lastUrl = links.get("last")
|
{"golden_diff": "diff --git a/github/PaginatedList.py b/github/PaginatedList.py\n--- a/github/PaginatedList.py\n+++ b/github/PaginatedList.py\n@@ -155,7 +155,12 @@\n headers=self.__headers\n )\n if 'link' not in headers:\n- self.__totalCount = len(data) if data else 0\n+ if data and \"total_count\" in data:\n+ self.__totalCount = data[\"total_count\"]\n+ elif data:\n+ self.__totalCount = len(data)\n+ else:\n+ self.__totalCount = 0\n else:\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n", "issue": "PaginatedList.totalCount is broken when there are 0 results\nSelf-contained test:\r\n```\r\nfrom github import Github\r\n\r\ngithub = Github()\r\nrepos = github.search_repositories('shouldreturn0repos')\r\n\r\nassert repos.totalCount == 0\r\n```\r\n\r\nThe `totalCount` method has this code:\r\n```\r\n def totalCount(self):\r\n if not self.__totalCount:\r\n [...]\r\n if 'link' not in headers:\r\n self.__totalCount = len(data) if data else 0\r\n [...]\r\n```\r\nThe response has no `link` header but it has this data:\r\n```\r\n{\"total_count\":0,\"incomplete_results\":false,\"items\":[]}\r\n```\r\nand `totalCount` returns 3 because there are 3 items inside the data dict.\r\n\r\nI'm not sure why the `total_count` value in the response is not used directly.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Bill Mill <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 davidbrai <[email protected]> #\n# Copyright 2014 Thialfihar <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2015 Dan Vanderkam <[email protected]> #\n# Copyright 2015 Eliot Walker <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2018 Gilad Shefer <[email protected]> #\n# Copyright 2018 Joel Koglin <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\ntry:\n from urllib.parse import parse_qs\nexcept ImportError:\n from urlparse import parse_qs\n\nimport github.GithubObject\n\n\nclass PaginatedListBase:\n def __init__(self):\n self.__elements = list()\n\n def __getitem__(self, index):\n assert isinstance(index, (int, slice))\n if isinstance(index, (int, long)):\n self.__fetchToIndex(index)\n return self.__elements[index]\n else:\n return self._Slice(self, index)\n\n def __iter__(self):\n for element in self.__elements:\n yield element\n while self._couldGrow():\n newElements = self._grow()\n for element in newElements:\n yield element\n\n def _isBiggerThan(self, index):\n return len(self.__elements) > index or self._couldGrow()\n\n def __fetchToIndex(self, index):\n while len(self.__elements) <= index and self._couldGrow():\n self._grow()\n\n def _grow(self):\n newElements = self._fetchNextPage()\n self.__elements += newElements\n return newElements\n\n class _Slice:\n def __init__(self, theList, theSlice):\n self.__list = theList\n self.__start = theSlice.start or 0\n self.__stop = theSlice.stop\n self.__step = theSlice.step or 1\n\n def __iter__(self):\n index = self.__start\n while not self.__finished(index):\n if self.__list._isBiggerThan(index):\n yield self.__list[index]\n index += self.__step\n else:\n return\n\n def __finished(self, index):\n return self.__stop is not None and index >= self.__stop\n\n\nclass PaginatedList(PaginatedListBase):\n \"\"\"\n This class abstracts the `pagination of the API <http://developer.github.com/v3/#pagination>`_.\n\n You can simply enumerate through instances of this class::\n\n for repo in user.get_repos():\n print(repo.name)\n\n If you want to know the total number of items in the list::\n\n print(user.get_repos().totalCount)\n print(len(user.get_repos()))\n\n You can also index them or take slices::\n\n second_repo = user.get_repos()[1]\n first_repos = user.get_repos()[:10]\n\n If you want to iterate in reversed order, just do::\n\n for repo in user.get_repos().reversed:\n print(repo.name)\n\n And if you really need it, you can explicitly access a specific page::\n\n some_repos = user.get_repos().get_page(0)\n some_other_repos = user.get_repos().get_page(3)\n \"\"\"\n\n def __init__(self, contentClass, requester, firstUrl, firstParams, headers=None, list_item=\"items\"):\n PaginatedListBase.__init__(self)\n self.__requester = requester\n self.__contentClass = contentClass\n self.__firstUrl = firstUrl\n self.__firstParams = firstParams or ()\n self.__nextUrl = firstUrl\n self.__nextParams = firstParams or {}\n self.__headers = headers\n self.__list_item = list_item\n if self.__requester.per_page != 30:\n self.__nextParams[\"per_page\"] = self.__requester.per_page\n self._reversed = False\n self.__totalCount = None\n\n @property\n def totalCount(self):\n if not self.__totalCount:\n params = {} if self.__nextParams is None else self.__nextParams.copy()\n # set per_page = 1 so the totalCount is just the number of pages\n params.update({\"per_page\": 1})\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\",\n self.__firstUrl,\n parameters=params,\n headers=self.__headers\n )\n if 'link' not in headers:\n self.__totalCount = len(data) if data else 0\n else:\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n self.__totalCount = int(parse_qs(lastUrl)['page'][0])\n return self.__totalCount\n\n def _getLastPageUrl(self):\n headers, data = 
self.__requester.requestJsonAndCheck(\n \"GET\",\n self.__firstUrl,\n parameters=self.__nextParams,\n headers=self.__headers\n )\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n return lastUrl\n\n @property\n def reversed(self):\n r = PaginatedList(self.__contentClass, self.__requester, self.__firstUrl, self.__firstParams, self.__headers, self.__list_item)\n r.__reverse()\n return r\n\n def __reverse(self):\n self._reversed = True\n lastUrl = self._getLastPageUrl()\n if lastUrl:\n self.__nextUrl = lastUrl\n\n def _couldGrow(self):\n return self.__nextUrl is not None\n\n def _fetchNextPage(self):\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\",\n self.__nextUrl,\n parameters=self.__nextParams,\n headers=self.__headers\n )\n data = data if data else []\n\n self.__nextUrl = None\n if len(data) > 0:\n links = self.__parseLinkHeader(headers)\n if self._reversed:\n if \"prev\" in links:\n self.__nextUrl = links[\"prev\"]\n elif \"next\" in links:\n self.__nextUrl = links[\"next\"]\n self.__nextParams = None\n\n if self.__list_item in data:\n self.__totalCount = data.get('total_count')\n data = data[self.__list_item]\n\n content = [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data if element is not None\n ]\n if self._reversed:\n return content[::-1]\n return content\n\n def __parseLinkHeader(self, headers):\n links = {}\n if \"link\" in headers:\n linkHeaders = headers[\"link\"].split(\", \")\n for linkHeader in linkHeaders:\n (url, rel) = linkHeader.split(\"; \")\n url = url[1:-1]\n rel = rel[5:-1]\n links[rel] = url\n return links\n\n def get_page(self, page):\n params = dict(self.__firstParams)\n if page != 0:\n params[\"page\"] = page + 1\n if self.__requester.per_page != 30:\n params[\"per_page\"] = self.__requester.per_page\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\",\n self.__firstUrl,\n parameters=params,\n headers=self.__headers\n )\n\n if self.__list_item in data:\n self.__totalCount = data.get('total_count')\n data = data[self.__list_item]\n\n return [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n ]\n", "path": "github/PaginatedList.py"}]}
| 3,473 | 155 |
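For the PyGithub row above, the rule the patch encodes is easy to state outside the library: when a response has no `link` header, the total comes from the body's `total_count` field if one is present, and only otherwise from `len(data)`. The snippet below is a distilled sketch of that rule, not PyGithub code; the function and argument names are made up for illustration.

```python
# Distilled sketch of the totalCount decision added by the diff above.
def total_count_from_response(headers, data):
    """Count results for a single-page response (illustrative names only)."""
    if "link" not in headers:
        # Search endpoints wrap results and report an explicit total.
        if data and "total_count" in data:
            return data["total_count"]
        # Plain list endpoints: the page itself is the whole result set.
        return len(data) if data else 0
    raise NotImplementedError("paginated responses use the 'last' link instead")


# The failing case from the issue: no 'link' header, zero matching repos.
empty_search = {"total_count": 0, "incomplete_results": False, "items": []}
assert total_count_from_response({}, empty_search) == 0

# A mapping without 'total_count' still falls back to len(); counting a
# dict's top-level keys like this is what produced the spurious 3 before
# the fix.
assert total_count_from_response({}, {"a": 1, "b": 2, "c": 3}) == 3
```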
gh_patches_debug_40224
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-2101
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Thumbnails of SVGs are not scaled
Integrating a thumbnail with
.. thumbnail:: image.svg
leads to having the full image integrated. Also colorbox click-to-enlarge is not enabled for this image.
The 'thumbnail' being created as image.thumbnail.svg is identical to image.svg.
Possible fixes include having the svg's inside <image> tags with width/height instead of object, and manipulating the thumbnail.svg to have a different viewport.
</issue>
<code>
[start of nikola/plugins/compile/rest/thumbnail.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2014-2015 Pelle Nilsson and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Thumbnail directive for reStructuredText."""
28
29 import os
30
31 from docutils.parsers.rst import directives
32 from docutils.parsers.rst.directives.images import Image, Figure
33
34 from nikola.plugin_categories import RestExtension
35
36
37 class Plugin(RestExtension):
38
39 """Plugin for thumbnail directive."""
40
41 name = "rest_thumbnail"
42
43 def set_site(self, site):
44 """Set Nikola site."""
45 self.site = site
46 directives.register_directive('thumbnail', Thumbnail)
47 return super(Plugin, self).set_site(site)
48
49
50 class Thumbnail(Figure):
51
52 """Thumbnail directive for reST."""
53
54 def align(argument):
55 """Return thumbnail alignment."""
56 return directives.choice(argument, Image.align_values)
57
58 def figwidth_value(argument):
59 """Return figure width."""
60 if argument.lower() == 'image':
61 return 'image'
62 else:
63 return directives.length_or_percentage_or_unitless(argument, 'px')
64
65 option_spec = Image.option_spec.copy()
66 option_spec['figwidth'] = figwidth_value
67 option_spec['figclass'] = directives.class_option
68 has_content = True
69
70 def run(self):
71 """Run the thumbnail directive."""
72 uri = directives.uri(self.arguments[0])
73 self.options['target'] = uri
74 self.arguments[0] = '.thumbnail'.join(os.path.splitext(uri))
75 if self.content:
76 (node,) = Figure.run(self)
77 else:
78 (node,) = Image.run(self)
79 return [node]
80
[end of nikola/plugins/compile/rest/thumbnail.py]
[start of nikola/image_processing.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2014 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Process images."""
28
29 from __future__ import unicode_literals
30 import datetime
31 import os
32
33 from nikola import utils
34
35 Image = None
36 try:
37 from PIL import Image, ExifTags # NOQA
38 except ImportError:
39 try:
40 import Image as _Image
41 import ExifTags
42 Image = _Image
43 except ImportError:
44 pass
45
46
47 class ImageProcessor(object):
48
49 """Apply image operations."""
50
51 image_ext_list_builtin = ['.jpg', '.png', '.jpeg', '.gif', '.svg', '.bmp', '.tiff']
52
53 def resize_image(self, src, dst, max_size, bigger_panoramas=True):
54 """Make a copy of the image in the requested size."""
55 if not Image or os.path.splitext(src)[1] in ['.svg', '.svgz']:
56 utils.copy_file(src, dst)
57 return
58 im = Image.open(src)
59 w, h = im.size
60 if w > max_size or h > max_size:
61 size = max_size, max_size
62
63 # Panoramas get larger thumbnails because they look *awful*
64 if bigger_panoramas and w > 2 * h:
65 size = min(w, max_size * 4), min(w, max_size * 4)
66
67 try:
68 exif = im._getexif()
69 except Exception:
70 exif = None
71 if exif is not None:
72 for tag, value in list(exif.items()):
73 decoded = ExifTags.TAGS.get(tag, tag)
74
75 if decoded == 'Orientation':
76 if value == 3:
77 im = im.rotate(180)
78 elif value == 6:
79 im = im.rotate(270)
80 elif value == 8:
81 im = im.rotate(90)
82 break
83 try:
84 im.thumbnail(size, Image.ANTIALIAS)
85 im.save(dst)
86 except Exception as e:
87 self.logger.warn("Can't thumbnail {0}, using original "
88 "image as thumbnail ({1})".format(src, e))
89 utils.copy_file(src, dst)
90 else: # Image is small
91 utils.copy_file(src, dst)
92
93 def image_date(self, src):
94 """Try to figure out the date of the image."""
95 if src not in self.dates:
96 try:
97 im = Image.open(src)
98 exif = im._getexif()
99 except Exception:
100 exif = None
101 if exif is not None:
102 for tag, value in list(exif.items()):
103 decoded = ExifTags.TAGS.get(tag, tag)
104 if decoded in ('DateTimeOriginal', 'DateTimeDigitized'):
105 try:
106 self.dates[src] = datetime.datetime.strptime(
107 value, r'%Y:%m:%d %H:%M:%S')
108 break
109 except ValueError: # Invalid EXIF date.
110 pass
111 if src not in self.dates:
112 self.dates[src] = datetime.datetime.fromtimestamp(
113 os.stat(src).st_mtime)
114 return self.dates[src]
115
[end of nikola/image_processing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nikola/image_processing.py b/nikola/image_processing.py
--- a/nikola/image_processing.py
+++ b/nikola/image_processing.py
@@ -29,6 +29,9 @@
from __future__ import unicode_literals
import datetime
import os
+import lxml
+import re
+import gzip
from nikola import utils
@@ -53,7 +56,7 @@
def resize_image(self, src, dst, max_size, bigger_panoramas=True):
"""Make a copy of the image in the requested size."""
if not Image or os.path.splitext(src)[1] in ['.svg', '.svgz']:
- utils.copy_file(src, dst)
+ self.resize_svg(src, dst, max_size, bigger_panoramas)
return
im = Image.open(src)
w, h = im.size
@@ -90,6 +93,48 @@
else: # Image is small
utils.copy_file(src, dst)
+ def resize_svg(self, src, dst, max_size, bigger_panoramas):
+ """Make a copy of an svg at the requested size."""
+ try:
+ # Resize svg based on viewport hacking.
+ # note that this can also lead to enlarged svgs
+ if src.endswith('.svgz'):
+ with gzip.GzipFile(src) as op:
+ xml = op.read()
+ else:
+ with open(src) as op:
+ xml = op.read()
+ tree = lxml.etree.XML(xml)
+ width = tree.attrib['width']
+ height = tree.attrib['height']
+ w = int(re.search("[0-9]+", width).group(0))
+ h = int(re.search("[0-9]+", height).group(0))
+ # calculate new size preserving aspect ratio.
+ ratio = float(w) / h
+ # Panoramas get larger thumbnails because they look *awful*
+ if bigger_panoramas and w > 2 * h:
+ max_size = max_size * 4
+ if w > h:
+ w = max_size
+ h = max_size / ratio
+ else:
+ w = max_size * ratio
+ h = max_size
+ w = int(w)
+ h = int(h)
+ tree.attrib.pop("width")
+ tree.attrib.pop("height")
+ tree.attrib['viewport'] = "0 0 %ipx %ipx" % (w, h)
+ if dst.endswith('.svgz'):
+ op = gzip.GzipFile(dst, 'w')
+ else:
+ op = open(dst, 'w')
+ op.write(lxml.etree.tostring(tree))
+ op.close()
+ except (KeyError, AttributeError) as e:
+ self.logger.warn("No width/height in %s. Actuall exception: %s" % (src, e))
+ utils.copy_file(src, dst)
+
def image_date(self, src):
"""Try to figure out the date of the image."""
if src not in self.dates:
diff --git a/nikola/plugins/compile/rest/thumbnail.py b/nikola/plugins/compile/rest/thumbnail.py
--- a/nikola/plugins/compile/rest/thumbnail.py
+++ b/nikola/plugins/compile/rest/thumbnail.py
@@ -70,8 +70,12 @@
def run(self):
"""Run the thumbnail directive."""
uri = directives.uri(self.arguments[0])
+ if uri.endswith('.svg'):
+ # the ? at the end makes docutil output an <img> instead of an object for the svg, which colorbox requires
+ self.arguments[0] = '.thumbnail'.join(os.path.splitext(uri)) + '?'
+ else:
+ self.arguments[0] = '.thumbnail'.join(os.path.splitext(uri))
self.options['target'] = uri
- self.arguments[0] = '.thumbnail'.join(os.path.splitext(uri))
if self.content:
(node,) = Figure.run(self)
else:
|
{"golden_diff": "diff --git a/nikola/image_processing.py b/nikola/image_processing.py\n--- a/nikola/image_processing.py\n+++ b/nikola/image_processing.py\n@@ -29,6 +29,9 @@\n from __future__ import unicode_literals\n import datetime\n import os\n+import lxml\n+import re\n+import gzip\n \n from nikola import utils\n \n@@ -53,7 +56,7 @@\n def resize_image(self, src, dst, max_size, bigger_panoramas=True):\n \"\"\"Make a copy of the image in the requested size.\"\"\"\n if not Image or os.path.splitext(src)[1] in ['.svg', '.svgz']:\n- utils.copy_file(src, dst)\n+ self.resize_svg(src, dst, max_size, bigger_panoramas)\n return\n im = Image.open(src)\n w, h = im.size\n@@ -90,6 +93,48 @@\n else: # Image is small\n utils.copy_file(src, dst)\n \n+ def resize_svg(self, src, dst, max_size, bigger_panoramas):\n+ \"\"\"Make a copy of an svg at the requested size.\"\"\"\n+ try:\n+ # Resize svg based on viewport hacking.\n+ # note that this can also lead to enlarged svgs\n+ if src.endswith('.svgz'):\n+ with gzip.GzipFile(src) as op:\n+ xml = op.read()\n+ else:\n+ with open(src) as op:\n+ xml = op.read()\n+ tree = lxml.etree.XML(xml)\n+ width = tree.attrib['width']\n+ height = tree.attrib['height']\n+ w = int(re.search(\"[0-9]+\", width).group(0))\n+ h = int(re.search(\"[0-9]+\", height).group(0))\n+ # calculate new size preserving aspect ratio.\n+ ratio = float(w) / h\n+ # Panoramas get larger thumbnails because they look *awful*\n+ if bigger_panoramas and w > 2 * h:\n+ max_size = max_size * 4\n+ if w > h:\n+ w = max_size\n+ h = max_size / ratio\n+ else:\n+ w = max_size * ratio\n+ h = max_size\n+ w = int(w)\n+ h = int(h)\n+ tree.attrib.pop(\"width\")\n+ tree.attrib.pop(\"height\")\n+ tree.attrib['viewport'] = \"0 0 %ipx %ipx\" % (w, h)\n+ if dst.endswith('.svgz'):\n+ op = gzip.GzipFile(dst, 'w')\n+ else:\n+ op = open(dst, 'w')\n+ op.write(lxml.etree.tostring(tree))\n+ op.close()\n+ except (KeyError, AttributeError) as e:\n+ self.logger.warn(\"No width/height in %s. Actuall exception: %s\" % (src, e))\n+ utils.copy_file(src, dst)\n+\n def image_date(self, src):\n \"\"\"Try to figure out the date of the image.\"\"\"\n if src not in self.dates:\ndiff --git a/nikola/plugins/compile/rest/thumbnail.py b/nikola/plugins/compile/rest/thumbnail.py\n--- a/nikola/plugins/compile/rest/thumbnail.py\n+++ b/nikola/plugins/compile/rest/thumbnail.py\n@@ -70,8 +70,12 @@\n def run(self):\n \"\"\"Run the thumbnail directive.\"\"\"\n uri = directives.uri(self.arguments[0])\n+ if uri.endswith('.svg'):\n+ # the ? at the end makes docutil output an <img> instead of an object for the svg, which colorbox requires\n+ self.arguments[0] = '.thumbnail'.join(os.path.splitext(uri)) + '?'\n+ else:\n+ self.arguments[0] = '.thumbnail'.join(os.path.splitext(uri))\n self.options['target'] = uri\n- self.arguments[0] = '.thumbnail'.join(os.path.splitext(uri))\n if self.content:\n (node,) = Figure.run(self)\n else:\n", "issue": "Thumbnails of SVGs are not scaled\nIntegrating a thumbnail with\n.. thumbnail:: image.svg\n\nleads to having the full image integrated. 
Also colorbox click-to-enlarge is not enabled for this image.\n\nThe 'thumbnail' being created as image.thumbnail.svg is identical to image.svg.\n\nPossible fixes include having the svg's inside <image> tags with width/height instead of object, and manipulating the thumbnail.svg to have a different viewport.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2014-2015 Pelle Nilsson and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Thumbnail directive for reStructuredText.\"\"\"\n\nimport os\n\nfrom docutils.parsers.rst import directives\nfrom docutils.parsers.rst.directives.images import Image, Figure\n\nfrom nikola.plugin_categories import RestExtension\n\n\nclass Plugin(RestExtension):\n\n \"\"\"Plugin for thumbnail directive.\"\"\"\n\n name = \"rest_thumbnail\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.site = site\n directives.register_directive('thumbnail', Thumbnail)\n return super(Plugin, self).set_site(site)\n\n\nclass Thumbnail(Figure):\n\n \"\"\"Thumbnail directive for reST.\"\"\"\n\n def align(argument):\n \"\"\"Return thumbnail alignment.\"\"\"\n return directives.choice(argument, Image.align_values)\n\n def figwidth_value(argument):\n \"\"\"Return figure width.\"\"\"\n if argument.lower() == 'image':\n return 'image'\n else:\n return directives.length_or_percentage_or_unitless(argument, 'px')\n\n option_spec = Image.option_spec.copy()\n option_spec['figwidth'] = figwidth_value\n option_spec['figclass'] = directives.class_option\n has_content = True\n\n def run(self):\n \"\"\"Run the thumbnail directive.\"\"\"\n uri = directives.uri(self.arguments[0])\n self.options['target'] = uri\n self.arguments[0] = '.thumbnail'.join(os.path.splitext(uri))\n if self.content:\n (node,) = Figure.run(self)\n else:\n (node,) = Image.run(self)\n return [node]\n", "path": "nikola/plugins/compile/rest/thumbnail.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2014 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this 
permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Process images.\"\"\"\n\nfrom __future__ import unicode_literals\nimport datetime\nimport os\n\nfrom nikola import utils\n\nImage = None\ntry:\n from PIL import Image, ExifTags # NOQA\nexcept ImportError:\n try:\n import Image as _Image\n import ExifTags\n Image = _Image\n except ImportError:\n pass\n\n\nclass ImageProcessor(object):\n\n \"\"\"Apply image operations.\"\"\"\n\n image_ext_list_builtin = ['.jpg', '.png', '.jpeg', '.gif', '.svg', '.bmp', '.tiff']\n\n def resize_image(self, src, dst, max_size, bigger_panoramas=True):\n \"\"\"Make a copy of the image in the requested size.\"\"\"\n if not Image or os.path.splitext(src)[1] in ['.svg', '.svgz']:\n utils.copy_file(src, dst)\n return\n im = Image.open(src)\n w, h = im.size\n if w > max_size or h > max_size:\n size = max_size, max_size\n\n # Panoramas get larger thumbnails because they look *awful*\n if bigger_panoramas and w > 2 * h:\n size = min(w, max_size * 4), min(w, max_size * 4)\n\n try:\n exif = im._getexif()\n except Exception:\n exif = None\n if exif is not None:\n for tag, value in list(exif.items()):\n decoded = ExifTags.TAGS.get(tag, tag)\n\n if decoded == 'Orientation':\n if value == 3:\n im = im.rotate(180)\n elif value == 6:\n im = im.rotate(270)\n elif value == 8:\n im = im.rotate(90)\n break\n try:\n im.thumbnail(size, Image.ANTIALIAS)\n im.save(dst)\n except Exception as e:\n self.logger.warn(\"Can't thumbnail {0}, using original \"\n \"image as thumbnail ({1})\".format(src, e))\n utils.copy_file(src, dst)\n else: # Image is small\n utils.copy_file(src, dst)\n\n def image_date(self, src):\n \"\"\"Try to figure out the date of the image.\"\"\"\n if src not in self.dates:\n try:\n im = Image.open(src)\n exif = im._getexif()\n except Exception:\n exif = None\n if exif is not None:\n for tag, value in list(exif.items()):\n decoded = ExifTags.TAGS.get(tag, tag)\n if decoded in ('DateTimeOriginal', 'DateTimeDigitized'):\n try:\n self.dates[src] = datetime.datetime.strptime(\n value, r'%Y:%m:%d %H:%M:%S')\n break\n except ValueError: # Invalid EXIF date.\n pass\n if src not in self.dates:\n self.dates[src] = datetime.datetime.fromtimestamp(\n os.stat(src).st_mtime)\n return self.dates[src]\n", "path": "nikola/image_processing.py"}]}
| 2,523 | 894 |
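The nikola patch above scales SVG thumbnails by rewriting the root element's size attributes with lxml. Below is an independent, minimal sketch of the same idea; it is not the patch itself (it pins a `viewBox` and keeps explicit width and height, and its unit parsing is deliberately naive), so treat the attribute choices as my assumptions rather than nikola's.

```python
import re

import lxml.etree


def resize_svg_bytes(svg_bytes, max_size):
    """Return a copy of the SVG whose rendered size fits within max_size."""
    tree = lxml.etree.XML(svg_bytes)
    w = float(re.search(r"[0-9.]+", tree.attrib["width"]).group(0))
    h = float(re.search(r"[0-9.]+", tree.attrib["height"]).group(0))

    # Pin the drawing coordinates first, so shrinking width/height scales
    # the content instead of cropping it.
    if "viewBox" not in tree.attrib:
        tree.attrib["viewBox"] = "0 0 %g %g" % (w, h)

    scale = min(max_size / w, max_size / h, 1.0)  # never enlarge
    tree.attrib["width"] = "%g" % (w * scale)
    tree.attrib["height"] = "%g" % (h * scale)
    return lxml.etree.tostring(tree)


sample = b'<svg xmlns="http://www.w3.org/2000/svg" width="400px" height="100px"/>'
print(resize_svg_bytes(sample, 128).decode())
```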
gh_patches_debug_27636
|
rasdani/github-patches
|
git_diff
|
graspologic-org__graspologic-829
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Inaccurracy in how to use autokmeans
## Expected Behavior
https://github.com/microsoft/graspologic/blob/10de2bf17b972decbab318568154af226dcd71fa/graspologic/cluster/kclust.py#L16
This line is false; higher silhouette score is better, to my knowledge? https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html
## Actual Behavior
Documentation correctly reports how to use the package.
</issue>
<code>
[start of graspologic/cluster/kclust.py]
1 # Copyright (c) Microsoft Corporation and contributors.
2 # Licensed under the MIT License.
3
4 from typing import Optional, Union
5
6 import numpy as np
7 from sklearn.cluster import KMeans
8 from sklearn.metrics import adjusted_rand_score, silhouette_score
9
10 from graspologic.types import List
11
12 from .base import BaseCluster
13
14
15 class KMeansCluster(BaseCluster):
16 ari_: Optional[List[float]]
17
18 """
19 KMeans Cluster.
20
21 It computes all possible models from one component to
22 ``max_clusters``. The best model is given by the lowest silhouette score.
23
24 Parameters
25 ----------
26 max_clusters : int, defaults to 1.
27 The maximum number of mixture components to consider.
28
29 random_state : int, RandomState instance or None, optional (default=None)
30 If int, ``random_state`` is the seed used by the random number generator;
31 If RandomState instance, ``random_state`` is the random number generator;
32 If None, the random number generator is the RandomState instance used
33 by ``np.random``.
34
35 Attributes
36 ----------
37 n_clusters_ : int
38 Optimal number of components. If y is given, it is based on largest
39 ARI. Otherwise, it is based on smallest loss.
40
41 model_ : KMeans object
42 Fitted KMeans object fitted with optimal n_components.
43
44 silhouette_ : list
45 List of silhouette scores computed for all possible number
46 of clusters given by ``range(2, max_clusters)``.
47
48 ari_ : list
49 Only computed when y is given. List of ARI values computed for
50 all possible number of clusters given by ``range(2, max_clusters)``.
51 """
52
53 def __init__(
54 self,
55 max_clusters: int = 2,
56 random_state: Optional[Union[int, np.random.RandomState]] = None,
57 ):
58 if isinstance(max_clusters, int):
59 if max_clusters <= 1:
60 msg = "n_components must be >= 2 or None."
61 raise ValueError(msg)
62 else:
63 self.max_clusters = max_clusters
64 else:
65 msg = "max_clusters must be an integer, not {}.".format(type(max_clusters))
66 raise TypeError(msg)
67 self.random_state = random_state
68
69 def fit(self, X: np.ndarray, y: Optional[np.ndarray] = None) -> "KMeansCluster":
70 """
71 Fits kmeans model to the data.
72
73 Parameters
74 ----------
75 X : array-like, shape (n_samples, n_features)
76 List of n_features-dimensional data points. Each row
77 corresponds to a single data point.
78
79 y : array-like, shape (n_samples,), optional (default=None)
80 List of labels for `X` if available. Used to compute ARI scores.
81
82 Returns
83 -------
84 self
85 """
86 # Deal with number of clusters
87 if self.max_clusters > X.shape[0]:
88 msg = "n_components must be >= n_samples, but got \
89 n_components = {}, n_samples = {}".format(
90 self.max_clusters, X.shape[0]
91 )
92 raise ValueError(msg)
93 else:
94 max_clusters = self.max_clusters
95
96 # Get parameters
97 random_state = self.random_state
98
99 # Compute all models
100 models = []
101 silhouettes = []
102 aris = []
103 for n in range(2, max_clusters + 1):
104 model = KMeans(n_clusters=n, random_state=random_state)
105
106 # Fit and compute values
107 predictions = model.fit_predict(X)
108 models.append(model)
109 silhouettes.append(silhouette_score(X, predictions))
110 if y is not None:
111 aris.append(adjusted_rand_score(y, predictions))
112
113 if y is not None:
114 self.ari_ = aris
115 self.silhouette_ = silhouettes
116 self.n_clusters_ = np.argmax(aris) + 1
117 self.model_ = models[np.argmax(aris)]
118 else:
119 self.ari_ = None
120 self.silhouette_ = silhouettes
121 self.n_clusters_ = np.argmax(silhouettes) + 1
122 self.model_ = models[np.argmax(silhouettes)]
123
124 return self
125
[end of graspologic/cluster/kclust.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/graspologic/cluster/kclust.py b/graspologic/cluster/kclust.py
--- a/graspologic/cluster/kclust.py
+++ b/graspologic/cluster/kclust.py
@@ -18,13 +18,15 @@
"""
KMeans Cluster.
- It computes all possible models from one component to
- ``max_clusters``. The best model is given by the lowest silhouette score.
+ It computes all possible models from one component to ``max_clusters``.
+ When the true labels are known, the best model is given by the model with highest
+ adjusted Rand index (ARI).
+ Otherwise, the best model is given by the model with highest silhouette score.
Parameters
----------
- max_clusters : int, defaults to 1.
- The maximum number of mixture components to consider.
+ max_clusters : int, default=2.
+ The maximum number of clusters to consider. Must be ``>=2``.
random_state : int, RandomState instance or None, optional (default=None)
If int, ``random_state`` is the seed used by the random number generator;
@@ -35,11 +37,11 @@
Attributes
----------
n_clusters_ : int
- Optimal number of components. If y is given, it is based on largest
- ARI. Otherwise, it is based on smallest loss.
+ Optimal number of clusters. If y is given, it is based on largest
+ ARI. Otherwise, it is based on highest silhouette score.
model_ : KMeans object
- Fitted KMeans object fitted with optimal n_components.
+ Fitted KMeans object fitted with ``n_clusters_``.
silhouette_ : list
List of silhouette scores computed for all possible number
|
{"golden_diff": "diff --git a/graspologic/cluster/kclust.py b/graspologic/cluster/kclust.py\n--- a/graspologic/cluster/kclust.py\n+++ b/graspologic/cluster/kclust.py\n@@ -18,13 +18,15 @@\n \"\"\"\n KMeans Cluster.\n \n- It computes all possible models from one component to\n- ``max_clusters``. The best model is given by the lowest silhouette score.\n+ It computes all possible models from one component to ``max_clusters``.\n+ When the true labels are known, the best model is given by the model with highest\n+ adjusted Rand index (ARI).\n+ Otherwise, the best model is given by the model with highest silhouette score.\n \n Parameters\n ----------\n- max_clusters : int, defaults to 1.\n- The maximum number of mixture components to consider.\n+ max_clusters : int, default=2.\n+ The maximum number of clusters to consider. Must be ``>=2``.\n \n random_state : int, RandomState instance or None, optional (default=None)\n If int, ``random_state`` is the seed used by the random number generator;\n@@ -35,11 +37,11 @@\n Attributes\n ----------\n n_clusters_ : int\n- Optimal number of components. If y is given, it is based on largest\n- ARI. Otherwise, it is based on smallest loss.\n+ Optimal number of clusters. If y is given, it is based on largest\n+ ARI. Otherwise, it is based on highest silhouette score.\n \n model_ : KMeans object\n- Fitted KMeans object fitted with optimal n_components.\n+ Fitted KMeans object fitted with ``n_clusters_``.\n \n silhouette_ : list\n List of silhouette scores computed for all possible number\n", "issue": "[BUG] Inaccurracy in how to use autokmeans\n## Expected Behavior\r\n\r\nhttps://github.com/microsoft/graspologic/blob/10de2bf17b972decbab318568154af226dcd71fa/graspologic/cluster/kclust.py#L16\r\n\r\nThis line is false; higher silhouette score is better, to my knowledge? https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html\r\n\r\n## Actual Behavior\r\n\r\nDocumentation correctly reports how to use the package.\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nfrom typing import Optional, Union\n\nimport numpy as np\nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import adjusted_rand_score, silhouette_score\n\nfrom graspologic.types import List\n\nfrom .base import BaseCluster\n\n\nclass KMeansCluster(BaseCluster):\n ari_: Optional[List[float]]\n\n \"\"\"\n KMeans Cluster.\n\n It computes all possible models from one component to\n ``max_clusters``. The best model is given by the lowest silhouette score.\n\n Parameters\n ----------\n max_clusters : int, defaults to 1.\n The maximum number of mixture components to consider.\n\n random_state : int, RandomState instance or None, optional (default=None)\n If int, ``random_state`` is the seed used by the random number generator;\n If RandomState instance, ``random_state`` is the random number generator;\n If None, the random number generator is the RandomState instance used\n by ``np.random``.\n\n Attributes\n ----------\n n_clusters_ : int\n Optimal number of components. If y is given, it is based on largest\n ARI. Otherwise, it is based on smallest loss.\n\n model_ : KMeans object\n Fitted KMeans object fitted with optimal n_components.\n\n silhouette_ : list\n List of silhouette scores computed for all possible number\n of clusters given by ``range(2, max_clusters)``.\n\n ari_ : list\n Only computed when y is given. 
List of ARI values computed for\n all possible number of clusters given by ``range(2, max_clusters)``.\n \"\"\"\n\n def __init__(\n self,\n max_clusters: int = 2,\n random_state: Optional[Union[int, np.random.RandomState]] = None,\n ):\n if isinstance(max_clusters, int):\n if max_clusters <= 1:\n msg = \"n_components must be >= 2 or None.\"\n raise ValueError(msg)\n else:\n self.max_clusters = max_clusters\n else:\n msg = \"max_clusters must be an integer, not {}.\".format(type(max_clusters))\n raise TypeError(msg)\n self.random_state = random_state\n\n def fit(self, X: np.ndarray, y: Optional[np.ndarray] = None) -> \"KMeansCluster\":\n \"\"\"\n Fits kmeans model to the data.\n\n Parameters\n ----------\n X : array-like, shape (n_samples, n_features)\n List of n_features-dimensional data points. Each row\n corresponds to a single data point.\n\n y : array-like, shape (n_samples,), optional (default=None)\n List of labels for `X` if available. Used to compute ARI scores.\n\n Returns\n -------\n self\n \"\"\"\n # Deal with number of clusters\n if self.max_clusters > X.shape[0]:\n msg = \"n_components must be >= n_samples, but got \\\n n_components = {}, n_samples = {}\".format(\n self.max_clusters, X.shape[0]\n )\n raise ValueError(msg)\n else:\n max_clusters = self.max_clusters\n\n # Get parameters\n random_state = self.random_state\n\n # Compute all models\n models = []\n silhouettes = []\n aris = []\n for n in range(2, max_clusters + 1):\n model = KMeans(n_clusters=n, random_state=random_state)\n\n # Fit and compute values\n predictions = model.fit_predict(X)\n models.append(model)\n silhouettes.append(silhouette_score(X, predictions))\n if y is not None:\n aris.append(adjusted_rand_score(y, predictions))\n\n if y is not None:\n self.ari_ = aris\n self.silhouette_ = silhouettes\n self.n_clusters_ = np.argmax(aris) + 1\n self.model_ = models[np.argmax(aris)]\n else:\n self.ari_ = None\n self.silhouette_ = silhouettes\n self.n_clusters_ = np.argmax(silhouettes) + 1\n self.model_ = models[np.argmax(silhouettes)]\n\n return self\n", "path": "graspologic/cluster/kclust.py"}]}
| 1,839 | 402 |
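A library-level illustration of the convention the graspologic docstring fix is about: silhouette score is a higher-is-better criterion, so model selection keeps the k with the largest score. This sketch uses scikit-learn directly on synthetic blobs and is independent of the KMeansCluster wrapper shown in the row.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)  # highest silhouette wins
print(sorted(scores.items()))
print("selected k =", best_k)  # typically 4 for these well-separated blobs
```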
gh_patches_debug_8252
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-5077
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
extend auto-resolve timeframe to be more than 7 days (up to 90 days would be nice)
</issue>
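The request above amounts to raising an upper bound that is expressed in hours: the `resolve_age` field in the form code below is capped at `max_value=168`, which is 7 days. The sketch just makes that arithmetic explicit; the constant and function names are mine, not Sentry's, and this is not the actual fix.

```python
# Illustrative arithmetic only: the existing cap of 168 is hours (7 days),
# so covering 90 days means raising the cap to 2160 hours.
HOURS_PER_DAY = 24

def resolve_age_max(days):
    """Upper bound for the resolve_age RangeField, expressed in hours."""
    return days * HOURS_PER_DAY

assert resolve_age_max(7) == 168      # current cap shown in the form below
assert resolve_age_max(90) == 2160    # requested cap from the issue
```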
<code>
[start of src/sentry/web/frontend/project_settings.py]
1 from __future__ import absolute_import
2
3 import re
4
5 from django import forms
6 from django.contrib import messages
7 from django.core.urlresolvers import reverse
8 from django.http import HttpResponseRedirect
9 from django.utils.safestring import mark_safe
10 from django.utils.translation import ugettext_lazy as _
11 from uuid import uuid1
12
13 from sentry import options
14 from sentry.models import AuditLogEntryEvent, Project, Team
15 from sentry.web.forms.fields import (
16 CustomTypedChoiceField, RangeField, OriginsField,
17 )
18 from sentry.web.frontend.base import ProjectView
19
20
21 BLANK_CHOICE = [("", "")]
22
23
24 class EditProjectForm(forms.ModelForm):
25 name = forms.CharField(label=_('Project Name'), max_length=200,
26 widget=forms.TextInput(attrs={'placeholder': _('Production')}))
27 slug = forms.SlugField(
28 label=_('Short name'),
29 help_text=_('A unique ID used to identify this project.'),
30 )
31 team = CustomTypedChoiceField(choices=(), coerce=int, required=False)
32 origins = OriginsField(label=_('Allowed Domains'), required=False,
33 help_text=_('Separate multiple entries with a newline.'))
34 token = forms.CharField(
35 label=_('Security token'),
36 help_text=_('Outbound requests matching Allowed Domains will have the header "{token_header}: {token}" appended.'),
37 required=True,
38 )
39 token_header = forms.CharField(
40 label=_('Security token header'),
41 help_text=_('Outbound requests matching Allowed Domains will have the header "{token_header}: {token}" appended.'),
42 widget=forms.TextInput(attrs={
43 'placeholder': _('X-Sentry-Token'),
44 }),
45 required=False,
46 )
47 verify_ssl = forms.BooleanField(
48 label=_('Verify TLS/SSL'),
49 help_text=_('Outbound requests will verify TLS (sometimes known as SSL) connections.'),
50 required=False,
51 )
52 resolve_age = RangeField(label=_('Auto resolve'), required=False,
53 min_value=0, max_value=168, step_value=1,
54 help_text=_('Automatically resolve an issue if it hasn\'t been seen for this amount of time.'))
55 scrub_data = forms.BooleanField(
56 label=_('Data Scrubber'),
57 help_text=_('Enable server-side data scrubbing.'),
58 required=False
59 )
60 scrub_defaults = forms.BooleanField(
61 label=_('Use Default Scrubbers'),
62 help_text=_('Apply default scrubbers to prevent things like passwords and credit cards from being stored.'),
63 required=False
64 )
65 sensitive_fields = forms.CharField(
66 label=_('Additional sensitive fields'),
67 help_text=_('Additional field names to match against when scrubbing data. Separate multiple entries with a newline.'),
68 widget=forms.Textarea(attrs={
69 'placeholder': mark_safe(_('e.g. email')),
70 'class': 'span8',
71 'rows': '3',
72 }),
73 required=False,
74 )
75 safe_fields = forms.CharField(
76 label=_('Safe fields'),
77 help_text=_('Field names which data scrubbers should ignore. '
78 'Separate multiple entries with a newline.'),
79 widget=forms.Textarea(attrs={
80 'placeholder': mark_safe(_('e.g. email')),
81 'class': 'span8',
82 'rows': '3',
83 }),
84 required=False,
85 )
86 scrub_ip_address = forms.BooleanField(
87 label=_('Don\'t store IP Addresses'),
88 help_text=_('Prevent IP addresses from being stored for new events.'),
89 required=False
90 )
91
92 # JavaScript options
93 scrape_javascript = forms.BooleanField(
94 label=_('Enable JavaScript source fetching'),
95 help_text=_('Allow Sentry to scrape missing JavaScript source context when possible.'),
96 required=False,
97 )
98
99 # Options that are overridden by Organization level settings
100 org_overrides = ('scrub_data', 'scrub_defaults', 'scrub_ip_address')
101
102 default_environment = forms.CharField(
103 label=_('Default Environment'),
104 help_text=_('The default selected environment when viewing issues.'),
105 widget=forms.TextInput(attrs={'placeholder': _('e.g. production')}),
106 required=False,
107 )
108 mail_subject_prefix = forms.CharField(
109 label=_('Subject Prefix'), required=False,
110 help_text=_('Choose a custom prefix for emails from this project.'))
111
112 class Meta:
113 fields = ('name', 'team', 'slug')
114 model = Project
115
116 def __init__(self, request, organization, team_list, data, instance, *args, **kwargs):
117 # First, we need to check for the value overrides from the Organization options
118 # We need to do this before `initial` gets passed into the Form.
119 disabled = []
120 if 'initial' in kwargs:
121 for opt in self.org_overrides:
122 value = bool(organization.get_option('sentry:require_%s' % (opt,), False))
123 if value:
124 disabled.append(opt)
125 kwargs['initial'][opt] = value
126
127 super(EditProjectForm, self).__init__(data=data, instance=instance, *args, **kwargs)
128
129 self.organization = organization
130 self.team_list = team_list
131
132 self.fields['team'].choices = self.get_team_choices(team_list, instance.team)
133 self.fields['team'].widget.choices = self.fields['team'].choices
134
135 # After the Form is initialized, we now need to disable the fields that have been
136 # overridden from Organization options.
137 for opt in disabled:
138 self.fields[opt].widget.attrs['disabled'] = 'disabled'
139
140 def get_team_label(self, team):
141 return '%s (%s)' % (team.name, team.slug)
142
143 def get_team_choices(self, team_list, default=None):
144 sorted_team_list = sorted(team_list, key=lambda x: x.name)
145
146 choices = []
147 for team in sorted_team_list:
148 # TODO: optimize queries
149 choices.append(
150 (team.id, self.get_team_label(team))
151 )
152
153 if default is None:
154 choices.insert(0, (-1, mark_safe('–' * 8)))
155 elif default not in sorted_team_list:
156 choices.insert(0, (default.id, self.get_team_label(default)))
157
158 return choices
159
160 def clean_sensitive_fields(self):
161 value = self.cleaned_data.get('sensitive_fields')
162 if not value:
163 return
164
165 return filter(bool, (v.lower().strip() for v in value.split('\n')))
166
167 def clean_safe_fields(self):
168 value = self.cleaned_data.get('safe_fields')
169 if not value:
170 return
171
172 return filter(bool, (v.lower().strip() for v in value.split('\n')))
173
174 def clean_team(self):
175 value = self.cleaned_data.get('team')
176 if not value:
177 return
178
179 # TODO: why is this not already an int?
180 value = int(value)
181 if value == -1:
182 return
183
184 if self.instance.team and value == self.instance.team.id:
185 return self.instance.team
186
187 for team in self.team_list:
188 if value == team.id:
189 return team
190
191 raise forms.ValidationError('Unable to find chosen team')
192
193 def clean_slug(self):
194 slug = self.cleaned_data.get('slug')
195 if not slug:
196 return
197 other = Project.objects.filter(
198 slug=slug,
199 organization=self.organization
200 ).exclude(id=self.instance.id).first()
201 if other is not None:
202 raise forms.ValidationError('Another project (%s) is already '
203 'using that slug' % other.name)
204 return slug
205
206 def clean_token(self):
207 token = self.cleaned_data.get('token')
208 if not token:
209 return
210 token_re = r'^[-a-zA-Z0-9+/= ]{1,255}$'
211 if not re.match(token_re, token):
212 raise forms.ValidationError('Invalid security token, must be: %s' % token_re)
213 return token
214
215 def clean_token_header(self):
216 token_header = self.cleaned_data.get('token_header')
217 if not token_header:
218 return
219 header_re = r'^[a-zA-Z0-9-]{1,20}$'
220 if not re.match(header_re, token_header):
221 raise forms.ValidationError('Invalid header value, must be: %s' % header_re)
222 return token_header
223
224
225 class ProjectSettingsView(ProjectView):
226 required_scope = 'project:write'
227
228 def get_form(self, request, project):
229 organization = project.organization
230 team_list = [
231 t for t in Team.objects.get_for_user(
232 organization=organization,
233 user=request.user,
234 )
235 if request.access.has_team_scope(t, self.required_scope)
236 ]
237
238 # TODO(dcramer): this update should happen within a lock
239 security_token = project.get_option('sentry:token', None)
240 if security_token is None:
241 security_token = uuid1().hex
242 project.update_option('sentry:token', security_token)
243
244 return EditProjectForm(
245 request, organization, team_list, request.POST or None,
246 instance=project,
247 initial={
248 'origins': '\n'.join(project.get_option('sentry:origins', ['*'])),
249 'token': security_token,
250 'token_header': project.get_option('sentry:token_header'),
251 'verify_ssl': bool(project.get_option('sentry:verify_ssl', False)),
252 'resolve_age': int(project.get_option('sentry:resolve_age', 0)),
253 'scrub_data': bool(project.get_option('sentry:scrub_data', True)),
254 'scrub_defaults': bool(project.get_option('sentry:scrub_defaults', True)),
255 'sensitive_fields': '\n'.join(project.get_option('sentry:sensitive_fields', None) or []),
256 'safe_fields': '\n'.join(project.get_option('sentry:safe_fields', None) or []),
257 'scrub_ip_address': bool(project.get_option('sentry:scrub_ip_address', False)),
258 'scrape_javascript': bool(project.get_option('sentry:scrape_javascript', True)),
259 'default_environment': project.get_option('sentry:default_environment'),
260 'mail_subject_prefix': project.get_option(
261 'mail:subject_prefix', options.get('mail.subject-prefix')),
262 },
263 )
264
265 def handle(self, request, organization, team, project):
266 form = self.get_form(request, project)
267
268 if form.is_valid():
269 project = form.save()
270 for opt in (
271 'origins',
272 'token',
273 'token_header',
274 'verify_ssl',
275 'resolve_age',
276 'scrub_data',
277 'scrub_defaults',
278 'sensitive_fields',
279 'safe_fields',
280 'scrub_ip_address',
281 'scrape_javascript',
282 'default_environment',
283 'mail_subject_prefix',
284 ):
285 opt_key = 'sentry:{}'.format(opt)
286
287 # Value can't be overridden if set on the org level
288 if opt in form.org_overrides and organization.get_option(opt_key, False):
289 continue
290 if opt == 'mail_subject_prefix':
291 key = 'mail:subject_prefix'
292 else:
293 key = 'sentry:%s' % (opt,)
294 value = form.cleaned_data.get(opt)
295 if value is None:
296 project.delete_option(key)
297 else:
298 project.update_option(key, value)
299
300 project.update_option('sentry:reviewed-callsign', True)
301
302 self.create_audit_entry(
303 request,
304 organization=organization,
305 target_object=project.id,
306 event=AuditLogEntryEvent.PROJECT_EDIT,
307 data=project.get_audit_log_data(),
308 )
309
310 messages.add_message(
311 request, messages.SUCCESS,
312 _('Changes to your project were saved.'))
313
314 redirect = reverse('sentry-manage-project', args=[project.organization.slug, project.slug])
315
316 return HttpResponseRedirect(redirect)
317
318 context = {
319 'form': form,
320 'page': 'details',
321 }
322
323 return self.respond('sentry/projects/manage.html', context)
324
[end of src/sentry/web/frontend/project_settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sentry/web/frontend/project_settings.py b/src/sentry/web/frontend/project_settings.py
--- a/src/sentry/web/frontend/project_settings.py
+++ b/src/sentry/web/frontend/project_settings.py
@@ -50,7 +50,7 @@
required=False,
)
resolve_age = RangeField(label=_('Auto resolve'), required=False,
- min_value=0, max_value=168, step_value=1,
+ min_value=0, max_value=720, step_value=1,
help_text=_('Automatically resolve an issue if it hasn\'t been seen for this amount of time.'))
scrub_data = forms.BooleanField(
label=_('Data Scrubber'),
|
{"golden_diff": "diff --git a/src/sentry/web/frontend/project_settings.py b/src/sentry/web/frontend/project_settings.py\n--- a/src/sentry/web/frontend/project_settings.py\n+++ b/src/sentry/web/frontend/project_settings.py\n@@ -50,7 +50,7 @@\n required=False,\n )\n resolve_age = RangeField(label=_('Auto resolve'), required=False,\n- min_value=0, max_value=168, step_value=1,\n+ min_value=0, max_value=720, step_value=1,\n help_text=_('Automatically resolve an issue if it hasn\\'t been seen for this amount of time.'))\n scrub_data = forms.BooleanField(\n label=_('Data Scrubber'),\n", "issue": "extend auto-resolve timeframe to be more than 7 days (up to 90 days will be nice)\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport re\n\nfrom django import forms\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import ugettext_lazy as _\nfrom uuid import uuid1\n\nfrom sentry import options\nfrom sentry.models import AuditLogEntryEvent, Project, Team\nfrom sentry.web.forms.fields import (\n CustomTypedChoiceField, RangeField, OriginsField,\n)\nfrom sentry.web.frontend.base import ProjectView\n\n\nBLANK_CHOICE = [(\"\", \"\")]\n\n\nclass EditProjectForm(forms.ModelForm):\n name = forms.CharField(label=_('Project Name'), max_length=200,\n widget=forms.TextInput(attrs={'placeholder': _('Production')}))\n slug = forms.SlugField(\n label=_('Short name'),\n help_text=_('A unique ID used to identify this project.'),\n )\n team = CustomTypedChoiceField(choices=(), coerce=int, required=False)\n origins = OriginsField(label=_('Allowed Domains'), required=False,\n help_text=_('Separate multiple entries with a newline.'))\n token = forms.CharField(\n label=_('Security token'),\n help_text=_('Outbound requests matching Allowed Domains will have the header \"{token_header}: {token}\" appended.'),\n required=True,\n )\n token_header = forms.CharField(\n label=_('Security token header'),\n help_text=_('Outbound requests matching Allowed Domains will have the header \"{token_header}: {token}\" appended.'),\n widget=forms.TextInput(attrs={\n 'placeholder': _('X-Sentry-Token'),\n }),\n required=False,\n )\n verify_ssl = forms.BooleanField(\n label=_('Verify TLS/SSL'),\n help_text=_('Outbound requests will verify TLS (sometimes known as SSL) connections.'),\n required=False,\n )\n resolve_age = RangeField(label=_('Auto resolve'), required=False,\n min_value=0, max_value=168, step_value=1,\n help_text=_('Automatically resolve an issue if it hasn\\'t been seen for this amount of time.'))\n scrub_data = forms.BooleanField(\n label=_('Data Scrubber'),\n help_text=_('Enable server-side data scrubbing.'),\n required=False\n )\n scrub_defaults = forms.BooleanField(\n label=_('Use Default Scrubbers'),\n help_text=_('Apply default scrubbers to prevent things like passwords and credit cards from being stored.'),\n required=False\n )\n sensitive_fields = forms.CharField(\n label=_('Additional sensitive fields'),\n help_text=_('Additional field names to match against when scrubbing data. Separate multiple entries with a newline.'),\n widget=forms.Textarea(attrs={\n 'placeholder': mark_safe(_('e.g. email')),\n 'class': 'span8',\n 'rows': '3',\n }),\n required=False,\n )\n safe_fields = forms.CharField(\n label=_('Safe fields'),\n help_text=_('Field names which data scrubbers should ignore. 
'\n 'Separate multiple entries with a newline.'),\n widget=forms.Textarea(attrs={\n 'placeholder': mark_safe(_('e.g. email')),\n 'class': 'span8',\n 'rows': '3',\n }),\n required=False,\n )\n scrub_ip_address = forms.BooleanField(\n label=_('Don\\'t store IP Addresses'),\n help_text=_('Prevent IP addresses from being stored for new events.'),\n required=False\n )\n\n # JavaScript options\n scrape_javascript = forms.BooleanField(\n label=_('Enable JavaScript source fetching'),\n help_text=_('Allow Sentry to scrape missing JavaScript source context when possible.'),\n required=False,\n )\n\n # Options that are overridden by Organization level settings\n org_overrides = ('scrub_data', 'scrub_defaults', 'scrub_ip_address')\n\n default_environment = forms.CharField(\n label=_('Default Environment'),\n help_text=_('The default selected environment when viewing issues.'),\n widget=forms.TextInput(attrs={'placeholder': _('e.g. production')}),\n required=False,\n )\n mail_subject_prefix = forms.CharField(\n label=_('Subject Prefix'), required=False,\n help_text=_('Choose a custom prefix for emails from this project.'))\n\n class Meta:\n fields = ('name', 'team', 'slug')\n model = Project\n\n def __init__(self, request, organization, team_list, data, instance, *args, **kwargs):\n # First, we need to check for the value overrides from the Organization options\n # We need to do this before `initial` gets passed into the Form.\n disabled = []\n if 'initial' in kwargs:\n for opt in self.org_overrides:\n value = bool(organization.get_option('sentry:require_%s' % (opt,), False))\n if value:\n disabled.append(opt)\n kwargs['initial'][opt] = value\n\n super(EditProjectForm, self).__init__(data=data, instance=instance, *args, **kwargs)\n\n self.organization = organization\n self.team_list = team_list\n\n self.fields['team'].choices = self.get_team_choices(team_list, instance.team)\n self.fields['team'].widget.choices = self.fields['team'].choices\n\n # After the Form is initialized, we now need to disable the fields that have been\n # overridden from Organization options.\n for opt in disabled:\n self.fields[opt].widget.attrs['disabled'] = 'disabled'\n\n def get_team_label(self, team):\n return '%s (%s)' % (team.name, team.slug)\n\n def get_team_choices(self, team_list, default=None):\n sorted_team_list = sorted(team_list, key=lambda x: x.name)\n\n choices = []\n for team in sorted_team_list:\n # TODO: optimize queries\n choices.append(\n (team.id, self.get_team_label(team))\n )\n\n if default is None:\n choices.insert(0, (-1, mark_safe('–' * 8)))\n elif default not in sorted_team_list:\n choices.insert(0, (default.id, self.get_team_label(default)))\n\n return choices\n\n def clean_sensitive_fields(self):\n value = self.cleaned_data.get('sensitive_fields')\n if not value:\n return\n\n return filter(bool, (v.lower().strip() for v in value.split('\\n')))\n\n def clean_safe_fields(self):\n value = self.cleaned_data.get('safe_fields')\n if not value:\n return\n\n return filter(bool, (v.lower().strip() for v in value.split('\\n')))\n\n def clean_team(self):\n value = self.cleaned_data.get('team')\n if not value:\n return\n\n # TODO: why is this not already an int?\n value = int(value)\n if value == -1:\n return\n\n if self.instance.team and value == self.instance.team.id:\n return self.instance.team\n\n for team in self.team_list:\n if value == team.id:\n return team\n\n raise forms.ValidationError('Unable to find chosen team')\n\n def clean_slug(self):\n slug = self.cleaned_data.get('slug')\n if not slug:\n 
return\n other = Project.objects.filter(\n slug=slug,\n organization=self.organization\n ).exclude(id=self.instance.id).first()\n if other is not None:\n raise forms.ValidationError('Another project (%s) is already '\n 'using that slug' % other.name)\n return slug\n\n def clean_token(self):\n token = self.cleaned_data.get('token')\n if not token:\n return\n token_re = r'^[-a-zA-Z0-9+/= ]{1,255}$'\n if not re.match(token_re, token):\n raise forms.ValidationError('Invalid security token, must be: %s' % token_re)\n return token\n\n def clean_token_header(self):\n token_header = self.cleaned_data.get('token_header')\n if not token_header:\n return\n header_re = r'^[a-zA-Z0-9-]{1,20}$'\n if not re.match(header_re, token_header):\n raise forms.ValidationError('Invalid header value, must be: %s' % header_re)\n return token_header\n\n\nclass ProjectSettingsView(ProjectView):\n required_scope = 'project:write'\n\n def get_form(self, request, project):\n organization = project.organization\n team_list = [\n t for t in Team.objects.get_for_user(\n organization=organization,\n user=request.user,\n )\n if request.access.has_team_scope(t, self.required_scope)\n ]\n\n # TODO(dcramer): this update should happen within a lock\n security_token = project.get_option('sentry:token', None)\n if security_token is None:\n security_token = uuid1().hex\n project.update_option('sentry:token', security_token)\n\n return EditProjectForm(\n request, organization, team_list, request.POST or None,\n instance=project,\n initial={\n 'origins': '\\n'.join(project.get_option('sentry:origins', ['*'])),\n 'token': security_token,\n 'token_header': project.get_option('sentry:token_header'),\n 'verify_ssl': bool(project.get_option('sentry:verify_ssl', False)),\n 'resolve_age': int(project.get_option('sentry:resolve_age', 0)),\n 'scrub_data': bool(project.get_option('sentry:scrub_data', True)),\n 'scrub_defaults': bool(project.get_option('sentry:scrub_defaults', True)),\n 'sensitive_fields': '\\n'.join(project.get_option('sentry:sensitive_fields', None) or []),\n 'safe_fields': '\\n'.join(project.get_option('sentry:safe_fields', None) or []),\n 'scrub_ip_address': bool(project.get_option('sentry:scrub_ip_address', False)),\n 'scrape_javascript': bool(project.get_option('sentry:scrape_javascript', True)),\n 'default_environment': project.get_option('sentry:default_environment'),\n 'mail_subject_prefix': project.get_option(\n 'mail:subject_prefix', options.get('mail.subject-prefix')),\n },\n )\n\n def handle(self, request, organization, team, project):\n form = self.get_form(request, project)\n\n if form.is_valid():\n project = form.save()\n for opt in (\n 'origins',\n 'token',\n 'token_header',\n 'verify_ssl',\n 'resolve_age',\n 'scrub_data',\n 'scrub_defaults',\n 'sensitive_fields',\n 'safe_fields',\n 'scrub_ip_address',\n 'scrape_javascript',\n 'default_environment',\n 'mail_subject_prefix',\n ):\n opt_key = 'sentry:{}'.format(opt)\n\n # Value can't be overridden if set on the org level\n if opt in form.org_overrides and organization.get_option(opt_key, False):\n continue\n if opt == 'mail_subject_prefix':\n key = 'mail:subject_prefix'\n else:\n key = 'sentry:%s' % (opt,)\n value = form.cleaned_data.get(opt)\n if value is None:\n project.delete_option(key)\n else:\n project.update_option(key, value)\n\n project.update_option('sentry:reviewed-callsign', True)\n\n self.create_audit_entry(\n request,\n organization=organization,\n target_object=project.id,\n event=AuditLogEntryEvent.PROJECT_EDIT,\n 
data=project.get_audit_log_data(),\n )\n\n messages.add_message(\n request, messages.SUCCESS,\n _('Changes to your project were saved.'))\n\n redirect = reverse('sentry-manage-project', args=[project.organization.slug, project.slug])\n\n return HttpResponseRedirect(redirect)\n\n context = {\n 'form': form,\n 'page': 'details',\n }\n\n return self.respond('sentry/projects/manage.html', context)\n", "path": "src/sentry/web/frontend/project_settings.py"}]}
| 3,950 | 151 |
gh_patches_debug_3066
|
rasdani/github-patches
|
git_diff
|
searx__searx-200
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bing_news can't parse dates in other languages
When searching for a French article, the time is noted as "Il y a 5 minutes", which doesn't match the regex `"^[0-9]+ minute(s|) ago$"`.
Do you see a way to internationalize this detection?
</issue>
<code>
[start of searx/engines/bing_news.py]
1 ## Bing (News)
2 #
3 # @website https://www.bing.com/news
4 # @provide-api yes (http://datamarket.azure.com/dataset/bing/search),
5 # max. 5000 query/month
6 #
7 # @using-api no (because of query limit)
8 # @results HTML (using search portal)
9 # @stable no (HTML can change)
10 # @parse url, title, content, publishedDate
11
12 from urllib import urlencode
13 from cgi import escape
14 from lxml import html
15 from datetime import datetime, timedelta
16 from dateutil import parser
17 import re
18
19 # engine dependent config
20 categories = ['news']
21 paging = True
22 language_support = True
23
24 # search-url
25 base_url = 'https://www.bing.com/'
26 search_string = 'news/search?{query}&first={offset}'
27
28
29 # do search-request
30 def request(query, params):
31 offset = (params['pageno'] - 1) * 10 + 1
32
33 if params['language'] == 'all':
34 language = 'en-US'
35 else:
36 language = params['language'].replace('_', '-')
37
38 search_path = search_string.format(
39 query=urlencode({'q': query, 'setmkt': language}),
40 offset=offset)
41
42 params['cookies']['SRCHHPGUSR'] = \
43 'NEWWND=0&NRSLT=-1&SRCHLANG=' + language.split('-')[0]
44
45 params['url'] = base_url + search_path
46 return params
47
48
49 # get response from search-request
50 def response(resp):
51 results = []
52
53 dom = html.fromstring(resp.content)
54
55 # parse results
56 for result in dom.xpath('//div[@class="sn_r"]'):
57 link = result.xpath('.//div[@class="newstitle"]/a')[0]
58 url = link.attrib.get('href')
59 title = ' '.join(link.xpath('.//text()'))
60 contentXPath = result.xpath('.//div[@class="sn_txt"]/div'
61 '//span[@class="sn_snip"]//text()')
62 if contentXPath is not None:
63 content = escape(' '.join(contentXPath))
64
65 # parse publishedDate
66 publishedDateXPath = result.xpath('.//div[@class="sn_txt"]/div'
67 '//span[contains(@class,"sn_ST")]'
68 '//span[contains(@class,"sn_tm")]'
69 '//text()')
70 if publishedDateXPath is not None:
71 publishedDate = escape(' '.join(publishedDateXPath))
72
73 if re.match("^[0-9]+ minute(s|) ago$", publishedDate):
74 timeNumbers = re.findall(r'\d+', publishedDate)
75 publishedDate = datetime.now()\
76 - timedelta(minutes=int(timeNumbers[0]))
77 elif re.match("^[0-9]+ hour(s|) ago$", publishedDate):
78 timeNumbers = re.findall(r'\d+', publishedDate)
79 publishedDate = datetime.now()\
80 - timedelta(hours=int(timeNumbers[0]))
81 elif re.match("^[0-9]+ hour(s|),"
82 " [0-9]+ minute(s|) ago$", publishedDate):
83 timeNumbers = re.findall(r'\d+', publishedDate)
84 publishedDate = datetime.now()\
85 - timedelta(hours=int(timeNumbers[0]))\
86 - timedelta(minutes=int(timeNumbers[1]))
87 elif re.match("^[0-9]+ day(s|) ago$", publishedDate):
88 timeNumbers = re.findall(r'\d+', publishedDate)
89 publishedDate = datetime.now()\
90 - timedelta(days=int(timeNumbers[0]))
91 else:
92 try:
93 # FIXME use params['language'] to parse either mm/dd or dd/mm
94 publishedDate = parser.parse(publishedDate, dayfirst=False)
95 except TypeError:
96 # FIXME
97 publishedDate = datetime.now()
98
99 # append result
100 results.append({'url': url,
101 'title': title,
102 'publishedDate': publishedDate,
103 'content': content})
104
105 # return results
106 return results
107
[end of searx/engines/bing_news.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/bing_news.py b/searx/engines/bing_news.py
--- a/searx/engines/bing_news.py
+++ b/searx/engines/bing_news.py
@@ -39,8 +39,7 @@
query=urlencode({'q': query, 'setmkt': language}),
offset=offset)
- params['cookies']['SRCHHPGUSR'] = \
- 'NEWWND=0&NRSLT=-1&SRCHLANG=' + language.split('-')[0]
+ params['cookies']['_FP'] = "ui=en-US"
params['url'] = base_url + search_path
return params
|
{"golden_diff": "diff --git a/searx/engines/bing_news.py b/searx/engines/bing_news.py\n--- a/searx/engines/bing_news.py\n+++ b/searx/engines/bing_news.py\n@@ -39,8 +39,7 @@\n query=urlencode({'q': query, 'setmkt': language}),\n offset=offset)\n \n- params['cookies']['SRCHHPGUSR'] = \\\n- 'NEWWND=0&NRSLT=-1&SRCHLANG=' + language.split('-')[0]\n+ params['cookies']['_FP'] = \"ui=en-US\"\n \n params['url'] = base_url + search_path\n return params\n", "issue": "bing_news can't parse other languages date\nWhen searching for french article, the time is noted as \"Il y a 5 minutes\", and so, doesn't match for the regex `\"^[0-9]+ minute(s|) ago$\"`.\n\nDo you see a way to internationalize this detection ?\n\n", "before_files": [{"content": "## Bing (News)\n#\n# @website https://www.bing.com/news\n# @provide-api yes (http://datamarket.azure.com/dataset/bing/search),\n# max. 5000 query/month\n#\n# @using-api no (because of query limit)\n# @results HTML (using search portal)\n# @stable no (HTML can change)\n# @parse url, title, content, publishedDate\n\nfrom urllib import urlencode\nfrom cgi import escape\nfrom lxml import html\nfrom datetime import datetime, timedelta\nfrom dateutil import parser\nimport re\n\n# engine dependent config\ncategories = ['news']\npaging = True\nlanguage_support = True\n\n# search-url\nbase_url = 'https://www.bing.com/'\nsearch_string = 'news/search?{query}&first={offset}'\n\n\n# do search-request\ndef request(query, params):\n offset = (params['pageno'] - 1) * 10 + 1\n\n if params['language'] == 'all':\n language = 'en-US'\n else:\n language = params['language'].replace('_', '-')\n\n search_path = search_string.format(\n query=urlencode({'q': query, 'setmkt': language}),\n offset=offset)\n\n params['cookies']['SRCHHPGUSR'] = \\\n 'NEWWND=0&NRSLT=-1&SRCHLANG=' + language.split('-')[0]\n\n params['url'] = base_url + search_path\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.content)\n\n # parse results\n for result in dom.xpath('//div[@class=\"sn_r\"]'):\n link = result.xpath('.//div[@class=\"newstitle\"]/a')[0]\n url = link.attrib.get('href')\n title = ' '.join(link.xpath('.//text()'))\n contentXPath = result.xpath('.//div[@class=\"sn_txt\"]/div'\n '//span[@class=\"sn_snip\"]//text()')\n if contentXPath is not None:\n content = escape(' '.join(contentXPath))\n\n # parse publishedDate\n publishedDateXPath = result.xpath('.//div[@class=\"sn_txt\"]/div'\n '//span[contains(@class,\"sn_ST\")]'\n '//span[contains(@class,\"sn_tm\")]'\n '//text()')\n if publishedDateXPath is not None:\n publishedDate = escape(' '.join(publishedDateXPath))\n\n if re.match(\"^[0-9]+ minute(s|) ago$\", publishedDate):\n timeNumbers = re.findall(r'\\d+', publishedDate)\n publishedDate = datetime.now()\\\n - timedelta(minutes=int(timeNumbers[0]))\n elif re.match(\"^[0-9]+ hour(s|) ago$\", publishedDate):\n timeNumbers = re.findall(r'\\d+', publishedDate)\n publishedDate = datetime.now()\\\n - timedelta(hours=int(timeNumbers[0]))\n elif re.match(\"^[0-9]+ hour(s|),\"\n \" [0-9]+ minute(s|) ago$\", publishedDate):\n timeNumbers = re.findall(r'\\d+', publishedDate)\n publishedDate = datetime.now()\\\n - timedelta(hours=int(timeNumbers[0]))\\\n - timedelta(minutes=int(timeNumbers[1]))\n elif re.match(\"^[0-9]+ day(s|) ago$\", publishedDate):\n timeNumbers = re.findall(r'\\d+', publishedDate)\n publishedDate = datetime.now()\\\n - timedelta(days=int(timeNumbers[0]))\n else:\n try:\n # FIXME use params['language'] to parse either 
mm/dd or dd/mm\n publishedDate = parser.parse(publishedDate, dayfirst=False)\n except TypeError:\n # FIXME\n publishedDate = datetime.now()\n\n # append result\n results.append({'url': url,\n 'title': title,\n 'publishedDate': publishedDate,\n 'content': content})\n\n # return results\n return results\n", "path": "searx/engines/bing_news.py"}]}
| 1,681 | 158 |
gh_patches_debug_45066
|
rasdani/github-patches
|
git_diff
|
pypa__virtualenv-1655
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip installed in virtualenv matches virtualenv installation python not python in virtualenv
This one's a little esoteric - but if you install virtualenv>20 with python2, then create a virtualenv with --python=python3, the pip commands installed in the virtualenv are pip and pip2 rather than pip and pip3. Basically, the suffix matches the Python that virtualenv is running under, not the Python that the virtualenv is installing.
For example:
```
# pip2 install 'virtualenv==20.0.3'
# /usr/bin/virtualenv -p python3 blerg . # this is in a centos7 container, pip2 installs into /usr/bin
# ls blerg/bin/pip*
pip pip2 pip2.7
# rm -rf blerg
# pip2 install 'virtualenv<20'
# /usr/bin/virtualenv -p python3 blerg
# ls blerg/bin/pip*
pip pip3 pip3.6
```
</issue>
<code>
[start of src/virtualenv/seed/via_app_data/pip_install/base.py]
1 from __future__ import absolute_import, unicode_literals
2
3 import logging
4 import os
5 import re
6 import shutil
7 import zipfile
8 from abc import ABCMeta, abstractmethod
9 from tempfile import mkdtemp
10
11 from six import PY3, add_metaclass
12
13 from virtualenv.util import ConfigParser
14 from virtualenv.util.path import Path
15 from virtualenv.util.six import ensure_text
16
17
18 @add_metaclass(ABCMeta)
19 class PipInstall(object):
20 def __init__(self, wheel, creator, image_folder):
21 self._wheel = wheel
22 self._creator = creator
23 self._image_dir = image_folder
24 self._extracted = False
25 self.__dist_info = None
26 self._console_entry_points = None
27
28 @abstractmethod
29 def _sync(self, src, dst):
30 raise NotImplementedError
31
32 def install(self):
33 self._extracted = True
34 # sync image
35 for filename in self._image_dir.iterdir():
36 into = self._creator.purelib / filename.name
37 if into.exists():
38 if into.is_dir() and not into.is_symlink():
39 shutil.rmtree(str(into))
40 else:
41 into.unlink()
42 self._sync(filename, into)
43 # generate console executables
44 consoles = set()
45 script_dir = self._creator.script_dir
46 for name, module in self._console_scripts.items():
47 consoles.update(self._create_console_entry_point(name, module, script_dir))
48 logging.debug("generated console scripts %s", " ".join(i.name for i in consoles))
49
50 def build_image(self):
51 # 1. first extract the wheel
52 logging.debug("build install image to %s of %s", self._image_dir, self._wheel.name)
53 with zipfile.ZipFile(str(self._wheel)) as zip_ref:
54 zip_ref.extractall(str(self._image_dir))
55 self._extracted = True
56 # 2. now add additional files not present in the package
57 new_files = self._generate_new_files()
58 # 3. finally fix the records file
59 self._fix_records(new_files)
60
61 def _records_text(self, files):
62 record_data = "\n".join(
63 "{},,".format(os.path.relpath(ensure_text(str(rec)), ensure_text(str(self._image_dir)))) for rec in files
64 )
65 return record_data
66
67 def _generate_new_files(self):
68 new_files = set()
69 installer = self._dist_info / "INSTALLER"
70 installer.write_text("pip\n")
71 new_files.add(installer)
72 # inject a no-op root element, as workaround for bug in https://github.com/pypa/pip/issues/7226
73 marker = self._image_dir / "{}.virtualenv".format(self._dist_info.stem)
74 marker.write_text("")
75 new_files.add(marker)
76 folder = mkdtemp()
77 try:
78 to_folder = Path(folder)
79 rel = os.path.relpath(ensure_text(str(self._creator.script_dir)), ensure_text(str(self._creator.purelib)))
80 for name, module in self._console_scripts.items():
81 new_files.update(
82 Path(os.path.normpath(ensure_text(str(self._image_dir / rel / i.name))))
83 for i in self._create_console_entry_point(name, module, to_folder)
84 )
85 finally:
86 shutil.rmtree(folder, ignore_errors=True)
87 return new_files
88
89 @property
90 def _dist_info(self):
91 if self._extracted is False:
92 return None # pragma: no cover
93 if self.__dist_info is None:
94 for filename in self._image_dir.iterdir():
95 if filename.suffix == ".dist-info":
96 self.__dist_info = filename
97 break
98 else:
99 raise RuntimeError("no dist info") # pragma: no cover
100 return self.__dist_info
101
102 @abstractmethod
103 def _fix_records(self, extra_record_data):
104 raise NotImplementedError
105
106 @property
107 def _console_scripts(self):
108 if self._extracted is False:
109 return None # pragma: no cover
110 if self._console_entry_points is None:
111 self._console_entry_points = {}
112 entry_points = self._dist_info / "entry_points.txt"
113 if entry_points.exists():
114 parser = ConfigParser.ConfigParser()
115 with entry_points.open() as file_handler:
116 reader = getattr(parser, "read_file" if PY3 else "readfp")
117 reader(file_handler)
118 if "console_scripts" in parser.sections():
119 for name, value in parser.items("console_scripts"):
120 match = re.match(r"(.*?)-?\d\.?\d*", name)
121 if match:
122 name = match.groups(1)[0]
123 self._console_entry_points[name] = value
124 return self._console_entry_points
125
126 def _create_console_entry_point(self, name, value, to_folder):
127 result = []
128 from distlib.scripts import ScriptMaker
129
130 maker = ScriptMaker(None, str(to_folder))
131 maker.clobber = True # overwrite
132 maker.variants = {"", "X", "X.Y"} # create all variants
133 maker.set_mode = True # ensure they are executable
134 maker.executable = str(self._creator.exe)
135 specification = "{} = {}".format(name, value)
136 new_files = maker.make(specification)
137 result.extend(Path(i) for i in new_files)
138 return result
139
140 def clear(self):
141 if self._image_dir.exists():
142 shutil.rmtree(ensure_text(str(self._image_dir)))
143
144 def has_image(self):
145 return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None
146
[end of src/virtualenv/seed/via_app_data/pip_install/base.py]
[start of src/virtualenv/seed/via_app_data/via_app_data.py]
1 """Bootstrap"""
2 from __future__ import absolute_import, unicode_literals
3
4 import logging
5 import shutil
6 from contextlib import contextmanager
7 from threading import Lock, Thread
8
9 from virtualenv.dirs import default_data_dir
10 from virtualenv.info import fs_supports_symlink
11 from virtualenv.seed.embed.base_embed import BaseEmbed
12 from virtualenv.seed.embed.wheels.acquire import get_wheels
13 from virtualenv.util.six import ensure_text
14
15 from .pip_install.copy import CopyPipInstall
16 from .pip_install.symlink import SymlinkPipInstall
17
18
19 class FromAppData(BaseEmbed):
20 def __init__(self, options):
21 super(FromAppData, self).__init__(options)
22 self.clear = options.clear_app_data
23 self.app_data_dir = default_data_dir() / "seed-v1"
24 self.symlinks = options.symlink_app_data
25
26 @classmethod
27 def add_parser_arguments(cls, parser, interpreter):
28 super(FromAppData, cls).add_parser_arguments(parser, interpreter)
29 parser.add_argument(
30 "--clear-app-data",
31 dest="clear_app_data",
32 action="store_true",
33 help="clear the app data folder of seed images ({})".format((default_data_dir() / "seed-v1").path),
34 default=False,
35 )
36 can_symlink = fs_supports_symlink()
37 parser.add_argument(
38 "--symlink-app-data",
39 dest="symlink_app_data",
40 action="store_true" if can_symlink else "store_false",
41 help="{} symlink the python packages from the app-data folder (requires seed pip>=19.3)".format(
42 "" if can_symlink else "not supported - "
43 ),
44 default=False,
45 )
46
47 def run(self, creator):
48 if not self.enabled:
49 return
50 base_cache = self.app_data_dir / creator.interpreter.version_release_str
51 with self._get_seed_wheels(creator, base_cache) as name_to_whl:
52 pip_version = name_to_whl["pip"].stem.split("-")[1]
53 installer_class = self.installer_class(pip_version)
54
55 def _install(name, wheel):
56 logging.debug("install %s from wheel %s via %s", name, wheel, installer_class.__name__)
57 image_folder = base_cache.path / "image" / installer_class.__name__ / wheel.stem
58 installer = installer_class(wheel, creator, image_folder)
59 if self.clear:
60 installer.clear()
61 if not installer.has_image():
62 installer.build_image()
63 installer.install()
64
65 threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())
66 for thread in threads:
67 thread.start()
68 for thread in threads:
69 thread.join()
70
71 @contextmanager
72 def _get_seed_wheels(self, creator, base_cache):
73 with base_cache.lock_for_key("wheels"):
74 wheels_to = base_cache.path / "wheels"
75 if self.clear and wheels_to.exists():
76 shutil.rmtree(ensure_text(str(wheels_to)))
77 wheels_to.mkdir(parents=True, exist_ok=True)
78 name_to_whl, lock = {}, Lock()
79
80 def _get(package, version):
81 result = get_wheels(
82 creator.interpreter.version_release_str,
83 wheels_to,
84 self.extra_search_dir,
85 self.download,
86 {package: version},
87 )
88 with lock:
89 name_to_whl.update(result)
90
91 threads = list(Thread(target=_get, args=(pkg, v)) for pkg, v in self.package_version().items())
92 for thread in threads:
93 thread.start()
94 for thread in threads:
95 thread.join()
96
97 yield name_to_whl
98
99 def installer_class(self, pip_version):
100 if self.symlinks:
101 # symlink support requires pip 19.3+
102 pip_version_int = tuple(int(i) for i in pip_version.split(".")[0:2])
103 if pip_version_int >= (19, 3):
104 return SymlinkPipInstall
105 return CopyPipInstall
106
107 def __unicode__(self):
108 base = super(FromAppData, self).__unicode__()
109 return (
110 base[:-1]
111 + ", via={}, app_data_dir={}".format("symlink" if self.symlinks else "copy", self.app_data_dir.path)
112 + base[-1]
113 )
114
[end of src/virtualenv/seed/via_app_data/via_app_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/virtualenv/seed/via_app_data/pip_install/base.py b/src/virtualenv/seed/via_app_data/pip_install/base.py
--- a/src/virtualenv/seed/via_app_data/pip_install/base.py
+++ b/src/virtualenv/seed/via_app_data/pip_install/base.py
@@ -4,9 +4,12 @@
import os
import re
import shutil
+import sys
import zipfile
from abc import ABCMeta, abstractmethod
+from contextlib import contextmanager
from tempfile import mkdtemp
+from threading import Lock
from six import PY3, add_metaclass
@@ -17,6 +20,8 @@
@add_metaclass(ABCMeta)
class PipInstall(object):
+ lock = Lock()
+
def __init__(self, wheel, creator, image_folder):
self._wheel = wheel
self._creator = creator
@@ -29,7 +34,7 @@
def _sync(self, src, dst):
raise NotImplementedError
- def install(self):
+ def install(self, version_info):
self._extracted = True
# sync image
for filename in self._image_dir.iterdir():
@@ -44,7 +49,7 @@
consoles = set()
script_dir = self._creator.script_dir
for name, module in self._console_scripts.items():
- consoles.update(self._create_console_entry_point(name, module, script_dir))
+ consoles.update(self._create_console_entry_point(name, module, script_dir, version_info))
logging.debug("generated console scripts %s", " ".join(i.name for i in consoles))
def build_image(self):
@@ -77,10 +82,11 @@
try:
to_folder = Path(folder)
rel = os.path.relpath(ensure_text(str(self._creator.script_dir)), ensure_text(str(self._creator.purelib)))
+ version_info = self._creator.interpreter.version_info
for name, module in self._console_scripts.items():
new_files.update(
Path(os.path.normpath(ensure_text(str(self._image_dir / rel / i.name))))
- for i in self._create_console_entry_point(name, module, to_folder)
+ for i in self._create_console_entry_point(name, module, to_folder, version_info)
)
finally:
shutil.rmtree(folder, ignore_errors=True)
@@ -123,7 +129,7 @@
self._console_entry_points[name] = value
return self._console_entry_points
- def _create_console_entry_point(self, name, value, to_folder):
+ def _create_console_entry_point(self, name, value, to_folder, version_info):
result = []
from distlib.scripts import ScriptMaker
@@ -133,10 +139,25 @@
maker.set_mode = True # ensure they are executable
maker.executable = str(self._creator.exe)
specification = "{} = {}".format(name, value)
- new_files = maker.make(specification)
+ with self.switch_sys_version(version_info):
+ new_files = maker.make(specification)
result.extend(Path(i) for i in new_files)
return result
+ @contextmanager
+ def switch_sys_version(self, version_info):
+ """
+ Patch until upstream distutils supports creating scripts with different python target
+ https://bitbucket.org/pypa/distlib/issues/134/allow-specifying-the-version-information
+ """
+ previous = sys.version_info
+ with self.lock:
+ sys.version_info = version_info
+ try:
+ yield
+ finally:
+ sys.version_info = previous
+
def clear(self):
if self._image_dir.exists():
shutil.rmtree(ensure_text(str(self._image_dir)))
diff --git a/src/virtualenv/seed/via_app_data/via_app_data.py b/src/virtualenv/seed/via_app_data/via_app_data.py
--- a/src/virtualenv/seed/via_app_data/via_app_data.py
+++ b/src/virtualenv/seed/via_app_data/via_app_data.py
@@ -60,7 +60,7 @@
installer.clear()
if not installer.has_image():
installer.build_image()
- installer.install()
+ installer.install(creator.interpreter.version_info)
threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())
for thread in threads:
|
{"golden_diff": "diff --git a/src/virtualenv/seed/via_app_data/pip_install/base.py b/src/virtualenv/seed/via_app_data/pip_install/base.py\n--- a/src/virtualenv/seed/via_app_data/pip_install/base.py\n+++ b/src/virtualenv/seed/via_app_data/pip_install/base.py\n@@ -4,9 +4,12 @@\n import os\n import re\n import shutil\n+import sys\n import zipfile\n from abc import ABCMeta, abstractmethod\n+from contextlib import contextmanager\n from tempfile import mkdtemp\n+from threading import Lock\n \n from six import PY3, add_metaclass\n \n@@ -17,6 +20,8 @@\n \n @add_metaclass(ABCMeta)\n class PipInstall(object):\n+ lock = Lock()\n+\n def __init__(self, wheel, creator, image_folder):\n self._wheel = wheel\n self._creator = creator\n@@ -29,7 +34,7 @@\n def _sync(self, src, dst):\n raise NotImplementedError\n \n- def install(self):\n+ def install(self, version_info):\n self._extracted = True\n # sync image\n for filename in self._image_dir.iterdir():\n@@ -44,7 +49,7 @@\n consoles = set()\n script_dir = self._creator.script_dir\n for name, module in self._console_scripts.items():\n- consoles.update(self._create_console_entry_point(name, module, script_dir))\n+ consoles.update(self._create_console_entry_point(name, module, script_dir, version_info))\n logging.debug(\"generated console scripts %s\", \" \".join(i.name for i in consoles))\n \n def build_image(self):\n@@ -77,10 +82,11 @@\n try:\n to_folder = Path(folder)\n rel = os.path.relpath(ensure_text(str(self._creator.script_dir)), ensure_text(str(self._creator.purelib)))\n+ version_info = self._creator.interpreter.version_info\n for name, module in self._console_scripts.items():\n new_files.update(\n Path(os.path.normpath(ensure_text(str(self._image_dir / rel / i.name))))\n- for i in self._create_console_entry_point(name, module, to_folder)\n+ for i in self._create_console_entry_point(name, module, to_folder, version_info)\n )\n finally:\n shutil.rmtree(folder, ignore_errors=True)\n@@ -123,7 +129,7 @@\n self._console_entry_points[name] = value\n return self._console_entry_points\n \n- def _create_console_entry_point(self, name, value, to_folder):\n+ def _create_console_entry_point(self, name, value, to_folder, version_info):\n result = []\n from distlib.scripts import ScriptMaker\n \n@@ -133,10 +139,25 @@\n maker.set_mode = True # ensure they are executable\n maker.executable = str(self._creator.exe)\n specification = \"{} = {}\".format(name, value)\n- new_files = maker.make(specification)\n+ with self.switch_sys_version(version_info):\n+ new_files = maker.make(specification)\n result.extend(Path(i) for i in new_files)\n return result\n \n+ @contextmanager\n+ def switch_sys_version(self, version_info):\n+ \"\"\"\n+ Patch until upstream distutils supports creating scripts with different python target\n+ https://bitbucket.org/pypa/distlib/issues/134/allow-specifying-the-version-information\n+ \"\"\"\n+ previous = sys.version_info\n+ with self.lock:\n+ sys.version_info = version_info\n+ try:\n+ yield\n+ finally:\n+ sys.version_info = previous\n+\n def clear(self):\n if self._image_dir.exists():\n shutil.rmtree(ensure_text(str(self._image_dir)))\ndiff --git a/src/virtualenv/seed/via_app_data/via_app_data.py b/src/virtualenv/seed/via_app_data/via_app_data.py\n--- a/src/virtualenv/seed/via_app_data/via_app_data.py\n+++ b/src/virtualenv/seed/via_app_data/via_app_data.py\n@@ -60,7 +60,7 @@\n installer.clear()\n if not installer.has_image():\n installer.build_image()\n- installer.install()\n+ installer.install(creator.interpreter.version_info)\n \n threads = 
list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())\n for thread in threads:\n", "issue": "pip installed in virtualenv matches virtualenv installation python not python in virtualenv\nThis one's a little esoteric - but if you install virtualenv>20 with python2, then create a virtualenv with --python=python3 - the pip commands installed in the virtualenv are pip and pip2 rather than pip and pip3. Basically, the suffix is matching the python that the virtualenv is running with, not the python that the virtualenv is installing.\r\n\r\nFor example:\r\n\r\n```\r\n# pip2 install 'virtualenv==20.0.3'\r\n# /usr/bin/virtualenv -p python3 blerg . # this is in a centos7 container, pip2 installs into /usr/bin\r\n# ls blerg/bin/pip*\r\npip pip2 pip2.7\r\n# rm -rf blerg\r\n# pip2 install 'virtualenv<20'\r\n# /usr/bin/virtualenv -p python3 blerg\r\n# ls blerg/bin/pip*\r\npip pip3 pip3.6\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport logging\nimport os\nimport re\nimport shutil\nimport zipfile\nfrom abc import ABCMeta, abstractmethod\nfrom tempfile import mkdtemp\n\nfrom six import PY3, add_metaclass\n\nfrom virtualenv.util import ConfigParser\nfrom virtualenv.util.path import Path\nfrom virtualenv.util.six import ensure_text\n\n\n@add_metaclass(ABCMeta)\nclass PipInstall(object):\n def __init__(self, wheel, creator, image_folder):\n self._wheel = wheel\n self._creator = creator\n self._image_dir = image_folder\n self._extracted = False\n self.__dist_info = None\n self._console_entry_points = None\n\n @abstractmethod\n def _sync(self, src, dst):\n raise NotImplementedError\n\n def install(self):\n self._extracted = True\n # sync image\n for filename in self._image_dir.iterdir():\n into = self._creator.purelib / filename.name\n if into.exists():\n if into.is_dir() and not into.is_symlink():\n shutil.rmtree(str(into))\n else:\n into.unlink()\n self._sync(filename, into)\n # generate console executables\n consoles = set()\n script_dir = self._creator.script_dir\n for name, module in self._console_scripts.items():\n consoles.update(self._create_console_entry_point(name, module, script_dir))\n logging.debug(\"generated console scripts %s\", \" \".join(i.name for i in consoles))\n\n def build_image(self):\n # 1. first extract the wheel\n logging.debug(\"build install image to %s of %s\", self._image_dir, self._wheel.name)\n with zipfile.ZipFile(str(self._wheel)) as zip_ref:\n zip_ref.extractall(str(self._image_dir))\n self._extracted = True\n # 2. now add additional files not present in the package\n new_files = self._generate_new_files()\n # 3. 
finally fix the records file\n self._fix_records(new_files)\n\n def _records_text(self, files):\n record_data = \"\\n\".join(\n \"{},,\".format(os.path.relpath(ensure_text(str(rec)), ensure_text(str(self._image_dir)))) for rec in files\n )\n return record_data\n\n def _generate_new_files(self):\n new_files = set()\n installer = self._dist_info / \"INSTALLER\"\n installer.write_text(\"pip\\n\")\n new_files.add(installer)\n # inject a no-op root element, as workaround for bug in https://github.com/pypa/pip/issues/7226\n marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.stem)\n marker.write_text(\"\")\n new_files.add(marker)\n folder = mkdtemp()\n try:\n to_folder = Path(folder)\n rel = os.path.relpath(ensure_text(str(self._creator.script_dir)), ensure_text(str(self._creator.purelib)))\n for name, module in self._console_scripts.items():\n new_files.update(\n Path(os.path.normpath(ensure_text(str(self._image_dir / rel / i.name))))\n for i in self._create_console_entry_point(name, module, to_folder)\n )\n finally:\n shutil.rmtree(folder, ignore_errors=True)\n return new_files\n\n @property\n def _dist_info(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self.__dist_info is None:\n for filename in self._image_dir.iterdir():\n if filename.suffix == \".dist-info\":\n self.__dist_info = filename\n break\n else:\n raise RuntimeError(\"no dist info\") # pragma: no cover\n return self.__dist_info\n\n @abstractmethod\n def _fix_records(self, extra_record_data):\n raise NotImplementedError\n\n @property\n def _console_scripts(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self._console_entry_points is None:\n self._console_entry_points = {}\n entry_points = self._dist_info / \"entry_points.txt\"\n if entry_points.exists():\n parser = ConfigParser.ConfigParser()\n with entry_points.open() as file_handler:\n reader = getattr(parser, \"read_file\" if PY3 else \"readfp\")\n reader(file_handler)\n if \"console_scripts\" in parser.sections():\n for name, value in parser.items(\"console_scripts\"):\n match = re.match(r\"(.*?)-?\\d\\.?\\d*\", name)\n if match:\n name = match.groups(1)[0]\n self._console_entry_points[name] = value\n return self._console_entry_points\n\n def _create_console_entry_point(self, name, value, to_folder):\n result = []\n from distlib.scripts import ScriptMaker\n\n maker = ScriptMaker(None, str(to_folder))\n maker.clobber = True # overwrite\n maker.variants = {\"\", \"X\", \"X.Y\"} # create all variants\n maker.set_mode = True # ensure they are executable\n maker.executable = str(self._creator.exe)\n specification = \"{} = {}\".format(name, value)\n new_files = maker.make(specification)\n result.extend(Path(i) for i in new_files)\n return result\n\n def clear(self):\n if self._image_dir.exists():\n shutil.rmtree(ensure_text(str(self._image_dir)))\n\n def has_image(self):\n return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None\n", "path": "src/virtualenv/seed/via_app_data/pip_install/base.py"}, {"content": "\"\"\"Bootstrap\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nimport shutil\nfrom contextlib import contextmanager\nfrom threading import Lock, Thread\n\nfrom virtualenv.dirs import default_data_dir\nfrom virtualenv.info import fs_supports_symlink\nfrom virtualenv.seed.embed.base_embed import BaseEmbed\nfrom virtualenv.seed.embed.wheels.acquire import get_wheels\nfrom virtualenv.util.six import ensure_text\n\nfrom .pip_install.copy import 
CopyPipInstall\nfrom .pip_install.symlink import SymlinkPipInstall\n\n\nclass FromAppData(BaseEmbed):\n def __init__(self, options):\n super(FromAppData, self).__init__(options)\n self.clear = options.clear_app_data\n self.app_data_dir = default_data_dir() / \"seed-v1\"\n self.symlinks = options.symlink_app_data\n\n @classmethod\n def add_parser_arguments(cls, parser, interpreter):\n super(FromAppData, cls).add_parser_arguments(parser, interpreter)\n parser.add_argument(\n \"--clear-app-data\",\n dest=\"clear_app_data\",\n action=\"store_true\",\n help=\"clear the app data folder of seed images ({})\".format((default_data_dir() / \"seed-v1\").path),\n default=False,\n )\n can_symlink = fs_supports_symlink()\n parser.add_argument(\n \"--symlink-app-data\",\n dest=\"symlink_app_data\",\n action=\"store_true\" if can_symlink else \"store_false\",\n help=\"{} symlink the python packages from the app-data folder (requires seed pip>=19.3)\".format(\n \"\" if can_symlink else \"not supported - \"\n ),\n default=False,\n )\n\n def run(self, creator):\n if not self.enabled:\n return\n base_cache = self.app_data_dir / creator.interpreter.version_release_str\n with self._get_seed_wheels(creator, base_cache) as name_to_whl:\n pip_version = name_to_whl[\"pip\"].stem.split(\"-\")[1]\n installer_class = self.installer_class(pip_version)\n\n def _install(name, wheel):\n logging.debug(\"install %s from wheel %s via %s\", name, wheel, installer_class.__name__)\n image_folder = base_cache.path / \"image\" / installer_class.__name__ / wheel.stem\n installer = installer_class(wheel, creator, image_folder)\n if self.clear:\n installer.clear()\n if not installer.has_image():\n installer.build_image()\n installer.install()\n\n threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n\n @contextmanager\n def _get_seed_wheels(self, creator, base_cache):\n with base_cache.lock_for_key(\"wheels\"):\n wheels_to = base_cache.path / \"wheels\"\n if self.clear and wheels_to.exists():\n shutil.rmtree(ensure_text(str(wheels_to)))\n wheels_to.mkdir(parents=True, exist_ok=True)\n name_to_whl, lock = {}, Lock()\n\n def _get(package, version):\n result = get_wheels(\n creator.interpreter.version_release_str,\n wheels_to,\n self.extra_search_dir,\n self.download,\n {package: version},\n )\n with lock:\n name_to_whl.update(result)\n\n threads = list(Thread(target=_get, args=(pkg, v)) for pkg, v in self.package_version().items())\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n\n yield name_to_whl\n\n def installer_class(self, pip_version):\n if self.symlinks:\n # symlink support requires pip 19.3+\n pip_version_int = tuple(int(i) for i in pip_version.split(\".\")[0:2])\n if pip_version_int >= (19, 3):\n return SymlinkPipInstall\n return CopyPipInstall\n\n def __unicode__(self):\n base = super(FromAppData, self).__unicode__()\n return (\n base[:-1]\n + \", via={}, app_data_dir={}\".format(\"symlink\" if self.symlinks else \"copy\", self.app_data_dir.path)\n + base[-1]\n )\n", "path": "src/virtualenv/seed/via_app_data/via_app_data.py"}]}
| 3,510 | 985 |
gh_patches_debug_1309
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-534
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix import error
```
ImportError: cannot import name 'BashApp' from 'parsl.app.python' (/home/annawoodard/parsl/parsl/app/python.py)
```
It looks like I introduced this bug in 3d0e2d1e69ad27a133b0c40a42472ae43876d5f2.
</issue>
<code>
[start of parsl/app/app.py]
1 """Definitions for the @App decorator and the App classes.
2
3 The App class encapsulates a generic leaf task that can be executed asynchronously.
4 """
5 import logging
6 from inspect import getsource
7 from hashlib import md5
8 from inspect import signature
9
10 from parsl.app.errors import InvalidAppTypeError
11
12 logger = logging.getLogger(__name__)
13
14
15 class AppBase(object):
16 """This is the base class that defines the two external facing functions that an App must define.
17
18 The __init__ () which is called when the interpreter sees the definition of the decorated
19 function, and the __call__ () which is invoked when a decorated function is called by the user.
20
21 """
22
23 def __init__(self, func, data_flow_kernel=None, walltime=60, executors='all', cache=False):
24 """Construct the App object.
25
26 Args:
27 - func (function): Takes the function to be made into an App
28
29 Kwargs:
30 - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for
31 managing this app. This can be omitted only
32 after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.
33 - walltime (int) : Walltime in seconds for the app execution.
34 - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.
35 - cache (Bool) : Enable caching of this app ?
36
37 Returns:
38 - App object.
39
40 """
41 self.__name__ = func.__name__
42 self.func = func
43 self.data_flow_kernel = data_flow_kernel
44 self.status = 'created'
45 self.executors = executors
46 self.cache = cache
47 if not (isinstance(executors, list) or isinstance(executors, str)):
48 logger.error("App {} specifies invalid executor option, expects string or list".format(
49 func.__name__))
50
51 if cache is True:
52 try:
53 self.fn_source = getsource(func)
54 except OSError:
55 logger.debug("Unable to get source code for AppCaching. Recommend creating module")
56 self.fn_source = func.__name__
57
58 self.func_hash = md5(self.fn_source.encode('utf-8')).hexdigest()
59 else:
60 self.func_hash = func.__name__
61
62 params = signature(func).parameters
63
64 self.kwargs = {}
65 if 'stdout' in params:
66 self.kwargs['stdout'] = params['stdout'].default
67 if 'stderr' in params:
68 self.kwargs['stderr'] = params['stderr'].default
69 self.outputs = params['outputs'].default if 'outputs' in params else []
70 self.inputs = params['inputs'].default if 'inputs' in params else []
71
72 def __call__(self, *args, **kwargs):
73 """The __call__ function must be implemented in the subclasses."""
74 raise NotImplementedError
75
76
77 def app_wrapper(func):
78
79 def wrapper(*args, **kwargs):
80 logger.debug("App wrapper begins")
81 x = func(*args, **kwargs)
82 logger.debug("App wrapper ends")
83 return x
84
85 return wrapper
86
87
88 def App(apptype, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
89 """The App decorator function.
90
91 Args:
92 - apptype (string) : Apptype can be bash|python
93
94 Kwargs:
95 - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for
96 managing this app. This can be omitted only
97 after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.
98 - walltime (int) : Walltime for app in seconds,
99 default=60
100 - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.
101 - cache (Bool) : Enable caching of the app call
102 default=False
103
104 Returns:
105 A PythonApp or BashApp object, which when called runs the apps through the executor.
106 """
107
108 from parsl.app.python import PythonApp
109 from parsl.app.bash import BashApp
110
111 logger.warning("The 'App' decorator will be depreciated in Parsl 0.8. Please use 'python_app' or 'bash_app' instead.")
112
113 if apptype is 'python':
114 app_class = PythonApp
115 elif apptype is 'bash':
116 app_class = BashApp
117 else:
118 raise InvalidAppTypeError("Invalid apptype requested {}; must be 'python' or 'bash'".format(apptype))
119
120 def wrapper(f):
121 return app_class(f,
122 data_flow_kernel=data_flow_kernel,
123 walltime=walltime,
124 cache=cache,
125 executors=executors)
126 return wrapper
127
128
129 def python_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
130 """Decorator function for making python apps.
131
132 Parameters
133 ----------
134 function : function
135 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,
136 for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the
137 decorator is used alone, function will be the actual function being decorated, whereas if it
138 is called with arguments, function will be None. Default is None.
139 data_flow_kernel : DataFlowKernel
140 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can
141 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.
142 walltime : int
143 Walltime for app in seconds. Default is 60.
144 executors : string or list
145 Labels of the executors that this app can execute over. Default is 'all'.
146 cache : bool
147 Enable caching of the app call. Default is False.
148 """
149 from parsl.app.python import PythonApp
150
151 def decorator(func):
152 def wrapper(f):
153 return PythonApp(f,
154 data_flow_kernel=data_flow_kernel,
155 walltime=walltime,
156 cache=cache,
157 executors=executors)
158 return wrapper(func)
159 if function is not None:
160 return decorator(function)
161 return decorator
162
163
164 def bash_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
165 """Decorator function for making bash apps.
166
167 Parameters
168 ----------
169 function : function
170 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,
171 for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the
172 decorator is used alone, function will be the actual function being decorated, whereas if it
173 is called with arguments, function will be None. Default is None.
174 data_flow_kernel : DataFlowKernel
175 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can
176 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.
177 walltime : int
178 Walltime for app in seconds. Default is 60.
179 executors : string or list
180 Labels of the executors that this app can execute over. Default is 'all'.
181 cache : bool
182 Enable caching of the app call. Default is False.
183 """
184 from parsl.app.python import BashApp
185
186 def decorator(func):
187 def wrapper(f):
188 return BashApp(f,
189 data_flow_kernel=data_flow_kernel,
190 walltime=walltime,
191 cache=cache,
192 executors=executors)
193 return wrapper(func)
194 if function is not None:
195 return decorator(function)
196 return decorator
197
[end of parsl/app/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/app/app.py b/parsl/app/app.py
--- a/parsl/app/app.py
+++ b/parsl/app/app.py
@@ -181,7 +181,7 @@
cache : bool
Enable caching of the app call. Default is False.
"""
- from parsl.app.python import BashApp
+ from parsl.app.bash import BashApp
def decorator(func):
def wrapper(f):
|
{"golden_diff": "diff --git a/parsl/app/app.py b/parsl/app/app.py\n--- a/parsl/app/app.py\n+++ b/parsl/app/app.py\n@@ -181,7 +181,7 @@\n cache : bool\n Enable caching of the app call. Default is False.\n \"\"\"\n- from parsl.app.python import BashApp\n+ from parsl.app.bash import BashApp\n \n def decorator(func):\n def wrapper(f):\n", "issue": "Fix import error\n```\r\nImportError: cannot import name 'BashApp' from 'parsl.app.python' (/home/annawoodard/parsl/parsl/app/python.py)\r\n```\r\n\r\nIt looks like I introduced this bug in 3d0e2d1e69ad27a133b0c40a42472ae43876d5f2.\n", "before_files": [{"content": "\"\"\"Definitions for the @App decorator and the App classes.\n\nThe App class encapsulates a generic leaf task that can be executed asynchronously.\n\"\"\"\nimport logging\nfrom inspect import getsource\nfrom hashlib import md5\nfrom inspect import signature\n\nfrom parsl.app.errors import InvalidAppTypeError\n\nlogger = logging.getLogger(__name__)\n\n\nclass AppBase(object):\n \"\"\"This is the base class that defines the two external facing functions that an App must define.\n\n The __init__ () which is called when the interpreter sees the definition of the decorated\n function, and the __call__ () which is invoked when a decorated function is called by the user.\n\n \"\"\"\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, executors='all', cache=False):\n \"\"\"Construct the App object.\n\n Args:\n - func (function): Takes the function to be made into an App\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime in seconds for the app execution.\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of this app ?\n\n Returns:\n - App object.\n\n \"\"\"\n self.__name__ = func.__name__\n self.func = func\n self.data_flow_kernel = data_flow_kernel\n self.status = 'created'\n self.executors = executors\n self.cache = cache\n if not (isinstance(executors, list) or isinstance(executors, str)):\n logger.error(\"App {} specifies invalid executor option, expects string or list\".format(\n func.__name__))\n\n if cache is True:\n try:\n self.fn_source = getsource(func)\n except OSError:\n logger.debug(\"Unable to get source code for AppCaching. 
Recommend creating module\")\n self.fn_source = func.__name__\n\n self.func_hash = md5(self.fn_source.encode('utf-8')).hexdigest()\n else:\n self.func_hash = func.__name__\n\n params = signature(func).parameters\n\n self.kwargs = {}\n if 'stdout' in params:\n self.kwargs['stdout'] = params['stdout'].default\n if 'stderr' in params:\n self.kwargs['stderr'] = params['stderr'].default\n self.outputs = params['outputs'].default if 'outputs' in params else []\n self.inputs = params['inputs'].default if 'inputs' in params else []\n\n def __call__(self, *args, **kwargs):\n \"\"\"The __call__ function must be implemented in the subclasses.\"\"\"\n raise NotImplementedError\n\n\ndef app_wrapper(func):\n\n def wrapper(*args, **kwargs):\n logger.debug(\"App wrapper begins\")\n x = func(*args, **kwargs)\n logger.debug(\"App wrapper ends\")\n return x\n\n return wrapper\n\n\ndef App(apptype, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"The App decorator function.\n\n Args:\n - apptype (string) : Apptype can be bash|python\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime for app in seconds,\n default=60\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of the app call\n default=False\n\n Returns:\n A PythonApp or BashApp object, which when called runs the apps through the executor.\n \"\"\"\n\n from parsl.app.python import PythonApp\n from parsl.app.bash import BashApp\n\n logger.warning(\"The 'App' decorator will be depreciated in Parsl 0.8. Please use 'python_app' or 'bash_app' instead.\")\n\n if apptype is 'python':\n app_class = PythonApp\n elif apptype is 'bash':\n app_class = BashApp\n else:\n raise InvalidAppTypeError(\"Invalid apptype requested {}; must be 'python' or 'bash'\".format(apptype))\n\n def wrapper(f):\n return app_class(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper\n\n\ndef python_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"Decorator function for making python apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. 
Default is False.\n \"\"\"\n from parsl.app.python import PythonApp\n\n def decorator(func):\n def wrapper(f):\n return PythonApp(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n\n\ndef bash_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"Decorator function for making bash apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n \"\"\"\n from parsl.app.python import BashApp\n\n def decorator(func):\n def wrapper(f):\n return BashApp(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n", "path": "parsl/app/app.py"}]}
| 2,802 | 102 |
gh_patches_debug_28314
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-7158
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't use `repo.metrics.show()` when have a long current dir
This call combined with asserts prevents it:
https://github.com/iterative/dvc/blob/0deab0118a947f3c92cf8a7dd6c7daf0fac5988f/dvc/repo/metrics/show.py#L84
https://github.com/iterative/dvc/blob/0deab0118a947f3c92cf8a7dd6c7daf0fac5988f/dvc/fs/path.py#L83-L85
This blocks Studio updating to DVC 2.9.2.
**Suggested fix:** move relpath calculation to command code, remove asserts as having a longer base should be ok. Returning relpath from API code like `repo.metrics.show()` is unexpected, that code is not really aware about cwd, both absolute paths or paths relative to repo root make sense.
</issue>
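(Editorial aside, not part of the original report.) The suggested fix above lines up with delegating to the standard library: `posixpath.relpath` / `ntpath.relpath` place no "path must be longer than the base" restriction, which is exactly what the asserts impose. A minimal sketch with made-up paths, assuming only stdlib behaviour:

```
import ntpath
import posixpath

# The base may be longer than the path; relpath simply walks back up with "..".
print(posixpath.relpath("/repo/metrics.json", "/repo/deep/current/dir"))
# -> ../../../metrics.json
print(ntpath.relpath("C:\\repo\\metrics.json", "C:\\repo\\deep\\cwd"))
# -> ..\..\metrics.json
```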
<code>
[start of dvc/repo/diff.py]
1 import logging
2 import os
3 from collections import defaultdict
4 from typing import Dict, List
5
6 from dvc.exceptions import PathMissingError
7 from dvc.repo import locked
8 from dvc.repo.experiments.utils import fix_exp_head
9
10 logger = logging.getLogger(__name__)
11
12
13 @locked
14 def diff(self, a_rev="HEAD", b_rev=None, targets=None):
15 """
16 By default, it compares the workspace with the last commit's fs.
17
18 This implementation differs from `git diff` since DVC doesn't have
19 the concept of `index`, but it keeps the same interface, thus,
20 `dvc diff` would be the same as `dvc diff HEAD`.
21 """
22
23 if self.scm.no_commits:
24 return {}
25
26 from dvc.fs.repo import RepoFileSystem
27
28 repo_fs = RepoFileSystem(self)
29
30 a_rev = fix_exp_head(self.scm, a_rev)
31 b_rev = fix_exp_head(self.scm, b_rev) if b_rev else "workspace"
32 results = {}
33 missing_targets = {}
34 for rev in self.brancher(revs=[a_rev, b_rev]):
35 if rev == "workspace" and rev != b_rev:
36 # brancher always returns workspace, but we only need to compute
37 # workspace paths/checksums if b_rev was None
38 continue
39
40 targets_paths = None
41 if targets is not None:
42 # convert targets to paths, and capture any missing targets
43 targets_paths, missing_targets[rev] = _targets_to_paths(
44 repo_fs, targets
45 )
46
47 results[rev] = _paths_checksums(self, targets_paths)
48
49 if targets is not None:
50 # check for overlapping missing targets between a_rev and b_rev
51 for target in set(missing_targets[a_rev]) & set(
52 missing_targets[b_rev]
53 ):
54 raise PathMissingError(target, self)
55
56 old = results[a_rev]
57 new = results[b_rev]
58
59 # Compare paths between the old and new fs.
60 # set() efficiently converts dict keys to a set
61 added = sorted(set(new) - set(old))
62 deleted_or_missing = set(old) - set(new)
63 if b_rev == "workspace":
64 # missing status is only applicable when diffing local workspace
65 # against a commit
66 missing = sorted(_filter_missing(repo_fs, deleted_or_missing))
67 else:
68 missing = []
69 deleted = sorted(deleted_or_missing - set(missing))
70 modified = sorted(set(old) & set(new))
71
72 # Cases when file was changed and renamed are resulted
73 # in having deleted and added record
74 # To cover such cases we need to change hashing function
75 # to produce rolling/chunking hash
76
77 renamed = _calculate_renamed(new, old, added, deleted)
78
79 for renamed_item in renamed:
80 added.remove(renamed_item["path"]["new"])
81 deleted.remove(renamed_item["path"]["old"])
82
83 ret = {
84 "added": [{"path": path, "hash": new[path]} for path in added],
85 "deleted": [{"path": path, "hash": old[path]} for path in deleted],
86 "modified": [
87 {"path": path, "hash": {"old": old[path], "new": new[path]}}
88 for path in modified
89 if old[path] != new[path]
90 ],
91 "renamed": renamed,
92 "not in cache": [
93 {"path": path, "hash": old[path]} for path in missing
94 ],
95 }
96
97 return ret if any(ret.values()) else {}
98
99
100 def _paths_checksums(repo, targets):
101 """
102 A dictionary of checksums addressed by relpaths collected from
103 the current fs outputs.
104
105 To help distinguish between a directory and a file output,
106 the former one will come with a trailing slash in the path:
107
108 directory: "data/"
109 file: "data"
110 """
111
112 return dict(_output_paths(repo, targets))
113
114
115 def _output_paths(repo, targets):
116 from dvc.fs.local import LocalFileSystem
117 from dvc.objects.stage import stage as ostage
118
119 on_working_fs = isinstance(repo.fs, LocalFileSystem)
120
121 def _exists(output):
122 if on_working_fs:
123 return output.exists
124 return True
125
126 def _to_path(output):
127 return (
128 str(output)
129 if not output.is_dir_checksum
130 else os.path.join(str(output), "")
131 )
132
133 for output in repo.index.outs:
134 if _exists(output):
135 yield_output = targets is None or any(
136 output.fs.path.isin_or_eq(output.fs_path, target)
137 for target in targets
138 )
139
140 if on_working_fs:
141 _, _, obj = ostage(
142 repo.odb.local,
143 output.fs_path,
144 repo.odb.local.fs,
145 "md5",
146 dry_run=True,
147 dvcignore=output.dvcignore,
148 )
149 hash_info = obj.hash_info
150 else:
151 hash_info = output.hash_info
152 obj = output.get_obj()
153
154 if yield_output:
155 yield _to_path(output), hash_info.value
156
157 if not obj:
158 continue
159
160 if output.is_dir_checksum and (
161 yield_output
162 or any(
163 output.fs.path.isin(target, output.fs_path)
164 for target in targets
165 )
166 ):
167 yield from _dir_output_paths(
168 output.fs, output.fs_path, obj, targets
169 )
170
171
172 def _dir_output_paths(fs, fs_path, obj, targets=None):
173 for key, _, oid in obj:
174 fname = fs.path.join(fs_path, *key)
175 if targets is None or any(
176 fs.path.isin_or_eq(fname, target) for target in targets
177 ):
178 # pylint: disable=no-member
179 yield fs.path.join(fs.path.name(fs_path), *key), oid.value
180
181
182 def _filter_missing(repo_fs, paths):
183 for path in paths:
184 try:
185 metadata = repo_fs.metadata(path)
186 if metadata.is_dvc:
187 out = metadata.outs[0]
188 if out.status().get(str(out)) == "not in cache":
189 yield path
190 except FileNotFoundError:
191 pass
192
193
194 def _targets_to_paths(repo_fs, targets):
195 paths = []
196 missing = []
197
198 for target in targets:
199 if repo_fs.exists(target):
200 paths.append(repo_fs.metadata(target).fs_path)
201 else:
202 missing.append(target)
203
204 return paths, missing
205
206
207 def _calculate_renamed(new, old, added, deleted):
208 old_inverted: Dict[str, List[str]] = defaultdict(list)
209 # It is needed to be dict of lists to cover cases
210 # when repo has paths with same hash
211 for path, path_hash in old.items():
212 old_inverted[path_hash].append(path)
213
214 renamed = []
215 for path in added:
216 path_hash = new[path]
217 old_paths = old_inverted[path_hash]
218 try:
219 iterator = enumerate(old_paths)
220 index = next(idx for idx, path in iterator if path in deleted)
221 except StopIteration:
222 continue
223
224 old_path = old_paths.pop(index)
225 renamed.append(
226 {"path": {"old": old_path, "new": path}, "hash": path_hash}
227 )
228
229 return renamed
230
[end of dvc/repo/diff.py]
[start of dvc/fs/path.py]
1 import ntpath
2 import posixpath
3
4
5 class Path:
6 def __init__(self, sep):
7 if sep == posixpath.sep:
8 self.flavour = posixpath
9 elif sep == ntpath.sep:
10 self.flavour = ntpath
11 else:
12 raise ValueError(f"unsupported separator '{sep}'")
13
14 def join(self, *parts):
15 return self.flavour.join(*parts)
16
17 def parts(self, path):
18 drive, path = self.flavour.splitdrive(path)
19
20 ret = []
21 while True:
22 path, part = self.flavour.split(path)
23
24 if part:
25 ret.append(part)
26 continue
27
28 if path:
29 ret.append(path)
30
31 break
32
33 ret.reverse()
34
35 if drive:
36 ret = [drive] + ret
37
38 return tuple(ret)
39
40 def parent(self, path):
41 return self.flavour.dirname(path)
42
43 def parents(self, path):
44 parts = self.parts(path)
45 return tuple(
46 self.join(*parts[:length])
47 for length in range(len(parts) - 1, 0, -1)
48 )
49
50 def name(self, path):
51 return self.parts(path)[-1]
52
53 def suffix(self, path):
54 name = self.name(path)
55 _, dot, suffix = name.partition(".")
56 return dot + suffix
57
58 def with_name(self, path, name):
59 parts = list(self.parts(path))
60 parts[-1] = name
61 return self.join(*parts)
62
63 def with_suffix(self, path, suffix):
64 parts = list(self.parts(path))
65 real_path, _, _ = parts[-1].partition(".")
66 parts[-1] = real_path + suffix
67 return self.join(*parts)
68
69 def isin(self, left, right):
70 left_parts = self.parts(left)
71 right_parts = self.parts(right)
72 left_len = len(left_parts)
73 right_len = len(right_parts)
74 return left_len > right_len and left_parts[:right_len] == right_parts
75
76 def isin_or_eq(self, left, right):
77 return left == right or self.isin(left, right)
78
79 def overlaps(self, left, right):
80 # pylint: disable=arguments-out-of-order
81 return self.isin_or_eq(left, right) or self.isin(right, left)
82
83 def relpath(self, path, base):
84 assert len(path) > len(base)
85 assert path.startswith(base)
86 normpath = path.rstrip(self.flavour.sep)
87 normbase = base.rstrip(self.flavour.sep)
88 return normpath[len(normbase) + 1 :]
89
90 def relparts(self, path, base):
91 return self.parts(self.relpath(path, base))
92
93 def as_posix(self, path):
94 return path.replace(self.flavour.sep, posixpath.sep)
95
[end of dvc/fs/path.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/fs/path.py b/dvc/fs/path.py
--- a/dvc/fs/path.py
+++ b/dvc/fs/path.py
@@ -80,12 +80,9 @@
# pylint: disable=arguments-out-of-order
return self.isin_or_eq(left, right) or self.isin(right, left)
- def relpath(self, path, base):
- assert len(path) > len(base)
- assert path.startswith(base)
- normpath = path.rstrip(self.flavour.sep)
- normbase = base.rstrip(self.flavour.sep)
- return normpath[len(normbase) + 1 :]
+ def relpath(self, path, start):
+ assert start
+ return self.flavour.relpath(path, start=start)
def relparts(self, path, base):
return self.parts(self.relpath(path, base))
diff --git a/dvc/repo/diff.py b/dvc/repo/diff.py
--- a/dvc/repo/diff.py
+++ b/dvc/repo/diff.py
@@ -170,13 +170,20 @@
def _dir_output_paths(fs, fs_path, obj, targets=None):
+ if fs.scheme == "local":
+ # NOTE: workaround for filesystems that are based on full local paths
+ # (e.g. gitfs, dvcfs, repofs). Proper solution is to use upcoming
+ # fsspec's prefixfs to use relpaths as fs_paths.
+ base = fs.path.relpath(fs_path, os.getcwd())
+ else:
+ base = fs_path
for key, _, oid in obj:
fname = fs.path.join(fs_path, *key)
if targets is None or any(
fs.path.isin_or_eq(fname, target) for target in targets
):
# pylint: disable=no-member
- yield fs.path.join(fs.path.name(fs_path), *key), oid.value
+ yield fs.path.join(base, *key), oid.value
def _filter_missing(repo_fs, paths):
|
{"golden_diff": "diff --git a/dvc/fs/path.py b/dvc/fs/path.py\n--- a/dvc/fs/path.py\n+++ b/dvc/fs/path.py\n@@ -80,12 +80,9 @@\n # pylint: disable=arguments-out-of-order\n return self.isin_or_eq(left, right) or self.isin(right, left)\n \n- def relpath(self, path, base):\n- assert len(path) > len(base)\n- assert path.startswith(base)\n- normpath = path.rstrip(self.flavour.sep)\n- normbase = base.rstrip(self.flavour.sep)\n- return normpath[len(normbase) + 1 :]\n+ def relpath(self, path, start):\n+ assert start\n+ return self.flavour.relpath(path, start=start)\n \n def relparts(self, path, base):\n return self.parts(self.relpath(path, base))\ndiff --git a/dvc/repo/diff.py b/dvc/repo/diff.py\n--- a/dvc/repo/diff.py\n+++ b/dvc/repo/diff.py\n@@ -170,13 +170,20 @@\n \n \n def _dir_output_paths(fs, fs_path, obj, targets=None):\n+ if fs.scheme == \"local\":\n+ # NOTE: workaround for filesystems that are based on full local paths\n+ # (e.g. gitfs, dvcfs, repofs). Proper solution is to use upcoming\n+ # fsspec's prefixfs to use relpaths as fs_paths.\n+ base = fs.path.relpath(fs_path, os.getcwd())\n+ else:\n+ base = fs_path\n for key, _, oid in obj:\n fname = fs.path.join(fs_path, *key)\n if targets is None or any(\n fs.path.isin_or_eq(fname, target) for target in targets\n ):\n # pylint: disable=no-member\n- yield fs.path.join(fs.path.name(fs_path), *key), oid.value\n+ yield fs.path.join(base, *key), oid.value\n \n \n def _filter_missing(repo_fs, paths):\n", "issue": "Can't use `repo.metrics.show()` when have a long current dir\nThis call combined with asserts prevents it:\r\n\r\nhttps://github.com/iterative/dvc/blob/0deab0118a947f3c92cf8a7dd6c7daf0fac5988f/dvc/repo/metrics/show.py#L84\r\n\r\nhttps://github.com/iterative/dvc/blob/0deab0118a947f3c92cf8a7dd6c7daf0fac5988f/dvc/fs/path.py#L83-L85\r\n\r\nThis blocks Studio updating to DVC 2.9.2.\r\n\r\n**Suggested fix:** move relpath calculation to command code, remove asserts as having a longer base should be ok. 
Returning relpath from API code like `repo.metrics.show()` is unexpected, that code is not really aware about cwd, both absolute paths or paths relative to repo root make sense.\n", "before_files": [{"content": "import logging\nimport os\nfrom collections import defaultdict\nfrom typing import Dict, List\n\nfrom dvc.exceptions import PathMissingError\nfrom dvc.repo import locked\nfrom dvc.repo.experiments.utils import fix_exp_head\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\ndef diff(self, a_rev=\"HEAD\", b_rev=None, targets=None):\n \"\"\"\n By default, it compares the workspace with the last commit's fs.\n\n This implementation differs from `git diff` since DVC doesn't have\n the concept of `index`, but it keeps the same interface, thus,\n `dvc diff` would be the same as `dvc diff HEAD`.\n \"\"\"\n\n if self.scm.no_commits:\n return {}\n\n from dvc.fs.repo import RepoFileSystem\n\n repo_fs = RepoFileSystem(self)\n\n a_rev = fix_exp_head(self.scm, a_rev)\n b_rev = fix_exp_head(self.scm, b_rev) if b_rev else \"workspace\"\n results = {}\n missing_targets = {}\n for rev in self.brancher(revs=[a_rev, b_rev]):\n if rev == \"workspace\" and rev != b_rev:\n # brancher always returns workspace, but we only need to compute\n # workspace paths/checksums if b_rev was None\n continue\n\n targets_paths = None\n if targets is not None:\n # convert targets to paths, and capture any missing targets\n targets_paths, missing_targets[rev] = _targets_to_paths(\n repo_fs, targets\n )\n\n results[rev] = _paths_checksums(self, targets_paths)\n\n if targets is not None:\n # check for overlapping missing targets between a_rev and b_rev\n for target in set(missing_targets[a_rev]) & set(\n missing_targets[b_rev]\n ):\n raise PathMissingError(target, self)\n\n old = results[a_rev]\n new = results[b_rev]\n\n # Compare paths between the old and new fs.\n # set() efficiently converts dict keys to a set\n added = sorted(set(new) - set(old))\n deleted_or_missing = set(old) - set(new)\n if b_rev == \"workspace\":\n # missing status is only applicable when diffing local workspace\n # against a commit\n missing = sorted(_filter_missing(repo_fs, deleted_or_missing))\n else:\n missing = []\n deleted = sorted(deleted_or_missing - set(missing))\n modified = sorted(set(old) & set(new))\n\n # Cases when file was changed and renamed are resulted\n # in having deleted and added record\n # To cover such cases we need to change hashing function\n # to produce rolling/chunking hash\n\n renamed = _calculate_renamed(new, old, added, deleted)\n\n for renamed_item in renamed:\n added.remove(renamed_item[\"path\"][\"new\"])\n deleted.remove(renamed_item[\"path\"][\"old\"])\n\n ret = {\n \"added\": [{\"path\": path, \"hash\": new[path]} for path in added],\n \"deleted\": [{\"path\": path, \"hash\": old[path]} for path in deleted],\n \"modified\": [\n {\"path\": path, \"hash\": {\"old\": old[path], \"new\": new[path]}}\n for path in modified\n if old[path] != new[path]\n ],\n \"renamed\": renamed,\n \"not in cache\": [\n {\"path\": path, \"hash\": old[path]} for path in missing\n ],\n }\n\n return ret if any(ret.values()) else {}\n\n\ndef _paths_checksums(repo, targets):\n \"\"\"\n A dictionary of checksums addressed by relpaths collected from\n the current fs outputs.\n\n To help distinguish between a directory and a file output,\n the former one will come with a trailing slash in the path:\n\n directory: \"data/\"\n file: \"data\"\n \"\"\"\n\n return dict(_output_paths(repo, targets))\n\n\ndef _output_paths(repo, targets):\n from 
dvc.fs.local import LocalFileSystem\n from dvc.objects.stage import stage as ostage\n\n on_working_fs = isinstance(repo.fs, LocalFileSystem)\n\n def _exists(output):\n if on_working_fs:\n return output.exists\n return True\n\n def _to_path(output):\n return (\n str(output)\n if not output.is_dir_checksum\n else os.path.join(str(output), \"\")\n )\n\n for output in repo.index.outs:\n if _exists(output):\n yield_output = targets is None or any(\n output.fs.path.isin_or_eq(output.fs_path, target)\n for target in targets\n )\n\n if on_working_fs:\n _, _, obj = ostage(\n repo.odb.local,\n output.fs_path,\n repo.odb.local.fs,\n \"md5\",\n dry_run=True,\n dvcignore=output.dvcignore,\n )\n hash_info = obj.hash_info\n else:\n hash_info = output.hash_info\n obj = output.get_obj()\n\n if yield_output:\n yield _to_path(output), hash_info.value\n\n if not obj:\n continue\n\n if output.is_dir_checksum and (\n yield_output\n or any(\n output.fs.path.isin(target, output.fs_path)\n for target in targets\n )\n ):\n yield from _dir_output_paths(\n output.fs, output.fs_path, obj, targets\n )\n\n\ndef _dir_output_paths(fs, fs_path, obj, targets=None):\n for key, _, oid in obj:\n fname = fs.path.join(fs_path, *key)\n if targets is None or any(\n fs.path.isin_or_eq(fname, target) for target in targets\n ):\n # pylint: disable=no-member\n yield fs.path.join(fs.path.name(fs_path), *key), oid.value\n\n\ndef _filter_missing(repo_fs, paths):\n for path in paths:\n try:\n metadata = repo_fs.metadata(path)\n if metadata.is_dvc:\n out = metadata.outs[0]\n if out.status().get(str(out)) == \"not in cache\":\n yield path\n except FileNotFoundError:\n pass\n\n\ndef _targets_to_paths(repo_fs, targets):\n paths = []\n missing = []\n\n for target in targets:\n if repo_fs.exists(target):\n paths.append(repo_fs.metadata(target).fs_path)\n else:\n missing.append(target)\n\n return paths, missing\n\n\ndef _calculate_renamed(new, old, added, deleted):\n old_inverted: Dict[str, List[str]] = defaultdict(list)\n # It is needed to be dict of lists to cover cases\n # when repo has paths with same hash\n for path, path_hash in old.items():\n old_inverted[path_hash].append(path)\n\n renamed = []\n for path in added:\n path_hash = new[path]\n old_paths = old_inverted[path_hash]\n try:\n iterator = enumerate(old_paths)\n index = next(idx for idx, path in iterator if path in deleted)\n except StopIteration:\n continue\n\n old_path = old_paths.pop(index)\n renamed.append(\n {\"path\": {\"old\": old_path, \"new\": path}, \"hash\": path_hash}\n )\n\n return renamed\n", "path": "dvc/repo/diff.py"}, {"content": "import ntpath\nimport posixpath\n\n\nclass Path:\n def __init__(self, sep):\n if sep == posixpath.sep:\n self.flavour = posixpath\n elif sep == ntpath.sep:\n self.flavour = ntpath\n else:\n raise ValueError(f\"unsupported separator '{sep}'\")\n\n def join(self, *parts):\n return self.flavour.join(*parts)\n\n def parts(self, path):\n drive, path = self.flavour.splitdrive(path)\n\n ret = []\n while True:\n path, part = self.flavour.split(path)\n\n if part:\n ret.append(part)\n continue\n\n if path:\n ret.append(path)\n\n break\n\n ret.reverse()\n\n if drive:\n ret = [drive] + ret\n\n return tuple(ret)\n\n def parent(self, path):\n return self.flavour.dirname(path)\n\n def parents(self, path):\n parts = self.parts(path)\n return tuple(\n self.join(*parts[:length])\n for length in range(len(parts) - 1, 0, -1)\n )\n\n def name(self, path):\n return self.parts(path)[-1]\n\n def suffix(self, path):\n name = self.name(path)\n _, dot, suffix = 
name.partition(\".\")\n return dot + suffix\n\n def with_name(self, path, name):\n parts = list(self.parts(path))\n parts[-1] = name\n return self.join(*parts)\n\n def with_suffix(self, path, suffix):\n parts = list(self.parts(path))\n real_path, _, _ = parts[-1].partition(\".\")\n parts[-1] = real_path + suffix\n return self.join(*parts)\n\n def isin(self, left, right):\n left_parts = self.parts(left)\n right_parts = self.parts(right)\n left_len = len(left_parts)\n right_len = len(right_parts)\n return left_len > right_len and left_parts[:right_len] == right_parts\n\n def isin_or_eq(self, left, right):\n return left == right or self.isin(left, right)\n\n def overlaps(self, left, right):\n # pylint: disable=arguments-out-of-order\n return self.isin_or_eq(left, right) or self.isin(right, left)\n\n def relpath(self, path, base):\n assert len(path) > len(base)\n assert path.startswith(base)\n normpath = path.rstrip(self.flavour.sep)\n normbase = base.rstrip(self.flavour.sep)\n return normpath[len(normbase) + 1 :]\n\n def relparts(self, path, base):\n return self.parts(self.relpath(path, base))\n\n def as_posix(self, path):\n return path.replace(self.flavour.sep, posixpath.sep)\n", "path": "dvc/fs/path.py"}]}
| 3,699 | 453 |
gh_patches_debug_6406
|
rasdani/github-patches
|
git_diff
|
dynaconf__dynaconf-290
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] RuntimeError: dictionary changed size during iteration when using @del within dynaconf_merge logic
**Describe the bug**
The following [line](https://github.com/rochacbruno/dynaconf/blob/25fed5dc27d1dd78c368d7464f7d160b46aa1d24/dynaconf/utils/__init__.py#L49
) is bugged, changing dict size during iteration, via pop() leads to
```
RuntimeError: dictionary changed size during iteration
```
**To Reproduce**
You can run following python code which is assumed to be very simple interpretation of the code line above:
```
new = {"a": 1}
for k, v in new.items():
new.pop(k, None)
```
1. To reproduce it with `dynaconf`, use following config.yaml
```
default:
options:
A: 1
B: 2
development:
options:
dynaconf_merge:
B: "@del"
```
**Expected behavior**
No RuntimeError, key marked with `@del` is removed from merge result
</issue>
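(Editorial aside, not part of the original report.) The usual remedy, and the direction the golden diff below takes, is to iterate over a snapshot of the items so that `pop()` cannot invalidate the live dict iterator. A minimal sketch mirroring the reproduction above (the dict literal is illustrative):

```
new = {"A": 1, "B": "@del"}

for key, value in list(new.items()):  # snapshot; safe to mutate `new` inside the loop
    if value == "@del":
        new.pop(key, None)

print(new)  # {'A': 1}
```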
<code>
[start of dynaconf/utils/__init__.py]
1 import functools
2 import os
3 import warnings
4
5
6 BANNER = """
7 ██████╗ ██╗ ██╗███╗ ██╗ █████╗ ██████╗ ██████╗ ███╗ ██╗███████╗
8 ██╔══██╗╚██╗ ██╔╝████╗ ██║██╔══██╗██╔════╝██╔═══██╗████╗ ██║██╔════╝
9 ██║ ██║ ╚████╔╝ ██╔██╗ ██║███████║██║ ██║ ██║██╔██╗ ██║█████╗
10 ██║ ██║ ╚██╔╝ ██║╚██╗██║██╔══██║██║ ██║ ██║██║╚██╗██║██╔══╝
11 ██████╔╝ ██║ ██║ ╚████║██║ ██║╚██████╗╚██████╔╝██║ ╚████║██║
12 ╚═════╝ ╚═╝ ╚═╝ ╚═══╝╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═══╝╚═╝
13 """
14
15 if os.name == "nt": # pragma: no cover
16 # windows can't handle the above charmap
17 BANNER = "DYNACONF"
18
19
20 def object_merge(old, new, unique=False):
21 """
22 Recursively merge two data structures.
23
24 :param unique: When set to True existing list items are not set.
25 """
26 if old == new:
27 # Nothing to merge
28 return
29
30 if isinstance(old, list) and isinstance(new, list):
31 for item in old[::-1]:
32 if unique and item in new:
33 continue
34 new.insert(0, item)
35 if isinstance(old, dict) and isinstance(new, dict):
36 for key, value in old.items():
37 if key not in new:
38 new[key] = value
39 else:
40 object_merge(value, new[key])
41
42 # Cleanup of MetaValues on New dict
43 for key, value in new.items():
44 if getattr(new[key], "dynaconf_reset", False):
45 # new Reset triggers cleanup of existing data
46 new[key] = new[key].value
47 elif getattr(new[key], "dynaconf_del", False):
48 # new Del triggers deletion of existing data
49 new.pop(key, None)
50
51
52 class DynaconfDict(dict):
53 """A dict representing en empty Dynaconf object
54 useful to run loaders in to a dict for testing"""
55
56 def __init__(self, *args, **kwargs):
57 self._loaded_files = []
58 super(DynaconfDict, self).__init__(*args, **kwargs)
59
60 @property
61 def logger(self):
62 return raw_logger()
63
64 def set(self, key, value, *args, **kwargs):
65 self[key] = value
66
67 @staticmethod
68 def get_environ(key, default=None): # pragma: no cover
69 return os.environ.get(key, default)
70
71 def exists(self, key, **kwargs):
72 return self.get(key, missing) is not missing
73
74
75 @functools.lru_cache()
76 def _logger(level):
77 import logging
78
79 formatter = logging.Formatter(
80 fmt=(
81 "%(asctime)s,%(msecs)d %(levelname)-8s "
82 "[%(filename)s:%(lineno)d - %(funcName)s] %(message)s"
83 ),
84 datefmt="%Y-%m-%d:%H:%M:%S",
85 )
86 handler = logging.StreamHandler()
87 handler.setFormatter(formatter)
88
89 logger = logging.getLogger("dynaconf")
90 logger.addHandler(handler)
91 logger.setLevel(level=getattr(logging, level, "DEBUG"))
92 return logger
93
94
95 def raw_logger(level=None):
96 """Get or create inner logger"""
97 level = level or os.environ.get("DEBUG_LEVEL_FOR_DYNACONF", "ERROR")
98 return _logger(level)
99
100
101 RENAMED_VARS = {
102 # old: new
103 "DYNACONF_NAMESPACE": "ENV_FOR_DYNACONF",
104 "NAMESPACE_FOR_DYNACONF": "ENV_FOR_DYNACONF",
105 "DYNACONF_SETTINGS_MODULE": "SETTINGS_FILE_FOR_DYNACONF",
106 "DYNACONF_SETTINGS": "SETTINGS_FILE_FOR_DYNACONF",
107 "SETTINGS_MODULE": "SETTINGS_FILE_FOR_DYNACONF",
108 "SETTINGS_MODULE_FOR_DYNACONF": "SETTINGS_FILE_FOR_DYNACONF",
109 "PROJECT_ROOT": "ROOT_PATH_FOR_DYNACONF",
110 "PROJECT_ROOT_FOR_DYNACONF": "ROOT_PATH_FOR_DYNACONF",
111 "DYNACONF_SILENT_ERRORS": "SILENT_ERRORS_FOR_DYNACONF",
112 "DYNACONF_ALWAYS_FRESH_VARS": "FRESH_VARS_FOR_DYNACONF",
113 "BASE_NAMESPACE_FOR_DYNACONF": "DEFAULT_ENV_FOR_DYNACONF",
114 "GLOBAL_ENV_FOR_DYNACONF": "ENVVAR_PREFIX_FOR_DYNACONF",
115 }
116
117
118 def compat_kwargs(kwargs):
119 """To keep backwards compat change the kwargs to new names"""
120 warn_deprecations(kwargs)
121 for old, new in RENAMED_VARS.items():
122 if old in kwargs:
123 kwargs[new] = kwargs[old]
124 # update cross references
125 for c_old, c_new in RENAMED_VARS.items():
126 if c_new == new:
127 kwargs[c_old] = kwargs[new]
128
129
130 class Missing(object):
131 """
132 Sentinel value object/singleton used to differentiate between ambiguous
133 situations where `None` is a valid value.
134 """
135
136 def __bool__(self):
137 """Respond to boolean duck-typing."""
138 return False
139
140 def __eq__(self, other):
141 """Equality check for a singleton."""
142
143 return isinstance(other, self.__class__)
144
145 # Ensure compatibility with Python 2.x
146 __nonzero__ = __bool__
147
148 def __repr__(self):
149 """
150 Unambiguously identify this string-based representation of Missing,
151 used as a singleton.
152 """
153 return "<dynaconf.missing>"
154
155
156 missing = Missing()
157
158
159 def deduplicate(list_object):
160 """Rebuild `list_object` removing duplicated and keeping order"""
161 new = []
162 for item in list_object:
163 if item not in new:
164 new.append(item)
165 return new
166
167
168 def warn_deprecations(data):
169 for old, new in RENAMED_VARS.items():
170 if old in data:
171 warnings.warn(
172 "You are using %s which is a deprecated settings "
173 "replace it with %s" % (old, new),
174 DeprecationWarning,
175 )
176
177
178 def trimmed_split(s, seps=(";", ",")):
179 """Given a string s, split is by one of one of the seps."""
180 for sep in seps:
181 if sep not in s:
182 continue
183 data = [item.strip() for item in s.strip().split(sep)]
184 return data
185 return [s] # raw un-splitted
186
187
188 def ensure_a_list(data):
189 """Ensure data is a list or wrap it in a list"""
190 if not data:
191 return []
192 if isinstance(data, (list, tuple, set)):
193 return list(data)
194 if isinstance(data, str):
195 data = trimmed_split(data) # settings.toml,other.yaml
196 return data
197 return [data]
198
199
200 def build_env_list(obj, env):
201 """Build env list for loaders to iterate.
202
203 Arguments:
204 obj {LazySettings} -- A Dynaconf settings instance
205 env {str} -- The current env to be loaded
206
207 Returns:
208 [str] -- A list of string names of the envs to load.
209 """
210 # add the [default] env
211 env_list = [obj.get("DEFAULT_ENV_FOR_DYNACONF")]
212
213 # compatibility with older versions that still uses [dynaconf] as
214 # [default] env
215 global_env = obj.get("ENVVAR_PREFIX_FOR_DYNACONF") or "DYNACONF"
216 if global_env not in env_list:
217 env_list.append(global_env)
218
219 # add the current env
220 if obj.current_env and obj.current_env not in env_list:
221 env_list.append(obj.current_env)
222
223 # add a manually set env
224 if env and env not in env_list:
225 env_list.append(env)
226
227 # add the [global] env
228 env_list.append("GLOBAL")
229
230 # loaders are responsible to change to lower/upper cases
231 return [env.lower() for env in env_list]
232
233
234 def upperfy(key):
235 """Receive a string key and returns its upper version.
236
237 Example:
238
239 input: foo
240 output: FOO
241
242 input: foo_bar
243 output: FOO_BAR
244
245 input: foo__bar__ZAZ
246 output: FOO__bar__ZAZ
247
248 Arguments:
249 key {str} -- A string key that may contain dunders `__`
250
251 Returns:
252 The key as upper case but keeping the nested elements.
253 """
254 if "__" in key:
255 parts = key.split("__")
256 return "__".join([parts[0].upper()] + parts[1:])
257 return key.upper()
258
[end of dynaconf/utils/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dynaconf/utils/__init__.py b/dynaconf/utils/__init__.py
--- a/dynaconf/utils/__init__.py
+++ b/dynaconf/utils/__init__.py
@@ -40,7 +40,7 @@
object_merge(value, new[key])
# Cleanup of MetaValues on New dict
- for key, value in new.items():
+ for key, value in list(new.items()):
if getattr(new[key], "dynaconf_reset", False):
# new Reset triggers cleanup of existing data
new[key] = new[key].value
|
{"golden_diff": "diff --git a/dynaconf/utils/__init__.py b/dynaconf/utils/__init__.py\n--- a/dynaconf/utils/__init__.py\n+++ b/dynaconf/utils/__init__.py\n@@ -40,7 +40,7 @@\n object_merge(value, new[key])\n \n # Cleanup of MetaValues on New dict\n- for key, value in new.items():\n+ for key, value in list(new.items()):\n if getattr(new[key], \"dynaconf_reset\", False):\n # new Reset triggers cleanup of existing data\n new[key] = new[key].value\n", "issue": "[bug] RuntimeError: dictionary changed size during iteration when using @del within dynaconf_merge logic\n**Describe the bug**\r\nThe following [line](https://github.com/rochacbruno/dynaconf/blob/25fed5dc27d1dd78c368d7464f7d160b46aa1d24/dynaconf/utils/__init__.py#L49\r\n) is bugged, changing dict size during iteration, via pop() leads to \r\n\r\n```\r\nRuntimeError: dictionary changed size during iteration\r\n```\r\n\r\n**To Reproduce**\r\nYou can run following python code which is assumed to be very simple interpretation of the code line above:\r\n```\r\nnew = {\"a\": 1}\r\n\r\nfor k, v in new.items():\r\n new.pop(k, None)\r\n```\r\n\r\n1. To reproduce it with `dynaconf`, use following config.yaml\r\n```\r\ndefault:\r\n options:\r\n A: 1\r\n B: 2\r\ndevelopment:\r\n options:\r\n dynaconf_merge:\r\n B: \"@del\"\r\n```\r\n\r\n**Expected behavior**\r\nNo RuntimeError, key marked with `@del` is removed from merge result\r\n\n", "before_files": [{"content": "import functools\nimport os\nimport warnings\n\n\nBANNER = \"\"\"\n\u2588\u2588\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2557 \u2588\u2588\u2557\u2588\u2588\u2588\u2557 \u2588\u2588\u2557 \u2588\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2588\u2557 \u2588\u2588\u2557\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2557\n\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u255a\u2588\u2588\u2557 \u2588\u2588\u2554\u255d\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2554\u2550\u2550\u2550\u2550\u255d\u2588\u2588\u2554\u2550\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2550\u2550\u255d\n\u2588\u2588\u2551 \u2588\u2588\u2551 \u255a\u2588\u2588\u2588\u2588\u2554\u255d \u2588\u2588\u2554\u2588\u2588\u2557 \u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2551\u2588\u2588\u2551 \u2588\u2588\u2551 \u2588\u2588\u2551\u2588\u2588\u2554\u2588\u2588\u2557 \u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2588\u2557\n\u2588\u2588\u2551 \u2588\u2588\u2551 \u255a\u2588\u2588\u2554\u255d \u2588\u2588\u2551\u255a\u2588\u2588\u2557\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2551\u2588\u2588\u2551 \u2588\u2588\u2551 \u2588\u2588\u2551\u2588\u2588\u2551\u255a\u2588\u2588\u2557\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u255d\n\u2588\u2588\u2588\u2588\u2588\u2588\u2554\u255d \u2588\u2588\u2551 \u2588\u2588\u2551 \u255a\u2588\u2588\u2588\u2588\u2551\u2588\u2588\u2551 \u2588\u2588\u2551\u255a\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u255a\u2588\u2588\u2588\u2588\u2588\u2588\u2554\u255d\u2588\u2588\u2551 \u255a\u2588\u2588\u2588\u2588\u2551\u2588\u2588\u2551\n\u255a\u2550\u2550\u2550\u2550\u2550\u255d \u255a\u2550\u255d \u255a\u2550\u255d \u255a\u2550\u2550\u2550\u255d\u255a\u2550\u255d \u255a\u2550\u255d \u255a\u2550\u2550\u2550\u2550\u2550\u255d \u255a\u2550\u2550\u2550\u2550\u2550\u255d \u255a\u2550\u255d \u255a\u2550\u2550\u2550\u255d\u255a\u2550\u255d\n\"\"\"\n\nif 
os.name == \"nt\": # pragma: no cover\n # windows can't handle the above charmap\n BANNER = \"DYNACONF\"\n\n\ndef object_merge(old, new, unique=False):\n \"\"\"\n Recursively merge two data structures.\n\n :param unique: When set to True existing list items are not set.\n \"\"\"\n if old == new:\n # Nothing to merge\n return\n\n if isinstance(old, list) and isinstance(new, list):\n for item in old[::-1]:\n if unique and item in new:\n continue\n new.insert(0, item)\n if isinstance(old, dict) and isinstance(new, dict):\n for key, value in old.items():\n if key not in new:\n new[key] = value\n else:\n object_merge(value, new[key])\n\n # Cleanup of MetaValues on New dict\n for key, value in new.items():\n if getattr(new[key], \"dynaconf_reset\", False):\n # new Reset triggers cleanup of existing data\n new[key] = new[key].value\n elif getattr(new[key], \"dynaconf_del\", False):\n # new Del triggers deletion of existing data\n new.pop(key, None)\n\n\nclass DynaconfDict(dict):\n \"\"\"A dict representing en empty Dynaconf object\n useful to run loaders in to a dict for testing\"\"\"\n\n def __init__(self, *args, **kwargs):\n self._loaded_files = []\n super(DynaconfDict, self).__init__(*args, **kwargs)\n\n @property\n def logger(self):\n return raw_logger()\n\n def set(self, key, value, *args, **kwargs):\n self[key] = value\n\n @staticmethod\n def get_environ(key, default=None): # pragma: no cover\n return os.environ.get(key, default)\n\n def exists(self, key, **kwargs):\n return self.get(key, missing) is not missing\n\n\[email protected]_cache()\ndef _logger(level):\n import logging\n\n formatter = logging.Formatter(\n fmt=(\n \"%(asctime)s,%(msecs)d %(levelname)-8s \"\n \"[%(filename)s:%(lineno)d - %(funcName)s] %(message)s\"\n ),\n datefmt=\"%Y-%m-%d:%H:%M:%S\",\n )\n handler = logging.StreamHandler()\n handler.setFormatter(formatter)\n\n logger = logging.getLogger(\"dynaconf\")\n logger.addHandler(handler)\n logger.setLevel(level=getattr(logging, level, \"DEBUG\"))\n return logger\n\n\ndef raw_logger(level=None):\n \"\"\"Get or create inner logger\"\"\"\n level = level or os.environ.get(\"DEBUG_LEVEL_FOR_DYNACONF\", \"ERROR\")\n return _logger(level)\n\n\nRENAMED_VARS = {\n # old: new\n \"DYNACONF_NAMESPACE\": \"ENV_FOR_DYNACONF\",\n \"NAMESPACE_FOR_DYNACONF\": \"ENV_FOR_DYNACONF\",\n \"DYNACONF_SETTINGS_MODULE\": \"SETTINGS_FILE_FOR_DYNACONF\",\n \"DYNACONF_SETTINGS\": \"SETTINGS_FILE_FOR_DYNACONF\",\n \"SETTINGS_MODULE\": \"SETTINGS_FILE_FOR_DYNACONF\",\n \"SETTINGS_MODULE_FOR_DYNACONF\": \"SETTINGS_FILE_FOR_DYNACONF\",\n \"PROJECT_ROOT\": \"ROOT_PATH_FOR_DYNACONF\",\n \"PROJECT_ROOT_FOR_DYNACONF\": \"ROOT_PATH_FOR_DYNACONF\",\n \"DYNACONF_SILENT_ERRORS\": \"SILENT_ERRORS_FOR_DYNACONF\",\n \"DYNACONF_ALWAYS_FRESH_VARS\": \"FRESH_VARS_FOR_DYNACONF\",\n \"BASE_NAMESPACE_FOR_DYNACONF\": \"DEFAULT_ENV_FOR_DYNACONF\",\n \"GLOBAL_ENV_FOR_DYNACONF\": \"ENVVAR_PREFIX_FOR_DYNACONF\",\n}\n\n\ndef compat_kwargs(kwargs):\n \"\"\"To keep backwards compat change the kwargs to new names\"\"\"\n warn_deprecations(kwargs)\n for old, new in RENAMED_VARS.items():\n if old in kwargs:\n kwargs[new] = kwargs[old]\n # update cross references\n for c_old, c_new in RENAMED_VARS.items():\n if c_new == new:\n kwargs[c_old] = kwargs[new]\n\n\nclass Missing(object):\n \"\"\"\n Sentinel value object/singleton used to differentiate between ambiguous\n situations where `None` is a valid value.\n \"\"\"\n\n def __bool__(self):\n \"\"\"Respond to boolean duck-typing.\"\"\"\n return False\n\n def __eq__(self, other):\n 
\"\"\"Equality check for a singleton.\"\"\"\n\n return isinstance(other, self.__class__)\n\n # Ensure compatibility with Python 2.x\n __nonzero__ = __bool__\n\n def __repr__(self):\n \"\"\"\n Unambiguously identify this string-based representation of Missing,\n used as a singleton.\n \"\"\"\n return \"<dynaconf.missing>\"\n\n\nmissing = Missing()\n\n\ndef deduplicate(list_object):\n \"\"\"Rebuild `list_object` removing duplicated and keeping order\"\"\"\n new = []\n for item in list_object:\n if item not in new:\n new.append(item)\n return new\n\n\ndef warn_deprecations(data):\n for old, new in RENAMED_VARS.items():\n if old in data:\n warnings.warn(\n \"You are using %s which is a deprecated settings \"\n \"replace it with %s\" % (old, new),\n DeprecationWarning,\n )\n\n\ndef trimmed_split(s, seps=(\";\", \",\")):\n \"\"\"Given a string s, split is by one of one of the seps.\"\"\"\n for sep in seps:\n if sep not in s:\n continue\n data = [item.strip() for item in s.strip().split(sep)]\n return data\n return [s] # raw un-splitted\n\n\ndef ensure_a_list(data):\n \"\"\"Ensure data is a list or wrap it in a list\"\"\"\n if not data:\n return []\n if isinstance(data, (list, tuple, set)):\n return list(data)\n if isinstance(data, str):\n data = trimmed_split(data) # settings.toml,other.yaml\n return data\n return [data]\n\n\ndef build_env_list(obj, env):\n \"\"\"Build env list for loaders to iterate.\n\n Arguments:\n obj {LazySettings} -- A Dynaconf settings instance\n env {str} -- The current env to be loaded\n\n Returns:\n [str] -- A list of string names of the envs to load.\n \"\"\"\n # add the [default] env\n env_list = [obj.get(\"DEFAULT_ENV_FOR_DYNACONF\")]\n\n # compatibility with older versions that still uses [dynaconf] as\n # [default] env\n global_env = obj.get(\"ENVVAR_PREFIX_FOR_DYNACONF\") or \"DYNACONF\"\n if global_env not in env_list:\n env_list.append(global_env)\n\n # add the current env\n if obj.current_env and obj.current_env not in env_list:\n env_list.append(obj.current_env)\n\n # add a manually set env\n if env and env not in env_list:\n env_list.append(env)\n\n # add the [global] env\n env_list.append(\"GLOBAL\")\n\n # loaders are responsible to change to lower/upper cases\n return [env.lower() for env in env_list]\n\n\ndef upperfy(key):\n \"\"\"Receive a string key and returns its upper version.\n\n Example:\n\n input: foo\n output: FOO\n\n input: foo_bar\n output: FOO_BAR\n\n input: foo__bar__ZAZ\n output: FOO__bar__ZAZ\n\n Arguments:\n key {str} -- A string key that may contain dunders `__`\n\n Returns:\n The key as upper case but keeping the nested elements.\n \"\"\"\n if \"__\" in key:\n parts = key.split(\"__\")\n return \"__\".join([parts[0].upper()] + parts[1:])\n return key.upper()\n", "path": "dynaconf/utils/__init__.py"}]}
| 3,504 | 130 |
gh_patches_debug_18461
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-modules-extras-3381
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Amazon ec2_elb_facts module fails when no names passed in
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- ec2_elb_facts
##### ANSIBLE VERSION
```
ansible --version
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
None
##### OS / ENVIRONMENT
- Amazon Linux, stock install
##### SUMMARY
With the release of this module for Ansible 2.2, I am no longer able to get all load balancer information. The module throws a TypeError
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
```
# Example playbook
- name: Gather elb facts
action:
module: ec2_elb_facts
region: "{{ ec2_region }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
register: elb_facts
tags:
- deploy
```
##### EXPECTED RESULTS
- Not a failure
##### ACTUAL RESULTS
```
# Job output
TASK [deploy_elb_deregister : Gather elb facts] ********************************
fatal: [10.90.30.119]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 10.90.30.119 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_49PnSI/ansible_module_ec2_elb_facts.py\", line 245, in <module>\r\n main()\r\n File \"/tmp/ansible_49PnSI/ansible_module_ec2_elb_facts.py\", line 237, in main\r\n elbs=elb_information.list_elbs())\r\n File \"/tmp/ansible_49PnSI/ansible_module_ec2_elb_facts.py\", line 209, in list_elbs\r\n if existing_lb.name in self.names:\r\nTypeError: argument of type 'NoneType' is not iterable\r\n", "msg": "MODULE FAILURE"}
```
I will have a pull request today.
</issue>
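(Editorial aside; the merged fix is not shown in this excerpt, so the snippet below is only a sketch of one possible guard, not the actual patch.) The traceback boils down to a membership test against `self.names` while it is `None`; treating a missing `names` option as an empty list keeps the lookup well-defined and returns every ELB. The function name echoes `list_elbs` from the traceback, but this standalone signature is hypothetical:

```
# Hypothetical standalone illustration -- not necessarily the change that was merged.
def list_elbs(all_elb_names, requested_names=None):
    requested_names = requested_names or []   # avoid `x in None`
    if not requested_names:
        return list(all_elb_names)            # no filter given: return everything
    return [name for name in all_elb_names if name in requested_names]

print(list_elbs(["frontend-prod-elb", "backend-prod-elb"]))
# -> ['frontend-prod-elb', 'backend-prod-elb']
```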
<code>
[start of cloud/amazon/ec2_elb_facts.py]
1 #!/usr/bin/python
2 #
3 # This is a free software: you can redistribute it and/or modify
4 # it under the terms of the GNU General Public License as published by
5 # the Free Software Foundation, either version 3 of the License, or
6 # (at your option) any later version.
7 #
8 # This Ansible library is distributed in the hope that it will be useful,
9 # but WITHOUT ANY WARRANTY; without even the implied warranty of
10 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11 # GNU General Public License for more details.
12 #
13 # You should have received a copy of the GNU General Public License
14 # along with this library. If not, see <http://www.gnu.org/licenses/>.
15
16 DOCUMENTATION = '''
17 ---
18 module: ec2_elb_facts
19 short_description: Gather facts about EC2 Elastic Load Balancers in AWS
20 description:
21 - Gather facts about EC2 Elastic Load Balancers in AWS
22 version_added: "2.0"
23 author:
24 - "Michael Schultz (github.com/mjschultz)"
25 - "Fernando Jose Pando (@nand0p)"
26 options:
27 names:
28 description:
29 - List of ELB names to gather facts about. Pass this option to gather facts about a set of ELBs, otherwise, all ELBs are returned.
30 required: false
31 default: null
32 aliases: ['elb_ids', 'ec2_elbs']
33 extends_documentation_fragment:
34 - aws
35 - ec2
36 '''
37
38 EXAMPLES = '''
39 # Note: These examples do not set authentication details, see the AWS Guide for details.
40 # Output format tries to match ec2_elb_lb module input parameters
41
42 # Gather facts about all ELBs
43 - action:
44 module: ec2_elb_facts
45 register: elb_facts
46
47 - action:
48 module: debug
49 msg: "{{ item.dns_name }}"
50 with_items: elb_facts.elbs
51
52 # Gather facts about a particular ELB
53 - action:
54 module: ec2_elb_facts
55 names: frontend-prod-elb
56 register: elb_facts
57
58 - action:
59 module: debug
60 msg: "{{ elb_facts.elbs.0.dns_name }}"
61
62 # Gather facts about a set of ELBs
63 - action:
64 module: ec2_elb_facts
65 names:
66 - frontend-prod-elb
67 - backend-prod-elb
68 register: elb_facts
69
70 - action:
71 module: debug
72 msg: "{{ item.dns_name }}"
73 with_items: elb_facts.elbs
74
75 '''
76
77 try:
78 import boto.ec2.elb
79 from boto.ec2.tag import Tag
80 from boto.exception import BotoServerError
81 HAS_BOTO = True
82 except ImportError:
83 HAS_BOTO = False
84
85 class ElbInformation(object):
86 """ Handles ELB information """
87
88 def __init__(self,
89 module,
90 names,
91 region,
92 **aws_connect_params):
93
94 self.module = module
95 self.names = names
96 self.region = region
97 self.aws_connect_params = aws_connect_params
98 self.connection = self._get_elb_connection()
99
100 def _get_tags(self, elbname):
101 params = {'LoadBalancerNames.member.1': elbname}
102 try:
103 elb_tags = self.connection.get_list('DescribeTags', params, [('member', Tag)])
104 return dict((tag.Key, tag.Value) for tag in elb_tags if hasattr(tag, 'Key'))
105 except:
106 return {}
107
108 def _get_elb_connection(self):
109 try:
110 return connect_to_aws(boto.ec2.elb, self.region, **self.aws_connect_params)
111 except BotoServerError as err:
112 self.module.fail_json(msg=err.message)
113
114 def _get_elb_listeners(self, listeners):
115 listener_list = []
116
117 for listener in listeners:
118 listener_dict = {
119 'load_balancer_port': listener[0],
120 'instance_port': listener[1],
121 'protocol': listener[2],
122 }
123
124 try:
125 ssl_certificate_id = listener[4]
126 except IndexError:
127 pass
128 else:
129 if ssl_certificate_id:
130 listener_dict['ssl_certificate_id'] = ssl_certificate_id
131
132 listener_list.append(listener_dict)
133
134 return listener_list
135
136 def _get_health_check(self, health_check):
137 protocol, port_path = health_check.target.split(':')
138 try:
139 port, path = port_path.split('/', 1)
140 path = '/{}'.format(path)
141 except ValueError:
142 port = port_path
143 path = None
144
145 health_check_dict = {
146 'ping_protocol': protocol.lower(),
147 'ping_port': int(port),
148 'response_timeout': health_check.timeout,
149 'interval': health_check.interval,
150 'unhealthy_threshold': health_check.unhealthy_threshold,
151 'healthy_threshold': health_check.healthy_threshold,
152 }
153
154 if path:
155 health_check_dict['ping_path'] = path
156 return health_check_dict
157
158 def _get_elb_info(self, elb):
159 elb_info = {
160 'name': elb.name,
161 'zones': elb.availability_zones,
162 'dns_name': elb.dns_name,
163 'canonical_hosted_zone_name': elb.canonical_hosted_zone_name,
164 'canonical_hosted_zone_name_id': elb.canonical_hosted_zone_name_id,
165 'hosted_zone_name': elb.canonical_hosted_zone_name,
166 'hosted_zone_id': elb.canonical_hosted_zone_name_id,
167 'instances': [instance.id for instance in elb.instances],
168 'listeners': self._get_elb_listeners(elb.listeners),
169 'scheme': elb.scheme,
170 'security_groups': elb.security_groups,
171 'health_check': self._get_health_check(elb.health_check),
172 'subnets': elb.subnets,
173 'instances_inservice': [],
174 'instances_inservice_count': 0,
175 'instances_outofservice': [],
176 'instances_outofservice_count': 0,
177 'instances_inservice_percent': 0.0,
178 'tags': self._get_tags(elb.name)
179 }
180
181 if elb.vpc_id:
182 elb_info['vpc_id'] = elb.vpc_id
183
184 if elb.instances:
185 try:
186 instance_health = self.connection.describe_instance_health(elb.name)
187 except BotoServerError as err:
188 self.module.fail_json(msg=err.message)
189 elb_info['instances_inservice'] = [inst.instance_id for inst in instance_health if inst.state == 'InService']
190 elb_info['instances_inservice_count'] = len(elb_info['instances_inservice'])
191 elb_info['instances_outofservice'] = [inst.instance_id for inst in instance_health if inst.state == 'OutOfService']
192 elb_info['instances_outofservice_count'] = len(elb_info['instances_outofservice'])
193 elb_info['instances_inservice_percent'] = float(elb_info['instances_inservice_count'])/(
194 float(elb_info['instances_inservice_count']) +
195 float(elb_info['instances_outofservice_count']))*100
196 return elb_info
197
198
199 def list_elbs(self):
200 elb_array = []
201
202 try:
203 all_elbs = self.connection.get_all_load_balancers()
204 except BotoServerError as err:
205 self.module.fail_json(msg = "%s: %s" % (err.error_code, err.error_message))
206
207 if all_elbs:
208 for existing_lb in all_elbs:
209 if existing_lb.name in self.names:
210 elb_array.append(self._get_elb_info(existing_lb))
211
212 return elb_array
213
214 def main():
215 argument_spec = ec2_argument_spec()
216 argument_spec.update(dict(
217 names={'default': None, 'type': 'list'}
218 )
219 )
220 module = AnsibleModule(argument_spec=argument_spec)
221
222 if not HAS_BOTO:
223 module.fail_json(msg='boto required for this module')
224
225 region, ec2_url, aws_connect_params = get_aws_connection_info(module)
226
227 if not region:
228 module.fail_json(msg="region must be specified")
229
230 names = module.params['names']
231 elb_information = ElbInformation(module,
232 names,
233 region,
234 **aws_connect_params)
235
236 ec2_facts_result = dict(changed=False,
237 elbs=elb_information.list_elbs())
238
239 module.exit_json(**ec2_facts_result)
240
241 from ansible.module_utils.basic import *
242 from ansible.module_utils.ec2 import *
243
244 if __name__ == '__main__':
245 main()
246
[end of cloud/amazon/ec2_elb_facts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cloud/amazon/ec2_elb_facts.py b/cloud/amazon/ec2_elb_facts.py
--- a/cloud/amazon/ec2_elb_facts.py
+++ b/cloud/amazon/ec2_elb_facts.py
@@ -205,16 +205,19 @@
self.module.fail_json(msg = "%s: %s" % (err.error_code, err.error_message))
if all_elbs:
- for existing_lb in all_elbs:
- if existing_lb.name in self.names:
- elb_array.append(self._get_elb_info(existing_lb))
-
- return elb_array
+ if self.names:
+ for existing_lb in all_elbs:
+ if existing_lb.name in self.names:
+ elb_array.append(existing_lb)
+ else:
+ elb_array = all_elbs
+
+ return list(map(self._get_elb_info, elb_array))
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
- names={'default': None, 'type': 'list'}
+ names={'default': [], 'type': 'list'}
)
)
module = AnsibleModule(argument_spec=argument_spec)
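
For reference, the behavioural change in this patch can be sketched outside the Ansible/boto context. The `FakeELB` class and `filter_elbs` helper below are made up for illustration and are not part of the module; they only mirror the filtering logic the diff introduces.

```python
# Standalone sketch of the patched filtering logic: an empty default list means
# "no filter, return every ELB", whereas the old default of None made the
# membership test raise "TypeError: argument of type 'NoneType' is not iterable".

def filter_elbs(all_elbs, names):
    if names:  # a non-empty user-supplied list filters by name
        return [elb for elb in all_elbs if elb.name in names]
    return list(all_elbs)  # names defaulted to [] -> describe everything


class FakeELB(object):
    def __init__(self, name):
        self.name = name


elbs = [FakeELB("frontend-prod-elb"), FakeELB("backend-prod-elb")]

assert [e.name for e in filter_elbs(elbs, [])] == ["frontend-prod-elb",
                                                   "backend-prod-elb"]
assert [e.name for e in filter_elbs(elbs, ["frontend-prod-elb"])] == ["frontend-prod-elb"]
```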
|
{"golden_diff": "diff --git a/cloud/amazon/ec2_elb_facts.py b/cloud/amazon/ec2_elb_facts.py\n--- a/cloud/amazon/ec2_elb_facts.py\n+++ b/cloud/amazon/ec2_elb_facts.py\n@@ -205,16 +205,19 @@\n self.module.fail_json(msg = \"%s: %s\" % (err.error_code, err.error_message))\n \n if all_elbs:\n- for existing_lb in all_elbs:\n- if existing_lb.name in self.names:\n- elb_array.append(self._get_elb_info(existing_lb))\n-\n- return elb_array\n+ if self.names:\n+ for existing_lb in all_elbs:\n+ if existing_lb.name in self.names:\n+ elb_array.append(existing_lb)\n+ else:\n+ elb_array = all_elbs\n+ \n+ return list(map(self._get_elb_info, elb_array))\n \n def main():\n argument_spec = ec2_argument_spec()\n argument_spec.update(dict(\n- names={'default': None, 'type': 'list'}\n+ names={'default': [], 'type': 'list'}\n )\n )\n module = AnsibleModule(argument_spec=argument_spec)\n", "issue": "Amazon ec2_elb_facts module fails when no names passed in\n##### ISSUE TYPE\r\n - Bug Report\r\n \r\n##### COMPONENT NAME\r\n- ec2_elb_facts\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```\r\nansible --version\r\nansible 2.2.0.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n```\r\n\r\n##### CONFIGURATION\r\nNone\r\n\r\n##### OS / ENVIRONMENT\r\n- Amazon Linux, stock install\r\n\r\n##### SUMMARY\r\nWith the release of this module for Ansible 2.2, I am no longer able to get all load balancer information. The module throws a TypeError\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```\r\n# Example playbook\r\n- name: Gather elb facts\r\n action: \r\n module: ec2_elb_facts\r\n region: \"{{ ec2_region }}\"\r\n aws_access_key: \"{{ aws_access_key }}\"\r\n aws_secret_key: \"{{ aws_secret_key }}\"\r\n register: elb_facts\r\n tags:\r\n - deploy\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n- Not a failure\r\n\r\n##### ACTUAL RESULTS\r\n```\r\n# Job output\r\nTASK [deploy_elb_deregister : Gather elb facts] ********************************\r\nfatal: [10.90.30.119]: FAILED! => {\"changed\": false, \"failed\": true, \"module_stderr\": \"Shared connection to 10.90.30.119 closed.\\r\\n\", \"module_stdout\": \"Traceback (most recent call last):\\r\\n File \\\"/tmp/ansible_49PnSI/ansible_module_ec2_elb_facts.py\\\", line 245, in <module>\\r\\n main()\\r\\n File \\\"/tmp/ansible_49PnSI/ansible_module_ec2_elb_facts.py\\\", line 237, in main\\r\\n elbs=elb_information.list_elbs())\\r\\n File \\\"/tmp/ansible_49PnSI/ansible_module_ec2_elb_facts.py\\\", line 209, in list_elbs\\r\\n if existing_lb.name in self.names:\\r\\nTypeError: argument of type 'NoneType' is not iterable\\r\\n\", \"msg\": \"MODULE FAILURE\"}\r\n```\r\n\r\nI will have a pull request today.\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# This is a free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This Ansible library is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this library. 
If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: ec2_elb_facts\nshort_description: Gather facts about EC2 Elastic Load Balancers in AWS\ndescription:\n - Gather facts about EC2 Elastic Load Balancers in AWS\nversion_added: \"2.0\"\nauthor:\n - \"Michael Schultz (github.com/mjschultz)\"\n - \"Fernando Jose Pando (@nand0p)\"\noptions:\n names:\n description:\n - List of ELB names to gather facts about. Pass this option to gather facts about a set of ELBs, otherwise, all ELBs are returned.\n required: false\n default: null\n aliases: ['elb_ids', 'ec2_elbs']\nextends_documentation_fragment:\n - aws\n - ec2\n'''\n\nEXAMPLES = '''\n# Note: These examples do not set authentication details, see the AWS Guide for details.\n# Output format tries to match ec2_elb_lb module input parameters\n\n# Gather facts about all ELBs\n- action:\n module: ec2_elb_facts\n register: elb_facts\n\n- action:\n module: debug\n msg: \"{{ item.dns_name }}\"\n with_items: elb_facts.elbs\n\n# Gather facts about a particular ELB\n- action:\n module: ec2_elb_facts\n names: frontend-prod-elb\n register: elb_facts\n\n- action:\n module: debug\n msg: \"{{ elb_facts.elbs.0.dns_name }}\"\n\n# Gather facts about a set of ELBs\n- action:\n module: ec2_elb_facts\n names:\n - frontend-prod-elb\n - backend-prod-elb\n register: elb_facts\n\n- action:\n module: debug\n msg: \"{{ item.dns_name }}\"\n with_items: elb_facts.elbs\n\n'''\n\ntry:\n import boto.ec2.elb\n from boto.ec2.tag import Tag\n from boto.exception import BotoServerError\n HAS_BOTO = True\nexcept ImportError:\n HAS_BOTO = False\n\nclass ElbInformation(object):\n \"\"\" Handles ELB information \"\"\"\n\n def __init__(self,\n module,\n names,\n region,\n **aws_connect_params):\n\n self.module = module\n self.names = names\n self.region = region\n self.aws_connect_params = aws_connect_params\n self.connection = self._get_elb_connection()\n\n def _get_tags(self, elbname):\n params = {'LoadBalancerNames.member.1': elbname}\n try:\n elb_tags = self.connection.get_list('DescribeTags', params, [('member', Tag)])\n return dict((tag.Key, tag.Value) for tag in elb_tags if hasattr(tag, 'Key'))\n except:\n return {}\n\n def _get_elb_connection(self):\n try:\n return connect_to_aws(boto.ec2.elb, self.region, **self.aws_connect_params)\n except BotoServerError as err:\n self.module.fail_json(msg=err.message)\n\n def _get_elb_listeners(self, listeners):\n listener_list = []\n\n for listener in listeners:\n listener_dict = {\n 'load_balancer_port': listener[0],\n 'instance_port': listener[1],\n 'protocol': listener[2],\n }\n\n try:\n ssl_certificate_id = listener[4]\n except IndexError:\n pass\n else:\n if ssl_certificate_id:\n listener_dict['ssl_certificate_id'] = ssl_certificate_id\n\n listener_list.append(listener_dict)\n\n return listener_list\n\n def _get_health_check(self, health_check):\n protocol, port_path = health_check.target.split(':')\n try:\n port, path = port_path.split('/', 1)\n path = '/{}'.format(path)\n except ValueError:\n port = port_path\n path = None\n\n health_check_dict = {\n 'ping_protocol': protocol.lower(),\n 'ping_port': int(port),\n 'response_timeout': health_check.timeout,\n 'interval': health_check.interval,\n 'unhealthy_threshold': health_check.unhealthy_threshold,\n 'healthy_threshold': health_check.healthy_threshold,\n }\n\n if path:\n health_check_dict['ping_path'] = path\n return health_check_dict\n\n def _get_elb_info(self, elb):\n elb_info = {\n 'name': elb.name,\n 'zones': elb.availability_zones,\n 'dns_name': 
elb.dns_name,\n 'canonical_hosted_zone_name': elb.canonical_hosted_zone_name,\n 'canonical_hosted_zone_name_id': elb.canonical_hosted_zone_name_id,\n 'hosted_zone_name': elb.canonical_hosted_zone_name,\n 'hosted_zone_id': elb.canonical_hosted_zone_name_id,\n 'instances': [instance.id for instance in elb.instances],\n 'listeners': self._get_elb_listeners(elb.listeners),\n 'scheme': elb.scheme,\n 'security_groups': elb.security_groups,\n 'health_check': self._get_health_check(elb.health_check),\n 'subnets': elb.subnets,\n 'instances_inservice': [],\n 'instances_inservice_count': 0,\n 'instances_outofservice': [],\n 'instances_outofservice_count': 0,\n 'instances_inservice_percent': 0.0,\n 'tags': self._get_tags(elb.name)\n }\n\n if elb.vpc_id:\n elb_info['vpc_id'] = elb.vpc_id\n\n if elb.instances:\n try:\n instance_health = self.connection.describe_instance_health(elb.name)\n except BotoServerError as err:\n self.module.fail_json(msg=err.message)\n elb_info['instances_inservice'] = [inst.instance_id for inst in instance_health if inst.state == 'InService']\n elb_info['instances_inservice_count'] = len(elb_info['instances_inservice'])\n elb_info['instances_outofservice'] = [inst.instance_id for inst in instance_health if inst.state == 'OutOfService']\n elb_info['instances_outofservice_count'] = len(elb_info['instances_outofservice'])\n elb_info['instances_inservice_percent'] = float(elb_info['instances_inservice_count'])/(\n float(elb_info['instances_inservice_count']) +\n float(elb_info['instances_outofservice_count']))*100\n return elb_info\n\n\n def list_elbs(self):\n elb_array = []\n\n try:\n all_elbs = self.connection.get_all_load_balancers()\n except BotoServerError as err:\n self.module.fail_json(msg = \"%s: %s\" % (err.error_code, err.error_message))\n\n if all_elbs:\n for existing_lb in all_elbs:\n if existing_lb.name in self.names:\n elb_array.append(self._get_elb_info(existing_lb))\n\n return elb_array\n\ndef main():\n argument_spec = ec2_argument_spec()\n argument_spec.update(dict(\n names={'default': None, 'type': 'list'}\n )\n )\n module = AnsibleModule(argument_spec=argument_spec)\n\n if not HAS_BOTO:\n module.fail_json(msg='boto required for this module')\n\n region, ec2_url, aws_connect_params = get_aws_connection_info(module)\n\n if not region:\n module.fail_json(msg=\"region must be specified\")\n\n names = module.params['names']\n elb_information = ElbInformation(module,\n names,\n region,\n **aws_connect_params)\n\n ec2_facts_result = dict(changed=False,\n elbs=elb_information.list_elbs())\n\n module.exit_json(**ec2_facts_result)\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.ec2 import *\n\nif __name__ == '__main__':\n main()\n", "path": "cloud/amazon/ec2_elb_facts.py"}]}
| 3,544 | 270 |
gh_patches_debug_2124
|
rasdani/github-patches
|
git_diff
|
python-gitlab__python-gitlab-1213
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RefreshMixin.refresh() doesn't remove removed attributes
## Description of the problem, including code/CLI snippet
When attributes disappear from an object on the server `RefreshMixin.refresh()` doesn't remove them.
For instance, a job that has artifacts will have an `artifacts_file` attribute. If you call `.delete_artifacts()` on it and then call `.refresh()`, the `artifacts_file` attribute will still be there.
```bash
# get a job with artifacts
job = project.jobs.get(job_id)
# will succeed
assert hasattr(job, "artifacts_file")
# now delete the artifacts from the server
job.delete_artifacts()
# This will fail because the artifacts_file is still there; refresh() didn't remove it
job.refresh()
assert not hasattr(job, "artifacts_file")
# If you get the job again from the project it'll be fine
job = project.jobs.get(job_id)
assert not hasattr(job, "artifacts_file")
```
```python
```
## Expected Behavior
I would expect that the attributes dict on any object should be exactly the same between a freshly retrieved object and an old object after calling `.refresh()`
```python
o.refresh()
# After a refresh this should always be true
o.attributes == o.manager.get(o.id).attributes
```
## Actual Behavior
They're not equal
## Specifications
- python-gitlab version: `v2.4.0`
- API version you are using (v3/v4): `v4`
- Gitlab server version (or gitlab.com): `13.2.3`
</issue>
<code>
[start of gitlab/base.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) 2013-2017 Gauvain Pocentek <[email protected]>
4 #
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU Lesser General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU Lesser General Public License for more details.
14 #
15 # You should have received a copy of the GNU Lesser General Public License
16 # along with this program. If not, see <http://www.gnu.org/licenses/>.
17
18 import importlib
19
20
21 class RESTObject(object):
22 """Represents an object built from server data.
23
24 It holds the attributes know from the server, and the updated attributes in
25 another. This allows smart updates, if the object allows it.
26
27 You can redefine ``_id_attr`` in child classes to specify which attribute
28 must be used as uniq ID. ``None`` means that the object can be updated
29 without ID in the url.
30 """
31
32 _id_attr = "id"
33
34 def __init__(self, manager, attrs):
35 self.__dict__.update(
36 {
37 "manager": manager,
38 "_attrs": attrs,
39 "_updated_attrs": {},
40 "_module": importlib.import_module(self.__module__),
41 }
42 )
43 self.__dict__["_parent_attrs"] = self.manager.parent_attrs
44 self._create_managers()
45
46 def __getstate__(self):
47 state = self.__dict__.copy()
48 module = state.pop("_module")
49 state["_module_name"] = module.__name__
50 return state
51
52 def __setstate__(self, state):
53 module_name = state.pop("_module_name")
54 self.__dict__.update(state)
55 self.__dict__["_module"] = importlib.import_module(module_name)
56
57 def __getattr__(self, name):
58 try:
59 return self.__dict__["_updated_attrs"][name]
60 except KeyError:
61 try:
62 value = self.__dict__["_attrs"][name]
63
64 # If the value is a list, we copy it in the _updated_attrs dict
65 # because we are not able to detect changes made on the object
66 # (append, insert, pop, ...). Without forcing the attr
67 # creation __setattr__ is never called, the list never ends up
68 # in the _updated_attrs dict, and the update() and save()
69 # method never push the new data to the server.
70 # See https://github.com/python-gitlab/python-gitlab/issues/306
71 #
72 # note: _parent_attrs will only store simple values (int) so we
73 # don't make this check in the next except block.
74 if isinstance(value, list):
75 self.__dict__["_updated_attrs"][name] = value[:]
76 return self.__dict__["_updated_attrs"][name]
77
78 return value
79
80 except KeyError:
81 try:
82 return self.__dict__["_parent_attrs"][name]
83 except KeyError:
84 raise AttributeError(name)
85
86 def __setattr__(self, name, value):
87 self.__dict__["_updated_attrs"][name] = value
88
89 def __str__(self):
90 data = self._attrs.copy()
91 data.update(self._updated_attrs)
92 return "%s => %s" % (type(self), data)
93
94 def __repr__(self):
95 if self._id_attr:
96 return "<%s %s:%s>" % (
97 self.__class__.__name__,
98 self._id_attr,
99 self.get_id(),
100 )
101 else:
102 return "<%s>" % self.__class__.__name__
103
104 def __eq__(self, other):
105 if self.get_id() and other.get_id():
106 return self.get_id() == other.get_id()
107 return super(RESTObject, self) == other
108
109 def __ne__(self, other):
110 if self.get_id() and other.get_id():
111 return self.get_id() != other.get_id()
112 return super(RESTObject, self) != other
113
114 def __dir__(self):
115 return super(RESTObject, self).__dir__() + list(self.attributes)
116
117 def __hash__(self):
118 if not self.get_id():
119 return super(RESTObject, self).__hash__()
120 return hash(self.get_id())
121
122 def _create_managers(self):
123 managers = getattr(self, "_managers", None)
124 if managers is None:
125 return
126
127 for attr, cls_name in self._managers:
128 cls = getattr(self._module, cls_name)
129 manager = cls(self.manager.gitlab, parent=self)
130 self.__dict__[attr] = manager
131
132 def _update_attrs(self, new_attrs):
133 self.__dict__["_updated_attrs"] = {}
134 self.__dict__["_attrs"].update(new_attrs)
135
136 def get_id(self):
137 """Returns the id of the resource."""
138 if self._id_attr is None or not hasattr(self, self._id_attr):
139 return None
140 return getattr(self, self._id_attr)
141
142 @property
143 def attributes(self):
144 d = self.__dict__["_updated_attrs"].copy()
145 d.update(self.__dict__["_attrs"])
146 d.update(self.__dict__["_parent_attrs"])
147 return d
148
149
150 class RESTObjectList(object):
151 """Generator object representing a list of RESTObject's.
152
153 This generator uses the Gitlab pagination system to fetch new data when
154 required.
155
156 Note: you should not instanciate such objects, they are returned by calls
157 to RESTManager.list()
158
159 Args:
160 manager: Manager to attach to the created objects
161 obj_cls: Type of objects to create from the json data
162 _list: A GitlabList object
163 """
164
165 def __init__(self, manager, obj_cls, _list):
166 """Creates an objects list from a GitlabList.
167
168 You should not create objects of this type, but use managers list()
169 methods instead.
170
171 Args:
172 manager: the RESTManager to attach to the objects
173 obj_cls: the class of the created objects
174 _list: the GitlabList holding the data
175 """
176 self.manager = manager
177 self._obj_cls = obj_cls
178 self._list = _list
179
180 def __iter__(self):
181 return self
182
183 def __len__(self):
184 return len(self._list)
185
186 def __next__(self):
187 return self.next()
188
189 def next(self):
190 data = self._list.next()
191 return self._obj_cls(self.manager, data)
192
193 @property
194 def current_page(self):
195 """The current page number."""
196 return self._list.current_page
197
198 @property
199 def prev_page(self):
200 """The previous page number.
201
202 If None, the current page is the first.
203 """
204 return self._list.prev_page
205
206 @property
207 def next_page(self):
208 """The next page number.
209
210 If None, the current page is the last.
211 """
212 return self._list.next_page
213
214 @property
215 def per_page(self):
216 """The number of items per page."""
217 return self._list.per_page
218
219 @property
220 def total_pages(self):
221 """The total number of pages."""
222 return self._list.total_pages
223
224 @property
225 def total(self):
226 """The total number of items."""
227 return self._list.total
228
229
230 class RESTManager(object):
231 """Base class for CRUD operations on objects.
232
233 Derived class must define ``_path`` and ``_obj_cls``.
234
235 ``_path``: Base URL path on which requests will be sent (e.g. '/projects')
236 ``_obj_cls``: The class of objects that will be created
237 """
238
239 _path = None
240 _obj_cls = None
241
242 def __init__(self, gl, parent=None):
243 """REST manager constructor.
244
245 Args:
246 gl (Gitlab): :class:`~gitlab.Gitlab` connection to use to make
247 requests.
248 parent: REST object to which the manager is attached.
249 """
250 self.gitlab = gl
251 self._parent = parent # for nested managers
252 self._computed_path = self._compute_path()
253
254 @property
255 def parent_attrs(self):
256 return self._parent_attrs
257
258 def _compute_path(self, path=None):
259 self._parent_attrs = {}
260 if path is None:
261 path = self._path
262 if self._parent is None or not hasattr(self, "_from_parent_attrs"):
263 return path
264
265 data = {
266 self_attr: getattr(self._parent, parent_attr, None)
267 for self_attr, parent_attr in self._from_parent_attrs.items()
268 }
269 self._parent_attrs = data
270 return path % data
271
272 @property
273 def path(self):
274 return self._computed_path
275
[end of gitlab/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gitlab/base.py b/gitlab/base.py
--- a/gitlab/base.py
+++ b/gitlab/base.py
@@ -131,7 +131,7 @@
def _update_attrs(self, new_attrs):
self.__dict__["_updated_attrs"] = {}
- self.__dict__["_attrs"].update(new_attrs)
+ self.__dict__["_attrs"] = new_attrs
def get_id(self):
"""Returns the id of the resource."""
|
{"golden_diff": "diff --git a/gitlab/base.py b/gitlab/base.py\n--- a/gitlab/base.py\n+++ b/gitlab/base.py\n@@ -131,7 +131,7 @@\n \n def _update_attrs(self, new_attrs):\n self.__dict__[\"_updated_attrs\"] = {}\n- self.__dict__[\"_attrs\"].update(new_attrs)\n+ self.__dict__[\"_attrs\"] = new_attrs\n \n def get_id(self):\n \"\"\"Returns the id of the resource.\"\"\"\n", "issue": "RefreshMixin.refresh() doesn't remove removed attributes\n## Description of the problem, including code/CLI snippet\r\n\r\nWhen attributes disappear from an object on the server `RefreshMixin.refresh()` doesn't remove them.\r\n\r\nFor instance if a job that has artifacts will have an `artifacts_file` attribute. If you call `.delete_artifacts()` on it, then call `.refresh()` the `artifacts_file` attribute will still be there.\r\n\r\n```bash\r\n# get a job with artifacts\r\njob = project.jobs.get(job_id)\r\n# will succeed\r\nassert hasattr(job, \"artifacts_file\")\r\n# now delete the artifacts from the server\r\njob.delete_artifacts()\r\n\r\n# This will fail because the artifacts_file is still there; refresh() didn't remove it\r\njob.refresh()\r\nassert not hasattr(job, \"artifacts_file\")\r\n\r\n# If you get the job again from the project it'll be fine\r\njob = project.jobs.get(job_id)\r\nassert not hasattr(job, \"artifacts_file\")\r\n```\r\n\r\n```python\r\n\r\n```\r\n\r\n\r\n## Expected Behavior\r\n\r\nI would expect that the attributes dict on any object should be exactly the same between a freshly retrieved object and an old object after calling `.refresh()`\r\n\r\n```python\r\no.refresh()\r\n# After a refresh this should always be true\r\no.attributes == o.manager.get(o.id).attributes\r\n```\r\n\r\n## Actual Behavior\r\n\r\nThey're not equal\r\n\r\n## Specifications\r\n\r\n - python-gitlab version: `v2.4.0`\r\n - API version you are using (v3/v4): `v4`\r\n - Gitlab server version (or gitlab.com): `13.2.3`\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2013-2017 Gauvain Pocentek <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\nimport importlib\n\n\nclass RESTObject(object):\n \"\"\"Represents an object built from server data.\n\n It holds the attributes know from the server, and the updated attributes in\n another. This allows smart updates, if the object allows it.\n\n You can redefine ``_id_attr`` in child classes to specify which attribute\n must be used as uniq ID. 
``None`` means that the object can be updated\n without ID in the url.\n \"\"\"\n\n _id_attr = \"id\"\n\n def __init__(self, manager, attrs):\n self.__dict__.update(\n {\n \"manager\": manager,\n \"_attrs\": attrs,\n \"_updated_attrs\": {},\n \"_module\": importlib.import_module(self.__module__),\n }\n )\n self.__dict__[\"_parent_attrs\"] = self.manager.parent_attrs\n self._create_managers()\n\n def __getstate__(self):\n state = self.__dict__.copy()\n module = state.pop(\"_module\")\n state[\"_module_name\"] = module.__name__\n return state\n\n def __setstate__(self, state):\n module_name = state.pop(\"_module_name\")\n self.__dict__.update(state)\n self.__dict__[\"_module\"] = importlib.import_module(module_name)\n\n def __getattr__(self, name):\n try:\n return self.__dict__[\"_updated_attrs\"][name]\n except KeyError:\n try:\n value = self.__dict__[\"_attrs\"][name]\n\n # If the value is a list, we copy it in the _updated_attrs dict\n # because we are not able to detect changes made on the object\n # (append, insert, pop, ...). Without forcing the attr\n # creation __setattr__ is never called, the list never ends up\n # in the _updated_attrs dict, and the update() and save()\n # method never push the new data to the server.\n # See https://github.com/python-gitlab/python-gitlab/issues/306\n #\n # note: _parent_attrs will only store simple values (int) so we\n # don't make this check in the next except block.\n if isinstance(value, list):\n self.__dict__[\"_updated_attrs\"][name] = value[:]\n return self.__dict__[\"_updated_attrs\"][name]\n\n return value\n\n except KeyError:\n try:\n return self.__dict__[\"_parent_attrs\"][name]\n except KeyError:\n raise AttributeError(name)\n\n def __setattr__(self, name, value):\n self.__dict__[\"_updated_attrs\"][name] = value\n\n def __str__(self):\n data = self._attrs.copy()\n data.update(self._updated_attrs)\n return \"%s => %s\" % (type(self), data)\n\n def __repr__(self):\n if self._id_attr:\n return \"<%s %s:%s>\" % (\n self.__class__.__name__,\n self._id_attr,\n self.get_id(),\n )\n else:\n return \"<%s>\" % self.__class__.__name__\n\n def __eq__(self, other):\n if self.get_id() and other.get_id():\n return self.get_id() == other.get_id()\n return super(RESTObject, self) == other\n\n def __ne__(self, other):\n if self.get_id() and other.get_id():\n return self.get_id() != other.get_id()\n return super(RESTObject, self) != other\n\n def __dir__(self):\n return super(RESTObject, self).__dir__() + list(self.attributes)\n\n def __hash__(self):\n if not self.get_id():\n return super(RESTObject, self).__hash__()\n return hash(self.get_id())\n\n def _create_managers(self):\n managers = getattr(self, \"_managers\", None)\n if managers is None:\n return\n\n for attr, cls_name in self._managers:\n cls = getattr(self._module, cls_name)\n manager = cls(self.manager.gitlab, parent=self)\n self.__dict__[attr] = manager\n\n def _update_attrs(self, new_attrs):\n self.__dict__[\"_updated_attrs\"] = {}\n self.__dict__[\"_attrs\"].update(new_attrs)\n\n def get_id(self):\n \"\"\"Returns the id of the resource.\"\"\"\n if self._id_attr is None or not hasattr(self, self._id_attr):\n return None\n return getattr(self, self._id_attr)\n\n @property\n def attributes(self):\n d = self.__dict__[\"_updated_attrs\"].copy()\n d.update(self.__dict__[\"_attrs\"])\n d.update(self.__dict__[\"_parent_attrs\"])\n return d\n\n\nclass RESTObjectList(object):\n \"\"\"Generator object representing a list of RESTObject's.\n\n This generator uses the Gitlab pagination system to fetch 
new data when\n required.\n\n Note: you should not instanciate such objects, they are returned by calls\n to RESTManager.list()\n\n Args:\n manager: Manager to attach to the created objects\n obj_cls: Type of objects to create from the json data\n _list: A GitlabList object\n \"\"\"\n\n def __init__(self, manager, obj_cls, _list):\n \"\"\"Creates an objects list from a GitlabList.\n\n You should not create objects of this type, but use managers list()\n methods instead.\n\n Args:\n manager: the RESTManager to attach to the objects\n obj_cls: the class of the created objects\n _list: the GitlabList holding the data\n \"\"\"\n self.manager = manager\n self._obj_cls = obj_cls\n self._list = _list\n\n def __iter__(self):\n return self\n\n def __len__(self):\n return len(self._list)\n\n def __next__(self):\n return self.next()\n\n def next(self):\n data = self._list.next()\n return self._obj_cls(self.manager, data)\n\n @property\n def current_page(self):\n \"\"\"The current page number.\"\"\"\n return self._list.current_page\n\n @property\n def prev_page(self):\n \"\"\"The previous page number.\n\n If None, the current page is the first.\n \"\"\"\n return self._list.prev_page\n\n @property\n def next_page(self):\n \"\"\"The next page number.\n\n If None, the current page is the last.\n \"\"\"\n return self._list.next_page\n\n @property\n def per_page(self):\n \"\"\"The number of items per page.\"\"\"\n return self._list.per_page\n\n @property\n def total_pages(self):\n \"\"\"The total number of pages.\"\"\"\n return self._list.total_pages\n\n @property\n def total(self):\n \"\"\"The total number of items.\"\"\"\n return self._list.total\n\n\nclass RESTManager(object):\n \"\"\"Base class for CRUD operations on objects.\n\n Derived class must define ``_path`` and ``_obj_cls``.\n\n ``_path``: Base URL path on which requests will be sent (e.g. '/projects')\n ``_obj_cls``: The class of objects that will be created\n \"\"\"\n\n _path = None\n _obj_cls = None\n\n def __init__(self, gl, parent=None):\n \"\"\"REST manager constructor.\n\n Args:\n gl (Gitlab): :class:`~gitlab.Gitlab` connection to use to make\n requests.\n parent: REST object to which the manager is attached.\n \"\"\"\n self.gitlab = gl\n self._parent = parent # for nested managers\n self._computed_path = self._compute_path()\n\n @property\n def parent_attrs(self):\n return self._parent_attrs\n\n def _compute_path(self, path=None):\n self._parent_attrs = {}\n if path is None:\n path = self._path\n if self._parent is None or not hasattr(self, \"_from_parent_attrs\"):\n return path\n\n data = {\n self_attr: getattr(self._parent, parent_attr, None)\n for self_attr, parent_attr in self._from_parent_attrs.items()\n }\n self._parent_attrs = data\n return path % data\n\n @property\n def path(self):\n return self._computed_path\n", "path": "gitlab/base.py"}]}
| 3,567 | 105 |
gh_patches_debug_3814
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-573
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problem with --checkout reclone
The message should ask me about recloning `/Users/audreyr/.cookiecutters/cookiecutter-pypackage`, not `/Users/audreyr/.cookiecutters`.
```
$ cookiecutter https://github.com/eliasdorneles/cookiecutter-pypackage/ -c adding-travis-setup-for-pypi-deployment
You've cloned /Users/audreyr/.cookiecutters before. Is it okay to delete and re-clone it? [yes]:
```
</issue>
<code>
[start of cookiecutter/vcs.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.vcs
6 ----------------
7
8 Helper functions for working with version control systems.
9 """
10
11 from __future__ import unicode_literals
12 import logging
13 import os
14 import subprocess
15 import sys
16
17 from whichcraft import which
18
19 from .exceptions import UnknownRepoType, VCSNotInstalled
20 from .prompt import read_user_yes_no
21 from .utils import make_sure_path_exists, rmtree
22
23
24 def prompt_and_delete_repo(repo_dir, no_input=False):
25 """
26 Asks the user whether it's okay to delete the previously-cloned repo.
27 If yes, deletes it. Otherwise, Cookiecutter exits.
28
29 :param repo_dir: Directory of previously-cloned repo.
30 :param no_input: Suppress prompt to delete repo and just delete it.
31 """
32
33 # Suppress prompt if called via API
34 if no_input:
35 ok_to_delete = True
36 else:
37 question = (
38 "You've cloned {0} before. "
39 'Is it okay to delete and re-clone it?'
40 ).format(repo_dir)
41
42 ok_to_delete = read_user_yes_no(question, 'yes')
43
44 if ok_to_delete:
45 rmtree(repo_dir)
46 else:
47 sys.exit()
48
49
50 def identify_repo(repo_url):
51 """
52 Determines if `repo_url` should be treated as a URL to a git or hg repo.
53 Repos can be identified prepeding "hg+" or "git+" to repo URL.
54
55 :param repo_url: Repo URL of unknown type.
56 :returns: ("git", repo_url), ("hg", repo_url), or None.
57 """
58 repo_url_values = repo_url.split('+')
59 if len(repo_url_values) == 2:
60 repo_type = repo_url_values[0]
61 if repo_type in ["git", "hg"]:
62 return repo_type, repo_url_values[1]
63 else:
64 raise UnknownRepoType
65 else:
66 if "git" in repo_url:
67 return "git", repo_url
68 elif "bitbucket" in repo_url:
69 return "hg", repo_url
70 else:
71 raise UnknownRepoType
72
73
74 def is_vcs_installed(repo_type):
75 """
76 Check if the version control system for a repo type is installed.
77
78 :param repo_type:
79 """
80 return bool(which(repo_type))
81
82
83 def clone(repo_url, checkout=None, clone_to_dir=".", no_input=False):
84 """
85 Clone a repo to the current directory.
86
87 :param repo_url: Repo URL of unknown type.
88 :param checkout: The branch, tag or commit ID to checkout after clone.
89 :param clone_to_dir: The directory to clone to.
90 Defaults to the current directory.
91 :param no_input: Suppress all user prompts when calling via API.
92 """
93
94 # Ensure that clone_to_dir exists
95 clone_to_dir = os.path.expanduser(clone_to_dir)
96 make_sure_path_exists(clone_to_dir)
97
98 # identify the repo_type
99 repo_type, repo_url = identify_repo(repo_url)
100
101 # check that the appropriate VCS for the repo_type is installed
102 if not is_vcs_installed(repo_type):
103 msg = "'{0}' is not installed.".format(repo_type)
104 raise VCSNotInstalled(msg)
105
106 tail = os.path.split(repo_url)[1]
107 if repo_type == 'git':
108 repo_dir = os.path.normpath(os.path.join(clone_to_dir,
109 tail.rsplit('.git')[0]))
110 elif repo_type == 'hg':
111 repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail))
112 logging.debug('repo_dir is {0}'.format(repo_dir))
113
114 if os.path.isdir(repo_dir):
115 prompt_and_delete_repo(repo_dir, no_input=no_input)
116
117 if repo_type in ['git', 'hg']:
118 subprocess.check_call([repo_type, 'clone', repo_url], cwd=clone_to_dir)
119 if checkout is not None:
120 subprocess.check_call([repo_type, 'checkout', checkout],
121 cwd=repo_dir)
122
123 return repo_dir
124
[end of cookiecutter/vcs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cookiecutter/vcs.py b/cookiecutter/vcs.py
--- a/cookiecutter/vcs.py
+++ b/cookiecutter/vcs.py
@@ -103,6 +103,7 @@
msg = "'{0}' is not installed.".format(repo_type)
raise VCSNotInstalled(msg)
+ repo_url = repo_url.rstrip('/')
tail = os.path.split(repo_url)[1]
if repo_type == 'git':
repo_dir = os.path.normpath(os.path.join(clone_to_dir,
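
The root cause is visible with nothing more than `os.path.split`: a repo URL ending in `/` has an empty tail, so the computed `repo_dir` collapses to `clone_to_dir` itself (here `~/.cookiecutters`), which is why the prompt named the wrong directory. A minimal check of the behaviour the added `rstrip('/')` restores, using the URL from the issue:

```python
import os

url = "https://github.com/eliasdorneles/cookiecutter-pypackage/"

# Without the fix the tail is empty, so repo_dir ends up being clone_to_dir.
assert os.path.split(url)[1] == ""

# With the patch's rstrip('/') the tail is the repository name again.
assert os.path.split(url.rstrip("/"))[1] == "cookiecutter-pypackage"
```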
|
{"golden_diff": "diff --git a/cookiecutter/vcs.py b/cookiecutter/vcs.py\n--- a/cookiecutter/vcs.py\n+++ b/cookiecutter/vcs.py\n@@ -103,6 +103,7 @@\n msg = \"'{0}' is not installed.\".format(repo_type)\n raise VCSNotInstalled(msg)\n \n+ repo_url = repo_url.rstrip('/')\n tail = os.path.split(repo_url)[1]\n if repo_type == 'git':\n repo_dir = os.path.normpath(os.path.join(clone_to_dir,\n", "issue": "Problem with --checkout reclone\nThe message should ask me about recloning `/Users/audreyr/.cookiecutters/cookiecutter-pypackage`, not `/Users/audreyr/.cookiecutters`.\n\n```\n$ cookiecutter https://github.com/eliasdorneles/cookiecutter-pypackage/ -c adding-travis-setup-for-pypi-deployment\nYou've cloned /Users/audreyr/.cookiecutters before. Is it okay to delete and re-clone it? [yes]: \n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.vcs\n----------------\n\nHelper functions for working with version control systems.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\nimport os\nimport subprocess\nimport sys\n\nfrom whichcraft import which\n\nfrom .exceptions import UnknownRepoType, VCSNotInstalled\nfrom .prompt import read_user_yes_no\nfrom .utils import make_sure_path_exists, rmtree\n\n\ndef prompt_and_delete_repo(repo_dir, no_input=False):\n \"\"\"\n Asks the user whether it's okay to delete the previously-cloned repo.\n If yes, deletes it. Otherwise, Cookiecutter exits.\n\n :param repo_dir: Directory of previously-cloned repo.\n :param no_input: Suppress prompt to delete repo and just delete it.\n \"\"\"\n\n # Suppress prompt if called via API\n if no_input:\n ok_to_delete = True\n else:\n question = (\n \"You've cloned {0} before. \"\n 'Is it okay to delete and re-clone it?'\n ).format(repo_dir)\n\n ok_to_delete = read_user_yes_no(question, 'yes')\n\n if ok_to_delete:\n rmtree(repo_dir)\n else:\n sys.exit()\n\n\ndef identify_repo(repo_url):\n \"\"\"\n Determines if `repo_url` should be treated as a URL to a git or hg repo.\n Repos can be identified prepeding \"hg+\" or \"git+\" to repo URL.\n\n :param repo_url: Repo URL of unknown type.\n :returns: (\"git\", repo_url), (\"hg\", repo_url), or None.\n \"\"\"\n repo_url_values = repo_url.split('+')\n if len(repo_url_values) == 2:\n repo_type = repo_url_values[0]\n if repo_type in [\"git\", \"hg\"]:\n return repo_type, repo_url_values[1]\n else:\n raise UnknownRepoType\n else:\n if \"git\" in repo_url:\n return \"git\", repo_url\n elif \"bitbucket\" in repo_url:\n return \"hg\", repo_url\n else:\n raise UnknownRepoType\n\n\ndef is_vcs_installed(repo_type):\n \"\"\"\n Check if the version control system for a repo type is installed.\n\n :param repo_type:\n \"\"\"\n return bool(which(repo_type))\n\n\ndef clone(repo_url, checkout=None, clone_to_dir=\".\", no_input=False):\n \"\"\"\n Clone a repo to the current directory.\n\n :param repo_url: Repo URL of unknown type.\n :param checkout: The branch, tag or commit ID to checkout after clone.\n :param clone_to_dir: The directory to clone to.\n Defaults to the current directory.\n :param no_input: Suppress all user prompts when calling via API.\n \"\"\"\n\n # Ensure that clone_to_dir exists\n clone_to_dir = os.path.expanduser(clone_to_dir)\n make_sure_path_exists(clone_to_dir)\n\n # identify the repo_type\n repo_type, repo_url = identify_repo(repo_url)\n\n # check that the appropriate VCS for the repo_type is installed\n if not is_vcs_installed(repo_type):\n msg = \"'{0}' is not installed.\".format(repo_type)\n raise 
VCSNotInstalled(msg)\n\n tail = os.path.split(repo_url)[1]\n if repo_type == 'git':\n repo_dir = os.path.normpath(os.path.join(clone_to_dir,\n tail.rsplit('.git')[0]))\n elif repo_type == 'hg':\n repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail))\n logging.debug('repo_dir is {0}'.format(repo_dir))\n\n if os.path.isdir(repo_dir):\n prompt_and_delete_repo(repo_dir, no_input=no_input)\n\n if repo_type in ['git', 'hg']:\n subprocess.check_call([repo_type, 'clone', repo_url], cwd=clone_to_dir)\n if checkout is not None:\n subprocess.check_call([repo_type, 'checkout', checkout],\n cwd=repo_dir)\n\n return repo_dir\n", "path": "cookiecutter/vcs.py"}]}
| 1,789 | 121 |
gh_patches_debug_22116
|
rasdani/github-patches
|
git_diff
|
pyload__pyload-1381
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
uplea plugin (still) broken
Hi again,
sorry, but in spite of #1369 and #1375, uplea is still not working; now it's back to downloading the HTML download page...
24 26.04.2015 23:29:20 INFO Download finished: *****
23 26.04.2015 23:29:02 INFO Download starts: ****
The resulting file has the correct name but is 14KB big; the expected size is around 350MB
</issue>
<code>
[start of module/plugins/hoster/UpleaCom.py]
1 # -*- coding: utf-8 -*-
2
3 import re
4
5 from urlparse import urljoin
6
7 from module.plugins.internal.XFSHoster import XFSHoster, create_getInfo
8
9
10 class UpleaCom(XFSHoster):
11 __name__ = "UpleaCom"
12 __type__ = "hoster"
13 __version__ = "0.08"
14
15 __pattern__ = r'https?://(?:www\.)?uplea\.com/dl/\w{15}'
16
17 __description__ = """Uplea.com hoster plugin"""
18 __license__ = "GPLv3"
19 __authors__ = [("Redleon", None),
20 ("GammaC0de", None)]
21
22
23 NAME_PATTERN = r'class="agmd size18">(?P<N>.+?)<'
24 SIZE_PATTERN = r'size14">(?P<S>[\d.,]+) (?P<U>[\w^_]+?)</span>'
25 SIZE_REPLACEMENTS = [('Ko','KB'), ('Mo','MB'), ('Go','GB')]
26
27 OFFLINE_PATTERN = r'>You followed an invalid or expired link'
28 PREMIUM_PATTERN = r'You need to have a Premium subscription to download this file'
29
30 LINK_PATTERN = r'"(https?://\w+\.uplea\.com/anonym/.*?)"'
31 HOSTER_DOMAIN = "uplea.com"
32
33 WAIT_PATTERN = r'timeText: ?([\d.]+),'
34 STEP_PATTERN = r'<a href="(/step/.+)">'
35
36
37 def setup(self):
38 self.multiDL = False
39 self.chunkLimit = 1
40 self.resumeDownload = True
41
42
43 def handleFree(self, pyfile):
44 m = re.search(self.STEP_PATTERN, self.html)
45 if m is None:
46 self.error(_("STEP_PATTERN not found"))
47
48 self.html = self.load(urljoin("http://uplea.com/", m.group(1)))
49
50 m = re.search(self.WAIT_PATTERN, self.html)
51 if m:
52 self.logDebug(_("Waiting %s seconds") % m.group(1))
53 self.wait(m.group(1), True)
54 self.retry()
55
56 m = re.search(self.PREMIUM_PATTERN, self.html)
57 if m:
58 self.error(_("This URL requires a premium account"))
59
60 m = re.search(self.LINK_PATTERN, self.html)
61 if m is None:
62 self.error(_("LINK_PATTERN not found"))
63
64 self.link = m.group(1)
65 self.wait(15)
66
67
68 getInfo = create_getInfo(UpleaCom)
69
[end of module/plugins/hoster/UpleaCom.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/module/plugins/hoster/UpleaCom.py b/module/plugins/hoster/UpleaCom.py
--- a/module/plugins/hoster/UpleaCom.py
+++ b/module/plugins/hoster/UpleaCom.py
@@ -10,7 +10,7 @@
class UpleaCom(XFSHoster):
__name__ = "UpleaCom"
__type__ = "hoster"
- __version__ = "0.08"
+ __version__ = "0.10"
__pattern__ = r'https?://(?:www\.)?uplea\.com/dl/\w{15}'
@@ -20,9 +20,11 @@
("GammaC0de", None)]
- NAME_PATTERN = r'class="agmd size18">(?P<N>.+?)<'
- SIZE_PATTERN = r'size14">(?P<S>[\d.,]+) (?P<U>[\w^_]+?)</span>'
- SIZE_REPLACEMENTS = [('Ko','KB'), ('Mo','MB'), ('Go','GB')]
+ DISPOSITION = False #@TODO: Remove in 0.4.10
+
+ NAME_PATTERN = r'<span class="gold-text">(?P<N>.+?)</span>'
+ SIZE_PATTERN = r'<span class="label label-info agmd">(?P<S>[\d.,]+) (?P<U>[\w^_]+?)</span>'
+ SIZE_REPLACEMENTS = [('ko','KB'), ('mo','MB'), ('go','GB'), ('Ko','KB'), ('Mo','MB'), ('Go','GB')]
OFFLINE_PATTERN = r'>You followed an invalid or expired link'
PREMIUM_PATTERN = r'You need to have a Premium subscription to download this file'
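
A quick sanity check of the updated patterns against a hand-written HTML fragment; the fragment (and the file name in it) is only shaped the way the new regexes expect and is not taken from uplea.com, whose real markup may differ.

```python
import re

NAME_PATTERN = r'<span class="gold-text">(?P<N>.+?)</span>'
SIZE_PATTERN = r'<span class="label label-info agmd">(?P<S>[\d.,]+) (?P<U>[\w^_]+?)</span>'

html = ('<span class="gold-text">some-file.iso</span>'
        '<span class="label label-info agmd">350.2 Mo</span>')

assert re.search(NAME_PATTERN, html).group("N") == "some-file.iso"
m = re.search(SIZE_PATTERN, html)
assert (m.group("S"), m.group("U")) == ("350.2", "Mo")  # 'Mo' is later mapped to 'MB'
```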
|
{"golden_diff": "diff --git a/module/plugins/hoster/UpleaCom.py b/module/plugins/hoster/UpleaCom.py\n--- a/module/plugins/hoster/UpleaCom.py\n+++ b/module/plugins/hoster/UpleaCom.py\n@@ -10,7 +10,7 @@\n class UpleaCom(XFSHoster):\n __name__ = \"UpleaCom\"\n __type__ = \"hoster\"\n- __version__ = \"0.08\"\n+ __version__ = \"0.10\"\n \n __pattern__ = r'https?://(?:www\\.)?uplea\\.com/dl/\\w{15}'\n \n@@ -20,9 +20,11 @@\n (\"GammaC0de\", None)]\n \n \n- NAME_PATTERN = r'class=\"agmd size18\">(?P<N>.+?)<'\n- SIZE_PATTERN = r'size14\">(?P<S>[\\d.,]+) (?P<U>[\\w^_]+?)</span>'\n- SIZE_REPLACEMENTS = [('Ko','KB'), ('Mo','MB'), ('Go','GB')]\n+ DISPOSITION = False #@TODO: Remove in 0.4.10\n+\n+ NAME_PATTERN = r'<span class=\"gold-text\">(?P<N>.+?)</span>'\n+ SIZE_PATTERN = r'<span class=\"label label-info agmd\">(?P<S>[\\d.,]+) (?P<U>[\\w^_]+?)</span>'\n+ SIZE_REPLACEMENTS = [('ko','KB'), ('mo','MB'), ('go','GB'), ('Ko','KB'), ('Mo','MB'), ('Go','GB')]\n \n OFFLINE_PATTERN = r'>You followed an invalid or expired link'\n PREMIUM_PATTERN = r'You need to have a Premium subscription to download this file'\n", "issue": "uplea plugin (still) broken\nHi again,\n\nsorry but inspite of #1369 and #1375, uplea is still not working; now it's back with downloading the HTML download page...\n24 26.04.2015 23:29:20 INFO Download finished: *****\n23 26.04.2015 23:29:02 INFO Download starts: ****\n\nThe resulting file has correct name but is 14KB big; expected size if around 350MB\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\n\nfrom urlparse import urljoin\n\nfrom module.plugins.internal.XFSHoster import XFSHoster, create_getInfo\n\n\nclass UpleaCom(XFSHoster):\n __name__ = \"UpleaCom\"\n __type__ = \"hoster\"\n __version__ = \"0.08\"\n\n __pattern__ = r'https?://(?:www\\.)?uplea\\.com/dl/\\w{15}'\n\n __description__ = \"\"\"Uplea.com hoster plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Redleon\", None),\n (\"GammaC0de\", None)]\n\n\n NAME_PATTERN = r'class=\"agmd size18\">(?P<N>.+?)<'\n SIZE_PATTERN = r'size14\">(?P<S>[\\d.,]+) (?P<U>[\\w^_]+?)</span>'\n SIZE_REPLACEMENTS = [('Ko','KB'), ('Mo','MB'), ('Go','GB')]\n\n OFFLINE_PATTERN = r'>You followed an invalid or expired link'\n PREMIUM_PATTERN = r'You need to have a Premium subscription to download this file'\n\n LINK_PATTERN = r'\"(https?://\\w+\\.uplea\\.com/anonym/.*?)\"'\n HOSTER_DOMAIN = \"uplea.com\"\n\n WAIT_PATTERN = r'timeText: ?([\\d.]+),'\n STEP_PATTERN = r'<a href=\"(/step/.+)\">'\n\n\n def setup(self):\n self.multiDL = False\n self.chunkLimit = 1\n self.resumeDownload = True\n\n\n def handleFree(self, pyfile):\n m = re.search(self.STEP_PATTERN, self.html)\n if m is None:\n self.error(_(\"STEP_PATTERN not found\"))\n\n self.html = self.load(urljoin(\"http://uplea.com/\", m.group(1)))\n\n m = re.search(self.WAIT_PATTERN, self.html)\n if m:\n self.logDebug(_(\"Waiting %s seconds\") % m.group(1))\n self.wait(m.group(1), True)\n self.retry()\n\n m = re.search(self.PREMIUM_PATTERN, self.html)\n if m:\n self.error(_(\"This URL requires a premium account\"))\n\n m = re.search(self.LINK_PATTERN, self.html)\n if m is None:\n self.error(_(\"LINK_PATTERN not found\"))\n\n self.link = m.group(1)\n self.wait(15)\n\n\ngetInfo = create_getInfo(UpleaCom)\n", "path": "module/plugins/hoster/UpleaCom.py"}]}
| 1,366 | 402 |
gh_patches_debug_9287
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-1454
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
E0000 found unknown escape character '/'
*cfn-lint version: `0.29.2`*
*Description of issue.*
Using a regex pattern in a JSON cfn template fails linting with `E0000 found unknown escape character '/'`
This appears to be a duplicate of #727 but the fix applied as part of that PR doesn't seem to be working in this case.
The JSON snippet that's causing issues is:
```
"AppIdRegexPattern": {
"Type": "AWS::WAFv2::RegexPatternSet",
"Properties": {
"Name": "x-mitel-app-regex",
"Description": "Handles validating an appropriate Mitel app identifier is present, and the app is provided",
"Scope": "REGIONAL",
"RegularExpressionList": [
"^app=[a-z-\\.0-9]+\/[0-9.a-z-]+;"
]
}
}
```
Here's the debug output from linting the file
```
$ cfn-lint cloudlink-waf.template --debug
2020-03-31 14:24:27,769 - cfnlint - DEBUG - Looking for CFLINTRC before attempting to load
2020-03-31 14:24:27,770 - cfnlint - DEBUG - Validating User CFNLINTRC
2020-03-31 14:24:27,771 - cfnlint - DEBUG - Validating CFNLINTRC config with given JSONSchema
2020-03-31 14:24:27,772 - cfnlint - DEBUG - Schema used: {'$id': 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/data/CfnLintCli/config/schema.json', '$schema': 'http://json-schema.org/draft-07/schema#', 'description': 'CFNLINTRC configuration schema', 'title': 'CFNLINTRC JSON Schema', 'type': 'object', 'additionalProperties': False, 'properties': {'append_rules': {'description': 'Location of directories to append rules from', 'items': {'type': 'string'}, 'type': 'array'}, 'ignore_checks': {'description': 'List of checks to ignore', 'items': {'type': 'string'}, 'type': 'array'}, 'include_checks': {'description': 'List of checks to include', 'items': {'type': 'string'}, 'type': 'array'}, 'mandatory_checks': {'description': 'List of mandatory checks to enforce', 'items': {'type': 'string'}, 'type': 'array'}, 'override_spec': {'description': 'Path to spec file to override with', 'type': 'string'}, 'regions': {'description': 'Regions to test against', 'items': {'type': 'string'}, 'type': 'array'}, 'configure_rules': {'description': 'Configure rules', 'patternProperties': {'^.*$': {'type': 'object', 'patternProperties': {'^.*$': {'anyOf': [{'type': 'string'}, {'type': 'integer'}, {'type': 'boolean'}]}}}}, 'additionalProperties': False, 'type': 'object'}, 'templates': {'description': 'Templates to lint', 'items': {'type': 'string'}, 'type': 'array'}, 'ignore_templates': {'description': 'Templates to ignore', 'items': {'type': 'string'}, 'type': 'array'}}}
2020-03-31 14:24:27,802 - cfnlint - DEBUG - Config used: {}
2020-03-31 14:24:27,808 - cfnlint - DEBUG - CFNLINTRC looks valid!
2020-03-31 14:24:27,808 - cfnlint - DEBUG - Validating Project CFNLINTRC
2020-03-31 14:24:27,809 - cfnlint - DEBUG - Validating CFNLINTRC config with given JSONSchema
2020-03-31 14:24:27,809 - cfnlint - DEBUG - Schema used: {'$id': 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/data/CfnLintCli/config/schema.json', '$schema': 'http://json-schema.org/draft-07/schema#', 'description': 'CFNLINTRC configuration schema', 'title': 'CFNLINTRC JSON Schema', 'type': 'object', 'additionalProperties': False, 'properties': {'append_rules': {'description': 'Location of directories to append rules from', 'items': {'type': 'string'}, 'type': 'array'}, 'ignore_checks': {'description': 'List of checks to ignore', 'items': {'type': 'string'}, 'type': 'array'}, 'include_checks': {'description': 'List of checks to include', 'items': {'type': 'string'}, 'type': 'array'}, 'mandatory_checks': {'description': 'List of mandatory checks to enforce', 'items': {'type': 'string'}, 'type': 'array'}, 'override_spec': {'description': 'Path to spec file to override with', 'type': 'string'}, 'regions': {'description': 'Regions to test against', 'items': {'type': 'string'}, 'type': 'array'}, 'configure_rules': {'description': 'Configure rules', 'patternProperties': {'^.*$': {'type': 'object', 'patternProperties': {'^.*$': {'anyOf': [{'type': 'string'}, {'type': 'integer'}, {'type': 'boolean'}]}}}}, 'additionalProperties': False, 'type': 'object'}, 'templates': {'description': 'Templates to lint', 'items': {'type': 'string'}, 'type': 'array'}, 'ignore_templates': {'description': 'Templates to ignore', 'items': {'type': 'string'}, 'type': 'array'}}}
2020-03-31 14:24:27,811 - cfnlint - DEBUG - Config used: {}
2020-03-31 14:24:27,818 - cfnlint - DEBUG - CFNLINTRC looks valid!
2020-03-31 14:24:27,838 - cfnlint - DEBUG - User configuration loaded as
2020-03-31 14:24:27,840 - cfnlint - DEBUG - {}
2020-03-31 14:24:27,842 - cfnlint - DEBUG - Project configuration loaded as
2020-03-31 14:24:27,842 - cfnlint - DEBUG - {}
2020-03-31 14:24:27,843 - cfnlint - DEBUG - Merging configurations...
2020-03-31 14:24:27,844 - cfnlint - DEBUG - Begin linting of file: cloudlink-waf.template
2020-03-31 14:24:27,853 - cfnlint - DEBUG - Completed linting of file: cloudlink-waf.template
E0000 found unknown escape character '/'
cloudlink-waf.template:28:41
```
When I convert the template over to YAML it doesn't complain:
```
AppIdRegexPattern:
Type: AWS::WAFv2::RegexPatternSet
Properties:
Name: x-mitel-app-regex
Description: Handles validating an appropriate Mitel app identifier is present,
and the app is provided
Scope: REGIONAL
RegularExpressionList:
- "^app=[a-z-\\.0-9]+/[0-9.a-z-]+;"
```
</issue>
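A quick way to see the JSON-vs-YAML split described above: `\/` is a valid escape in JSON but not in PyYAML's double-quoted scalars, and cfn-lint feeds JSON templates through the YAML loader first. A minimal reproduction sketch (assumes PyYAML is installed; the scalar below is the offending pattern reduced from the template):
```
import json

import yaml
from yaml.scanner import ScannerError

# The regex value exactly as it appears in the JSON template.
raw = '"^app=[a-z-\\\\.0-9]+\\/[0-9.a-z-]+;"'

# JSON treats \/ as an escaped forward slash, so this parses cleanly.
print(json.loads(raw))        # ^app=[a-z-\.0-9]+/[0-9.a-z-]+;

# PyYAML's double-quoted style has no \/ escape, so the scanner bails out
# with exactly the message reported as E0000 above.
try:
    yaml.safe_load(raw)
except ScannerError as err:
    print(err.problem)        # found unknown escape character '/'
```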
<code>
[start of src/cfnlint/decode/__init__.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import sys
6 import logging
7 import six
8 try:
9 from json.decoder import JSONDecodeError
10 except ImportError:
11 JSONDecodeError = ValueError
12 from yaml.parser import ParserError, ScannerError
13 from yaml import YAMLError
14 from cfnlint.decode import cfn_yaml, cfn_json
15 from cfnlint.rules import Match, ParseError
16
17
18 LOGGER = logging.getLogger(__name__)
19
20
21 def decode(filename, ignore_bad_template):
22 """
23 Decode filename into an object
24 """
25 template = None
26 matches = []
27 try:
28 template = cfn_yaml.load(filename)
29 except IOError as e:
30 if e.errno == 2:
31 LOGGER.error('Template file not found: %s', filename)
32 matches.append(create_match_file_error(
33 filename, 'Template file not found: %s' % filename))
34 elif e.errno == 21:
35 LOGGER.error('Template references a directory, not a file: %s',
36 filename)
37 matches.append(create_match_file_error(
38 filename,
39 'Template references a directory, not a file: %s' % filename))
40 elif e.errno == 13:
41 LOGGER.error('Permission denied when accessing template file: %s',
42 filename)
43 matches.append(create_match_file_error(
44 filename,
45 'Permission denied when accessing template file: %s' % filename))
46
47 if matches:
48 return(None, matches)
49 except UnicodeDecodeError as err:
50 LOGGER.error('Cannot read file contents: %s', filename)
51 matches.append(create_match_file_error(
52 filename, 'Cannot read file contents: %s' % filename))
53 except cfn_yaml.CfnParseError as err:
54 err.match.Filename = filename
55 matches = [err.match]
56 except ParserError as err:
57 matches = [create_match_yaml_parser_error(err, filename)]
58 except ScannerError as err:
59 if err.problem in [
60 'found character \'\\t\' that cannot start any token',
61 'found unknown escape character']:
62 try:
63 template = cfn_json.load(filename)
64 except cfn_json.JSONDecodeError as json_err:
65 json_err.match.filename = filename
66 matches = [json_err.match]
67 except JSONDecodeError as json_err:
68 if hasattr(json_err, 'message'):
69 if json_err.message == 'No JSON object could be decoded': # pylint: disable=no-member
70 matches = [create_match_yaml_parser_error(err, filename)]
71 else:
72 matches = [create_match_json_parser_error(json_err, filename)]
73 if hasattr(json_err, 'msg'):
74 if json_err.msg == 'Expecting value': # pylint: disable=no-member
75 matches = [create_match_yaml_parser_error(err, filename)]
76 else:
77 matches = [create_match_json_parser_error(json_err, filename)]
78 except Exception as json_err: # pylint: disable=W0703
79 if ignore_bad_template:
80 LOGGER.info('Template %s is malformed: %s',
81 filename, err.problem)
82 LOGGER.info('Tried to parse %s as JSON but got error: %s',
83 filename, str(json_err))
84 else:
85 LOGGER.error(
86 'Template %s is malformed: %s', filename, err.problem)
87 LOGGER.error('Tried to parse %s as JSON but got error: %s',
88 filename, str(json_err))
89 return (None, [create_match_file_error(
90 filename,
91 'Tried to parse %s as JSON but got error: %s' % (
92 filename, str(json_err)))])
93 else:
94 matches = [create_match_yaml_parser_error(err, filename)]
95 except YAMLError as err:
96 matches = [create_match_file_error(filename, err)]
97
98 if not isinstance(template, dict) and not matches:
99 # Template isn't a dict which means nearly nothing will work
100 matches = [Match(1, 1, 1, 1, filename, ParseError(),
101 message='Template needs to be an object.')]
102 return (template, matches)
103
104
105 def create_match_yaml_parser_error(parser_error, filename):
106 """Create a Match for a parser error"""
107 lineno = parser_error.problem_mark.line + 1
108 colno = parser_error.problem_mark.column + 1
109 msg = parser_error.problem
110 return Match(
111 lineno, colno, lineno, colno + 1, filename,
112 ParseError(), message=msg)
113
114
115 def create_match_file_error(filename, msg):
116 """Create a Match for a parser error"""
117 return Match(
118 linenumber=1, columnnumber=1, linenumberend=1, columnnumberend=2,
119 filename=filename, rule=ParseError(), message=msg)
120
121
122 def create_match_json_parser_error(parser_error, filename):
123 """Create a Match for a parser error"""
124 if sys.version_info[0] == 3:
125 lineno = parser_error.lineno
126 colno = parser_error.colno
127 msg = parser_error.msg
128 elif sys.version_info[0] == 2:
129 lineno = 1
130 colno = 1
131 msg = parser_error.message
132 return Match(
133 lineno, colno, lineno, colno + 1, filename, ParseError(), message=msg)
134
[end of src/cfnlint/decode/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cfnlint/decode/__init__.py b/src/cfnlint/decode/__init__.py
--- a/src/cfnlint/decode/__init__.py
+++ b/src/cfnlint/decode/__init__.py
@@ -58,7 +58,8 @@
except ScannerError as err:
if err.problem in [
'found character \'\\t\' that cannot start any token',
- 'found unknown escape character']:
+ 'found unknown escape character'] or err.problem.startswith(
+ 'found unknown escape character'):
try:
template = cfn_json.load(filename)
except cfn_json.JSONDecodeError as json_err:
|
{"golden_diff": "diff --git a/src/cfnlint/decode/__init__.py b/src/cfnlint/decode/__init__.py\n--- a/src/cfnlint/decode/__init__.py\n+++ b/src/cfnlint/decode/__init__.py\n@@ -58,7 +58,8 @@\n except ScannerError as err:\n if err.problem in [\n 'found character \\'\\\\t\\' that cannot start any token',\n- 'found unknown escape character']:\n+ 'found unknown escape character'] or err.problem.startswith(\n+ 'found unknown escape character'):\n try:\n template = cfn_json.load(filename)\n except cfn_json.JSONDecodeError as json_err:\n", "issue": "E0000 found unknown escape character '/'\n*cfn-lint version: `0.29.2`*\r\n\r\n*Description of issue.*\r\n\r\nUsing a regex pattern in a JSON cfn template fails linting with `E000 found unknown escape character '/'`\r\n\r\nThis appears to be a duplicate of #727 but the fix applied as part of that PR doesn't seem to be working in this case.\r\n\r\nThe JSON snippet that's causing issues is:\r\n```\r\n\"AppIdRegexPattern\": {\r\n \"Type\": \"AWS::WAFv2::RegexPatternSet\",\r\n \"Properties\": {\r\n \"Name\": \"x-mitel-app-regex\",\r\n \"Description\": \"Handles validating an appropriate Mitel app identifier is present, and the app is provided\",\r\n \"Scope\": \"REGIONAL\",\r\n \"RegularExpressionList\": [\r\n \"^app=[a-z-\\\\.0-9]+\\/[0-9.a-z-]+;\"\r\n ]\r\n }\r\n }\r\n```\r\n\r\nHere's the debug output from linting the file\r\n\r\n```\r\n$ cfn-lint cloudlink-waf.template --debug\r\n2020-03-31 14:24:27,769 - cfnlint - DEBUG - Looking for CFLINTRC before attempting to load\r\n2020-03-31 14:24:27,770 - cfnlint - DEBUG - Validating User CFNLINTRC\r\n2020-03-31 14:24:27,771 - cfnlint - DEBUG - Validating CFNLINTRC config with given JSONSchema\r\n2020-03-31 14:24:27,772 - cfnlint - DEBUG - Schema used: {'$id': 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/data/CfnLintCli/config/schema.json', '$schema': 'http://json-schema.org/draft-07/schema#', 'description': 'CFNLINTRC configuration schema', 'title': 'CFNLINTRC JSON Schema', 'type': 'object', 'additionalProperties': False, 'properties': {'append_rules': {'description': 'Location of directories to append rules from', 'items': {'type': 'string'}, 'type': 'array'}, 'ignore_checks': {'description': 'List of checks to ignore', 'items': {'type': 'string'}, 'type': 'array'}, 'include_checks': {'description': 'List of checks to include', 'items': {'type': 'string'}, 'type': 'array'}, 'mandatory_checks': {'description': 'List of mandatory checks to enforce', 'items': {'type': 'string'}, 'type': 'array'}, 'override_spec': {'description': 'Path to spec file to override with', 'type': 'string'}, 'regions': {'description': 'Regions to test against', 'items': {'type': 'string'}, 'type': 'array'}, 'configure_rules': {'description': 'Configure rules', 'patternProperties': {'^.*$': {'type': 'object', 'patternProperties': {'^.*$': {'anyOf': [{'type': 'string'}, {'type': 'integer'}, {'type': 'boolean'}]}}}}, 'additionalProperties': False, 'type': 'object'}, 'templates': {'description': 'Templates to lint', 'items': {'type': 'string'}, 'type': 'array'}, 'ignore_templates': {'description': 'Templates to ignore', 'items': {'type': 'string'}, 'type': 'array'}}}\r\n2020-03-31 14:24:27,802 - cfnlint - DEBUG - Config used: {}\r\n2020-03-31 14:24:27,808 - cfnlint - DEBUG - CFNLINTRC looks valid!\r\n2020-03-31 14:24:27,808 - cfnlint - DEBUG - Validating Project CFNLINTRC\r\n2020-03-31 14:24:27,809 - cfnlint - DEBUG - Validating CFNLINTRC config with given JSONSchema\r\n2020-03-31 14:24:27,809 - cfnlint - 
DEBUG - Schema used: {'$id': 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/data/CfnLintCli/config/schema.json', '$schema': 'http://json-schema.org/draft-07/schema#', 'description': 'CFNLINTRC configuration schema', 'title': 'CFNLINTRC JSON Schema', 'type': 'object', 'additionalProperties': False, 'properties': {'append_rules': {'description': 'Location of directories to append rules from', 'items': {'type': 'string'}, 'type': 'array'}, 'ignore_checks': {'description': 'List of checks to ignore', 'items': {'type': 'string'}, 'type': 'array'}, 'include_checks': {'description': 'List of checks to include', 'items': {'type': 'string'}, 'type': 'array'}, 'mandatory_checks': {'description': 'List of mandatory checks to enforce', 'items': {'type': 'string'}, 'type': 'array'}, 'override_spec': {'description': 'Path to spec file to override with', 'type': 'string'}, 'regions': {'description': 'Regions to test against', 'items': {'type': 'string'}, 'type': 'array'}, 'configure_rules': {'description': 'Configure rules', 'patternProperties': {'^.*$': {'type': 'object', 'patternProperties': {'^.*$': {'anyOf': [{'type': 'string'}, {'type': 'integer'}, {'type': 'boolean'}]}}}}, 'additionalProperties': False, 'type': 'object'}, 'templates': {'description': 'Templates to lint', 'items': {'type': 'string'}, 'type': 'array'}, 'ignore_templates': {'description': 'Templates to ignore', 'items': {'type': 'string'}, 'type': 'array'}}}\r\n2020-03-31 14:24:27,811 - cfnlint - DEBUG - Config used: {}\r\n2020-03-31 14:24:27,818 - cfnlint - DEBUG - CFNLINTRC looks valid!\r\n2020-03-31 14:24:27,838 - cfnlint - DEBUG - User configuration loaded as\r\n2020-03-31 14:24:27,840 - cfnlint - DEBUG - {}\r\n2020-03-31 14:24:27,842 - cfnlint - DEBUG - Project configuration loaded as\r\n2020-03-31 14:24:27,842 - cfnlint - DEBUG - {}\r\n2020-03-31 14:24:27,843 - cfnlint - DEBUG - Merging configurations...\r\n2020-03-31 14:24:27,844 - cfnlint - DEBUG - Begin linting of file: cloudlink-waf.template\r\n2020-03-31 14:24:27,853 - cfnlint - DEBUG - Completed linting of file: cloudlink-waf.template\r\nE0000 found unknown escape character '/'\r\ncloudlink-waf.template:28:41\r\n```\r\n\r\nWhen I convert the template over to YAML it doesn't complain:\r\n```\r\nAppIdRegexPattern:\r\n Type: AWS::WAFv2::RegexPatternSet\r\n Properties:\r\n Name: x-mitel-app-regex\r\n Description: Handles validating an appropriate Mitel app identifier is present,\r\n and the app is provided\r\n Scope: REGIONAL\r\n RegularExpressionList:\r\n - \"^app=[a-z-\\\\.0-9]+/[0-9.a-z-]+;\"\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport sys\nimport logging\nimport six\ntry:\n from json.decoder import JSONDecodeError\nexcept ImportError:\n JSONDecodeError = ValueError\nfrom yaml.parser import ParserError, ScannerError\nfrom yaml import YAMLError\nfrom cfnlint.decode import cfn_yaml, cfn_json\nfrom cfnlint.rules import Match, ParseError\n\n\nLOGGER = logging.getLogger(__name__)\n\n\ndef decode(filename, ignore_bad_template):\n \"\"\"\n Decode filename into an object\n \"\"\"\n template = None\n matches = []\n try:\n template = cfn_yaml.load(filename)\n except IOError as e:\n if e.errno == 2:\n LOGGER.error('Template file not found: %s', filename)\n matches.append(create_match_file_error(\n filename, 'Template file not found: %s' % filename))\n elif e.errno == 21:\n LOGGER.error('Template references a directory, not a file: %s',\n filename)\n matches.append(create_match_file_error(\n filename,\n 'Template references a directory, not a file: %s' % filename))\n elif e.errno == 13:\n LOGGER.error('Permission denied when accessing template file: %s',\n filename)\n matches.append(create_match_file_error(\n filename,\n 'Permission denied when accessing template file: %s' % filename))\n\n if matches:\n return(None, matches)\n except UnicodeDecodeError as err:\n LOGGER.error('Cannot read file contents: %s', filename)\n matches.append(create_match_file_error(\n filename, 'Cannot read file contents: %s' % filename))\n except cfn_yaml.CfnParseError as err:\n err.match.Filename = filename\n matches = [err.match]\n except ParserError as err:\n matches = [create_match_yaml_parser_error(err, filename)]\n except ScannerError as err:\n if err.problem in [\n 'found character \\'\\\\t\\' that cannot start any token',\n 'found unknown escape character']:\n try:\n template = cfn_json.load(filename)\n except cfn_json.JSONDecodeError as json_err:\n json_err.match.filename = filename\n matches = [json_err.match]\n except JSONDecodeError as json_err:\n if hasattr(json_err, 'message'):\n if json_err.message == 'No JSON object could be decoded': # pylint: disable=no-member\n matches = [create_match_yaml_parser_error(err, filename)]\n else:\n matches = [create_match_json_parser_error(json_err, filename)]\n if hasattr(json_err, 'msg'):\n if json_err.msg == 'Expecting value': # pylint: disable=no-member\n matches = [create_match_yaml_parser_error(err, filename)]\n else:\n matches = [create_match_json_parser_error(json_err, filename)]\n except Exception as json_err: # pylint: disable=W0703\n if ignore_bad_template:\n LOGGER.info('Template %s is malformed: %s',\n filename, err.problem)\n LOGGER.info('Tried to parse %s as JSON but got error: %s',\n filename, str(json_err))\n else:\n LOGGER.error(\n 'Template %s is malformed: %s', filename, err.problem)\n LOGGER.error('Tried to parse %s as JSON but got error: %s',\n filename, str(json_err))\n return (None, [create_match_file_error(\n filename,\n 'Tried to parse %s as JSON but got error: %s' % (\n filename, str(json_err)))])\n else:\n matches = [create_match_yaml_parser_error(err, filename)]\n except YAMLError as err:\n matches = [create_match_file_error(filename, err)]\n\n if not isinstance(template, dict) and not matches:\n # Template isn't a dict which means nearly nothing will work\n matches = [Match(1, 1, 1, 1, filename, ParseError(),\n message='Template needs to be an object.')]\n return (template, matches)\n\n\ndef create_match_yaml_parser_error(parser_error, filename):\n \"\"\"Create a Match for a parser error\"\"\"\n lineno = 
parser_error.problem_mark.line + 1\n colno = parser_error.problem_mark.column + 1\n msg = parser_error.problem\n return Match(\n lineno, colno, lineno, colno + 1, filename,\n ParseError(), message=msg)\n\n\ndef create_match_file_error(filename, msg):\n \"\"\"Create a Match for a parser error\"\"\"\n return Match(\n linenumber=1, columnnumber=1, linenumberend=1, columnnumberend=2,\n filename=filename, rule=ParseError(), message=msg)\n\n\ndef create_match_json_parser_error(parser_error, filename):\n \"\"\"Create a Match for a parser error\"\"\"\n if sys.version_info[0] == 3:\n lineno = parser_error.lineno\n colno = parser_error.colno\n msg = parser_error.msg\n elif sys.version_info[0] == 2:\n lineno = 1\n colno = 1\n msg = parser_error.message\n return Match(\n lineno, colno, lineno, colno + 1, filename, ParseError(), message=msg)\n", "path": "src/cfnlint/decode/__init__.py"}]}
| 3,759 | 144 |
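Reduced to its essentials, the bug in `decode()` above is an exact-string comparison against a message that PyYAML parametrises with the offending character, which is why the JSON fallback never triggers; the accepted diff switches to a prefix check. A tiny sketch of the two tests, using the `problem` string from the traceback in the issue:
```
problem = "found unknown escape character '/'"   # err.problem from the scanner

known = ["found character '\\t' that cannot start any token",
         "found unknown escape character"]

# The original membership test misses the parametrised message...
print(problem in known)                                        # False -> E0000 is raised
# ...while the startswith() variant from the patch matches it.
print(problem.startswith("found unknown escape character"))    # True -> retry with cfn_json.load()
```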
gh_patches_debug_42531
|
rasdani/github-patches
|
git_diff
|
plone__Products.CMFPlone-1763
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CSS bundles generation breaks background images relative urls
This is a bug related to PR #1300.
</issue>
<code>
[start of Products/CMFPlone/resources/browser/combine.py]
1 from zExceptions import NotFound
2 from Acquisition import aq_base
3 from datetime import datetime
4 from plone.registry.interfaces import IRegistry
5 from plone.resource.file import FilesystemFile
6 from plone.resource.interfaces import IResourceDirectory
7 from Products.CMFPlone.interfaces import IBundleRegistry
8 from Products.CMFPlone.interfaces.resources import (
9 OVERRIDE_RESOURCE_DIRECTORY_NAME,
10 )
11 from StringIO import StringIO
12 from zope.component import getUtility
13 from zope.component import queryUtility
14
15 PRODUCTION_RESOURCE_DIRECTORY = "production"
16
17
18 def get_production_resource_directory():
19 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
20 if persistent_directory is None:
21 return ''
22 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
23 try:
24 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
25 except NotFound:
26 return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
27 timestamp = production_folder.readFile('timestamp.txt')
28 return "%s/++unique++%s" % (
29 PRODUCTION_RESOURCE_DIRECTORY, timestamp)
30
31
32 def get_resource(context, path):
33 if path.startswith('++plone++'):
34 # ++plone++ resources can be customized, we return their override
35 # value if any
36 overrides = get_override_directory(context)
37 filepath = path[9:]
38 if overrides.isFile(filepath):
39 return overrides.readFile(filepath)
40
41 resource = context.unrestrictedTraverse(path)
42 if isinstance(resource, FilesystemFile):
43 (directory, sep, filename) = path.rpartition('/')
44 return context.unrestrictedTraverse(directory).readFile(filename)
45 else:
46 if hasattr(aq_base(resource), 'GET'):
47 # for FileResource
48 return resource.GET()
49 else:
50 # any BrowserView
51 return resource()
52
53
54 def write_js(context, folder, meta_bundle):
55 registry = getUtility(IRegistry)
56 resources = []
57
58 # default resources
59 if meta_bundle == 'default' and registry.records.get(
60 'plone.resources/jquery.js'
61 ):
62 resources.append(get_resource(context,
63 registry.records['plone.resources/jquery.js'].value))
64 resources.append(get_resource(context,
65 registry.records['plone.resources.requirejs'].value))
66 resources.append(get_resource(context,
67 registry.records['plone.resources.configjs'].value))
68
69 # bundles
70 bundles = registry.collectionOfInterface(
71 IBundleRegistry, prefix="plone.bundles", check=False)
72 for bundle in bundles.values():
73 if bundle.merge_with == meta_bundle and bundle.jscompilation:
74 resources.append(get_resource(context, bundle.jscompilation))
75
76 fi = StringIO()
77 for script in resources:
78 fi.write(script + '\n')
79 folder.writeFile(meta_bundle + ".js", fi)
80
81
82 def write_css(context, folder, meta_bundle):
83 registry = getUtility(IRegistry)
84 resources = []
85
86 bundles = registry.collectionOfInterface(
87 IBundleRegistry, prefix="plone.bundles", check=False)
88 for bundle in bundles.values():
89 if bundle.merge_with == meta_bundle and bundle.csscompilation:
90 resources.append(get_resource(context, bundle.csscompilation))
91
92 fi = StringIO()
93 for script in resources:
94 fi.write(script + '\n')
95 folder.writeFile(meta_bundle + ".css", fi)
96
97
98 def get_override_directory(context):
99 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
100 if persistent_directory is None:
101 return
102 if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
103 persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
104 return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
105
106
107 def combine_bundles(context):
108 container = get_override_directory(context)
109 if PRODUCTION_RESOURCE_DIRECTORY not in container:
110 container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
111 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
112
113 # store timestamp
114 fi = StringIO()
115 fi.write(datetime.now().isoformat())
116 production_folder.writeFile("timestamp.txt", fi)
117
118 # generate new combined bundles
119 write_js(context, production_folder, 'default')
120 write_js(context, production_folder, 'logged-in')
121 write_css(context, production_folder, 'default')
122 write_css(context, production_folder, 'logged-in')
123
[end of Products/CMFPlone/resources/browser/combine.py]
</code>
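The root cause is that `write_css()` above concatenates each bundle's CSS into `production/<meta_bundle>.css` without touching its contents, so any relative `url()` reference now resolves against the production folder instead of the directory the stylesheet originally lived in. A small illustration with made-up site and resource paths (only the directory change matters):
```
from urllib.parse import urljoin

# Hypothetical locations: the source bundle vs. the merged production bundle.
source_css = "https://site.example/++plone++mytheme/css/main.css"
merged_css = "https://site.example/++plone++production/++unique++2016/default.css"

relative = "images/logo.png"   # as written inside the CSS

print(urljoin(source_css, relative))  # .../++plone++mytheme/css/images/logo.png  (exists)
print(urljoin(merged_css, relative))  # .../++plone++production/++unique++2016/images/logo.png  (broken)
```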
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py
--- a/Products/CMFPlone/resources/browser/combine.py
+++ b/Products/CMFPlone/resources/browser/combine.py
@@ -1,18 +1,20 @@
-from zExceptions import NotFound
from Acquisition import aq_base
from datetime import datetime
from plone.registry.interfaces import IRegistry
from plone.resource.file import FilesystemFile
from plone.resource.interfaces import IResourceDirectory
from Products.CMFPlone.interfaces import IBundleRegistry
-from Products.CMFPlone.interfaces.resources import (
- OVERRIDE_RESOURCE_DIRECTORY_NAME,
-)
+from Products.CMFPlone.interfaces.resources import OVERRIDE_RESOURCE_DIRECTORY_NAME # noqa
from StringIO import StringIO
+from zExceptions import NotFound
from zope.component import getUtility
from zope.component import queryUtility
+import logging
+import re
+
PRODUCTION_RESOURCE_DIRECTORY = "production"
+logger = logging.getLogger(__name__)
def get_production_resource_directory():
@@ -24,6 +26,8 @@
production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
except NotFound:
return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
+ if 'timestamp.txt' not in production_folder:
+ return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
timestamp = production_folder.readFile('timestamp.txt')
return "%s/++unique++%s" % (
PRODUCTION_RESOURCE_DIRECTORY, timestamp)
@@ -38,7 +42,12 @@
if overrides.isFile(filepath):
return overrides.readFile(filepath)
- resource = context.unrestrictedTraverse(path)
+ try:
+ resource = context.unrestrictedTraverse(path)
+ except NotFound:
+ logger.warn(u"Could not find resource {0}. You may have to create it first.".format(path)) # noqa
+ return
+
if isinstance(resource, FilesystemFile):
(directory, sep, filename) = path.rpartition('/')
return context.unrestrictedTraverse(directory).readFile(filename)
@@ -71,7 +80,10 @@
IBundleRegistry, prefix="plone.bundles", check=False)
for bundle in bundles.values():
if bundle.merge_with == meta_bundle and bundle.jscompilation:
- resources.append(get_resource(context, bundle.jscompilation))
+ resource = get_resource(context, bundle.jscompilation)
+ if not resource:
+ continue
+ resources.append(resource)
fi = StringIO()
for script in resources:
@@ -87,7 +99,18 @@
IBundleRegistry, prefix="plone.bundles", check=False)
for bundle in bundles.values():
if bundle.merge_with == meta_bundle and bundle.csscompilation:
- resources.append(get_resource(context, bundle.csscompilation))
+ css = get_resource(context, bundle.csscompilation)
+ if not css:
+ continue
+ (path, sep, filename) = bundle.csscompilation.rpartition('/')
+ # Process relative urls:
+ # we prefix with current resource path any url not starting with
+ # '/' or http: or data:
+ css = re.sub(
+ r"""(url\(['"]?(?!['"]?([a-z]+:|\/)))""",
+ r'\1%s/' % path,
+ css)
+ resources.append(css)
fi = StringIO()
for script in resources:
|
{"golden_diff": "diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py\n--- a/Products/CMFPlone/resources/browser/combine.py\n+++ b/Products/CMFPlone/resources/browser/combine.py\n@@ -1,18 +1,20 @@\n-from zExceptions import NotFound\n from Acquisition import aq_base\n from datetime import datetime\n from plone.registry.interfaces import IRegistry\n from plone.resource.file import FilesystemFile\n from plone.resource.interfaces import IResourceDirectory\n from Products.CMFPlone.interfaces import IBundleRegistry\n-from Products.CMFPlone.interfaces.resources import (\n- OVERRIDE_RESOURCE_DIRECTORY_NAME,\n-)\n+from Products.CMFPlone.interfaces.resources import OVERRIDE_RESOURCE_DIRECTORY_NAME # noqa\n from StringIO import StringIO\n+from zExceptions import NotFound\n from zope.component import getUtility\n from zope.component import queryUtility\n \n+import logging\n+import re\n+\n PRODUCTION_RESOURCE_DIRECTORY = \"production\"\n+logger = logging.getLogger(__name__)\n \n \n def get_production_resource_directory():\n@@ -24,6 +26,8 @@\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n+ if 'timestamp.txt' not in production_folder:\n+ return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n@@ -38,7 +42,12 @@\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n \n- resource = context.unrestrictedTraverse(path)\n+ try:\n+ resource = context.unrestrictedTraverse(path)\n+ except NotFound:\n+ logger.warn(u\"Could not find resource {0}. You may have to create it first.\".format(path)) # noqa\n+ return\n+\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n@@ -71,7 +80,10 @@\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.jscompilation:\n- resources.append(get_resource(context, bundle.jscompilation))\n+ resource = get_resource(context, bundle.jscompilation)\n+ if not resource:\n+ continue\n+ resources.append(resource)\n \n fi = StringIO()\n for script in resources:\n@@ -87,7 +99,18 @@\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.csscompilation:\n- resources.append(get_resource(context, bundle.csscompilation))\n+ css = get_resource(context, bundle.csscompilation)\n+ if not css:\n+ continue\n+ (path, sep, filename) = bundle.csscompilation.rpartition('/')\n+ # Process relative urls:\n+ # we prefix with current resource path any url not starting with\n+ # '/' or http: or data:\n+ css = re.sub(\n+ r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n+ r'\\1%s/' % path,\n+ css)\n+ resources.append(css)\n \n fi = StringIO()\n for script in resources:\n", "issue": "CSS bundles generation breaks background images relative urls\nThis is a bug related to PR #1300.\n\n", "before_files": [{"content": "from zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom 
Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.jscompilation:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.csscompilation:\n resources.append(get_resource(context, bundle.csscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n 
production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}]}
| 1,706 | 755 |
gh_patches_debug_28524
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-6936
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow passing fsspec options to WebHDFS to enable Kerberos and https
After #6662 it is no longer possible to use custom hdfscli clients with WebHDFS; as a result, we are no longer able to connect using Kerberos or HTTPS.
fsspec allows passing in options to enable kerberos and https; we simply need to pass them through and expose them in the DVC config file:
https://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/webhdfs.html
This would also be an opportunity to remove the no longer used hdfscli options.
</issue>
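For context, the fsspec `WebHDFS` implementation linked above already accepts these as constructor keyword arguments, so the change is mostly about exposing them in DVC's remote config and passing them through. A minimal sketch of talking to a secured cluster directly with fsspec (host, port and principal are placeholders, and Kerberos support additionally needs the optional `requests-kerberos` package):
```
from fsspec.implementations.webhdfs import WebHDFS

# Hypothetical cluster details -- replace with real values.
fs = WebHDFS(
    host="namenode.example.com",
    port=50470,                                   # secured WebHDFS port on this (made-up) cluster
    use_https=True,                               # https:// instead of http://
    kerberos=True,                                # authenticate via SPNEGO/Kerberos
    kerb_kwargs={"principal": "alice@EXAMPLE.COM"},
)

print(fs.ls("/user/alice"))
```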
<code>
[start of dvc/fs/webhdfs.py]
1 import threading
2
3 from funcy import cached_property, wrap_prop
4
5 from dvc.path_info import CloudURLInfo
6 from dvc.scheme import Schemes
7
8 # pylint:disable=abstract-method
9 from .fsspec_wrapper import CallbackMixin, FSSpecWrapper
10
11
12 class WebHDFSFileSystem(CallbackMixin, FSSpecWrapper):
13 scheme = Schemes.WEBHDFS
14 PATH_CLS = CloudURLInfo
15 REQUIRES = {"fsspec": "fsspec"}
16 PARAM_CHECKSUM = "checksum"
17
18 def _with_bucket(self, path):
19 if isinstance(path, self.PATH_CLS):
20 return f"/{path.path.rstrip('/')}"
21 return path
22
23 @staticmethod
24 def _get_kwargs_from_urls(urlpath):
25 from fsspec.implementations.webhdfs import WebHDFS
26
27 return (
28 WebHDFS._get_kwargs_from_urls( # pylint:disable=protected-access
29 urlpath
30 )
31 )
32
33 def _prepare_credentials(self, **config):
34 if "webhdfs_token" in config:
35 config["token"] = config.pop("webhdfs_token")
36
37 return config
38
39 @wrap_prop(threading.Lock())
40 @cached_property
41 def fs(self):
42 from fsspec.implementations.webhdfs import WebHDFS
43
44 return WebHDFS(**self.fs_args)
45
46 def checksum(self, path_info):
47 path = self._with_bucket(path_info)
48 ukey = self.fs.ukey(path)
49 return ukey["bytes"]
50
[end of dvc/fs/webhdfs.py]
[start of dvc/config_schema.py]
1 import os
2 from urllib.parse import urlparse
3
4 from funcy import walk_values
5 from voluptuous import (
6 All,
7 Any,
8 Coerce,
9 Invalid,
10 Lower,
11 Optional,
12 Range,
13 Schema,
14 )
15
16 Bool = All(
17 Lower,
18 Any("true", "false"),
19 lambda v: v == "true",
20 msg="expected true or false",
21 )
22
23
24 def supported_cache_type(types):
25 """Checks if link type config option consists only of valid values.
26
27 Args:
28 types (list/string): type(s) of links that dvc should try out.
29 """
30 if types is None:
31 return None
32 if isinstance(types, str):
33 types = [typ.strip() for typ in types.split(",")]
34
35 unsupported = set(types) - {"reflink", "hardlink", "symlink", "copy"}
36 if unsupported:
37 raise Invalid(
38 "Unsupported cache type(s): {}".format(", ".join(unsupported))
39 )
40
41 return types
42
43
44 def Choices(*choices):
45 """Checks that value belongs to the specified set of values
46
47 Args:
48 *choices: pass allowed values as arguments, or pass a list or
49 tuple as a single argument
50 """
51 return Any(*choices, msg="expected one of {}".format(", ".join(choices)))
52
53
54 def ByUrl(mapping):
55 schemas = walk_values(Schema, mapping)
56
57 def validate(data):
58 if "url" not in data:
59 raise Invalid("expected 'url'")
60
61 parsed = urlparse(data["url"])
62 # Windows absolute paths should really have scheme == "" (local)
63 if os.name == "nt" and len(parsed.scheme) == 1 and parsed.netloc == "":
64 return schemas[""](data)
65 if parsed.scheme not in schemas:
66 raise Invalid(f"Unsupported URL type {parsed.scheme}://")
67
68 return schemas[parsed.scheme](data)
69
70 return validate
71
72
73 class RelPath(str):
74 pass
75
76
77 REMOTE_COMMON = {
78 "url": str,
79 "checksum_jobs": All(Coerce(int), Range(1)),
80 "jobs": All(Coerce(int), Range(1)),
81 Optional("no_traverse"): Bool, # obsoleted
82 "verify": Bool,
83 }
84 LOCAL_COMMON = {
85 "type": supported_cache_type,
86 Optional("protected", default=False): Bool, # obsoleted
87 "shared": All(Lower, Choices("group")),
88 Optional("slow_link_warning", default=True): Bool,
89 }
90 HTTP_COMMON = {
91 "auth": All(Lower, Choices("basic", "digest", "custom")),
92 "custom_auth_header": str,
93 "user": str,
94 "password": str,
95 "ask_password": Bool,
96 "ssl_verify": Any(Bool, str),
97 "method": str,
98 }
99 WEBDAV_COMMON = {
100 "user": str,
101 "password": str,
102 "ask_password": Bool,
103 "token": str,
104 "cert_path": str,
105 "key_path": str,
106 "timeout": Coerce(int),
107 "ssl_verify": Any(Bool, str),
108 }
109
110 SCHEMA = {
111 "core": {
112 "remote": Lower,
113 "checksum_jobs": All(Coerce(int), Range(1)),
114 Optional("interactive", default=False): Bool,
115 Optional("analytics", default=True): Bool,
116 Optional("hardlink_lock", default=False): Bool,
117 Optional("no_scm", default=False): Bool,
118 Optional("autostage", default=False): Bool,
119 Optional("experiments"): Bool, # obsoleted
120 Optional("check_update", default=True): Bool,
121 "machine": Lower,
122 },
123 "cache": {
124 "local": str,
125 "s3": str,
126 "gs": str,
127 "hdfs": str,
128 "webhdfs": str,
129 "ssh": str,
130 "azure": str,
131 # This is for default local cache
132 "dir": str,
133 **LOCAL_COMMON,
134 },
135 "remote": {
136 str: ByUrl(
137 {
138 "": {**LOCAL_COMMON, **REMOTE_COMMON},
139 "s3": {
140 "region": str,
141 "profile": str,
142 "credentialpath": str,
143 "configpath": str,
144 "endpointurl": str,
145 "access_key_id": str,
146 "secret_access_key": str,
147 "session_token": str,
148 Optional("listobjects", default=False): Bool, # obsoleted
149 Optional("use_ssl", default=True): Bool,
150 "ssl_verify": Any(Bool, str),
151 "sse": str,
152 "sse_kms_key_id": str,
153 "acl": str,
154 "grant_read": str,
155 "grant_read_acp": str,
156 "grant_write_acp": str,
157 "grant_full_control": str,
158 "cache_regions": bool,
159 "read_timeout": Coerce(int),
160 "connect_timeout": Coerce(int),
161 **REMOTE_COMMON,
162 },
163 "gs": {
164 "projectname": str,
165 "credentialpath": str,
166 **REMOTE_COMMON,
167 },
168 "ssh": {
169 "type": supported_cache_type,
170 "port": Coerce(int),
171 "user": str,
172 "password": str,
173 "ask_password": Bool,
174 "keyfile": str,
175 "timeout": Coerce(int),
176 "gss_auth": Bool,
177 "allow_agent": Bool,
178 **REMOTE_COMMON,
179 },
180 "hdfs": {"user": str, "kerb_ticket": str, **REMOTE_COMMON},
181 "webhdfs": {
182 "hdfscli_config": str,
183 "webhdfs_token": str,
184 "user": str,
185 "webhdfs_alias": str,
186 **REMOTE_COMMON,
187 },
188 "azure": {
189 "connection_string": str,
190 "sas_token": str,
191 "account_name": str,
192 "account_key": str,
193 "tenant_id": str,
194 "client_id": str,
195 "client_secret": str,
196 "allow_anonymous_login": Bool,
197 "exclude_environment_credential": Bool,
198 "exclude_visual_studio_code_credential": Bool,
199 "exclude_shared_token_cache_credential": Bool,
200 "exclude_managed_identity_credential": Bool,
201 **REMOTE_COMMON,
202 },
203 "oss": {
204 "oss_key_id": str,
205 "oss_key_secret": str,
206 "oss_endpoint": str,
207 **REMOTE_COMMON,
208 },
209 "gdrive": {
210 "gdrive_use_service_account": Bool,
211 "gdrive_client_id": str,
212 "gdrive_client_secret": str,
213 "gdrive_user_credentials_file": str,
214 "gdrive_service_account_user_email": str,
215 "gdrive_service_account_json_file_path": str,
216 Optional("gdrive_trash_only", default=False): Bool,
217 **REMOTE_COMMON,
218 },
219 "http": {**HTTP_COMMON, **REMOTE_COMMON},
220 "https": {**HTTP_COMMON, **REMOTE_COMMON},
221 "webdav": {**WEBDAV_COMMON, **REMOTE_COMMON},
222 "webdavs": {**WEBDAV_COMMON, **REMOTE_COMMON},
223 "remote": {str: object}, # Any of the above options are valid
224 }
225 )
226 },
227 "state": {
228 "dir": str,
229 "row_limit": All(Coerce(int), Range(1)), # obsoleted
230 "row_cleanup_quota": All(Coerce(int), Range(0, 100)), # obsoleted
231 },
232 "index": {
233 "dir": str,
234 },
235 "machine": {
236 str: {
237 "cloud": All(Lower, Choices("aws", "azure")),
238 "region": All(
239 Lower, Choices("us-west", "us-east", "eu-west", "eu-north")
240 ),
241 "image": str,
242 "name": str,
243 "spot": Bool,
244 "spot_price": Coerce(float),
245 "instance_hdd_size": Coerce(int),
246 "instance_type": Lower,
247 "instance_gpu": Lower,
248 "ssh_private": str,
249 "startup_script": str,
250 },
251 },
252 # section for experimental features
253 "feature": {
254 Optional("machine", default=False): Bool,
255 # enabled by default. It's of no use, kept for backward compatibility.
256 Optional("parametrization", default=True): Bool,
257 },
258 "plots": {"html_template": str},
259 "exp": {
260 "code": str,
261 "data": str,
262 "models": str,
263 "metrics": str,
264 "params": str,
265 "plots": str,
266 "live": str,
267 },
268 }
269
[end of dvc/config_schema.py]
</code>
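Any new `webhdfs` keys added to the schema above get the usual voluptuous coercion, which is what turns the strings users write in `.dvc/config` into typed values. A quick standalone illustration, re-declaring the module's `Bool` helper so it runs on its own (the option names follow the feature request; the values are made up):
```
from voluptuous import All, Any, Coerce, Lower, Schema

Bool = All(Lower, Any("true", "false"), lambda v: v == "true")

webhdfs_schema = Schema({
    "use_https": Bool,
    "kerberos": Bool,
    "jobs": Coerce(int),
})

print(webhdfs_schema({"use_https": "True", "kerberos": "false", "jobs": "8"}))
# {'use_https': True, 'kerberos': False, 'jobs': 8}
```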
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/config_schema.py b/dvc/config_schema.py
--- a/dvc/config_schema.py
+++ b/dvc/config_schema.py
@@ -179,10 +179,12 @@
},
"hdfs": {"user": str, "kerb_ticket": str, **REMOTE_COMMON},
"webhdfs": {
- "hdfscli_config": str,
- "webhdfs_token": str,
- "user": str,
- "webhdfs_alias": str,
+ "kerberos": Bool,
+ "kerberos_principal": str,
+ "proxy_to": str,
+ "ssl_verify": Any(Bool, str),
+ "token": str,
+ "use_https": Bool,
**REMOTE_COMMON,
},
"azure": {
diff --git a/dvc/fs/webhdfs.py b/dvc/fs/webhdfs.py
--- a/dvc/fs/webhdfs.py
+++ b/dvc/fs/webhdfs.py
@@ -31,9 +31,10 @@
)
def _prepare_credentials(self, **config):
- if "webhdfs_token" in config:
- config["token"] = config.pop("webhdfs_token")
-
+ self._ssl_verify = config.pop("ssl_verify", True)
+ principal = config.pop("kerberos_principal", None)
+ if principal:
+ config["kerb_kwargs"] = {"principal": principal}
return config
@wrap_prop(threading.Lock())
@@ -41,7 +42,9 @@
def fs(self):
from fsspec.implementations.webhdfs import WebHDFS
- return WebHDFS(**self.fs_args)
+ fs = WebHDFS(**self.fs_args)
+ fs.session.verify = self._ssl_verify
+ return fs
def checksum(self, path_info):
path = self._with_bucket(path_info)
|
{"golden_diff": "diff --git a/dvc/config_schema.py b/dvc/config_schema.py\n--- a/dvc/config_schema.py\n+++ b/dvc/config_schema.py\n@@ -179,10 +179,12 @@\n },\n \"hdfs\": {\"user\": str, \"kerb_ticket\": str, **REMOTE_COMMON},\n \"webhdfs\": {\n- \"hdfscli_config\": str,\n- \"webhdfs_token\": str,\n- \"user\": str,\n- \"webhdfs_alias\": str,\n+ \"kerberos\": Bool,\n+ \"kerberos_principal\": str,\n+ \"proxy_to\": str,\n+ \"ssl_verify\": Any(Bool, str),\n+ \"token\": str,\n+ \"use_https\": Bool,\n **REMOTE_COMMON,\n },\n \"azure\": {\ndiff --git a/dvc/fs/webhdfs.py b/dvc/fs/webhdfs.py\n--- a/dvc/fs/webhdfs.py\n+++ b/dvc/fs/webhdfs.py\n@@ -31,9 +31,10 @@\n )\n \n def _prepare_credentials(self, **config):\n- if \"webhdfs_token\" in config:\n- config[\"token\"] = config.pop(\"webhdfs_token\")\n-\n+ self._ssl_verify = config.pop(\"ssl_verify\", True)\n+ principal = config.pop(\"kerberos_principal\", None)\n+ if principal:\n+ config[\"kerb_kwargs\"] = {\"principal\": principal}\n return config\n \n @wrap_prop(threading.Lock())\n@@ -41,7 +42,9 @@\n def fs(self):\n from fsspec.implementations.webhdfs import WebHDFS\n \n- return WebHDFS(**self.fs_args)\n+ fs = WebHDFS(**self.fs_args)\n+ fs.session.verify = self._ssl_verify\n+ return fs\n \n def checksum(self, path_info):\n path = self._with_bucket(path_info)\n", "issue": "Allow passing fsspec options to WebHDFS to enable Kerberos and https\nAfter #6662 it is no longer possible to use custom hdfscli clients with WebHDFS, as a result we are no longer able to connect using Kerberos or HTTPS.\r\n\r\nfsspec allows passing in options to enable kerberos and https, we simply need to pass them through and expose them in the DVC config file:\r\nhttps://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/webhdfs.html\r\n\r\nThis would also be an opportunity to remove the no longer used hdfscli options.\n", "before_files": [{"content": "import threading\n\nfrom funcy import cached_property, wrap_prop\n\nfrom dvc.path_info import CloudURLInfo\nfrom dvc.scheme import Schemes\n\n# pylint:disable=abstract-method\nfrom .fsspec_wrapper import CallbackMixin, FSSpecWrapper\n\n\nclass WebHDFSFileSystem(CallbackMixin, FSSpecWrapper):\n scheme = Schemes.WEBHDFS\n PATH_CLS = CloudURLInfo\n REQUIRES = {\"fsspec\": \"fsspec\"}\n PARAM_CHECKSUM = \"checksum\"\n\n def _with_bucket(self, path):\n if isinstance(path, self.PATH_CLS):\n return f\"/{path.path.rstrip('/')}\"\n return path\n\n @staticmethod\n def _get_kwargs_from_urls(urlpath):\n from fsspec.implementations.webhdfs import WebHDFS\n\n return (\n WebHDFS._get_kwargs_from_urls( # pylint:disable=protected-access\n urlpath\n )\n )\n\n def _prepare_credentials(self, **config):\n if \"webhdfs_token\" in config:\n config[\"token\"] = config.pop(\"webhdfs_token\")\n\n return config\n\n @wrap_prop(threading.Lock())\n @cached_property\n def fs(self):\n from fsspec.implementations.webhdfs import WebHDFS\n\n return WebHDFS(**self.fs_args)\n\n def checksum(self, path_info):\n path = self._with_bucket(path_info)\n ukey = self.fs.ukey(path)\n return ukey[\"bytes\"]\n", "path": "dvc/fs/webhdfs.py"}, {"content": "import os\nfrom urllib.parse import urlparse\n\nfrom funcy import walk_values\nfrom voluptuous import (\n All,\n Any,\n Coerce,\n Invalid,\n Lower,\n Optional,\n Range,\n Schema,\n)\n\nBool = All(\n Lower,\n Any(\"true\", \"false\"),\n lambda v: v == \"true\",\n msg=\"expected true or false\",\n)\n\n\ndef supported_cache_type(types):\n \"\"\"Checks if link type config option consists only of valid 
values.\n\n Args:\n types (list/string): type(s) of links that dvc should try out.\n \"\"\"\n if types is None:\n return None\n if isinstance(types, str):\n types = [typ.strip() for typ in types.split(\",\")]\n\n unsupported = set(types) - {\"reflink\", \"hardlink\", \"symlink\", \"copy\"}\n if unsupported:\n raise Invalid(\n \"Unsupported cache type(s): {}\".format(\", \".join(unsupported))\n )\n\n return types\n\n\ndef Choices(*choices):\n \"\"\"Checks that value belongs to the specified set of values\n\n Args:\n *choices: pass allowed values as arguments, or pass a list or\n tuple as a single argument\n \"\"\"\n return Any(*choices, msg=\"expected one of {}\".format(\", \".join(choices)))\n\n\ndef ByUrl(mapping):\n schemas = walk_values(Schema, mapping)\n\n def validate(data):\n if \"url\" not in data:\n raise Invalid(\"expected 'url'\")\n\n parsed = urlparse(data[\"url\"])\n # Windows absolute paths should really have scheme == \"\" (local)\n if os.name == \"nt\" and len(parsed.scheme) == 1 and parsed.netloc == \"\":\n return schemas[\"\"](data)\n if parsed.scheme not in schemas:\n raise Invalid(f\"Unsupported URL type {parsed.scheme}://\")\n\n return schemas[parsed.scheme](data)\n\n return validate\n\n\nclass RelPath(str):\n pass\n\n\nREMOTE_COMMON = {\n \"url\": str,\n \"checksum_jobs\": All(Coerce(int), Range(1)),\n \"jobs\": All(Coerce(int), Range(1)),\n Optional(\"no_traverse\"): Bool, # obsoleted\n \"verify\": Bool,\n}\nLOCAL_COMMON = {\n \"type\": supported_cache_type,\n Optional(\"protected\", default=False): Bool, # obsoleted\n \"shared\": All(Lower, Choices(\"group\")),\n Optional(\"slow_link_warning\", default=True): Bool,\n}\nHTTP_COMMON = {\n \"auth\": All(Lower, Choices(\"basic\", \"digest\", \"custom\")),\n \"custom_auth_header\": str,\n \"user\": str,\n \"password\": str,\n \"ask_password\": Bool,\n \"ssl_verify\": Any(Bool, str),\n \"method\": str,\n}\nWEBDAV_COMMON = {\n \"user\": str,\n \"password\": str,\n \"ask_password\": Bool,\n \"token\": str,\n \"cert_path\": str,\n \"key_path\": str,\n \"timeout\": Coerce(int),\n \"ssl_verify\": Any(Bool, str),\n}\n\nSCHEMA = {\n \"core\": {\n \"remote\": Lower,\n \"checksum_jobs\": All(Coerce(int), Range(1)),\n Optional(\"interactive\", default=False): Bool,\n Optional(\"analytics\", default=True): Bool,\n Optional(\"hardlink_lock\", default=False): Bool,\n Optional(\"no_scm\", default=False): Bool,\n Optional(\"autostage\", default=False): Bool,\n Optional(\"experiments\"): Bool, # obsoleted\n Optional(\"check_update\", default=True): Bool,\n \"machine\": Lower,\n },\n \"cache\": {\n \"local\": str,\n \"s3\": str,\n \"gs\": str,\n \"hdfs\": str,\n \"webhdfs\": str,\n \"ssh\": str,\n \"azure\": str,\n # This is for default local cache\n \"dir\": str,\n **LOCAL_COMMON,\n },\n \"remote\": {\n str: ByUrl(\n {\n \"\": {**LOCAL_COMMON, **REMOTE_COMMON},\n \"s3\": {\n \"region\": str,\n \"profile\": str,\n \"credentialpath\": str,\n \"configpath\": str,\n \"endpointurl\": str,\n \"access_key_id\": str,\n \"secret_access_key\": str,\n \"session_token\": str,\n Optional(\"listobjects\", default=False): Bool, # obsoleted\n Optional(\"use_ssl\", default=True): Bool,\n \"ssl_verify\": Any(Bool, str),\n \"sse\": str,\n \"sse_kms_key_id\": str,\n \"acl\": str,\n \"grant_read\": str,\n \"grant_read_acp\": str,\n \"grant_write_acp\": str,\n \"grant_full_control\": str,\n \"cache_regions\": bool,\n \"read_timeout\": Coerce(int),\n \"connect_timeout\": Coerce(int),\n **REMOTE_COMMON,\n },\n \"gs\": {\n \"projectname\": str,\n 
\"credentialpath\": str,\n **REMOTE_COMMON,\n },\n \"ssh\": {\n \"type\": supported_cache_type,\n \"port\": Coerce(int),\n \"user\": str,\n \"password\": str,\n \"ask_password\": Bool,\n \"keyfile\": str,\n \"timeout\": Coerce(int),\n \"gss_auth\": Bool,\n \"allow_agent\": Bool,\n **REMOTE_COMMON,\n },\n \"hdfs\": {\"user\": str, \"kerb_ticket\": str, **REMOTE_COMMON},\n \"webhdfs\": {\n \"hdfscli_config\": str,\n \"webhdfs_token\": str,\n \"user\": str,\n \"webhdfs_alias\": str,\n **REMOTE_COMMON,\n },\n \"azure\": {\n \"connection_string\": str,\n \"sas_token\": str,\n \"account_name\": str,\n \"account_key\": str,\n \"tenant_id\": str,\n \"client_id\": str,\n \"client_secret\": str,\n \"allow_anonymous_login\": Bool,\n \"exclude_environment_credential\": Bool,\n \"exclude_visual_studio_code_credential\": Bool,\n \"exclude_shared_token_cache_credential\": Bool,\n \"exclude_managed_identity_credential\": Bool,\n **REMOTE_COMMON,\n },\n \"oss\": {\n \"oss_key_id\": str,\n \"oss_key_secret\": str,\n \"oss_endpoint\": str,\n **REMOTE_COMMON,\n },\n \"gdrive\": {\n \"gdrive_use_service_account\": Bool,\n \"gdrive_client_id\": str,\n \"gdrive_client_secret\": str,\n \"gdrive_user_credentials_file\": str,\n \"gdrive_service_account_user_email\": str,\n \"gdrive_service_account_json_file_path\": str,\n Optional(\"gdrive_trash_only\", default=False): Bool,\n **REMOTE_COMMON,\n },\n \"http\": {**HTTP_COMMON, **REMOTE_COMMON},\n \"https\": {**HTTP_COMMON, **REMOTE_COMMON},\n \"webdav\": {**WEBDAV_COMMON, **REMOTE_COMMON},\n \"webdavs\": {**WEBDAV_COMMON, **REMOTE_COMMON},\n \"remote\": {str: object}, # Any of the above options are valid\n }\n )\n },\n \"state\": {\n \"dir\": str,\n \"row_limit\": All(Coerce(int), Range(1)), # obsoleted\n \"row_cleanup_quota\": All(Coerce(int), Range(0, 100)), # obsoleted\n },\n \"index\": {\n \"dir\": str,\n },\n \"machine\": {\n str: {\n \"cloud\": All(Lower, Choices(\"aws\", \"azure\")),\n \"region\": All(\n Lower, Choices(\"us-west\", \"us-east\", \"eu-west\", \"eu-north\")\n ),\n \"image\": str,\n \"name\": str,\n \"spot\": Bool,\n \"spot_price\": Coerce(float),\n \"instance_hdd_size\": Coerce(int),\n \"instance_type\": Lower,\n \"instance_gpu\": Lower,\n \"ssh_private\": str,\n \"startup_script\": str,\n },\n },\n # section for experimental features\n \"feature\": {\n Optional(\"machine\", default=False): Bool,\n # enabled by default. It's of no use, kept for backward compatibility.\n Optional(\"parametrization\", default=True): Bool,\n },\n \"plots\": {\"html_template\": str},\n \"exp\": {\n \"code\": str,\n \"data\": str,\n \"models\": str,\n \"metrics\": str,\n \"params\": str,\n \"plots\": str,\n \"live\": str,\n },\n}\n", "path": "dvc/config_schema.py"}]}
| 3,680 | 422 |
gh_patches_debug_7450
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-1636
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Swapfile not really disabled
The Ansible config tries to disable swapfile on the Application and Monitor Servers, via `swapoff -a`. This works, but only for the current boot cycle. If a machine is configured with a swapfile in `/etc/fstab`, that swapfile will be restored on a subsequent reboot. Since the machines reboot nightly, the `swapoff -a` approach is close to useless.
In order to disable swap effectively, the first-run Ansible config should ensure that no swap entries exist in fstab, removing them if found.
</issue>
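For context only: the sketch below is not taken from the repository, and it is not what the accompanying patch does. In practice the remediation described in the issue would typically be an Ansible task (for example `lineinfile`, or the `mount` module with `state: absent`) that strips swap entries from `/etc/fstab`. The helper name here is hypothetical and only illustrates the filtering rule:

```python
# Hypothetical helper, not part of the SecureDrop code base: it illustrates
# the rule an fstab-cleanup task would enforce, namely "keep every line whose
# filesystem type is not swap".
def strip_swap_entries(fstab_text):
    kept = []
    for line in fstab_text.splitlines():
        fields = line.split()
        is_swap_entry = (
            not line.lstrip().startswith("#")   # ignore comment lines
            and len(fields) >= 3
            and fields[2] == "swap"             # third fstab field is the filesystem type
        )
        if not is_swap_entry:
            kept.append(line)
    return "\n".join(kept) + "\n"
```

Once no swap line survives in fstab, `swapoff -a` only has to clear the currently active swap, and nothing re-enables it on the nightly reboot.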
<code>
[start of securedrop/version.py]
1 __version__ = '0.3.11'
2
[end of securedrop/version.py]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # SecureDrop documentation build configuration file, created by
4 # sphinx-quickstart on Tue Oct 13 12:08:52 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import sys
16 import os
17 import shlex
18
19 # Detect if we're being built by Read the Docs
20 # https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs
21 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
22
23 # If extensions (or modules to document with autodoc) are in another directory,
24 # add these directories to sys.path here. If the directory is relative to the
25 # documentation root, use os.path.abspath to make it absolute, like shown here.
26 #sys.path.insert(0, os.path.abspath('.'))
27
28 # -- General configuration ------------------------------------------------
29
30 # If your documentation needs a minimal Sphinx version, state it here.
31 #needs_sphinx = '1.0'
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = ['sphinx.ext.todo', ]
37
38 # Add any paths that contain templates here, relative to this directory.
39 templates_path = ['_templates']
40
41 # The suffix(es) of source filenames.
42 # You can specify multiple suffix as a list of string:
43 # source_suffix = ['.rst', '.md']
44 source_suffix = '.rst'
45
46 # The encoding of source files.
47 #source_encoding = 'utf-8-sig'
48
49 # The master toctree document.
50 master_doc = 'index'
51
52 # General information about the project.
53 project = u'SecureDrop'
54 copyright = u'2015, Freedom of the Press Foundation'
55 author = u'SecureDrop Team and Contributors'
56
57 # The version info for the project you're documenting, acts as replacement for
58 # |version| and |release|, also used in various other places throughout the
59 # built documents.
60 #
61 # The short X.Y version.
62 version = '0.3.11'
63 # The full version, including alpha/beta/rc tags.
64 release = '0.3.11'
65
66 # The language for content autogenerated by Sphinx. Refer to documentation
67 # for a list of supported languages.
68 #
69 # This is also used if you do content translation via gettext catalogs.
70 # Usually you set "language" from the command line for these cases.
71 language = None
72
73 # There are two options for replacing |today|: either, you set today to some
74 # non-false value, then it is used:
75 #today = ''
76 # Else, today_fmt is used as the format for a strftime call.
77 #today_fmt = '%B %d, %Y'
78
79 # List of patterns, relative to source directory, that match files and
80 # directories to ignore when looking for source files.
81 exclude_patterns = ['_build']
82
83 # The reST default role (used for this markup: `text`) to use for all
84 # documents.
85 #default_role = None
86
87 # If true, '()' will be appended to :func: etc. cross-reference text.
88 #add_function_parentheses = True
89
90 # If true, the current module name will be prepended to all description
91 # unit titles (such as .. function::).
92 #add_module_names = True
93
94 # If true, sectionauthor and moduleauthor directives will be shown in the
95 # output. They are ignored by default.
96 #show_authors = False
97
98 # The name of the Pygments (syntax highlighting) style to use.
99 pygments_style = 'sphinx'
100
101 # A list of ignored prefixes for module index sorting.
102 #modindex_common_prefix = []
103
104 # If true, keep warnings as "system message" paragraphs in the built documents.
105 #keep_warnings = False
106
107 # If true, `todo` and `todoList` produce output, else they produce nothing.
108 todo_include_todos = False
109
110
111 # -- Options for HTML output ----------------------------------------------
112
113 # The theme to use for HTML and HTML Help pages. See the documentation for
114 # a list of builtin themes.
115 if on_rtd:
116 html_theme = 'default'
117 else:
118 try:
119 # If you want to build the docs locally using the RTD theme,
120 # you may need to install it: ``pip install sphinx_rtd_theme``.
121 # https://github.com/snide/sphinx_rtd_theme#via-package
122 import sphinx_rtd_theme
123 html_theme = "sphinx_rtd_theme"
124 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
125 except ImportError:
126 # This theme is included with Sphinx and is quite nice (based
127 # on the Pocoo themes), but since we're using the RTD theme
128 # for the production docs, it's best to use that to avoid
129 # issues due to discrepancies between the themes.
130 html_theme = 'alabaster'
131
132 # Theme options are theme-specific and customize the look and feel of a theme
133 # further. For a list of options available for each theme, see the
134 # documentation.
135 #html_theme_options = {}
136
137 # Add any paths that contain custom themes here, relative to this directory.
138 #html_theme_path = []
139
140 # The name for this set of Sphinx documents. If None, it defaults to
141 # "<project> v<release> documentation".
142 #html_title = None
143
144 # A shorter title for the navigation bar. Default is the same as html_title.
145 #html_short_title = None
146
147 # The name of an image file (relative to this directory) to place at the top
148 # of the sidebar.
149 #html_logo = None
150
151 # The name of an image file (within the static path) to use as favicon of the
152 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
153 # pixels large.
154 #html_favicon = None
155
156 # Add any paths that contain custom static files (such as style sheets) here,
157 # relative to this directory. They are copied after the builtin static files,
158 # so a file named "default.css" will overwrite the builtin "default.css".
159 html_static_path = ['_static']
160
161 # Add any extra paths that contain custom files (such as robots.txt or
162 # .htaccess) here, relative to this directory. These files are copied
163 # directly to the root of the documentation.
164 #html_extra_path = []
165
166 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
167 # using the given strftime format.
168 #html_last_updated_fmt = '%b %d, %Y'
169
170 # If true, SmartyPants will be used to convert quotes and dashes to
171 # typographically correct entities.
172 #html_use_smartypants = True
173
174 # Custom sidebar templates, maps document names to template names.
175 #html_sidebars = {}
176
177 # Additional templates that should be rendered to pages, maps page names to
178 # template names.
179 #html_additional_pages = {}
180
181 # If false, no module index is generated.
182 #html_domain_indices = True
183
184 # If false, no index is generated.
185 #html_use_index = True
186
187 # If true, the index is split into individual pages for each letter.
188 #html_split_index = False
189
190 # If true, links to the reST sources are added to the pages.
191 #html_show_sourcelink = True
192
193 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
194 #html_show_sphinx = True
195
196 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
197 #html_show_copyright = True
198
199 # If true, an OpenSearch description file will be output, and all pages will
200 # contain a <link> tag referring to it. The value of this option must be the
201 # base URL from which the finished HTML is served.
202 #html_use_opensearch = ''
203
204 # This is the file name suffix for HTML files (e.g. ".xhtml").
205 #html_file_suffix = None
206
207 # Language to be used for generating the HTML full-text search index.
208 # Sphinx supports the following languages:
209 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
210 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
211 #html_search_language = 'en'
212
213 # A dictionary with options for the search language support, empty by default.
214 # Now only 'ja' uses this config value
215 #html_search_options = {'type': 'default'}
216
217 # The name of a javascript file (relative to the configuration directory) that
218 # implements a search results scorer. If empty, the default will be used.
219 #html_search_scorer = 'scorer.js'
220
221 # Output file base name for HTML help builder.
222 htmlhelp_basename = 'SecureDropdoc'
223
224 # -- Options for LaTeX output ---------------------------------------------
225
226 latex_elements = {
227 # The paper size ('letterpaper' or 'a4paper').
228 #'papersize': 'letterpaper',
229
230 # The font size ('10pt', '11pt' or '12pt').
231 #'pointsize': '10pt',
232
233 # Additional stuff for the LaTeX preamble.
234 #'preamble': '',
235
236 # Latex figure (float) alignment
237 #'figure_align': 'htbp',
238 }
239
240 # Grouping the document tree into LaTeX files. List of tuples
241 # (source start file, target name, title,
242 # author, documentclass [howto, manual, or own class]).
243 latex_documents = [
244 (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',
245 author, 'manual'),
246 ]
247
248 # The name of an image file (relative to this directory) to place at the top of
249 # the title page.
250 #latex_logo = None
251
252 # For "manual" documents, if this is true, then toplevel headings are parts,
253 # not chapters.
254 #latex_use_parts = False
255
256 # If true, show page references after internal links.
257 #latex_show_pagerefs = False
258
259 # If true, show URL addresses after external links.
260 #latex_show_urls = False
261
262 # Documents to append as an appendix to all manuals.
263 #latex_appendices = []
264
265 # If false, no module index is generated.
266 #latex_domain_indices = True
267
268
269 # -- Options for manual page output ---------------------------------------
270
271 # One entry per manual page. List of tuples
272 # (source start file, name, description, authors, manual section).
273 man_pages = [
274 (master_doc, 'securedrop', u'SecureDrop Documentation',
275 [author], 1)
276 ]
277
278 # If true, show URL addresses after external links.
279 #man_show_urls = False
280
281
282 # -- Options for Texinfo output -------------------------------------------
283
284 # Grouping the document tree into Texinfo files. List of tuples
285 # (source start file, target name, title, author,
286 # dir menu entry, description, category)
287 texinfo_documents = [
288 (master_doc, 'SecureDrop', u'SecureDrop Documentation',
289 author, 'SecureDrop', 'One line description of project.',
290 'Miscellaneous'),
291 ]
292
293 # Documents to append as an appendix to all manuals.
294 #texinfo_appendices = []
295
296 # If false, no module index is generated.
297 #texinfo_domain_indices = True
298
299 # How to display URL addresses: 'footnote', 'no', or 'inline'.
300 #texinfo_show_urls = 'footnote'
301
302 # If true, do not generate a @detailmenu in the "Top" node's menu.
303 #texinfo_no_detailmenu = False
304
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -59,9 +59,9 @@
# built documents.
#
# The short X.Y version.
-version = '0.3.11'
+version = '0.3.12'
# The full version, including alpha/beta/rc tags.
-release = '0.3.11'
+release = '0.3.12'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = '0.3.11'
+__version__ = '0.3.12'
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -59,9 +59,9 @@\n # built documents.\n #\n # The short X.Y version.\n-version = '0.3.11'\n+version = '0.3.12'\n # The full version, including alpha/beta/rc tags.\n-release = '0.3.11'\n+release = '0.3.12'\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\ndiff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = '0.3.11'\n+__version__ = '0.3.12'\n", "issue": "Swapfile not really disabled\nThe Ansible config tries to disable swapfile on the Application and Monitor Servers, via `swapoff -a`. This works, but only for the current boot cycle. If a machine is configured with a swapfile in `/etc/fstab`, that swapfile will be restored on a subsequent reboot. Since the machines reboot nightly, the `swapoff -a` approach is close to useless.\r\n\r\nIn order to disable swap effectively, the first-run Ansible config should ensure that no swap entries exist in fstab, removing them if found. \n", "before_files": [{"content": "__version__ = '0.3.11'\n", "path": "securedrop/version.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport shlex\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2015, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.3.11'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.3.11'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}]}
| 4,034 | 188 |
gh_patches_debug_658
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-2258
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.148
On the docket:
+ [x] The Pex CLI should warn when it creates a PEX zip that requires zip64. #2247
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.147"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.147"
+__version__ = "2.1.148"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.147\"\n+__version__ = \"2.1.148\"\n", "issue": "Release 2.1.148\nOn the docket:\r\n+ [x] The Pex CLI should warn when it creates a PEX zip that requires zip64. #2247\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.147\"\n", "path": "pex/version.py"}]}
| 627 | 98 |
gh_patches_debug_18472
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-3550
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ModuleNotFoundError: No module named 'botocore' when using OpenFile
Need to rewrite this so that this dependency is imported only when the S3 store is being used, thereby making it optional, like s3fs.
</issue>
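For context, here is a minimal sketch of the optional-import pattern the issue asks for: guard the `botocore` import and, when the package is absent, fall back to catching nothing. The function and variable names are illustrative rather than Modin's, but the golden diff further down applies the same shape inside `OpenFile.__enter__`:

```python
import fsspec

try:
    from botocore.exceptions import NoCredentialsError
    credential_errors = (NoCredentialsError,)   # only defined when botocore is installed
except ModuleNotFoundError:
    credential_errors = ()                      # no S3 credential error to catch

def open_maybe_anonymous(file_path, mode="rb", compression="infer"):
    # Try an authenticated open first; retry anonymously only when botocore is
    # present and reports that credentials are missing.
    handle = fsspec.open(file_path, mode, compression, anon=False)
    try:
        return handle.open()
    except credential_errors:
        handle = fsspec.open(file_path, mode, compression, anon=True)
        return handle.open()
```

Catching an empty tuple is a no-op in Python, so without `botocore` installed any failure simply propagates instead of triggering the anonymous retry.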
<code>
[start of modin/core/io/file_dispatcher.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """
15 Module houses `FileDispatcher` class.
16
17 `FileDispatcher` can be used as abstract base class for dispatchers of specific file formats or
18 for direct files processing.
19 """
20
21 import fsspec
22 import os
23 import re
24 from modin.config import Backend
25 import numpy as np
26
27 S3_ADDRESS_REGEX = re.compile("[sS]3://(.*?)/(.*)")
28 NOT_IMPLEMENTED_MESSAGE = "Implement in children classes!"
29
30
31 class OpenFile:
32 """
33 OpenFile is a context manager for an input file.
34
35 OpenFile uses fsspec to open files on __enter__. On __exit__, it closes the
36 fsspec file. This class exists to encapsulate the special behavior in
37 __enter__ around anon=False and anon=True for s3 buckets.
38
39 Parameters
40 ----------
41 file_path : str
42 String that represents the path to the file (paths to S3 buckets
43 are also acceptable).
44 mode : str, default: "rb"
45 String, which defines which mode file should be open.
46 compression : str, default: "infer"
47 File compression name.
48
49 Attributes
50 ----------
51 file_path : str
52 String that represents the path to the file
53 mode : str
54 String that defines which mode the file should be opened in.
55 compression : str
56 File compression name.
57 file : fsspec.core.OpenFile
58 The opened file.
59 """
60
61 def __init__(self, file_path, mode="rb", compression="infer"):
62 self.file_path = file_path
63 self.mode = mode
64 self.compression = compression
65
66 def __enter__(self):
67 """
68 Open the file with fsspec and return the opened file.
69
70 Returns
71 -------
72 fsspec.core.OpenFile
73 The opened file.
74 """
75 from botocore.exceptions import NoCredentialsError
76
77 try:
78 self.file = fsspec.open(
79 self.file_path, self.mode, self.compression, anon=False
80 )
81 return self.file.open()
82 except NoCredentialsError:
83 self.file = fsspec.open(
84 self.file_path, self.mode, self.compression, anon=True
85 )
86 return self.file.open()
87
88 def __exit__(self, *args):
89 """
90 Close the file.
91
92 Parameters
93 ----------
94 *args : any type
95 Variable positional arguments, all unused.
96 """
97 self.file.close()
98
99
100 class FileDispatcher:
101 """
102 Class handles util functions for reading data from different kinds of files.
103
104 Notes
105 -----
106 `_read`, `deploy`, `parse` and `materialize` are abstract methods and should be
107 implemented in the child classes (functions signatures can differ between child
108 classes).
109 """
110
111 frame_cls = None
112 frame_partition_cls = None
113 query_compiler_cls = None
114
115 @classmethod
116 def read(cls, *args, **kwargs):
117 """
118 Read data according passed `args` and `kwargs`.
119
120 Parameters
121 ----------
122 *args : iterable
123 Positional arguments to be passed into `_read` function.
124 **kwargs : dict
125 Keywords arguments to be passed into `_read` function.
126
127 Returns
128 -------
129 query_compiler : BaseQueryCompiler
130 Query compiler with imported data for further processing.
131
132 Notes
133 -----
134 `read` is high-level function that calls specific for defined backend, engine and
135 dispatcher class `_read` function with passed parameters and performs some
136 postprocessing work on the resulting query_compiler object.
137 """
138 query_compiler = cls._read(*args, **kwargs)
139 # TODO (devin-petersohn): Make this section more general for non-pandas kernel
140 # implementations.
141 if Backend.get() == "Pandas":
142 import pandas as kernel_lib
143 elif Backend.get() == "Cudf":
144 import cudf as kernel_lib
145 else:
146 raise NotImplementedError("FIXME")
147
148 if hasattr(query_compiler, "dtypes") and any(
149 isinstance(t, kernel_lib.CategoricalDtype) for t in query_compiler.dtypes
150 ):
151 dtypes = query_compiler.dtypes
152 return query_compiler.astype(
153 {
154 t: dtypes[t]
155 for t in dtypes.index
156 if isinstance(dtypes[t], kernel_lib.CategoricalDtype)
157 }
158 )
159 return query_compiler
160
161 @classmethod
162 def _read(cls, *args, **kwargs):
163 """
164 Perform reading of the data from file.
165
166 Should be implemented in the child class.
167
168 Parameters
169 ----------
170 *args : iterable
171 Positional arguments of the function.
172 **kwargs : dict
173 Keywords arguments of the function.
174 """
175 raise NotImplementedError(NOT_IMPLEMENTED_MESSAGE)
176
177 @classmethod
178 def get_path(cls, file_path):
179 """
180 Process `file_path` in accordance to it's type.
181
182 Parameters
183 ----------
184 file_path : str
185 String that represents the path to the file (paths to S3 buckets
186 are also acceptable).
187
188 Returns
189 -------
190 str
191 Updated or verified `file_path` parameter.
192
193 Notes
194 -----
195 if `file_path` is an S3 bucket, parameter will be returned as is, otherwise
196 absolute path will be returned.
197 """
198 if S3_ADDRESS_REGEX.search(file_path):
199 return file_path
200 else:
201 return os.path.abspath(file_path)
202
203 @classmethod
204 def file_size(cls, f):
205 """
206 Get the size of file associated with file handle `f`.
207
208 Parameters
209 ----------
210 f : file-like object
211 File-like object, that should be used to get file size.
212
213 Returns
214 -------
215 int
216 File size in bytes.
217 """
218 cur_pos = f.tell()
219 f.seek(0, os.SEEK_END)
220 size = f.tell()
221 f.seek(cur_pos, os.SEEK_SET)
222 return size
223
224 @classmethod
225 def file_exists(cls, file_path):
226 """
227 Check if `file_path` exists.
228
229 Parameters
230 ----------
231 file_path : str
232 String that represents the path to the file (paths to S3 buckets
233 are also acceptable).
234
235 Returns
236 -------
237 bool
238 Whether file exists or not.
239 """
240 if isinstance(file_path, str):
241 match = S3_ADDRESS_REGEX.search(file_path)
242 if match is not None:
243 if file_path[0] == "S":
244 file_path = "{}{}".format("s", file_path[1:])
245 import s3fs as S3FS
246 from botocore.exceptions import NoCredentialsError
247
248 s3fs = S3FS.S3FileSystem(anon=False)
249 exists = False
250 try:
251 exists = s3fs.exists(file_path) or exists
252 except NoCredentialsError:
253 pass
254 s3fs = S3FS.S3FileSystem(anon=True)
255 return exists or s3fs.exists(file_path)
256 return os.path.exists(file_path)
257
258 @classmethod
259 def deploy(cls, func, args, num_returns): # noqa: PR01
260 """
261 Deploy remote task.
262
263 Should be implemented in the task class (for example in the `RayTask`).
264 """
265 raise NotImplementedError(NOT_IMPLEMENTED_MESSAGE)
266
267 def parse(self, func, args, num_returns): # noqa: PR01
268 """
269 Parse file's data in the worker process.
270
271 Should be implemented in the parser class (for example in the `PandasCSVParser`).
272 """
273 raise NotImplementedError(NOT_IMPLEMENTED_MESSAGE)
274
275 @classmethod
276 def materialize(cls, obj_id): # noqa: PR01
277 """
278 Get results from worker.
279
280 Should be implemented in the task class (for example in the `RayTask`).
281 """
282 raise NotImplementedError(NOT_IMPLEMENTED_MESSAGE)
283
284 @classmethod
285 def build_partition(cls, partition_ids, row_lengths, column_widths):
286 """
287 Build array with partitions of `cls.frame_partition_cls` class.
288
289 Parameters
290 ----------
291 partition_ids : list
292 Array with references to the partitions data.
293 row_lengths : list
294 Partitions rows lengths.
295 column_widths : list
296 Number of columns in each partition.
297
298 Returns
299 -------
300 np.ndarray
301 array with shape equals to the shape of `partition_ids` and
302 filed with partition objects.
303 """
304 return np.array(
305 [
306 [
307 cls.frame_partition_cls(
308 partition_ids[i][j],
309 length=row_lengths[i],
310 width=column_widths[j],
311 )
312 for j in range(len(partition_ids[i]))
313 ]
314 for i in range(len(partition_ids))
315 ]
316 )
317
[end of modin/core/io/file_dispatcher.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modin/core/io/file_dispatcher.py b/modin/core/io/file_dispatcher.py
--- a/modin/core/io/file_dispatcher.py
+++ b/modin/core/io/file_dispatcher.py
@@ -72,18 +72,21 @@
fsspec.core.OpenFile
The opened file.
"""
- from botocore.exceptions import NoCredentialsError
+ try:
+ from botocore.exceptions import NoCredentialsError
+
+ credential_error_type = (NoCredentialsError,)
+ except ModuleNotFoundError:
+ credential_error_type = ()
+
+ args = (self.file_path, self.mode, self.compression)
+ self.file = fsspec.open(*args, anon=False)
try:
- self.file = fsspec.open(
- self.file_path, self.mode, self.compression, anon=False
- )
- return self.file.open()
- except NoCredentialsError:
- self.file = fsspec.open(
- self.file_path, self.mode, self.compression, anon=True
- )
return self.file.open()
+ except credential_error_type:
+ self.file = fsspec.open(*args, anon=True)
+ return self.file.open()
def __exit__(self, *args):
"""
|
{"golden_diff": "diff --git a/modin/core/io/file_dispatcher.py b/modin/core/io/file_dispatcher.py\n--- a/modin/core/io/file_dispatcher.py\n+++ b/modin/core/io/file_dispatcher.py\n@@ -72,18 +72,21 @@\n fsspec.core.OpenFile\n The opened file.\n \"\"\"\n- from botocore.exceptions import NoCredentialsError\n+ try:\n+ from botocore.exceptions import NoCredentialsError\n+\n+ credential_error_type = (NoCredentialsError,)\n+ except ModuleNotFoundError:\n+ credential_error_type = ()\n+\n+ args = (self.file_path, self.mode, self.compression)\n \n+ self.file = fsspec.open(*args, anon=False)\n try:\n- self.file = fsspec.open(\n- self.file_path, self.mode, self.compression, anon=False\n- )\n- return self.file.open()\n- except NoCredentialsError:\n- self.file = fsspec.open(\n- self.file_path, self.mode, self.compression, anon=True\n- )\n return self.file.open()\n+ except credential_error_type:\n+ self.file = fsspec.open(*args, anon=True)\n+ return self.file.open()\n \n def __exit__(self, *args):\n \"\"\"\n", "issue": "ModuleNotFoundError: No module named 'botocore' when using OpenFile\nNeed to rewrite this so that this dependency is imported only when the S3 store is being used.\r\nThereby making it optional like s3fs.\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"\nModule houses `FileDispatcher` class.\n\n`FileDispatcher` can be used as abstract base class for dispatchers of specific file formats or\nfor direct files processing.\n\"\"\"\n\nimport fsspec\nimport os\nimport re\nfrom modin.config import Backend\nimport numpy as np\n\nS3_ADDRESS_REGEX = re.compile(\"[sS]3://(.*?)/(.*)\")\nNOT_IMPLEMENTED_MESSAGE = \"Implement in children classes!\"\n\n\nclass OpenFile:\n \"\"\"\n OpenFile is a context manager for an input file.\n\n OpenFile uses fsspec to open files on __enter__. On __exit__, it closes the\n fsspec file. 
This class exists to encapsulate the special behavior in\n __enter__ around anon=False and anon=True for s3 buckets.\n\n Parameters\n ----------\n file_path : str\n String that represents the path to the file (paths to S3 buckets\n are also acceptable).\n mode : str, default: \"rb\"\n String, which defines which mode file should be open.\n compression : str, default: \"infer\"\n File compression name.\n\n Attributes\n ----------\n file_path : str\n String that represents the path to the file\n mode : str\n String that defines which mode the file should be opened in.\n compression : str\n File compression name.\n file : fsspec.core.OpenFile\n The opened file.\n \"\"\"\n\n def __init__(self, file_path, mode=\"rb\", compression=\"infer\"):\n self.file_path = file_path\n self.mode = mode\n self.compression = compression\n\n def __enter__(self):\n \"\"\"\n Open the file with fsspec and return the opened file.\n\n Returns\n -------\n fsspec.core.OpenFile\n The opened file.\n \"\"\"\n from botocore.exceptions import NoCredentialsError\n\n try:\n self.file = fsspec.open(\n self.file_path, self.mode, self.compression, anon=False\n )\n return self.file.open()\n except NoCredentialsError:\n self.file = fsspec.open(\n self.file_path, self.mode, self.compression, anon=True\n )\n return self.file.open()\n\n def __exit__(self, *args):\n \"\"\"\n Close the file.\n\n Parameters\n ----------\n *args : any type\n Variable positional arguments, all unused.\n \"\"\"\n self.file.close()\n\n\nclass FileDispatcher:\n \"\"\"\n Class handles util functions for reading data from different kinds of files.\n\n Notes\n -----\n `_read`, `deploy`, `parse` and `materialize` are abstract methods and should be\n implemented in the child classes (functions signatures can differ between child\n classes).\n \"\"\"\n\n frame_cls = None\n frame_partition_cls = None\n query_compiler_cls = None\n\n @classmethod\n def read(cls, *args, **kwargs):\n \"\"\"\n Read data according passed `args` and `kwargs`.\n\n Parameters\n ----------\n *args : iterable\n Positional arguments to be passed into `_read` function.\n **kwargs : dict\n Keywords arguments to be passed into `_read` function.\n\n Returns\n -------\n query_compiler : BaseQueryCompiler\n Query compiler with imported data for further processing.\n\n Notes\n -----\n `read` is high-level function that calls specific for defined backend, engine and\n dispatcher class `_read` function with passed parameters and performs some\n postprocessing work on the resulting query_compiler object.\n \"\"\"\n query_compiler = cls._read(*args, **kwargs)\n # TODO (devin-petersohn): Make this section more general for non-pandas kernel\n # implementations.\n if Backend.get() == \"Pandas\":\n import pandas as kernel_lib\n elif Backend.get() == \"Cudf\":\n import cudf as kernel_lib\n else:\n raise NotImplementedError(\"FIXME\")\n\n if hasattr(query_compiler, \"dtypes\") and any(\n isinstance(t, kernel_lib.CategoricalDtype) for t in query_compiler.dtypes\n ):\n dtypes = query_compiler.dtypes\n return query_compiler.astype(\n {\n t: dtypes[t]\n for t in dtypes.index\n if isinstance(dtypes[t], kernel_lib.CategoricalDtype)\n }\n )\n return query_compiler\n\n @classmethod\n def _read(cls, *args, **kwargs):\n \"\"\"\n Perform reading of the data from file.\n\n Should be implemented in the child class.\n\n Parameters\n ----------\n *args : iterable\n Positional arguments of the function.\n **kwargs : dict\n Keywords arguments of the function.\n \"\"\"\n raise NotImplementedError(NOT_IMPLEMENTED_MESSAGE)\n\n 
@classmethod\n def get_path(cls, file_path):\n \"\"\"\n Process `file_path` in accordance to it's type.\n\n Parameters\n ----------\n file_path : str\n String that represents the path to the file (paths to S3 buckets\n are also acceptable).\n\n Returns\n -------\n str\n Updated or verified `file_path` parameter.\n\n Notes\n -----\n if `file_path` is an S3 bucket, parameter will be returned as is, otherwise\n absolute path will be returned.\n \"\"\"\n if S3_ADDRESS_REGEX.search(file_path):\n return file_path\n else:\n return os.path.abspath(file_path)\n\n @classmethod\n def file_size(cls, f):\n \"\"\"\n Get the size of file associated with file handle `f`.\n\n Parameters\n ----------\n f : file-like object\n File-like object, that should be used to get file size.\n\n Returns\n -------\n int\n File size in bytes.\n \"\"\"\n cur_pos = f.tell()\n f.seek(0, os.SEEK_END)\n size = f.tell()\n f.seek(cur_pos, os.SEEK_SET)\n return size\n\n @classmethod\n def file_exists(cls, file_path):\n \"\"\"\n Check if `file_path` exists.\n\n Parameters\n ----------\n file_path : str\n String that represents the path to the file (paths to S3 buckets\n are also acceptable).\n\n Returns\n -------\n bool\n Whether file exists or not.\n \"\"\"\n if isinstance(file_path, str):\n match = S3_ADDRESS_REGEX.search(file_path)\n if match is not None:\n if file_path[0] == \"S\":\n file_path = \"{}{}\".format(\"s\", file_path[1:])\n import s3fs as S3FS\n from botocore.exceptions import NoCredentialsError\n\n s3fs = S3FS.S3FileSystem(anon=False)\n exists = False\n try:\n exists = s3fs.exists(file_path) or exists\n except NoCredentialsError:\n pass\n s3fs = S3FS.S3FileSystem(anon=True)\n return exists or s3fs.exists(file_path)\n return os.path.exists(file_path)\n\n @classmethod\n def deploy(cls, func, args, num_returns): # noqa: PR01\n \"\"\"\n Deploy remote task.\n\n Should be implemented in the task class (for example in the `RayTask`).\n \"\"\"\n raise NotImplementedError(NOT_IMPLEMENTED_MESSAGE)\n\n def parse(self, func, args, num_returns): # noqa: PR01\n \"\"\"\n Parse file's data in the worker process.\n\n Should be implemented in the parser class (for example in the `PandasCSVParser`).\n \"\"\"\n raise NotImplementedError(NOT_IMPLEMENTED_MESSAGE)\n\n @classmethod\n def materialize(cls, obj_id): # noqa: PR01\n \"\"\"\n Get results from worker.\n\n Should be implemented in the task class (for example in the `RayTask`).\n \"\"\"\n raise NotImplementedError(NOT_IMPLEMENTED_MESSAGE)\n\n @classmethod\n def build_partition(cls, partition_ids, row_lengths, column_widths):\n \"\"\"\n Build array with partitions of `cls.frame_partition_cls` class.\n\n Parameters\n ----------\n partition_ids : list\n Array with references to the partitions data.\n row_lengths : list\n Partitions rows lengths.\n column_widths : list\n Number of columns in each partition.\n\n Returns\n -------\n np.ndarray\n array with shape equals to the shape of `partition_ids` and\n filed with partition objects.\n \"\"\"\n return np.array(\n [\n [\n cls.frame_partition_cls(\n partition_ids[i][j],\n length=row_lengths[i],\n width=column_widths[j],\n )\n for j in range(len(partition_ids[i]))\n ]\n for i in range(len(partition_ids))\n ]\n )\n", "path": "modin/core/io/file_dispatcher.py"}]}
| 3,459 | 278 |
gh_patches_debug_39852
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-4927
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IndexError with util.apply_parallel (RGB images)
I don't really understand the origin of this error. I could only reproduce with RGB images, not with 2D arrays. Does `apply_parallel` only work for 2D images?
## Way to reproduce
```python
from skimage import util, segmentation, data
img = data.astronaut()
_ = util.apply_parallel(segmentation.slic, img, chunks=None)
```
Output
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-3-63f67e7b9ccd> in <module>
----> 1 _ = util.apply_parallel(segmentation.slic, img, chunks=None)
~/code/scikit-image/skimage/util/apply_parallel.py in apply_parallel(function, array, chunks, depth, mode, extra_arguments, extra_keywords, compute)
143 res = darr.map_overlap(wrapped_func, depth, boundary=mode)
144 if compute:
--> 145 res = res.compute()
146
147 return res
~/.local/lib/python3.7/site-packages/dask/base.py in compute(self, **kwargs)
164 dask.base.compute
165 """
--> 166 (result,) = compute(self, traverse=False, **kwargs)
167 return result
168
~/.local/lib/python3.7/site-packages/dask/base.py in compute(*args, **kwargs)
443
444 results = schedule(dsk, keys, **kwargs)
--> 445 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
446
447
~/.local/lib/python3.7/site-packages/dask/base.py in <listcomp>(.0)
443
444 results = schedule(dsk, keys, **kwargs)
--> 445 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
446
447
~/.local/lib/python3.7/site-packages/dask/array/core.py in finalize(results)
984 while isinstance(results2, (tuple, list)):
985 if len(results2) > 1:
--> 986 return concatenate3(results)
987 else:
988 results2 = results2[0]
~/.local/lib/python3.7/site-packages/dask/array/core.py in concatenate3(arrays)
4383 if not ndim:
4384 return arrays
-> 4385 chunks = chunks_from_arrays(arrays)
4386 shape = tuple(map(sum, chunks))
4387
~/.local/lib/python3.7/site-packages/dask/array/core.py in chunks_from_arrays(arrays)
4152
4153 while isinstance(arrays, (list, tuple)):
-> 4154 result.append(tuple([shape(deepfirst(a))[dim] for a in arrays]))
4155 arrays = arrays[0]
4156 dim += 1
~/.local/lib/python3.7/site-packages/dask/array/core.py in <listcomp>(.0)
4152
4153 while isinstance(arrays, (list, tuple)):
-> 4154 result.append(tuple([shape(deepfirst(a))[dim] for a in arrays]))
4155 arrays = arrays[0]
4156 dim += 1
IndexError: tuple index out of range
```
## Version information
```python
# Paste the output of the following python commands
from __future__ import print_function
import sys; print(sys.version)
import platform; print(platform.platform())
import skimage; print("scikit-image version: {}".format(skimage.__version__))
import numpy; print("numpy version: {}".format(numpy.__version__))
```
```python
# your output here
3.7.3 (default, Oct 7 2019, 12:56:13)
[GCC 8.3.0]
Linux-5.0.0-38-generic-x86_64-with-Ubuntu-19.04-disco
scikit-image version: 0.18.dev0
numpy version: 1.16.4
```
</issue>
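For context, the shape of the eventual fix (see the listing below and the patch at the end of this record) is to treat the channel axis specially: chunk only the spatial axes, keep the channels in a single chunk, add no overlap depth along them, and let the caller supply an explicit output `dtype`. A small self-contained sketch of that chunk/depth bookkeeping follows; the concrete numbers assume a 512x512x3 image, 4 workers and a scalar depth of 10:

```python
import numpy as np

shape = (512, 512, 3)   # e.g. skimage.data.astronaut().shape
depth = 10

# Split only the spatial axes across workers. Dask reads a plain integer in a
# chunks tuple as the chunk size for that axis, so the trailing 3 keeps the
# whole channel axis in a single chunk.
spatial_chunks = ((256, 256), (256, 256))   # what _get_chunks(shape[:-1], 4) yields here
chunks = spatial_chunks + (shape[-1],)      # ((256, 256), (256, 256), 3)

# Overlap is only meaningful on the spatial axes.
if np.isscalar(depth):
    depth = (depth,) * (len(shape) - 1) + (0,)   # (10, 10, 0)
```

Forwarding an explicit `dtype` to `map_overlap` also matters for functions such as `slic` that return an integer label array, since it spares dask from inferring the output type by calling the function on a tiny dummy input, which the patch notes can be problematic for multichannel data.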
<code>
[start of skimage/util/apply_parallel.py]
1 __all__ = ['apply_parallel']
2
3
4 def _get_chunks(shape, ncpu):
5 """Split the array into equal sized chunks based on the number of
6 available processors. The last chunk in each dimension absorbs the
7 remainder array elements if the number of CPUs does not divide evenly into
8 the number of array elements.
9
10 Examples
11 --------
12 >>> _get_chunks((4, 4), 4)
13 ((2, 2), (2, 2))
14 >>> _get_chunks((4, 4), 2)
15 ((2, 2), (4,))
16 >>> _get_chunks((5, 5), 2)
17 ((2, 3), (5,))
18 >>> _get_chunks((2, 4), 2)
19 ((1, 1), (4,))
20 """
21 # since apply_parallel is in the critical import path, we lazy import
22 # math just when we need it.
23 from math import ceil
24
25 chunks = []
26 nchunks_per_dim = int(ceil(ncpu ** (1./len(shape))))
27
28 used_chunks = 1
29 for i in shape:
30 if used_chunks < ncpu:
31 regular_chunk = i // nchunks_per_dim
32 remainder_chunk = regular_chunk + (i % nchunks_per_dim)
33
34 if regular_chunk == 0:
35 chunk_lens = (remainder_chunk,)
36 else:
37 chunk_lens = ((regular_chunk,) * (nchunks_per_dim - 1) +
38 (remainder_chunk,))
39 else:
40 chunk_lens = (i,)
41
42 chunks.append(chunk_lens)
43 used_chunks *= nchunks_per_dim
44 return tuple(chunks)
45
46
47 def _ensure_dask_array(array, chunks=None):
48 import dask.array as da
49 if isinstance(array, da.Array):
50 return array
51
52 return da.from_array(array, chunks=chunks)
53
54
55 def apply_parallel(function, array, chunks=None, depth=0, mode=None,
56 extra_arguments=(), extra_keywords={}, *, compute=None):
57 """Map a function in parallel across an array.
58
59 Split an array into possibly overlapping chunks of a given depth and
60 boundary type, call the given function in parallel on the chunks, combine
61 the chunks and return the resulting array.
62
63 Parameters
64 ----------
65 function : function
66 Function to be mapped which takes an array as an argument.
67 array : numpy array or dask array
68 Array which the function will be applied to.
69 chunks : int, tuple, or tuple of tuples, optional
70 A single integer is interpreted as the length of one side of a square
71 chunk that should be tiled across the array. One tuple of length
72 ``array.ndim`` represents the shape of a chunk, and it is tiled across
73 the array. A list of tuples of length ``ndim``, where each sub-tuple
74 is a sequence of chunk sizes along the corresponding dimension. If
75 None, the array is broken up into chunks based on the number of
76 available cpus. More information about chunks is in the documentation
77 `here <https://dask.pydata.org/en/latest/array-design.html>`_.
78 depth : int, optional
79 Integer equal to the depth of the added boundary cells. Defaults to
80 zero.
81 mode : {'reflect', 'symmetric', 'periodic', 'wrap', 'nearest', 'edge'}, optional
82 type of external boundary padding.
83 extra_arguments : tuple, optional
84 Tuple of arguments to be passed to the function.
85 extra_keywords : dictionary, optional
86 Dictionary of keyword arguments to be passed to the function.
87 compute : bool, optional
88 If ``True``, compute eagerly returning a NumPy Array.
89 If ``False``, compute lazily returning a Dask Array.
90 If ``None`` (default), compute based on array type provided
91 (eagerly for NumPy Arrays and lazily for Dask Arrays).
92
93 Returns
94 -------
95 out : ndarray or dask Array
96 Returns the result of the applying the operation.
97 Type is dependent on the ``compute`` argument.
98
99 Notes
100 -----
101 Numpy edge modes 'symmetric', 'wrap', and 'edge' are converted to the
102 equivalent ``dask`` boundary modes 'reflect', 'periodic' and 'nearest',
103 respectively.
104 Setting ``compute=False`` can be useful for chaining later operations.
105 For example region selection to preview a result or storing large data
106 to disk instead of loading in memory.
107
108 """
109 try:
110 # Importing dask takes time. since apply_parallel is on the
111 # minimum import path of skimage, we lazy attempt to import dask
112 import dask.array as da
113 except ImportError:
114 raise RuntimeError("Could not import 'dask'. Please install "
115 "using 'pip install dask'")
116
117 if compute is None:
118 compute = not isinstance(array, da.Array)
119
120 if chunks is None:
121 shape = array.shape
122 try:
123 # since apply_parallel is in the critical import path, we lazy
124 # import multiprocessing just when we need it.
125 from multiprocessing import cpu_count
126 ncpu = cpu_count()
127 except NotImplementedError:
128 ncpu = 4
129 chunks = _get_chunks(shape, ncpu)
130
131 if mode == 'wrap':
132 mode = 'periodic'
133 elif mode == 'symmetric':
134 mode = 'reflect'
135 elif mode == 'edge':
136 mode = 'nearest'
137
138 def wrapped_func(arr):
139 return function(arr, *extra_arguments, **extra_keywords)
140
141 darr = _ensure_dask_array(array, chunks=chunks)
142
143 res = darr.map_overlap(wrapped_func, depth, boundary=mode)
144 if compute:
145 res = res.compute()
146
147 return res
148
[end of skimage/util/apply_parallel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/skimage/util/apply_parallel.py b/skimage/util/apply_parallel.py
--- a/skimage/util/apply_parallel.py
+++ b/skimage/util/apply_parallel.py
@@ -1,3 +1,6 @@
+import numpy
+
+
__all__ = ['apply_parallel']
@@ -53,7 +56,8 @@
def apply_parallel(function, array, chunks=None, depth=0, mode=None,
- extra_arguments=(), extra_keywords={}, *, compute=None):
+ extra_arguments=(), extra_keywords={}, *, dtype=None,
+ multichannel=False, compute=None):
"""Map a function in parallel across an array.
Split an array into possibly overlapping chunks of a given depth and
@@ -84,6 +88,25 @@
Tuple of arguments to be passed to the function.
extra_keywords : dictionary, optional
Dictionary of keyword arguments to be passed to the function.
+ dtype : data-type or None, optional
+ The data-type of the `function` output. If None, Dask will attempt to
+ infer this by calling the function on data of shape ``(1,) * ndim``.
+ For functions expecting RGB or multichannel data this may be
+ problematic. In such cases, the user should manually specify this dtype
+ argument instead.
+
+ .. versionadded:: 0.18
+ ``dtype`` was added in 0.18.
+ multichannel : bool, optional
+ If `chunks` is None and `multichannel` is True, this function will keep
+ only a single chunk along the channels axis. When `depth` is specified
+ as a scalar value, that depth will be applied only to the non-channels
+ axes (a depth of 0 will be used along the channels axis). If the user
+ manually specified both `chunks` and a `depth` tuple, then this
+ argument will have no effect.
+
+ .. versionadded:: 0.18
+ ``multichannel`` was added in 0.18.
compute : bool, optional
If ``True``, compute eagerly returning a NumPy Array.
If ``False``, compute lazily returning a Dask Array.
@@ -126,7 +149,10 @@
ncpu = cpu_count()
except NotImplementedError:
ncpu = 4
- chunks = _get_chunks(shape, ncpu)
+ if multichannel:
+ chunks = _get_chunks(shape[:-1], ncpu) + (shape[-1],)
+ else:
+ chunks = _get_chunks(shape, ncpu)
if mode == 'wrap':
mode = 'periodic'
@@ -135,12 +161,16 @@
elif mode == 'edge':
mode = 'nearest'
+ if multichannel and numpy.isscalar(depth):
+ # depth is only used along the non-channel axes
+ depth = (depth,) * (len(array.shape) - 1) + (0,)
+
def wrapped_func(arr):
return function(arr, *extra_arguments, **extra_keywords)
darr = _ensure_dask_array(array, chunks=chunks)
- res = darr.map_overlap(wrapped_func, depth, boundary=mode)
+ res = darr.map_overlap(wrapped_func, depth, boundary=mode, dtype=dtype)
if compute:
res = res.compute()
|
{"golden_diff": "diff --git a/skimage/util/apply_parallel.py b/skimage/util/apply_parallel.py\n--- a/skimage/util/apply_parallel.py\n+++ b/skimage/util/apply_parallel.py\n@@ -1,3 +1,6 @@\n+import numpy\n+\n+\n __all__ = ['apply_parallel']\n \n \n@@ -53,7 +56,8 @@\n \n \n def apply_parallel(function, array, chunks=None, depth=0, mode=None,\n- extra_arguments=(), extra_keywords={}, *, compute=None):\n+ extra_arguments=(), extra_keywords={}, *, dtype=None,\n+ multichannel=False, compute=None):\n \"\"\"Map a function in parallel across an array.\n \n Split an array into possibly overlapping chunks of a given depth and\n@@ -84,6 +88,25 @@\n Tuple of arguments to be passed to the function.\n extra_keywords : dictionary, optional\n Dictionary of keyword arguments to be passed to the function.\n+ dtype : data-type or None, optional\n+ The data-type of the `function` output. If None, Dask will attempt to\n+ infer this by calling the function on data of shape ``(1,) * ndim``.\n+ For functions expecting RGB or multichannel data this may be\n+ problematic. In such cases, the user should manually specify this dtype\n+ argument instead.\n+\n+ .. versionadded:: 0.18\n+ ``dtype`` was added in 0.18.\n+ multichannel : bool, optional\n+ If `chunks` is None and `multichannel` is True, this function will keep\n+ only a single chunk along the channels axis. When `depth` is specified\n+ as a scalar value, that depth will be applied only to the non-channels\n+ axes (a depth of 0 will be used along the channels axis). If the user\n+ manually specified both `chunks` and a `depth` tuple, then this\n+ argument will have no effect.\n+\n+ .. versionadded:: 0.18\n+ ``multichannel`` was added in 0.18.\n compute : bool, optional\n If ``True``, compute eagerly returning a NumPy Array.\n If ``False``, compute lazily returning a Dask Array.\n@@ -126,7 +149,10 @@\n ncpu = cpu_count()\n except NotImplementedError:\n ncpu = 4\n- chunks = _get_chunks(shape, ncpu)\n+ if multichannel:\n+ chunks = _get_chunks(shape[:-1], ncpu) + (shape[-1],)\n+ else:\n+ chunks = _get_chunks(shape, ncpu)\n \n if mode == 'wrap':\n mode = 'periodic'\n@@ -135,12 +161,16 @@\n elif mode == 'edge':\n mode = 'nearest'\n \n+ if multichannel and numpy.isscalar(depth):\n+ # depth is only used along the non-channel axes\n+ depth = (depth,) * (len(array.shape) - 1) + (0,)\n+\n def wrapped_func(arr):\n return function(arr, *extra_arguments, **extra_keywords)\n \n darr = _ensure_dask_array(array, chunks=chunks)\n \n- res = darr.map_overlap(wrapped_func, depth, boundary=mode)\n+ res = darr.map_overlap(wrapped_func, depth, boundary=mode, dtype=dtype)\n if compute:\n res = res.compute()\n", "issue": "IndexError with util.apply_parallel (RGB images)\nI don't really understand the origin of this error. I could only reproduce with RGB images, not with 2D arrays. 
Does `apply_parallel` only work for 2D images?\r\n\r\n## Way to reproduce\r\n```python\r\nfrom skimage import util, segmentation, data\r\nimg = data.astronaut()\r\n_ = util.apply_parallel(segmentation.slic, img, chunks=None)\r\n```\r\n\r\nOutput\r\n```\r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n<ipython-input-3-63f67e7b9ccd> in <module>\r\n----> 1 _ = util.apply_parallel(segmentation.slic, img, chunks=None)\r\n\r\n~/code/scikit-image/skimage/util/apply_parallel.py in apply_parallel(function, array, chunks, depth, mode, extra_arguments, extra_keywords, compute)\r\n 143 res = darr.map_overlap(wrapped_func, depth, boundary=mode)\r\n 144 if compute:\r\n--> 145 res = res.compute()\r\n 146 \r\n 147 return res\r\n\r\n~/.local/lib/python3.7/site-packages/dask/base.py in compute(self, **kwargs)\r\n 164 dask.base.compute\r\n 165 \"\"\"\r\n--> 166 (result,) = compute(self, traverse=False, **kwargs)\r\n 167 return result\r\n 168 \r\n\r\n~/.local/lib/python3.7/site-packages/dask/base.py in compute(*args, **kwargs)\r\n 443 \r\n 444 results = schedule(dsk, keys, **kwargs)\r\n--> 445 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])\r\n 446 \r\n 447 \r\n\r\n~/.local/lib/python3.7/site-packages/dask/base.py in <listcomp>(.0)\r\n 443 \r\n 444 results = schedule(dsk, keys, **kwargs)\r\n--> 445 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])\r\n 446 \r\n 447 \r\n\r\n~/.local/lib/python3.7/site-packages/dask/array/core.py in finalize(results)\r\n 984 while isinstance(results2, (tuple, list)):\r\n 985 if len(results2) > 1:\r\n--> 986 return concatenate3(results)\r\n 987 else:\r\n 988 results2 = results2[0]\r\n\r\n~/.local/lib/python3.7/site-packages/dask/array/core.py in concatenate3(arrays)\r\n 4383 if not ndim:\r\n 4384 return arrays\r\n-> 4385 chunks = chunks_from_arrays(arrays)\r\n 4386 shape = tuple(map(sum, chunks))\r\n 4387 \r\n\r\n~/.local/lib/python3.7/site-packages/dask/array/core.py in chunks_from_arrays(arrays)\r\n 4152 \r\n 4153 while isinstance(arrays, (list, tuple)):\r\n-> 4154 result.append(tuple([shape(deepfirst(a))[dim] for a in arrays]))\r\n 4155 arrays = arrays[0]\r\n 4156 dim += 1\r\n\r\n~/.local/lib/python3.7/site-packages/dask/array/core.py in <listcomp>(.0)\r\n 4152 \r\n 4153 while isinstance(arrays, (list, tuple)):\r\n-> 4154 result.append(tuple([shape(deepfirst(a))[dim] for a in arrays]))\r\n 4155 arrays = arrays[0]\r\n 4156 dim += 1\r\n\r\nIndexError: tuple index out of range\r\n```\r\n\r\n## Version information\r\n```python\r\n# Paste the output of the following python commands\r\nfrom __future__ import print_function\r\nimport sys; print(sys.version)\r\nimport platform; print(platform.platform())\r\nimport skimage; print(\"scikit-image version: {}\".format(skimage.__version__))\r\nimport numpy; print(\"numpy version: {}\".format(numpy.__version__))\r\n```\r\n\r\n```python\r\n# your output here\r\n3.7.3 (default, Oct 7 2019, 12:56:13) \r\n[GCC 8.3.0]\r\nLinux-5.0.0-38-generic-x86_64-with-Ubuntu-19.04-disco\r\nscikit-image version: 0.18.dev0\r\nnumpy version: 1.16.4\r\n```\r\n\n", "before_files": [{"content": "__all__ = ['apply_parallel']\n\n\ndef _get_chunks(shape, ncpu):\n \"\"\"Split the array into equal sized chunks based on the number of\n available processors. 
The last chunk in each dimension absorbs the\n remainder array elements if the number of CPUs does not divide evenly into\n the number of array elements.\n\n Examples\n --------\n >>> _get_chunks((4, 4), 4)\n ((2, 2), (2, 2))\n >>> _get_chunks((4, 4), 2)\n ((2, 2), (4,))\n >>> _get_chunks((5, 5), 2)\n ((2, 3), (5,))\n >>> _get_chunks((2, 4), 2)\n ((1, 1), (4,))\n \"\"\"\n # since apply_parallel is in the critical import path, we lazy import\n # math just when we need it.\n from math import ceil\n\n chunks = []\n nchunks_per_dim = int(ceil(ncpu ** (1./len(shape))))\n\n used_chunks = 1\n for i in shape:\n if used_chunks < ncpu:\n regular_chunk = i // nchunks_per_dim\n remainder_chunk = regular_chunk + (i % nchunks_per_dim)\n\n if regular_chunk == 0:\n chunk_lens = (remainder_chunk,)\n else:\n chunk_lens = ((regular_chunk,) * (nchunks_per_dim - 1) +\n (remainder_chunk,))\n else:\n chunk_lens = (i,)\n\n chunks.append(chunk_lens)\n used_chunks *= nchunks_per_dim\n return tuple(chunks)\n\n\ndef _ensure_dask_array(array, chunks=None):\n import dask.array as da\n if isinstance(array, da.Array):\n return array\n\n return da.from_array(array, chunks=chunks)\n\n\ndef apply_parallel(function, array, chunks=None, depth=0, mode=None,\n extra_arguments=(), extra_keywords={}, *, compute=None):\n \"\"\"Map a function in parallel across an array.\n\n Split an array into possibly overlapping chunks of a given depth and\n boundary type, call the given function in parallel on the chunks, combine\n the chunks and return the resulting array.\n\n Parameters\n ----------\n function : function\n Function to be mapped which takes an array as an argument.\n array : numpy array or dask array\n Array which the function will be applied to.\n chunks : int, tuple, or tuple of tuples, optional\n A single integer is interpreted as the length of one side of a square\n chunk that should be tiled across the array. One tuple of length\n ``array.ndim`` represents the shape of a chunk, and it is tiled across\n the array. A list of tuples of length ``ndim``, where each sub-tuple\n is a sequence of chunk sizes along the corresponding dimension. If\n None, the array is broken up into chunks based on the number of\n available cpus. More information about chunks is in the documentation\n `here <https://dask.pydata.org/en/latest/array-design.html>`_.\n depth : int, optional\n Integer equal to the depth of the added boundary cells. 
Defaults to\n zero.\n mode : {'reflect', 'symmetric', 'periodic', 'wrap', 'nearest', 'edge'}, optional\n type of external boundary padding.\n extra_arguments : tuple, optional\n Tuple of arguments to be passed to the function.\n extra_keywords : dictionary, optional\n Dictionary of keyword arguments to be passed to the function.\n compute : bool, optional\n If ``True``, compute eagerly returning a NumPy Array.\n If ``False``, compute lazily returning a Dask Array.\n If ``None`` (default), compute based on array type provided\n (eagerly for NumPy Arrays and lazily for Dask Arrays).\n\n Returns\n -------\n out : ndarray or dask Array\n Returns the result of the applying the operation.\n Type is dependent on the ``compute`` argument.\n\n Notes\n -----\n Numpy edge modes 'symmetric', 'wrap', and 'edge' are converted to the\n equivalent ``dask`` boundary modes 'reflect', 'periodic' and 'nearest',\n respectively.\n Setting ``compute=False`` can be useful for chaining later operations.\n For example region selection to preview a result or storing large data\n to disk instead of loading in memory.\n\n \"\"\"\n try:\n # Importing dask takes time. since apply_parallel is on the\n # minimum import path of skimage, we lazy attempt to import dask\n import dask.array as da\n except ImportError:\n raise RuntimeError(\"Could not import 'dask'. Please install \"\n \"using 'pip install dask'\")\n\n if compute is None:\n compute = not isinstance(array, da.Array)\n\n if chunks is None:\n shape = array.shape\n try:\n # since apply_parallel is in the critical import path, we lazy\n # import multiprocessing just when we need it.\n from multiprocessing import cpu_count\n ncpu = cpu_count()\n except NotImplementedError:\n ncpu = 4\n chunks = _get_chunks(shape, ncpu)\n\n if mode == 'wrap':\n mode = 'periodic'\n elif mode == 'symmetric':\n mode = 'reflect'\n elif mode == 'edge':\n mode = 'nearest'\n\n def wrapped_func(arr):\n return function(arr, *extra_arguments, **extra_keywords)\n\n darr = _ensure_dask_array(array, chunks=chunks)\n\n res = darr.map_overlap(wrapped_func, depth, boundary=mode)\n if compute:\n res = res.compute()\n\n return res\n", "path": "skimage/util/apply_parallel.py"}]}
| 3,144 | 767 |
gh_patches_debug_2942
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-3004
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
installing latest 5.0.3 on windows machines is still using pywin32==227 but not pywin32==301
[Bump pywin32 from 227 to 301](https://github.com/docker/docker-py/commit/e0d186d754693feb7d27c2352e455c5febb4a5cd) was already merged to bump pywin32 from 227 to 301. However, installing the latest 5.0.3 on Windows machines still results in pywin32==227 being installed.
Most likely `extras_require` needs to be updated:
https://github.com/docker/docker-py/blob/a48a5a9647761406d66e8271f19fab7fa0c5f582/setup.py#L19
Pywin32 upgrade
Fix issue #2902
@aiordache @ulyssessouza, please, accept this PR to fix this annoying bug
Don't pin to pywin32 227
The hard pin to 227 is keeping us from using docker with other projects that depend on a newer version of pywin32.
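To make the requested change concrete, here is an editor's sketch (not part of the original issue) of how the Windows extra in `setup.py` could be relaxed. The exact minimum version is an assumption; the maintainers may choose a different floor.

```python
# Illustrative only: relax the hard pin on pywin32 for the Windows extra.
extras_require = {
    # win32 APIs if on Windows (required for npipe support)
    ':sys_platform == "win32"': 'pywin32>=301',
    # ... remaining extras unchanged ...
}
```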
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import codecs
4 import os
5
6 from setuptools import find_packages
7 from setuptools import setup
8
9 ROOT_DIR = os.path.dirname(__file__)
10 SOURCE_DIR = os.path.join(ROOT_DIR)
11
12 requirements = [
13 'websocket-client >= 0.32.0',
14 'requests >= 2.14.2, != 2.18.0',
15 ]
16
17 extras_require = {
18 # win32 APIs if on Windows (required for npipe support)
19 ':sys_platform == "win32"': 'pywin32==227',
20
21 # If using docker-py over TLS, highly recommend this option is
22 # pip-installed or pinned.
23
24 # TODO: if pip installing both "requests" and "requests[security]", the
25 # extra package from the "security" option are not installed (see
26 # https://github.com/pypa/pip/issues/4391). Once that's fixed, instead of
27 # installing the extra dependencies, install the following instead:
28 # 'requests[security] >= 2.5.2, != 2.11.0, != 2.12.2'
29 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=3.4.7', 'idna>=2.0.0'],
30
31 # Only required when connecting using the ssh:// protocol
32 'ssh': ['paramiko>=2.4.3'],
33
34 }
35
36 version = None
37 exec(open('docker/version.py').read())
38
39 with open('./test-requirements.txt') as test_reqs_txt:
40 test_requirements = [line for line in test_reqs_txt]
41
42
43 long_description = ''
44 with codecs.open('./README.md', encoding='utf-8') as readme_md:
45 long_description = readme_md.read()
46
47 setup(
48 name="docker",
49 version=version,
50 description="A Python library for the Docker Engine API.",
51 long_description=long_description,
52 long_description_content_type='text/markdown',
53 url='https://github.com/docker/docker-py',
54 project_urls={
55 'Documentation': 'https://docker-py.readthedocs.io',
56 'Changelog': 'https://docker-py.readthedocs.io/en/stable/change-log.html', # noqa: E501
57 'Source': 'https://github.com/docker/docker-py',
58 'Tracker': 'https://github.com/docker/docker-py/issues',
59 },
60 packages=find_packages(exclude=["tests.*", "tests"]),
61 install_requires=requirements,
62 tests_require=test_requirements,
63 extras_require=extras_require,
64 python_requires='>=3.6',
65 zip_safe=False,
66 test_suite='tests',
67 classifiers=[
68 'Development Status :: 5 - Production/Stable',
69 'Environment :: Other Environment',
70 'Intended Audience :: Developers',
71 'Operating System :: OS Independent',
72 'Programming Language :: Python',
73 'Programming Language :: Python :: 3',
74 'Programming Language :: Python :: 3.6',
75 'Programming Language :: Python :: 3.7',
76 'Programming Language :: Python :: 3.8',
77 'Programming Language :: Python :: 3.9',
78 'Programming Language :: Python :: 3.10',
79 'Topic :: Software Development',
80 'Topic :: Utilities',
81 'License :: OSI Approved :: Apache Software License',
82 ],
83 maintainer='Ulysses Souza',
84 maintainer_email='[email protected]',
85 )
86
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,7 +16,7 @@
extras_require = {
# win32 APIs if on Windows (required for npipe support)
- ':sys_platform == "win32"': 'pywin32==227',
+ ':sys_platform == "win32"': 'pywin32>=304',
# If using docker-py over TLS, highly recommend this option is
# pip-installed or pinned.
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,7 +16,7 @@\n \n extras_require = {\n # win32 APIs if on Windows (required for npipe support)\n- ':sys_platform == \"win32\"': 'pywin32==227',\n+ ':sys_platform == \"win32\"': 'pywin32>=304',\n \n # If using docker-py over TLS, highly recommend this option is\n # pip-installed or pinned.\n", "issue": "installing latest 5.0.3 on windows machines is still using pywin32==227 but not pywin32==301\n[Bump pywin32 from 227 to 301 ]( https://github.com/docker/docker-py/commit/e0d186d754693feb7d27c2352e455c5febb4a5cd) was already merged in to bump pywin32 from 227 to 301. But, when installing latest 5.0.3 on windows machines is resulting in install of pywin32==227\r\n\r\nMost likely extras_require needs updated\r\nhttps://github.com/docker/docker-py/blob/a48a5a9647761406d66e8271f19fab7fa0c5f582/setup.py#L19\r\n\r\n\r\n\r\n\nPywin32 upgrade\nFix issue #2902\r\n\r\n@aiordache @ulyssessouza, please, accept this PR to fix this annoying bug\r\n\nDon't pin to pywin32 227\nThe hard pin to 227 is keeping us from using docker with other projects that depend on a newer version of pywin32.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport codecs\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'websocket-client >= 0.32.0',\n 'requests >= 2.14.2, != 2.18.0',\n]\n\nextras_require = {\n # win32 APIs if on Windows (required for npipe support)\n ':sys_platform == \"win32\"': 'pywin32==227',\n\n # If using docker-py over TLS, highly recommend this option is\n # pip-installed or pinned.\n\n # TODO: if pip installing both \"requests\" and \"requests[security]\", the\n # extra package from the \"security\" option are not installed (see\n # https://github.com/pypa/pip/issues/4391). 
Once that's fixed, instead of\n # installing the extra dependencies, install the following instead:\n # 'requests[security] >= 2.5.2, != 2.11.0, != 2.12.2'\n 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=3.4.7', 'idna>=2.0.0'],\n\n # Only required when connecting using the ssh:// protocol\n 'ssh': ['paramiko>=2.4.3'],\n\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nlong_description = ''\nwith codecs.open('./README.md', encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nsetup(\n name=\"docker\",\n version=version,\n description=\"A Python library for the Docker Engine API.\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/docker/docker-py',\n project_urls={\n 'Documentation': 'https://docker-py.readthedocs.io',\n 'Changelog': 'https://docker-py.readthedocs.io/en/stable/change-log.html', # noqa: E501\n 'Source': 'https://github.com/docker/docker-py',\n 'Tracker': 'https://github.com/docker/docker-py/issues',\n },\n packages=find_packages(exclude=[\"tests.*\", \"tests\"]),\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n python_requires='>=3.6',\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Topic :: Software Development',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n maintainer='Ulysses Souza',\n maintainer_email='[email protected]',\n)\n", "path": "setup.py"}]}
| 1,732 | 122 |
gh_patches_debug_12597
|
rasdani/github-patches
|
git_diff
|
sublimelsp__LSP-1110
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Double requests for documentHighlight
I've noticed that setting a cursor on some symbol makes the `documentHighlight` underline blink once.
I checked the logs and saw the request being made twice on each cursor movement:
```
:: --> pyls textDocument/documentHighlight(12): {'textDocument': {'uri': 'file:////LSP/plugin/highlights.py'}, 'position': {'character': 8, 'line': 38}}
:: --> pyls textDocument/documentHighlight(13): {'textDocument': {'uri': 'file:////LSP/plugin/highlights.py'}, 'position': {'character': 8, 'line': 38}}
```
Then I added a log statement in the `DocumentHighlightListener` class, inside the `on_selection_modified_async` method, and that listener appears to be triggered twice on each cursor movement. Tested with `print(self.view.file_name())`.
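For illustration only (an editor's sketch, not part of the original report): one way to avoid re-issuing the request when the caret has not actually moved is to compare against the stored point before scheduling the debounced call, roughly:

```python
# Sketch of the listener method; assumes the surrounding class shown below.
def on_selection_modified_async(self) -> None:
    # ... initialization and settings checks as in the plugin ...
    current_point = self.view.sel()[0].begin()
    self._clear_regions()
    # Only schedule a new documentHighlight request if the caret moved.
    if self._stored_point != current_point:
        self._stored_point = current_point
        debounced(self._on_document_highlight, 500,
                  lambda: self._stored_point == current_point,
                  async_thread=True)
```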
</issue>
<code>
[start of plugin/highlights.py]
1 import sublime
2 from .core.protocol import Request, Range, DocumentHighlightKind
3 from .core.registry import LSPViewEventListener
4 from .core.settings import settings
5 from .core.typing import List, Dict, Optional
6 from .core.views import range_to_region, text_document_position_params
7 from .core.windows import debounced
8
9 SUBLIME_WORD_MASK = 515
10 NO_HIGHLIGHT_SCOPES = 'comment, string'
11
12 _kind2name = {
13 DocumentHighlightKind.Unknown: "unknown",
14 DocumentHighlightKind.Text: "text",
15 DocumentHighlightKind.Read: "read",
16 DocumentHighlightKind.Write: "write"
17 }
18
19
20 def remove_highlights(view: sublime.View) -> None:
21 for kind in settings.document_highlight_scopes.keys():
22 view.erase_regions("lsp_highlight_{}".format(kind))
23
24
25 class DocumentHighlightListener(LSPViewEventListener):
26 def __init__(self, view: sublime.View) -> None:
27 super().__init__(view)
28 self._initialized = False
29 self._enabled = False
30 self._stored_point = -1
31
32 @classmethod
33 def is_applicable(cls, view_settings: dict) -> bool:
34 if 'documentHighlight' in settings.disabled_capabilities:
35 return False
36 return cls.has_supported_syntax(view_settings)
37
38 def on_selection_modified_async(self) -> None:
39 if not self._initialized:
40 self._initialize()
41 if self._enabled and settings.document_highlight_style:
42 try:
43 current_point = self.view.sel()[0].begin()
44 except IndexError:
45 return
46 self._stored_point = current_point
47 self._clear_regions()
48 debounced(self._on_document_highlight, 500, lambda: self._stored_point == current_point, async_thread=True)
49
50 def _initialize(self) -> None:
51 self._initialized = True
52 session = self.session("documentHighlightProvider")
53 if session:
54 self._enabled = True
55
56 def _clear_regions(self) -> None:
57 for kind in settings.document_highlight_scopes.keys():
58 self.view.erase_regions("lsp_highlight_{}".format(kind))
59
60 def _on_document_highlight(self) -> None:
61 self._clear_regions()
62 if len(self.view.sel()) != 1:
63 return
64 point = self.view.sel()[0].begin()
65 word_at_sel = self.view.classify(point)
66 if word_at_sel & SUBLIME_WORD_MASK:
67 if self.view.match_selector(point, NO_HIGHLIGHT_SCOPES):
68 return
69 session = self.session("documentHighlightProvider", point)
70 if session:
71 params = text_document_position_params(self.view, point)
72 request = Request.documentHighlight(params)
73 session.send_request(request, self._handle_response)
74
75 def _handle_response(self, response: Optional[List]) -> None:
76 if not response:
77 return
78 kind2regions = {} # type: Dict[str, List[sublime.Region]]
79 for kind in range(0, 4):
80 kind2regions[_kind2name[kind]] = []
81 for highlight in response:
82 r = range_to_region(Range.from_lsp(highlight["range"]), self.view)
83 kind = highlight.get("kind", DocumentHighlightKind.Unknown)
84 if kind is not None:
85 kind2regions[_kind2name[kind]].append(r)
86 if settings.document_highlight_style == "fill":
87 flags = 0
88 elif settings.document_highlight_style == "box":
89 flags = sublime.DRAW_NO_FILL
90 else:
91 flags = sublime.DRAW_NO_FILL | sublime.DRAW_NO_OUTLINE
92 if settings.document_highlight_style == "underline":
93 flags |= sublime.DRAW_SOLID_UNDERLINE
94 elif settings.document_highlight_style == "stippled":
95 flags |= sublime.DRAW_STIPPLED_UNDERLINE
96 elif settings.document_highlight_style == "squiggly":
97 flags |= sublime.DRAW_SQUIGGLY_UNDERLINE
98
99 self._clear_regions()
100 for kind_str, regions in kind2regions.items():
101 if regions:
102 scope = settings.document_highlight_scopes.get(kind_str, None)
103 if scope:
104 self.view.add_regions("lsp_highlight_{}".format(kind_str),
105 regions, scope=scope, flags=flags)
106
[end of plugin/highlights.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugin/highlights.py b/plugin/highlights.py
--- a/plugin/highlights.py
+++ b/plugin/highlights.py
@@ -43,9 +43,11 @@
current_point = self.view.sel()[0].begin()
except IndexError:
return
- self._stored_point = current_point
self._clear_regions()
- debounced(self._on_document_highlight, 500, lambda: self._stored_point == current_point, async_thread=True)
+ if self._stored_point != current_point:
+ self._stored_point = current_point
+ debounced(self._on_document_highlight, 500, lambda: self._stored_point == current_point,
+ async_thread=True)
def _initialize(self) -> None:
self._initialized = True
|
{"golden_diff": "diff --git a/plugin/highlights.py b/plugin/highlights.py\n--- a/plugin/highlights.py\n+++ b/plugin/highlights.py\n@@ -43,9 +43,11 @@\n current_point = self.view.sel()[0].begin()\n except IndexError:\n return\n- self._stored_point = current_point\n self._clear_regions()\n- debounced(self._on_document_highlight, 500, lambda: self._stored_point == current_point, async_thread=True)\n+ if self._stored_point != current_point:\n+ self._stored_point = current_point\n+ debounced(self._on_document_highlight, 500, lambda: self._stored_point == current_point,\n+ async_thread=True)\n \n def _initialize(self) -> None:\n self._initialized = True\n", "issue": "Double requests for documentHighlight\nI've noticed that setting a cursor on some symbol makes the `documentHighlight` underline blink once.\r\n\r\nChecked logs and saw the request being made twice on each cursor movement:\r\n```\r\n:: --> pyls textDocument/documentHighlight(12): {'textDocument': {'uri': 'file:////LSP/plugin/highlights.py'}, 'position': {'character': 8, 'line': 38}}\r\n:: --> pyls textDocument/documentHighlight(13): {'textDocument': {'uri': 'file:////LSP/plugin/highlights.py'}, 'position': {'character': 8, 'line': 38}}\r\n```\r\n\r\nThen added log in `DocumentHighlightListener` class, inside `on_selection_modified_async` method and that listener seems to be triggered twice on cursor movement. Tested with `print(self.view.file_name())`.\n", "before_files": [{"content": "import sublime\nfrom .core.protocol import Request, Range, DocumentHighlightKind\nfrom .core.registry import LSPViewEventListener\nfrom .core.settings import settings\nfrom .core.typing import List, Dict, Optional\nfrom .core.views import range_to_region, text_document_position_params\nfrom .core.windows import debounced\n\nSUBLIME_WORD_MASK = 515\nNO_HIGHLIGHT_SCOPES = 'comment, string'\n\n_kind2name = {\n DocumentHighlightKind.Unknown: \"unknown\",\n DocumentHighlightKind.Text: \"text\",\n DocumentHighlightKind.Read: \"read\",\n DocumentHighlightKind.Write: \"write\"\n}\n\n\ndef remove_highlights(view: sublime.View) -> None:\n for kind in settings.document_highlight_scopes.keys():\n view.erase_regions(\"lsp_highlight_{}\".format(kind))\n\n\nclass DocumentHighlightListener(LSPViewEventListener):\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self._initialized = False\n self._enabled = False\n self._stored_point = -1\n\n @classmethod\n def is_applicable(cls, view_settings: dict) -> bool:\n if 'documentHighlight' in settings.disabled_capabilities:\n return False\n return cls.has_supported_syntax(view_settings)\n\n def on_selection_modified_async(self) -> None:\n if not self._initialized:\n self._initialize()\n if self._enabled and settings.document_highlight_style:\n try:\n current_point = self.view.sel()[0].begin()\n except IndexError:\n return\n self._stored_point = current_point\n self._clear_regions()\n debounced(self._on_document_highlight, 500, lambda: self._stored_point == current_point, async_thread=True)\n\n def _initialize(self) -> None:\n self._initialized = True\n session = self.session(\"documentHighlightProvider\")\n if session:\n self._enabled = True\n\n def _clear_regions(self) -> None:\n for kind in settings.document_highlight_scopes.keys():\n self.view.erase_regions(\"lsp_highlight_{}\".format(kind))\n\n def _on_document_highlight(self) -> None:\n self._clear_regions()\n if len(self.view.sel()) != 1:\n return\n point = self.view.sel()[0].begin()\n word_at_sel = self.view.classify(point)\n if word_at_sel & 
SUBLIME_WORD_MASK:\n if self.view.match_selector(point, NO_HIGHLIGHT_SCOPES):\n return\n session = self.session(\"documentHighlightProvider\", point)\n if session:\n params = text_document_position_params(self.view, point)\n request = Request.documentHighlight(params)\n session.send_request(request, self._handle_response)\n\n def _handle_response(self, response: Optional[List]) -> None:\n if not response:\n return\n kind2regions = {} # type: Dict[str, List[sublime.Region]]\n for kind in range(0, 4):\n kind2regions[_kind2name[kind]] = []\n for highlight in response:\n r = range_to_region(Range.from_lsp(highlight[\"range\"]), self.view)\n kind = highlight.get(\"kind\", DocumentHighlightKind.Unknown)\n if kind is not None:\n kind2regions[_kind2name[kind]].append(r)\n if settings.document_highlight_style == \"fill\":\n flags = 0\n elif settings.document_highlight_style == \"box\":\n flags = sublime.DRAW_NO_FILL\n else:\n flags = sublime.DRAW_NO_FILL | sublime.DRAW_NO_OUTLINE\n if settings.document_highlight_style == \"underline\":\n flags |= sublime.DRAW_SOLID_UNDERLINE\n elif settings.document_highlight_style == \"stippled\":\n flags |= sublime.DRAW_STIPPLED_UNDERLINE\n elif settings.document_highlight_style == \"squiggly\":\n flags |= sublime.DRAW_SQUIGGLY_UNDERLINE\n\n self._clear_regions()\n for kind_str, regions in kind2regions.items():\n if regions:\n scope = settings.document_highlight_scopes.get(kind_str, None)\n if scope:\n self.view.add_regions(\"lsp_highlight_{}\".format(kind_str),\n regions, scope=scope, flags=flags)\n", "path": "plugin/highlights.py"}]}
| 1,818 | 174 |
gh_patches_debug_10328
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-1997
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NH failing since at least 2017-11-16
NH has been failing since 2017-11-16
Based on automated runs it appears that NH has not run successfully in 2 days (2017-11-16).
```
23:17:39 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Yvonne Thomas"}
23:17:39 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "David Lundgren"}
23:17:39 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Neal Kurk"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Timothy Smith"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Thomas Buco"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Sandra Keans"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Jane Beaulieu"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "David Huot"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Franklin Tilton"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Kermit Williams"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Jacqueline Cali-Pitts"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Michael McCarthy"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Martin Jack"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Martha Hennessey"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Harold French"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Frank Sapareto"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Gary Daniels"}
23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Suzanne Smith"}
people: {}
nh (scrape, import)
committees: {}
no pupa_settings on path, using defaults
bills: {}
import jurisdictions...
import organizations...
import people...
import posts...
import memberships...
subcommands[args.subcommand].handle(args, other)
File "/opt/openstates/venv-pupa//bin/pupa", line 11, in <module>
load_entry_point('pupa', 'console_scripts', 'pupa')()
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py", line 67, in main
Traceback (most recent call last):
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 307, in do_handle
report['import'] = self.do_import(juris, args)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 211, in do_import
report.update(membership_importer.import_directory(datadir))
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py", line 190, in import_directory
return self.import_data(json_stream())
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py", line 227, in import_data
obj_id, what = self.import_item(data)
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py", line 247, in import_item
data = self.prepare_for_db(data)
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/memberships.py", line 50, in prepare_for_db
data['post_id'] = self.post_importer.resolve_json_id(data['post_id'])
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py", line 165, in resolve_json_id
raise UnresolvedIdError(errmsg)
pupa.exceptions.UnresolvedIdError: cannot resolve pseudo id to Post: ~{"label": "13", "organization__classification": "lower"}
```
Visit http://bobsled.openstates.org for more info.
</issue>
<code>
[start of openstates/nh/people.py]
1 import re
2
3 from pupa.scrape import Person, Scraper
4 from openstates.utils import LXMLMixin
5
6
7 class NHPersonScraper(Scraper, LXMLMixin):
8 members_url = 'http://www.gencourt.state.nh.us/downloads/Members.txt'
9 lookup_url = 'http://www.gencourt.state.nh.us/house/members/memberlookup.aspx'
10 house_profile_url = 'http://www.gencourt.state.nh.us/house/members/member.aspx?member={}'
11 senate_profile_url = 'http://www.gencourt.state.nh.us/Senate/members/webpages/district{}.aspx'
12
13 chamber_map = {'H': 'lower', 'S': 'upper'}
14 party_map = {
15 'D': 'Democratic',
16 'R': 'Republican',
17 'I': 'Independent',
18 'L': 'Libertarian',
19 }
20
21 def _get_photo(self, url, chamber):
22 """Attempts to find a portrait in the given legislator profile."""
23 doc = self.lxmlize(url)
24
25 if chamber == 'upper':
26 src = doc.xpath('//div[@id="page_content"]//img[contains(@src, '
27 '"images/senators") or contains(@src, "Senator")]/@src')
28 elif chamber == 'lower':
29 src = doc.xpath('//img[contains(@src, "images/memberpics")]/@src')
30
31 if src and 'nophoto' not in src[0]:
32 photo_url = src[0]
33 else:
34 photo_url = ''
35
36 return photo_url
37
38 def _parse_person(self, row, chamber, seat_map):
39 # Capture legislator vitals.
40 first_name = row['FirstName']
41 middle_name = row['MiddleName']
42 last_name = row['LastName']
43 full_name = '{} {} {}'.format(first_name, middle_name, last_name)
44 full_name = re.sub(r'[\s]{2,}', ' ', full_name)
45
46 if chamber == 'lower':
47 district = '{} {}'.format(row['County'], int(row['District'])).strip()
48 else:
49 district = str(int(row['District'])).strip()
50
51 party = self.party_map[row['party'].upper()]
52 email = row['WorkEmail']
53
54 if district == '0':
55 self.warning('Skipping {}, district is set to 0'.format(full_name))
56 return
57
58 # Temporary fix for Kari Lerner
59 if district == 'Rockingham 0' and last_name == 'Lerner':
60 district = 'Rockingham 4'
61
62 person = Person(primary_org=chamber,
63 district=district,
64 name=full_name,
65 party=party)
66
67 extras = {
68 'first_name': first_name,
69 'middle_name': middle_name,
70 'last_name': last_name
71 }
72
73 person.extras = extras
74 if email:
75 person.add_contact_detail(type='email', value=email, note='District Office')
76
77 # Capture legislator office contact information.
78 district_address = '{}\n{}\n{}, {} {}'.format(row['Address'],
79 row['address2'],
80 row['city'], row['State'],
81 row['Zipcode']).strip()
82
83 phone = row['Phone'].strip()
84 if not phone:
85 phone = None
86
87 if district_address:
88 person.add_contact_detail(type='address', value=district_address, note='Home Office')
89 if phone:
90 person.add_contact_detail(type='voice', value=phone, note='Home Office')
91
92 # Retrieve legislator portrait.
93 profile_url = None
94 if chamber == 'upper':
95 profile_url = self.senate_profile_url.format(row['District'])
96 elif chamber == 'lower':
97 try:
98 seat_number = seat_map[row['seatno']]
99 profile_url = self.house_profile_url.format(seat_number)
100 except KeyError:
101 pass
102
103 if profile_url:
104 person.image = self._get_photo(profile_url, chamber)
105 person.add_source(profile_url)
106
107 return person
108
109 def _parse_members_txt(self):
110 lines = self.get(self.members_url).text.splitlines()
111
112 header = lines[0].split('\t')
113
114 for line in lines[1:]:
115 yield dict(zip(header, line.split('\t')))
116
117 def _parse_seat_map(self):
118 """Get mapping between seat numbers and legislator identifiers."""
119 seat_map = {}
120 page = self.lxmlize(self.lookup_url)
121 options = page.xpath('//select[@id="member"]/option')
122 for option in options:
123 member_url = self.house_profile_url.format(option.attrib['value'])
124 member_page = self.lxmlize(member_url)
125 table = member_page.xpath('//table[@id="Table1"]')
126 if table:
127 res = re.search(r'seat #:(\d+)', table[0].text_content(), re.IGNORECASE)
128 if res:
129 seat_map[res.groups()[0]] = option.attrib['value']
130 return seat_map
131
132 def scrape(self, chamber=None):
133 chambers = [chamber] if chamber is not None else ['upper', 'lower']
134 seat_map = self._parse_seat_map()
135 for chamber in chambers:
136 for row in self._parse_members_txt():
137 print(row['electedStatus'])
138 if (self.chamber_map[row['LegislativeBody']] == chamber and
139 row['electedStatus'] != 'Former'):
140 person = self._parse_person(row, chamber, seat_map)
141
142 # allow for skipping
143 if not person:
144 continue
145
146 person.add_source(self.members_url)
147 person.add_link(self.members_url)
148 yield person
149
[end of openstates/nh/people.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openstates/nh/people.py b/openstates/nh/people.py
--- a/openstates/nh/people.py
+++ b/openstates/nh/people.py
@@ -59,6 +59,13 @@
if district == 'Rockingham 0' and last_name == 'Lerner':
district = 'Rockingham 4'
+ # Temporary fix for Casey Conley
+ if last_name == 'Conley':
+ if district == '13':
+ district = 'Strafford 13'
+ elif district == 'Strafford 13':
+ self.info('"Temporary fix for Casey Conley" can be removed')
+
person = Person(primary_org=chamber,
district=district,
name=full_name,
|
{"golden_diff": "diff --git a/openstates/nh/people.py b/openstates/nh/people.py\n--- a/openstates/nh/people.py\n+++ b/openstates/nh/people.py\n@@ -59,6 +59,13 @@\n if district == 'Rockingham 0' and last_name == 'Lerner':\n district = 'Rockingham 4'\n \n+ # Temporary fix for Casey Conley\n+ if last_name == 'Conley':\n+ if district == '13':\n+ district = 'Strafford 13'\n+ elif district == 'Strafford 13':\n+ self.info('\"Temporary fix for Casey Conley\" can be removed')\n+\n person = Person(primary_org=chamber,\n district=district,\n name=full_name,\n", "issue": "NH failing since at least 2017-11-16\nNH has been failing since 2017-11-16\n\nBased on automated runs it appears that NH has not run successfully in 2 days (2017-11-16).\n\n\n```\n 23:17:39 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Yvonne Thomas\"}\n23:17:39 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"David Lundgren\"}\n23:17:39 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Neal Kurk\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Timothy Smith\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Thomas Buco\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Sandra Keans\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Jane Beaulieu\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"David Huot\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Franklin Tilton\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Kermit Williams\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Jacqueline Cali-Pitts\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Michael McCarthy\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Martin Jack\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Martha Hennessey\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Harold French\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Frank Sapareto\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Gary Daniels\"}\n23:17:40 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Suzanne Smith\"}\n people: {}\nnh (scrape, import)\n committees: {}\nno pupa_settings on path, using defaults\n bills: {}\nimport jurisdictions...\nimport organizations...\nimport people...\nimport posts...\nimport memberships...\n subcommands[args.subcommand].handle(args, other)\n File \"/opt/openstates/venv-pupa//bin/pupa\", line 11, in <module>\n load_entry_point('pupa', 'console_scripts', 'pupa')()\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py\", line 67, in main\nTraceback (most recent call last):\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 260, in handle\n return self.do_handle(args, other, juris)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 307, in do_handle\n report['import'] = self.do_import(juris, args)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 211, in do_import\n report.update(membership_importer.import_directory(datadir))\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py\", line 190, in import_directory\n return self.import_data(json_stream())\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py\", line 227, in import_data\n 
obj_id, what = self.import_item(data)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py\", line 247, in import_item\n data = self.prepare_for_db(data)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/memberships.py\", line 50, in prepare_for_db\n data['post_id'] = self.post_importer.resolve_json_id(data['post_id'])\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py\", line 165, in resolve_json_id\n raise UnresolvedIdError(errmsg)\npupa.exceptions.UnresolvedIdError: cannot resolve pseudo id to Post: ~{\"label\": \"13\", \"organization__classification\": \"lower\"}\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "import re\n\nfrom pupa.scrape import Person, Scraper\nfrom openstates.utils import LXMLMixin\n\n\nclass NHPersonScraper(Scraper, LXMLMixin):\n members_url = 'http://www.gencourt.state.nh.us/downloads/Members.txt'\n lookup_url = 'http://www.gencourt.state.nh.us/house/members/memberlookup.aspx'\n house_profile_url = 'http://www.gencourt.state.nh.us/house/members/member.aspx?member={}'\n senate_profile_url = 'http://www.gencourt.state.nh.us/Senate/members/webpages/district{}.aspx'\n\n chamber_map = {'H': 'lower', 'S': 'upper'}\n party_map = {\n 'D': 'Democratic',\n 'R': 'Republican',\n 'I': 'Independent',\n 'L': 'Libertarian',\n }\n\n def _get_photo(self, url, chamber):\n \"\"\"Attempts to find a portrait in the given legislator profile.\"\"\"\n doc = self.lxmlize(url)\n\n if chamber == 'upper':\n src = doc.xpath('//div[@id=\"page_content\"]//img[contains(@src, '\n '\"images/senators\") or contains(@src, \"Senator\")]/@src')\n elif chamber == 'lower':\n src = doc.xpath('//img[contains(@src, \"images/memberpics\")]/@src')\n\n if src and 'nophoto' not in src[0]:\n photo_url = src[0]\n else:\n photo_url = ''\n\n return photo_url\n\n def _parse_person(self, row, chamber, seat_map):\n # Capture legislator vitals.\n first_name = row['FirstName']\n middle_name = row['MiddleName']\n last_name = row['LastName']\n full_name = '{} {} {}'.format(first_name, middle_name, last_name)\n full_name = re.sub(r'[\\s]{2,}', ' ', full_name)\n\n if chamber == 'lower':\n district = '{} {}'.format(row['County'], int(row['District'])).strip()\n else:\n district = str(int(row['District'])).strip()\n\n party = self.party_map[row['party'].upper()]\n email = row['WorkEmail']\n\n if district == '0':\n self.warning('Skipping {}, district is set to 0'.format(full_name))\n return\n\n # Temporary fix for Kari Lerner\n if district == 'Rockingham 0' and last_name == 'Lerner':\n district = 'Rockingham 4'\n\n person = Person(primary_org=chamber,\n district=district,\n name=full_name,\n party=party)\n\n extras = {\n 'first_name': first_name,\n 'middle_name': middle_name,\n 'last_name': last_name\n }\n\n person.extras = extras\n if email:\n person.add_contact_detail(type='email', value=email, note='District Office')\n\n # Capture legislator office contact information.\n district_address = '{}\\n{}\\n{}, {} {}'.format(row['Address'],\n row['address2'],\n row['city'], row['State'],\n row['Zipcode']).strip()\n\n phone = row['Phone'].strip()\n if not phone:\n phone = None\n\n if district_address:\n person.add_contact_detail(type='address', value=district_address, note='Home Office')\n if phone:\n person.add_contact_detail(type='voice', value=phone, note='Home Office')\n\n # Retrieve legislator portrait.\n profile_url = None\n if chamber == 'upper':\n profile_url = self.senate_profile_url.format(row['District'])\n elif chamber == 'lower':\n 
try:\n seat_number = seat_map[row['seatno']]\n profile_url = self.house_profile_url.format(seat_number)\n except KeyError:\n pass\n\n if profile_url:\n person.image = self._get_photo(profile_url, chamber)\n person.add_source(profile_url)\n\n return person\n\n def _parse_members_txt(self):\n lines = self.get(self.members_url).text.splitlines()\n\n header = lines[0].split('\\t')\n\n for line in lines[1:]:\n yield dict(zip(header, line.split('\\t')))\n\n def _parse_seat_map(self):\n \"\"\"Get mapping between seat numbers and legislator identifiers.\"\"\"\n seat_map = {}\n page = self.lxmlize(self.lookup_url)\n options = page.xpath('//select[@id=\"member\"]/option')\n for option in options:\n member_url = self.house_profile_url.format(option.attrib['value'])\n member_page = self.lxmlize(member_url)\n table = member_page.xpath('//table[@id=\"Table1\"]')\n if table:\n res = re.search(r'seat #:(\\d+)', table[0].text_content(), re.IGNORECASE)\n if res:\n seat_map[res.groups()[0]] = option.attrib['value']\n return seat_map\n\n def scrape(self, chamber=None):\n chambers = [chamber] if chamber is not None else ['upper', 'lower']\n seat_map = self._parse_seat_map()\n for chamber in chambers:\n for row in self._parse_members_txt():\n print(row['electedStatus'])\n if (self.chamber_map[row['LegislativeBody']] == chamber and\n row['electedStatus'] != 'Former'):\n person = self._parse_person(row, chamber, seat_map)\n\n # allow for skipping\n if not person:\n continue\n\n person.add_source(self.members_url)\n person.add_link(self.members_url)\n yield person\n", "path": "openstates/nh/people.py"}]}
| 3,192 | 169 |
gh_patches_debug_2147
|
rasdani/github-patches
|
git_diff
|
spacetelescope__jwql-421
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add README to style_guide directory
We are starting to have a range of helpful documents in our `jwql/style_guide` directory, such as the general style guide. This is great!
I am thinking it would now be helpful to include a `README.md` file in there, so that any prospective user who looks there is met with some information about what resources are available.
</issue>
<code>
[start of jwql/utils/monitor_template.py]
1 #! /usr/bin/env python
2
3 """
4 This module is intended to be a template to aid in creating new
5 monitoring scripts and to demonstrate how to format them to fully
6 utilize the ``jwql`` framework.
7
8 Each monitoring script must be executable from the command line (i.e.
9 have a ``if '__name__' == '__main__' section), as well as have a "main"
10 function that calls all other functions, methods, or modules (i.e.
11 the entirety of the code is executed within the scope of the main
12 function), as shown in this example.
13
14 Users may utilize the ``jwql`` framework functions for logging,
15 setting permissions, parsing filenames, etc. (See related ``import``s).
16
17 Authors
18 -------
19
20 - Catherine Martlin
21 - Matthew Bourque
22
23 Use
24 ---
25
26 This module can be executed from the command line:
27 ::
28
29 python monitor_template.py
30
31 Alternatively, it can be called from a python environment via the
32 following import statements:
33 ::
34
35 from monitor_template import main_monitor_function
36 from monitor_template import secondary_function
37
38 Dependencies
39 ------------
40
41 The user must have a configuration file named ``config.json``
42 placed in the ``utils`` directory.
43
44 Notes
45 -----
46
47 Any monitoring script written for ``jwql`` must adhere to the
48 ``jwql`` style guide located at:
49 https://github.com/spacetelescope/jwql/blob/master/style_guide/style_guide.md
50 """
51
52 import os
53 import logging
54
55 from astroquery.mast import Mast
56 from jwst import datamodels
57 from bokeh.charts import Donut
58 from bokeh.embed import components
59
60 # Functions for logging
61 from jwql.logging.logging_functions import configure_logging
62 from jwql.logging.logging_functions import log_info
63 from jwql.logging.logging_functions import log_fail
64
65 # Function for setting permissions of files/directories
66 from jwql.permissions.permissions import set_permissions
67
68 # Function for parsing filenames
69 from jwql.utils.utils import filename_parser
70
71 # Objects for hard-coded information
72 from jwql.utils.utils import get_config
73 from jwql.utils.constants import JWST_DATAPRODUCTS, JWST_INSTRUMENT_NAMES
74
75
76 @log_fail
77 @log_info
78 def monitor_template_main():
79 """ The main function of the ``monitor_template`` module."""
80
81 # Example of logging
82 my_variable = 'foo'
83 logging.info('Some useful information: {}'.format(my_variable))
84
85 # Example of querying for a dataset via MAST API
86 service = "Mast.Jwst.Filtered.Niriss"
87 params = {"columns": "filename",
88 "filters": [{"paramName": "filter",
89 "values": ['F430M']}]}
90 response = Mast.service_request_async(service, params)
91 result = response[0].json()['data']
92 filename_of_interest = result[0]['filename'] # jw00304002001_02102_00001_nis_uncal.fits
93
94 # Example of parsing a filename
95 filename_dict = filename_parser(filename_of_interest)
96 # Contents of filename_dict:
97 # {'program_id': '00304',
98 # 'observation': '002',
99 # 'visit': '001',
100 # 'visit_group': '02',
101 # 'parallel_seq_id': '1',
102 # 'activity': '02',
103 # 'exposure_id': '00001',
104 # 'detector': 'nis',
105 # 'suffix': 'uncal'}
106
107 # Example of locating a dataset in the filesystem
108 filesystem = get_config()['filesystem']
109 dataset = os.path.join(filesystem, 'jw{}'.format(filename_dict['program_id']),
110 filename_of_interest)
111
112 # Example of reading in dataset using jwst.datamodels
113 im = datamodels.open(dataset)
114 # Now have access to:
115 # im.data # Data array
116 # im.err # ERR array
117 # im.meta # Metadata such as header keywords
118
119 # Example of saving a file and setting permissions
120 im.save('some_filename.fits')
121 set_permissions('some_filename.fits')
122
123 # Example of creating and exporting a Bokeh plot
124 plt = Donut(im.data, plot_width=600, plot_height=600)
125 plt.sizing_mode = 'stretch_both' # Necessary for responsive sizing on web app
126 script, div = components(plt)
127
128 plot_output_dir = get_config()['outputs']
129 div_outfile = os.path.join(plot_output_dir, 'monitor_name',
130 filename_of_interest + "_component.html")
131 script_outfile = os.path.join(plot_output_dir, 'monitor_name',
132 filename_of_interest + "_component.js")
133
134 for outfile, component in zip([div_outfile, script_outfile], [div, script]):
135 with open(outfile, 'w') as f:
136 f.write(component)
137 f.close()
138 set_permissions(outfile)
139
140 # Perform any other necessary code
141 well_named_variable = "Function does something."
142 result_of_second_function = second_function(well_named_variable)
143
144
145 def second_function(input_value):
146 """ This is your axiliary function; you may have many of these.
147
148 Parameters
149 ----------
150 input_value : str
151 Some value to modify in the function.
152
153 Returns
154 -------
155 useful_result : str
156 The result of modifying the input value.
157 """
158
159 # Begin logging:
160 logging.info(" ")
161 logging.info("The auxiliary function has started running.")
162
163 # Example function:
164 useful_result = input_value + " The other function did something, too."
165
166 logging.info("The auxiliary function is returning: ")
167 logging.info(useful_result)
168 logging.info(" ")
169
170 return useful_result
171
172
173 if __name__ == '__main__':
174
175 # Configure logging
176 module = os.path.basename(__file__).strip('.py')
177 configure_logging(module)
178
179 # Call the main function
180 monitor_template_main()
181
[end of jwql/utils/monitor_template.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jwql/utils/monitor_template.py b/jwql/utils/monitor_template.py
--- a/jwql/utils/monitor_template.py
+++ b/jwql/utils/monitor_template.py
@@ -46,7 +46,7 @@
Any monitoring script written for ``jwql`` must adhere to the
``jwql`` style guide located at:
- https://github.com/spacetelescope/jwql/blob/master/style_guide/style_guide.md
+ https://github.com/spacetelescope/jwql/blob/master/style_guide/README.md
"""
import os
|
{"golden_diff": "diff --git a/jwql/utils/monitor_template.py b/jwql/utils/monitor_template.py\n--- a/jwql/utils/monitor_template.py\n+++ b/jwql/utils/monitor_template.py\n@@ -46,7 +46,7 @@\n \n Any monitoring script written for ``jwql`` must adhere to the\n ``jwql`` style guide located at:\n- https://github.com/spacetelescope/jwql/blob/master/style_guide/style_guide.md\n+ https://github.com/spacetelescope/jwql/blob/master/style_guide/README.md\n \"\"\"\n \n import os\n", "issue": "Add README to style_guide directory\nWe are starting to have a range of helpful documents in our `jwql/style_guide` directory - the general style guide. This is great!\r\n\r\nI am thinking it would now be helpful to include a `README.md` file in there, so that any prospective user who looks there is met with some information about what resources are available.\n", "before_files": [{"content": "#! /usr/bin/env python\n\n\"\"\"\nThis module is intended to be a template to aid in creating new\nmonitoring scripts and to demonstrate how to format them to fully\nutilize the ``jwql`` framework.\n\nEach monitoring script must be executable from the command line (i.e.\nhave a ``if '__name__' == '__main__' section), as well as have a \"main\"\nfunction that calls all other functions, methods, or modules (i.e.\nthe entirety of the code is executed within the scope of the main\nfunction), as shown in this example.\n\nUsers may utilize the ``jwql`` framework functions for logging,\nsetting permissions, parsing filenames, etc. (See related ``import``s).\n\nAuthors\n-------\n\n - Catherine Martlin\n - Matthew Bourque\n\nUse\n---\n\n This module can be executed from the command line:\n ::\n\n python monitor_template.py\n\n Alternatively, it can be called from a python environment via the\n following import statements:\n ::\n\n from monitor_template import main_monitor_function\n from monitor_template import secondary_function\n\nDependencies\n------------\n\n The user must have a configuration file named ``config.json``\n placed in the ``utils`` directory.\n\nNotes\n-----\n\n Any monitoring script written for ``jwql`` must adhere to the\n ``jwql`` style guide located at:\n https://github.com/spacetelescope/jwql/blob/master/style_guide/style_guide.md\n\"\"\"\n\nimport os\nimport logging\n\nfrom astroquery.mast import Mast\nfrom jwst import datamodels\nfrom bokeh.charts import Donut\nfrom bokeh.embed import components\n\n# Functions for logging\nfrom jwql.logging.logging_functions import configure_logging\nfrom jwql.logging.logging_functions import log_info\nfrom jwql.logging.logging_functions import log_fail\n\n# Function for setting permissions of files/directories\nfrom jwql.permissions.permissions import set_permissions\n\n# Function for parsing filenames\nfrom jwql.utils.utils import filename_parser\n\n# Objects for hard-coded information\nfrom jwql.utils.utils import get_config\nfrom jwql.utils.constants import JWST_DATAPRODUCTS, JWST_INSTRUMENT_NAMES\n\n\n@log_fail\n@log_info\ndef monitor_template_main():\n \"\"\" The main function of the ``monitor_template`` module.\"\"\"\n\n # Example of logging\n my_variable = 'foo'\n logging.info('Some useful information: {}'.format(my_variable))\n\n # Example of querying for a dataset via MAST API\n service = \"Mast.Jwst.Filtered.Niriss\"\n params = {\"columns\": \"filename\",\n \"filters\": [{\"paramName\": \"filter\",\n \"values\": ['F430M']}]}\n response = Mast.service_request_async(service, params)\n result = response[0].json()['data']\n filename_of_interest = result[0]['filename'] 
# jw00304002001_02102_00001_nis_uncal.fits\n\n # Example of parsing a filename\n filename_dict = filename_parser(filename_of_interest)\n # Contents of filename_dict:\n # {'program_id': '00304',\n # 'observation': '002',\n # 'visit': '001',\n # 'visit_group': '02',\n # 'parallel_seq_id': '1',\n # 'activity': '02',\n # 'exposure_id': '00001',\n # 'detector': 'nis',\n # 'suffix': 'uncal'}\n\n # Example of locating a dataset in the filesystem\n filesystem = get_config()['filesystem']\n dataset = os.path.join(filesystem, 'jw{}'.format(filename_dict['program_id']),\n filename_of_interest)\n\n # Example of reading in dataset using jwst.datamodels\n im = datamodels.open(dataset)\n # Now have access to:\n # im.data # Data array\n # im.err # ERR array\n # im.meta # Metadata such as header keywords\n\n # Example of saving a file and setting permissions\n im.save('some_filename.fits')\n set_permissions('some_filename.fits')\n\n # Example of creating and exporting a Bokeh plot\n plt = Donut(im.data, plot_width=600, plot_height=600)\n plt.sizing_mode = 'stretch_both' # Necessary for responsive sizing on web app\n script, div = components(plt)\n\n plot_output_dir = get_config()['outputs']\n div_outfile = os.path.join(plot_output_dir, 'monitor_name',\n filename_of_interest + \"_component.html\")\n script_outfile = os.path.join(plot_output_dir, 'monitor_name',\n filename_of_interest + \"_component.js\")\n\n for outfile, component in zip([div_outfile, script_outfile], [div, script]):\n with open(outfile, 'w') as f:\n f.write(component)\n f.close()\n set_permissions(outfile)\n\n # Perform any other necessary code\n well_named_variable = \"Function does something.\"\n result_of_second_function = second_function(well_named_variable)\n\n\ndef second_function(input_value):\n \"\"\" This is your axiliary function; you may have many of these.\n\n Parameters\n ----------\n input_value : str\n Some value to modify in the function.\n\n Returns\n -------\n useful_result : str\n The result of modifying the input value.\n \"\"\"\n\n # Begin logging:\n logging.info(\" \")\n logging.info(\"The auxiliary function has started running.\")\n\n # Example function:\n useful_result = input_value + \" The other function did something, too.\"\n\n logging.info(\"The auxiliary function is returning: \")\n logging.info(useful_result)\n logging.info(\" \")\n\n return useful_result\n\n\nif __name__ == '__main__':\n\n # Configure logging\n module = os.path.basename(__file__).strip('.py')\n configure_logging(module)\n\n # Call the main function\n monitor_template_main()\n", "path": "jwql/utils/monitor_template.py"}]}
| 2,366 | 131 |
gh_patches_debug_15724
|
rasdani/github-patches
|
git_diff
|
pyscript__pyscript-1902
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
js_modules not behaving like in Polyscript
### Checklist
- [X] I added a descriptive title
- [X] I searched for other issues and couldn't find a solution or duplication
- [X] I already searched in Google and didn't find any good information or help
### What happened?
Apparently `from pyscript.js_modules import Thing` doesn't work in *PyScript* the same way it does on *Polyscript*.
The main difference is that in *PyScript* that's exported within the Python code, as opposite of being registered as JS module like it is for *Polyscript* where *js_modules* use `registerJSModule` utility instead.
### What browsers are you seeing the problem on? (if applicable)
_No response_
### Console info
_No response_
### Additional Context
_No response_
</issue>
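A note on the technique involved: the fix shown later in this record works by planting proxy objects in `sys.modules` so that `from pyscript.js_modules import thing` resolves against the JS-modules namespace. The sketch below illustrates that mechanism in isolation. It is not PyScript's actual code; `FakeJSModules`, `register_js_modules` and the `leaflet` entry are stand-ins invented for the example, because the real `polyscript` and `js` imports only exist inside a browser runtime.

```python
import sys
import types


class FakeJSModules(types.SimpleNamespace):
    """Stand-in for Polyscript's js_modules namespace (illustration only)."""


class JSModuleProxy(types.ModuleType):
    """Forwards attribute access to one entry of the JS-modules namespace."""

    def __init__(self, registry, name):
        super().__init__(f"pyscript.js_modules.{name}")
        self._registry = registry
        self._js_name = name

    def __getattr__(self, field):
        # Underscore/dunder probes from the import machinery should fail fast.
        if field.startswith("_"):
            raise AttributeError(field)
        return getattr(getattr(self._registry, self._js_name), field)


def register_js_modules(registry):
    """Expose each entry of `registry` as an importable pyscript.js_modules.* name."""
    sys.modules["pyscript.js_modules"] = registry
    for name in vars(registry):
        sys.modules[f"pyscript.js_modules.{name}"] = JSModuleProxy(registry, name)


# Pretend the page loaded a JS module called "leaflet" exposing a map() function.
fake = FakeJSModules(leaflet=types.SimpleNamespace(map=lambda el: f"map({el})"))
register_js_modules(fake)

from pyscript.js_modules import leaflet  # noqa: E402
from pyscript.js_modules.leaflet import map as make_map  # noqa: E402

print(leaflet.map("#app"))   # -> map(#app)
print(make_map("#app"))      # -> map(#app)
```

Because the entries are looked up in `sys.modules` before any package import is attempted, the from-imports above succeed even though no `pyscript` package is installed, which is the same property the patch in this record relies on.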
<code>
[start of pyscript.core/src/stdlib/pyscript/magic_js.py]
1 import js as globalThis
2 from polyscript import js_modules
3 from pyscript.util import NotSupported
4
5 RUNNING_IN_WORKER = not hasattr(globalThis, "document")
6
7 if RUNNING_IN_WORKER:
8 import js
9 import polyscript
10
11 PyWorker = NotSupported(
12 "pyscript.PyWorker",
13 "pyscript.PyWorker works only when running in the main thread",
14 )
15 window = polyscript.xworker.window
16 document = window.document
17 js.document = document
18 sync = polyscript.xworker.sync
19
20 # in workers the display does not have a default ID
21 # but there is a sync utility from xworker
22 def current_target():
23 return polyscript.target
24
25 else:
26 import _pyscript
27 from _pyscript import PyWorker
28
29 window = globalThis
30 document = globalThis.document
31 sync = NotSupported(
32 "pyscript.sync", "pyscript.sync works only when running in a worker"
33 )
34
35 # in MAIN the current element target exist, just use it
36 def current_target():
37 return _pyscript.target
38
[end of pyscript.core/src/stdlib/pyscript/magic_js.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyscript.core/src/stdlib/pyscript/magic_js.py b/pyscript.core/src/stdlib/pyscript/magic_js.py
--- a/pyscript.core/src/stdlib/pyscript/magic_js.py
+++ b/pyscript.core/src/stdlib/pyscript/magic_js.py
@@ -1,9 +1,28 @@
+import sys
+
import js as globalThis
from polyscript import js_modules
from pyscript.util import NotSupported
RUNNING_IN_WORKER = not hasattr(globalThis, "document")
+
+# allow `from pyscript.js_modules.xxx import yyy`
+class JSModule(object):
+ def __init__(self, name):
+ self.name = name
+
+ def __getattr__(self, field):
+ # avoid pyodide looking for non existent fields
+ if not field.startswith("_"):
+ return getattr(getattr(js_modules, self.name), field)
+
+
+# generate N modules in the system that will proxy the real value
+for name in globalThis.Reflect.ownKeys(js_modules):
+ sys.modules[f"pyscript.js_modules.{name}"] = JSModule(name)
+sys.modules["pyscript.js_modules"] = js_modules
+
if RUNNING_IN_WORKER:
import js
import polyscript
|
{"golden_diff": "diff --git a/pyscript.core/src/stdlib/pyscript/magic_js.py b/pyscript.core/src/stdlib/pyscript/magic_js.py\n--- a/pyscript.core/src/stdlib/pyscript/magic_js.py\n+++ b/pyscript.core/src/stdlib/pyscript/magic_js.py\n@@ -1,9 +1,28 @@\n+import sys\n+\n import js as globalThis\n from polyscript import js_modules\n from pyscript.util import NotSupported\n \n RUNNING_IN_WORKER = not hasattr(globalThis, \"document\")\n \n+\n+# allow `from pyscript.js_modules.xxx import yyy`\n+class JSModule(object):\n+ def __init__(self, name):\n+ self.name = name\n+\n+ def __getattr__(self, field):\n+ # avoid pyodide looking for non existent fields\n+ if not field.startswith(\"_\"):\n+ return getattr(getattr(js_modules, self.name), field)\n+\n+\n+# generate N modules in the system that will proxy the real value\n+for name in globalThis.Reflect.ownKeys(js_modules):\n+ sys.modules[f\"pyscript.js_modules.{name}\"] = JSModule(name)\n+sys.modules[\"pyscript.js_modules\"] = js_modules\n+\n if RUNNING_IN_WORKER:\n import js\n import polyscript\n", "issue": "js_modules not behaving like in Polyscript\n### Checklist\n\n- [X] I added a descriptive title\n- [X] I searched for other issues and couldn't find a solution or duplication\n- [X] I already searched in Google and didn't find any good information or help\n\n### What happened?\n\nApparently `from pyscript.js_modules import Thing` doesn't work in *PyScript* the same way it does on *Polyscript*.\r\n\r\nThe main difference is that in *PyScript* that's exported within the Python code, as opposite of being registered as JS module like it is for *Polyscript* where *js_modules* use `registerJSModule` utility instead.\n\n### What browsers are you seeing the problem on? (if applicable)\n\n_No response_\n\n### Console info\n\n_No response_\n\n### Additional Context\n\n_No response_\n", "before_files": [{"content": "import js as globalThis\nfrom polyscript import js_modules\nfrom pyscript.util import NotSupported\n\nRUNNING_IN_WORKER = not hasattr(globalThis, \"document\")\n\nif RUNNING_IN_WORKER:\n import js\n import polyscript\n\n PyWorker = NotSupported(\n \"pyscript.PyWorker\",\n \"pyscript.PyWorker works only when running in the main thread\",\n )\n window = polyscript.xworker.window\n document = window.document\n js.document = document\n sync = polyscript.xworker.sync\n\n # in workers the display does not have a default ID\n # but there is a sync utility from xworker\n def current_target():\n return polyscript.target\n\nelse:\n import _pyscript\n from _pyscript import PyWorker\n\n window = globalThis\n document = globalThis.document\n sync = NotSupported(\n \"pyscript.sync\", \"pyscript.sync works only when running in a worker\"\n )\n\n # in MAIN the current element target exist, just use it\n def current_target():\n return _pyscript.target\n", "path": "pyscript.core/src/stdlib/pyscript/magic_js.py"}]}
| 1,036 | 283 |
gh_patches_debug_51274
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-1806
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OTLP gRPC exporter silently fails if scheme is not specified in endpoint
Issue arising from implementing https://github.com/open-telemetry/opentelemetry-python/pull/1771
**Steps to reproduce**
Supplying an remote collector hostname without scheme causes the OTLP exporter to silently not export spans.
https://github.com/open-telemetry/opentelemetry-python/blob/b3455cd1164f9c5f336cc26a52fb351cb422b0b2/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py#L210
`parsed_url.netloc` is an empty str if the scheme is not specified e.g. `localhost:55680`, this causes spans to not be exported to a remote collector as `endpoint` is empty.
**What is the expected behavior?**
Spans are correctly exported to remote collector via OTLP.
**What is the actual behavior?**
Spans are not exported to remote collector via OTLP.
**Additional context**
Per [opentelemetry specs](https://github.com/open-telemetry/opentelemetry-specification/blob/f62744a679814937214fd17394ab3fa8a9099424/specification/protocol/exporter.md#configuration-options), it was written that the scheme must be specified in the endpoint; this library should either enforce that the scheme is supplied (fail hard if not) or assume a sane default (http?) for the purposes of using this library.
</issue>
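To make the failure mode above concrete, here is a small standard-library sketch of how `urlparse` treats an endpoint with and without a scheme, plus one possible guard. The `resolve_endpoint` helper is only an illustration of the idea; the actual patch in this record simply keeps the original endpoint unless `parsed_url.netloc` is non-empty.

```python
from urllib.parse import urlparse

# With a scheme, the authority ("//host:port") part lands in netloc.
print(urlparse("http://localhost:4317"))
# -> ParseResult(scheme='http', netloc='localhost:4317', path='', params='', query='', fragment='')

# Without a scheme there is no "//" authority marker, so netloc is empty,
# which is exactly what left the exporter with an empty endpoint.
print(repr(urlparse("localhost:4317").netloc))  # -> ''


def resolve_endpoint(endpoint: str) -> str:
    """Illustrative guard: fall back to the raw value when no authority was parsed."""
    parsed = urlparse(endpoint)
    return parsed.netloc or endpoint


print(resolve_endpoint("localhost:4317"))          # -> localhost:4317
print(resolve_endpoint("https://collector:4317"))  # -> collector:4317
```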
<code>
[start of exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """OTLP Exporter"""
16
17 import logging
18 from abc import ABC, abstractmethod
19 from collections.abc import Mapping, Sequence
20 from os import environ
21 from time import sleep
22 from typing import Any, Callable, Dict, Generic, List, Optional
23 from typing import Sequence as TypingSequence
24 from typing import Text, TypeVar
25 from urllib import parse
26 from urllib.parse import urlparse
27
28 from backoff import expo
29 from google.rpc.error_details_pb2 import RetryInfo
30 from grpc import (
31 ChannelCredentials,
32 Compression,
33 RpcError,
34 StatusCode,
35 insecure_channel,
36 secure_channel,
37 ssl_channel_credentials,
38 )
39
40 from opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue
41 from opentelemetry.proto.resource.v1.resource_pb2 import Resource
42 from opentelemetry.sdk.environment_variables import (
43 OTEL_EXPORTER_OTLP_CERTIFICATE,
44 OTEL_EXPORTER_OTLP_COMPRESSION,
45 OTEL_EXPORTER_OTLP_ENDPOINT,
46 OTEL_EXPORTER_OTLP_HEADERS,
47 OTEL_EXPORTER_OTLP_TIMEOUT,
48 )
49 from opentelemetry.sdk.resources import Resource as SDKResource
50
51 logger = logging.getLogger(__name__)
52 SDKDataT = TypeVar("SDKDataT")
53 ResourceDataT = TypeVar("ResourceDataT")
54 TypingResourceT = TypeVar("TypingResourceT")
55 ExportServiceRequestT = TypeVar("ExportServiceRequestT")
56 ExportResultT = TypeVar("ExportResultT")
57
58 _ENVIRON_TO_COMPRESSION = {
59 None: None,
60 "gzip": Compression.Gzip,
61 }
62
63
64 class InvalidCompressionValueException(Exception):
65 def __init__(self, environ_key: str, environ_value: str):
66 super().__init__(
67 'Invalid value "{}" for compression envvar {}'.format(
68 environ_value, environ_key
69 )
70 )
71
72
73 def environ_to_compression(environ_key: str) -> Optional[Compression]:
74 environ_value = (
75 environ[environ_key].lower().strip()
76 if environ_key in environ
77 else None
78 )
79 if environ_value not in _ENVIRON_TO_COMPRESSION:
80 raise InvalidCompressionValueException(environ_key, environ_value)
81 return _ENVIRON_TO_COMPRESSION[environ_value]
82
83
84 def _translate_key_values(key: Text, value: Any) -> KeyValue:
85
86 if isinstance(value, bool):
87 any_value = AnyValue(bool_value=value)
88
89 elif isinstance(value, str):
90 any_value = AnyValue(string_value=value)
91
92 elif isinstance(value, int):
93 any_value = AnyValue(int_value=value)
94
95 elif isinstance(value, float):
96 any_value = AnyValue(double_value=value)
97
98 elif isinstance(value, Sequence):
99 any_value = AnyValue(array_value=value)
100
101 elif isinstance(value, Mapping):
102 any_value = AnyValue(kvlist_value=value)
103
104 else:
105 raise Exception(
106 "Invalid type {} of value {}".format(type(value), value)
107 )
108
109 return KeyValue(key=key, value=any_value)
110
111
112 def get_resource_data(
113 sdk_resource_instrumentation_library_data: Dict[
114 SDKResource, ResourceDataT
115 ],
116 resource_class: Callable[..., TypingResourceT],
117 name: str,
118 ) -> List[TypingResourceT]:
119
120 resource_data = []
121
122 for (
123 sdk_resource,
124 instrumentation_library_data,
125 ) in sdk_resource_instrumentation_library_data.items():
126
127 collector_resource = Resource()
128
129 for key, value in sdk_resource.attributes.items():
130
131 try:
132 # pylint: disable=no-member
133 collector_resource.attributes.append(
134 _translate_key_values(key, value)
135 )
136 except Exception as error: # pylint: disable=broad-except
137 logger.exception(error)
138
139 resource_data.append(
140 resource_class(
141 **{
142 "resource": collector_resource,
143 "instrumentation_library_{}".format(name): [
144 instrumentation_library_data
145 ],
146 }
147 )
148 )
149
150 return resource_data
151
152
153 def _load_credential_from_file(filepath) -> ChannelCredentials:
154 try:
155 with open(filepath, "rb") as creds_file:
156 credential = creds_file.read()
157 return ssl_channel_credentials(credential)
158 except FileNotFoundError:
159 logger.exception("Failed to read credential file")
160 return None
161
162
163 def _get_credentials(creds, environ_key):
164 if creds is not None:
165 return creds
166 creds_env = environ.get(environ_key)
167 if creds_env:
168 return _load_credential_from_file(creds_env)
169 return ssl_channel_credentials()
170
171
172 # pylint: disable=no-member
173 class OTLPExporterMixin(
174 ABC, Generic[SDKDataT, ExportServiceRequestT, ExportResultT]
175 ):
176 """OTLP span exporter
177
178 Args:
179 endpoint: OpenTelemetry Collector receiver endpoint
180 insecure: Connection type
181 credentials: ChannelCredentials object for server authentication
182 headers: Headers to send when exporting
183 timeout: Backend request timeout in seconds
184 compression: gRPC compression method to use
185 """
186
187 def __init__(
188 self,
189 endpoint: Optional[str] = None,
190 insecure: Optional[bool] = None,
191 credentials: Optional[ChannelCredentials] = None,
192 headers: Optional[Sequence] = None,
193 timeout: Optional[int] = None,
194 compression: Optional[Compression] = None,
195 ):
196 super().__init__()
197
198 endpoint = endpoint or environ.get(
199 OTEL_EXPORTER_OTLP_ENDPOINT, "http://localhost:4317"
200 )
201
202 parsed_url = urlparse(endpoint)
203
204 if insecure is None:
205 if parsed_url.scheme == "https":
206 insecure = False
207 else:
208 insecure = True
209
210 endpoint = parsed_url.netloc
211
212 self._headers = headers or environ.get(OTEL_EXPORTER_OTLP_HEADERS)
213 if isinstance(self._headers, str):
214 self._headers = tuple(
215 tuple(item.split("=")) for item in self._headers.split(",")
216 )
217 self._timeout = timeout or int(
218 environ.get(OTEL_EXPORTER_OTLP_TIMEOUT, 10)
219 )
220 self._collector_span_kwargs = None
221
222 compression = (
223 environ_to_compression(OTEL_EXPORTER_OTLP_COMPRESSION)
224 if compression is None
225 else compression
226 ) or Compression.NoCompression
227
228 if insecure:
229 self._client = self._stub(
230 insecure_channel(endpoint, compression=compression)
231 )
232 else:
233 credentials = _get_credentials(
234 credentials, OTEL_EXPORTER_OTLP_CERTIFICATE
235 )
236 self._client = self._stub(
237 secure_channel(endpoint, credentials, compression=compression)
238 )
239
240 @abstractmethod
241 def _translate_data(
242 self, data: TypingSequence[SDKDataT]
243 ) -> ExportServiceRequestT:
244 pass
245
246 def _export(self, data: TypingSequence[SDKDataT]) -> ExportResultT:
247 # expo returns a generator that yields delay values which grow
248 # exponentially. Once delay is greater than max_value, the yielded
249 # value will remain constant.
250 # max_value is set to 900 (900 seconds is 15 minutes) to use the same
251 # value as used in the Go implementation.
252
253 max_value = 900
254
255 for delay in expo(max_value=max_value):
256
257 if delay == max_value:
258 return self._result.FAILURE
259
260 try:
261 self._client.Export(
262 request=self._translate_data(data),
263 metadata=self._headers,
264 timeout=self._timeout,
265 )
266
267 return self._result.SUCCESS
268
269 except RpcError as error:
270
271 if error.code() in [
272 StatusCode.CANCELLED,
273 StatusCode.DEADLINE_EXCEEDED,
274 StatusCode.PERMISSION_DENIED,
275 StatusCode.UNAUTHENTICATED,
276 StatusCode.RESOURCE_EXHAUSTED,
277 StatusCode.ABORTED,
278 StatusCode.OUT_OF_RANGE,
279 StatusCode.UNAVAILABLE,
280 StatusCode.DATA_LOSS,
281 ]:
282
283 retry_info_bin = dict(error.trailing_metadata()).get(
284 "google.rpc.retryinfo-bin"
285 )
286 if retry_info_bin is not None:
287 retry_info = RetryInfo()
288 retry_info.ParseFromString(retry_info_bin)
289 delay = (
290 retry_info.retry_delay.seconds
291 + retry_info.retry_delay.nanos / 1.0e9
292 )
293
294 logger.debug(
295 "Waiting %ss before retrying export of span", delay
296 )
297 sleep(delay)
298 continue
299
300 if error.code() == StatusCode.OK:
301 return self._result.SUCCESS
302
303 return self._result.FAILURE
304
305 return self._result.FAILURE
306
307 def shutdown(self) -> None:
308 pass
309
[end of exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py
--- a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py
@@ -207,7 +207,8 @@
else:
insecure = True
- endpoint = parsed_url.netloc
+ if parsed_url.netloc:
+ endpoint = parsed_url.netloc
self._headers = headers or environ.get(OTEL_EXPORTER_OTLP_HEADERS)
if isinstance(self._headers, str):
|
{"golden_diff": "diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py\n--- a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py\n+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py\n@@ -207,7 +207,8 @@\n else:\n insecure = True\n \n- endpoint = parsed_url.netloc\n+ if parsed_url.netloc:\n+ endpoint = parsed_url.netloc\n \n self._headers = headers or environ.get(OTEL_EXPORTER_OTLP_HEADERS)\n if isinstance(self._headers, str):\n", "issue": "OTLP gRPC exporter silently fails if scheme is not specified in endpoint\nIssue arising from implementing https://github.com/open-telemetry/opentelemetry-python/pull/1771\r\n\r\n**Steps to reproduce**\r\nSupplying an remote collector hostname without scheme causes the OTLP exporter to silently not export spans.\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-python/blob/b3455cd1164f9c5f336cc26a52fb351cb422b0b2/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py#L210\r\n\r\n`parsed_url.netloc` is an empty str if the scheme is not specified e.g. `localhost:55680`, this causes spans to not be exported to a remote collector as `endpoint` is empty.\r\n\r\n**What is the expected behavior?**\r\nSpans are correctly exported to remote collector via OTLP.\r\n\r\n**What is the actual behavior?**\r\nSpans are not exported to remote collector via OTLP.\r\n\r\n**Additional context**\r\nPer [opentelemetry specs](https://github.com/open-telemetry/opentelemetry-specification/blob/f62744a679814937214fd17394ab3fa8a9099424/specification/protocol/exporter.md#configuration-options), it was written that the scheme must be specified in the endpoint; this library should either enforce that the scheme is supplied (fail hard if not) or assume a sane default (http?) 
for the purposes of using this library.\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OTLP Exporter\"\"\"\n\nimport logging\nfrom abc import ABC, abstractmethod\nfrom collections.abc import Mapping, Sequence\nfrom os import environ\nfrom time import sleep\nfrom typing import Any, Callable, Dict, Generic, List, Optional\nfrom typing import Sequence as TypingSequence\nfrom typing import Text, TypeVar\nfrom urllib import parse\nfrom urllib.parse import urlparse\n\nfrom backoff import expo\nfrom google.rpc.error_details_pb2 import RetryInfo\nfrom grpc import (\n ChannelCredentials,\n Compression,\n RpcError,\n StatusCode,\n insecure_channel,\n secure_channel,\n ssl_channel_credentials,\n)\n\nfrom opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue\nfrom opentelemetry.proto.resource.v1.resource_pb2 import Resource\nfrom opentelemetry.sdk.environment_variables import (\n OTEL_EXPORTER_OTLP_CERTIFICATE,\n OTEL_EXPORTER_OTLP_COMPRESSION,\n OTEL_EXPORTER_OTLP_ENDPOINT,\n OTEL_EXPORTER_OTLP_HEADERS,\n OTEL_EXPORTER_OTLP_TIMEOUT,\n)\nfrom opentelemetry.sdk.resources import Resource as SDKResource\n\nlogger = logging.getLogger(__name__)\nSDKDataT = TypeVar(\"SDKDataT\")\nResourceDataT = TypeVar(\"ResourceDataT\")\nTypingResourceT = TypeVar(\"TypingResourceT\")\nExportServiceRequestT = TypeVar(\"ExportServiceRequestT\")\nExportResultT = TypeVar(\"ExportResultT\")\n\n_ENVIRON_TO_COMPRESSION = {\n None: None,\n \"gzip\": Compression.Gzip,\n}\n\n\nclass InvalidCompressionValueException(Exception):\n def __init__(self, environ_key: str, environ_value: str):\n super().__init__(\n 'Invalid value \"{}\" for compression envvar {}'.format(\n environ_value, environ_key\n )\n )\n\n\ndef environ_to_compression(environ_key: str) -> Optional[Compression]:\n environ_value = (\n environ[environ_key].lower().strip()\n if environ_key in environ\n else None\n )\n if environ_value not in _ENVIRON_TO_COMPRESSION:\n raise InvalidCompressionValueException(environ_key, environ_value)\n return _ENVIRON_TO_COMPRESSION[environ_value]\n\n\ndef _translate_key_values(key: Text, value: Any) -> KeyValue:\n\n if isinstance(value, bool):\n any_value = AnyValue(bool_value=value)\n\n elif isinstance(value, str):\n any_value = AnyValue(string_value=value)\n\n elif isinstance(value, int):\n any_value = AnyValue(int_value=value)\n\n elif isinstance(value, float):\n any_value = AnyValue(double_value=value)\n\n elif isinstance(value, Sequence):\n any_value = AnyValue(array_value=value)\n\n elif isinstance(value, Mapping):\n any_value = AnyValue(kvlist_value=value)\n\n else:\n raise Exception(\n \"Invalid type {} of value {}\".format(type(value), value)\n )\n\n return KeyValue(key=key, value=any_value)\n\n\ndef get_resource_data(\n sdk_resource_instrumentation_library_data: Dict[\n SDKResource, ResourceDataT\n ],\n resource_class: Callable[..., TypingResourceT],\n name: str,\n) -> List[TypingResourceT]:\n\n resource_data = []\n\n for (\n 
sdk_resource,\n instrumentation_library_data,\n ) in sdk_resource_instrumentation_library_data.items():\n\n collector_resource = Resource()\n\n for key, value in sdk_resource.attributes.items():\n\n try:\n # pylint: disable=no-member\n collector_resource.attributes.append(\n _translate_key_values(key, value)\n )\n except Exception as error: # pylint: disable=broad-except\n logger.exception(error)\n\n resource_data.append(\n resource_class(\n **{\n \"resource\": collector_resource,\n \"instrumentation_library_{}\".format(name): [\n instrumentation_library_data\n ],\n }\n )\n )\n\n return resource_data\n\n\ndef _load_credential_from_file(filepath) -> ChannelCredentials:\n try:\n with open(filepath, \"rb\") as creds_file:\n credential = creds_file.read()\n return ssl_channel_credentials(credential)\n except FileNotFoundError:\n logger.exception(\"Failed to read credential file\")\n return None\n\n\ndef _get_credentials(creds, environ_key):\n if creds is not None:\n return creds\n creds_env = environ.get(environ_key)\n if creds_env:\n return _load_credential_from_file(creds_env)\n return ssl_channel_credentials()\n\n\n# pylint: disable=no-member\nclass OTLPExporterMixin(\n ABC, Generic[SDKDataT, ExportServiceRequestT, ExportResultT]\n):\n \"\"\"OTLP span exporter\n\n Args:\n endpoint: OpenTelemetry Collector receiver endpoint\n insecure: Connection type\n credentials: ChannelCredentials object for server authentication\n headers: Headers to send when exporting\n timeout: Backend request timeout in seconds\n compression: gRPC compression method to use\n \"\"\"\n\n def __init__(\n self,\n endpoint: Optional[str] = None,\n insecure: Optional[bool] = None,\n credentials: Optional[ChannelCredentials] = None,\n headers: Optional[Sequence] = None,\n timeout: Optional[int] = None,\n compression: Optional[Compression] = None,\n ):\n super().__init__()\n\n endpoint = endpoint or environ.get(\n OTEL_EXPORTER_OTLP_ENDPOINT, \"http://localhost:4317\"\n )\n\n parsed_url = urlparse(endpoint)\n\n if insecure is None:\n if parsed_url.scheme == \"https\":\n insecure = False\n else:\n insecure = True\n\n endpoint = parsed_url.netloc\n\n self._headers = headers or environ.get(OTEL_EXPORTER_OTLP_HEADERS)\n if isinstance(self._headers, str):\n self._headers = tuple(\n tuple(item.split(\"=\")) for item in self._headers.split(\",\")\n )\n self._timeout = timeout or int(\n environ.get(OTEL_EXPORTER_OTLP_TIMEOUT, 10)\n )\n self._collector_span_kwargs = None\n\n compression = (\n environ_to_compression(OTEL_EXPORTER_OTLP_COMPRESSION)\n if compression is None\n else compression\n ) or Compression.NoCompression\n\n if insecure:\n self._client = self._stub(\n insecure_channel(endpoint, compression=compression)\n )\n else:\n credentials = _get_credentials(\n credentials, OTEL_EXPORTER_OTLP_CERTIFICATE\n )\n self._client = self._stub(\n secure_channel(endpoint, credentials, compression=compression)\n )\n\n @abstractmethod\n def _translate_data(\n self, data: TypingSequence[SDKDataT]\n ) -> ExportServiceRequestT:\n pass\n\n def _export(self, data: TypingSequence[SDKDataT]) -> ExportResultT:\n # expo returns a generator that yields delay values which grow\n # exponentially. 
Once delay is greater than max_value, the yielded\n # value will remain constant.\n # max_value is set to 900 (900 seconds is 15 minutes) to use the same\n # value as used in the Go implementation.\n\n max_value = 900\n\n for delay in expo(max_value=max_value):\n\n if delay == max_value:\n return self._result.FAILURE\n\n try:\n self._client.Export(\n request=self._translate_data(data),\n metadata=self._headers,\n timeout=self._timeout,\n )\n\n return self._result.SUCCESS\n\n except RpcError as error:\n\n if error.code() in [\n StatusCode.CANCELLED,\n StatusCode.DEADLINE_EXCEEDED,\n StatusCode.PERMISSION_DENIED,\n StatusCode.UNAUTHENTICATED,\n StatusCode.RESOURCE_EXHAUSTED,\n StatusCode.ABORTED,\n StatusCode.OUT_OF_RANGE,\n StatusCode.UNAVAILABLE,\n StatusCode.DATA_LOSS,\n ]:\n\n retry_info_bin = dict(error.trailing_metadata()).get(\n \"google.rpc.retryinfo-bin\"\n )\n if retry_info_bin is not None:\n retry_info = RetryInfo()\n retry_info.ParseFromString(retry_info_bin)\n delay = (\n retry_info.retry_delay.seconds\n + retry_info.retry_delay.nanos / 1.0e9\n )\n\n logger.debug(\n \"Waiting %ss before retrying export of span\", delay\n )\n sleep(delay)\n continue\n\n if error.code() == StatusCode.OK:\n return self._result.SUCCESS\n\n return self._result.FAILURE\n\n return self._result.FAILURE\n\n def shutdown(self) -> None:\n pass\n", "path": "exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py"}]}
| 3,707 | 199 |
gh_patches_debug_36885
|
rasdani/github-patches
|
git_diff
|
ocadotechnology__codeforlife-portal-173
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Password Reset Hidden while waiting for 2FA
https://github.com/ocadotechnology/rapid-router/issues/684
Original message:
Steps to reproduce on a 2FA enabled account:
1. Got to /teach/, enter login details and click "Sign in".
2. When the token form appears, go back to teach and click "Forgotten Password?" link
You'll end up back at teach and the url will change to "/teach/?next=/teach/home/"
It seems that if 2FA login process is pending, the password reset screen gets redirected back to /teach/.
---
=> This is because after the sign in attempt, the teacher is no longer "not_logged_in" because a UserProfile has been found.
</issue>
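To see why the redirect happens, the sketch below reproduces the permission predicates from `portal/permissions.py` with plain Python stand-ins for the Django user objects (the real `request.user`, `userprofile` and `is_verified` come from Django and django-two-factor-auth; `make_user` and `uses_two_factor` are invented for the example). It also shows the kind of combined check the fix introduces.

```python
from types import SimpleNamespace


def make_user(has_profile, kind=None, verified=False, two_factor=False):
    """Build a throwaway stand-in for request.user (not a real Django user)."""
    user = SimpleNamespace(is_verified=lambda: verified, uses_two_factor=two_factor)
    if has_profile:
        user.userprofile = SimpleNamespace()
        if kind == "teacher":
            user.userprofile.teacher = object()
        elif kind == "student":
            user.userprofile.student = object()
    return user


def using_two_factor(u):  # stand-in for portal.utils.using_two_factor
    return u.uses_two_factor


def logged_in_as_teacher(u):
    if not hasattr(u, "userprofile") or not hasattr(u.userprofile, "teacher"):
        return False
    return u.is_verified() or not using_two_factor(u)


def logged_in_as_student(u):
    return hasattr(u, "userprofile") and hasattr(u.userprofile, "student")


def not_logged_in(u):
    return not hasattr(u, "userprofile")


def not_fully_logged_in(u):
    # Treat "password accepted but 2FA token still pending" as not logged in.
    return not_logged_in(u) or (not logged_in_as_student(u) and not logged_in_as_teacher(u))


# A teacher who typed the right password but has not entered the 2FA token yet:
pending = make_user(has_profile=True, kind="teacher", verified=False, two_factor=True)
print(not_logged_in(pending))        # False -> the old test redirects away from the reset form
print(not_fully_logged_in(pending))  # True  -> the combined test keeps it reachable
```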
<code>
[start of portal/permissions.py]
1 # -*- coding: utf-8 -*-
2 # Code for Life
3 #
4 # Copyright (C) 2015, Ocado Innovation Limited
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU Affero General Public License as
8 # published by the Free Software Foundation, either version 3 of the
9 # License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU Affero General Public License for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 # ADDITIONAL TERMS – Section 7 GNU General Public Licence
20 #
21 # This licence does not grant any right, title or interest in any “Ocado” logos,
22 # trade names or the trademark “Ocado” or any other trademarks or domain names
23 # owned by Ocado Innovation Limited or the Ocado group of companies or any other
24 # distinctive brand features of “Ocado” as may be secured from time to time. You
25 # must not distribute any modification of this program using the trademark
26 # “Ocado” or claim any affiliation or association with Ocado or its employees.
27 #
28 # You are not authorised to use the name Ocado (or any of its trade names) or
29 # the names of any author or contributor in advertising or for publicity purposes
30 # pertaining to the distribution of this program, without the prior written
31 # authorisation of Ocado.
32 #
33 # Any propagation, distribution or conveyance of this program must include this
34 # copyright notice and these terms. You must not misrepresent the origins of this
35 # program; modified versions of the program must be marked as such and not
36 # identified as the original program.
37 from functools import wraps
38 from django.http import HttpResponseRedirect
39 from django.core.urlresolvers import reverse_lazy
40
41 from portal.utils import using_two_factor
42
43
44 def logged_in_as_teacher(u):
45 if not hasattr(u, 'userprofile') or not hasattr(u.userprofile, 'teacher'):
46 return False
47
48 return u.is_verified() or not using_two_factor(u)
49
50
51 def logged_in_as_student(u):
52 return hasattr(u, 'userprofile') and hasattr(u.userprofile, 'student')
53
54
55 def not_logged_in(u):
56 return not hasattr(u, 'userprofile')
57
58
59 def teacher_verified(view_func):
60 @wraps(view_func)
61 def wrapped(request, *args, **kwargs):
62 u = request.user
63 if (not hasattr(u, 'userprofile') or not hasattr(u.userprofile, 'teacher') or
64 (not u.is_verified() and using_two_factor(u))):
65 return HttpResponseRedirect(reverse_lazy('teach'))
66
67 return view_func(request, *args, **kwargs)
68
69 return wrapped
70
[end of portal/permissions.py]
[start of portal/views/registration.py]
1 # -*- coding: utf-8 -*-
2 # Code for Life
3 #
4 # Copyright (C) 2015, Ocado Innovation Limited
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU Affero General Public License as
8 # published by the Free Software Foundation, either version 3 of the
9 # License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU Affero General Public License for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 # ADDITIONAL TERMS – Section 7 GNU General Public Licence
20 #
21 # This licence does not grant any right, title or interest in any “Ocado” logos,
22 # trade names or the trademark “Ocado” or any other trademarks or domain names
23 # owned by Ocado Innovation Limited or the Ocado group of companies or any other
24 # distinctive brand features of “Ocado” as may be secured from time to time. You
25 # must not distribute any modification of this program using the trademark
26 # “Ocado” or claim any affiliation or association with Ocado or its employees.
27 #
28 # You are not authorised to use the name Ocado (or any of its trade names) or
29 # the names of any author or contributor in advertising or for publicity purposes
30 # pertaining to the distribution of this program, without the prior written
31 # authorisation of Ocado.
32 #
33 # Any propagation, distribution or conveyance of this program must include this
34 # copyright notice and these terms. You must not misrepresent the origins of this
35 # program; modified versions of the program must be marked as such and not
36 # identified as the original program.
37
38 from django.http import HttpResponseRedirect
39 from django.utils.http import urlsafe_base64_decode
40 from django.core.urlresolvers import reverse_lazy
41 from django.contrib.auth.decorators import user_passes_test
42 from django.contrib.auth.views import password_reset, password_reset_confirm
43 from django.contrib.auth import get_user_model
44 from two_factor.views import LoginView
45 from recaptcha import RecaptchaClient
46 from django_recaptcha_field import create_form_subclass_with_recaptcha
47 from deploy.captcha import CAPTCHA_ENABLED
48
49 from portal.forms.registration import PasswordResetSetPasswordForm, StudentPasswordResetForm, TeacherPasswordResetForm
50 from portal.permissions import not_logged_in
51 from portal.helpers.email import PASSWORD_RESET_EMAIL
52 from portal import app_settings
53 from ratelimit.decorators import ratelimit
54
55 recaptcha_client = RecaptchaClient(app_settings.RECAPTCHA_PRIVATE_KEY, app_settings.RECAPTCHA_PUBLIC_KEY)
56
57 @ratelimit('def', periods=['1m'])
58 def custom_2FA_login(request):
59 block_limit = 5
60
61 if getattr(request, 'limits', { 'def' : [0] })['def'][0] >= block_limit:
62 return HttpResponseRedirect(reverse_lazy('locked_out'))
63
64 return LoginView.as_view()(request)
65
66
67 @user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))
68 def password_reset_check_and_confirm(request, uidb64=None, token=None, post_reset_redirect=None):
69 # Customised standard django auth view with customised form to incorporate checking the password set is strong enough
70 UserModel = get_user_model()
71 try:
72 uid = urlsafe_base64_decode(uidb64)
73 user = UserModel._default_manager.get(pk=uid)
74 except (TypeError, ValueError, OverflowError, UserModel.DoesNotExist):
75 user = None
76 if user and hasattr(user.userprofile, 'student'):
77 usertype = 'STUDENT'
78 else:
79 usertype = 'TEACHER'
80 return password_reset_confirm(request, set_password_form=PasswordResetSetPasswordForm, uidb64=uidb64, token=token, post_reset_redirect=post_reset_redirect, extra_context= { 'usertype': usertype })
81
82
83 @user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))
84 def student_password_reset(request, post_reset_redirect):
85 form = StudentPasswordResetForm if not CAPTCHA_ENABLED else decorate_with_captcha(StudentPasswordResetForm, request,
86 recaptcha_client)
87 return password_reset(request, from_email=PASSWORD_RESET_EMAIL, template_name='registration/student_password_reset_form.html', password_reset_form=form, post_reset_redirect=post_reset_redirect, is_admin_site=True)
88
89
90 @user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))
91 def teacher_password_reset(request, post_reset_redirect):
92 form = TeacherPasswordResetForm if not CAPTCHA_ENABLED else decorate_with_captcha(TeacherPasswordResetForm, request,
93 recaptcha_client)
94 return password_reset(request, from_email=PASSWORD_RESET_EMAIL, template_name='registration/teacher_password_reset_form.html', password_reset_form=form, post_reset_redirect=post_reset_redirect, is_admin_site=True)
95
96
97 def decorate_with_captcha(base_class, request, recaptcha_client):
98 form_with_captcha_class = create_form_subclass_with_recaptcha(base_class, recaptcha_client)
99
100 class FormWithCaptcha(form_with_captcha_class):
101
102 def __init__(self, *args, **kwargs):
103 super(FormWithCaptcha, self).__init__(request, *args, **kwargs)
104
105 return FormWithCaptcha
106
[end of portal/views/registration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/portal/permissions.py b/portal/permissions.py
--- a/portal/permissions.py
+++ b/portal/permissions.py
@@ -56,6 +56,10 @@
return not hasattr(u, 'userprofile')
+def not_fully_logged_in(u):
+ return not_logged_in(u) or (not logged_in_as_student(u) and not logged_in_as_teacher(u))
+
+
def teacher_verified(view_func):
@wraps(view_func)
def wrapped(request, *args, **kwargs):
diff --git a/portal/views/registration.py b/portal/views/registration.py
--- a/portal/views/registration.py
+++ b/portal/views/registration.py
@@ -47,7 +47,7 @@
from deploy.captcha import CAPTCHA_ENABLED
from portal.forms.registration import PasswordResetSetPasswordForm, StudentPasswordResetForm, TeacherPasswordResetForm
-from portal.permissions import not_logged_in
+from portal.permissions import not_logged_in, not_fully_logged_in
from portal.helpers.email import PASSWORD_RESET_EMAIL
from portal import app_settings
from ratelimit.decorators import ratelimit
@@ -64,7 +64,7 @@
return LoginView.as_view()(request)
-@user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))
+@user_passes_test(not_fully_logged_in, login_url=reverse_lazy('current_user'))
def password_reset_check_and_confirm(request, uidb64=None, token=None, post_reset_redirect=None):
# Customised standard django auth view with customised form to incorporate checking the password set is strong enough
UserModel = get_user_model()
@@ -87,7 +87,7 @@
return password_reset(request, from_email=PASSWORD_RESET_EMAIL, template_name='registration/student_password_reset_form.html', password_reset_form=form, post_reset_redirect=post_reset_redirect, is_admin_site=True)
-@user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))
+@user_passes_test(not_fully_logged_in, login_url=reverse_lazy('current_user'))
def teacher_password_reset(request, post_reset_redirect):
form = TeacherPasswordResetForm if not CAPTCHA_ENABLED else decorate_with_captcha(TeacherPasswordResetForm, request,
recaptcha_client)
|
{"golden_diff": "diff --git a/portal/permissions.py b/portal/permissions.py\n--- a/portal/permissions.py\n+++ b/portal/permissions.py\n@@ -56,6 +56,10 @@\n return not hasattr(u, 'userprofile')\n \n \n+def not_fully_logged_in(u):\n+ return not_logged_in(u) or (not logged_in_as_student(u) and not logged_in_as_teacher(u))\n+\n+\n def teacher_verified(view_func):\n @wraps(view_func)\n def wrapped(request, *args, **kwargs):\ndiff --git a/portal/views/registration.py b/portal/views/registration.py\n--- a/portal/views/registration.py\n+++ b/portal/views/registration.py\n@@ -47,7 +47,7 @@\n from deploy.captcha import CAPTCHA_ENABLED\n \n from portal.forms.registration import PasswordResetSetPasswordForm, StudentPasswordResetForm, TeacherPasswordResetForm\n-from portal.permissions import not_logged_in\n+from portal.permissions import not_logged_in, not_fully_logged_in\n from portal.helpers.email import PASSWORD_RESET_EMAIL\n from portal import app_settings\n from ratelimit.decorators import ratelimit\n@@ -64,7 +64,7 @@\n return LoginView.as_view()(request)\n \n \n-@user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))\n+@user_passes_test(not_fully_logged_in, login_url=reverse_lazy('current_user'))\n def password_reset_check_and_confirm(request, uidb64=None, token=None, post_reset_redirect=None):\n # Customised standard django auth view with customised form to incorporate checking the password set is strong enough\n UserModel = get_user_model()\n@@ -87,7 +87,7 @@\n return password_reset(request, from_email=PASSWORD_RESET_EMAIL, template_name='registration/student_password_reset_form.html', password_reset_form=form, post_reset_redirect=post_reset_redirect, is_admin_site=True)\n \n \n-@user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))\n+@user_passes_test(not_fully_logged_in, login_url=reverse_lazy('current_user'))\n def teacher_password_reset(request, post_reset_redirect):\n form = TeacherPasswordResetForm if not CAPTCHA_ENABLED else decorate_with_captcha(TeacherPasswordResetForm, request,\n recaptcha_client)\n", "issue": "Password Reset Hidden while waiting for 2FA\nhttps://github.com/ocadotechnology/rapid-router/issues/684\n\nOriginal message: \nSteps to reproduce on a 2FA enabled account:\n1. Got to /teach/, enter login details and click \"Sign in\".\n2. When the token form appears, go back to teach and click \"Forgotten Password?\" link\n\nYou'll end up back at teach and the url will change to \"/teach/?next=/teach/home/\"\n\nIt seems that if 2FA login process is pending, the password reset screen gets redirected back to /teach/.\n\n---\n\n=> This is because after the sign in attempt, the teacher is no longer \"not_logged_in\" because a UserProfile has been found.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Code for Life\n#\n# Copyright (C) 2015, Ocado Innovation Limited\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n# ADDITIONAL TERMS \u2013 Section 7 GNU General Public Licence\n#\n# This licence does not grant any right, title or interest in any \u201cOcado\u201d logos,\n# trade names or the trademark \u201cOcado\u201d or any other trademarks or domain names\n# owned by Ocado Innovation Limited or the Ocado group of companies or any other\n# distinctive brand features of \u201cOcado\u201d as may be secured from time to time. You\n# must not distribute any modification of this program using the trademark\n# \u201cOcado\u201d or claim any affiliation or association with Ocado or its employees.\n#\n# You are not authorised to use the name Ocado (or any of its trade names) or\n# the names of any author or contributor in advertising or for publicity purposes\n# pertaining to the distribution of this program, without the prior written\n# authorisation of Ocado.\n#\n# Any propagation, distribution or conveyance of this program must include this\n# copyright notice and these terms. You must not misrepresent the origins of this\n# program; modified versions of the program must be marked as such and not\n# identified as the original program.\nfrom functools import wraps\nfrom django.http import HttpResponseRedirect\nfrom django.core.urlresolvers import reverse_lazy\n\nfrom portal.utils import using_two_factor\n\n\ndef logged_in_as_teacher(u):\n if not hasattr(u, 'userprofile') or not hasattr(u.userprofile, 'teacher'):\n return False\n\n return u.is_verified() or not using_two_factor(u)\n\n\ndef logged_in_as_student(u):\n return hasattr(u, 'userprofile') and hasattr(u.userprofile, 'student')\n\n\ndef not_logged_in(u):\n return not hasattr(u, 'userprofile')\n\n\ndef teacher_verified(view_func):\n @wraps(view_func)\n def wrapped(request, *args, **kwargs):\n u = request.user\n if (not hasattr(u, 'userprofile') or not hasattr(u.userprofile, 'teacher') or\n (not u.is_verified() and using_two_factor(u))):\n return HttpResponseRedirect(reverse_lazy('teach'))\n\n return view_func(request, *args, **kwargs)\n\n return wrapped\n", "path": "portal/permissions.py"}, {"content": "# -*- coding: utf-8 -*-\n# Code for Life\n#\n# Copyright (C) 2015, Ocado Innovation Limited\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n# ADDITIONAL TERMS \u2013 Section 7 GNU General Public Licence\n#\n# This licence does not grant any right, title or interest in any \u201cOcado\u201d logos,\n# trade names or the trademark \u201cOcado\u201d or any other trademarks or domain names\n# owned by Ocado Innovation Limited or the Ocado group of companies or any other\n# distinctive brand features of \u201cOcado\u201d as may be secured from time to time. 
You\n# must not distribute any modification of this program using the trademark\n# \u201cOcado\u201d or claim any affiliation or association with Ocado or its employees.\n#\n# You are not authorised to use the name Ocado (or any of its trade names) or\n# the names of any author or contributor in advertising or for publicity purposes\n# pertaining to the distribution of this program, without the prior written\n# authorisation of Ocado.\n#\n# Any propagation, distribution or conveyance of this program must include this\n# copyright notice and these terms. You must not misrepresent the origins of this\n# program; modified versions of the program must be marked as such and not\n# identified as the original program.\n\nfrom django.http import HttpResponseRedirect\nfrom django.utils.http import urlsafe_base64_decode\nfrom django.core.urlresolvers import reverse_lazy\nfrom django.contrib.auth.decorators import user_passes_test\nfrom django.contrib.auth.views import password_reset, password_reset_confirm\nfrom django.contrib.auth import get_user_model\nfrom two_factor.views import LoginView\nfrom recaptcha import RecaptchaClient\nfrom django_recaptcha_field import create_form_subclass_with_recaptcha\nfrom deploy.captcha import CAPTCHA_ENABLED\n\nfrom portal.forms.registration import PasswordResetSetPasswordForm, StudentPasswordResetForm, TeacherPasswordResetForm\nfrom portal.permissions import not_logged_in\nfrom portal.helpers.email import PASSWORD_RESET_EMAIL\nfrom portal import app_settings\nfrom ratelimit.decorators import ratelimit\n\nrecaptcha_client = RecaptchaClient(app_settings.RECAPTCHA_PRIVATE_KEY, app_settings.RECAPTCHA_PUBLIC_KEY)\n\n@ratelimit('def', periods=['1m'])\ndef custom_2FA_login(request):\n block_limit = 5\n\n if getattr(request, 'limits', { 'def' : [0] })['def'][0] >= block_limit:\n return HttpResponseRedirect(reverse_lazy('locked_out'))\n\n return LoginView.as_view()(request)\n\n\n@user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))\ndef password_reset_check_and_confirm(request, uidb64=None, token=None, post_reset_redirect=None):\n # Customised standard django auth view with customised form to incorporate checking the password set is strong enough\n UserModel = get_user_model()\n try:\n uid = urlsafe_base64_decode(uidb64)\n user = UserModel._default_manager.get(pk=uid)\n except (TypeError, ValueError, OverflowError, UserModel.DoesNotExist):\n user = None\n if user and hasattr(user.userprofile, 'student'):\n usertype = 'STUDENT'\n else:\n usertype = 'TEACHER'\n return password_reset_confirm(request, set_password_form=PasswordResetSetPasswordForm, uidb64=uidb64, token=token, post_reset_redirect=post_reset_redirect, extra_context= { 'usertype': usertype })\n\n\n@user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))\ndef student_password_reset(request, post_reset_redirect):\n form = StudentPasswordResetForm if not CAPTCHA_ENABLED else decorate_with_captcha(StudentPasswordResetForm, request,\n recaptcha_client)\n return password_reset(request, from_email=PASSWORD_RESET_EMAIL, template_name='registration/student_password_reset_form.html', password_reset_form=form, post_reset_redirect=post_reset_redirect, is_admin_site=True)\n\n\n@user_passes_test(not_logged_in, login_url=reverse_lazy('current_user'))\ndef teacher_password_reset(request, post_reset_redirect):\n form = TeacherPasswordResetForm if not CAPTCHA_ENABLED else decorate_with_captcha(TeacherPasswordResetForm, request,\n recaptcha_client)\n return password_reset(request, 
from_email=PASSWORD_RESET_EMAIL, template_name='registration/teacher_password_reset_form.html', password_reset_form=form, post_reset_redirect=post_reset_redirect, is_admin_site=True)\n\n\ndef decorate_with_captcha(base_class, request, recaptcha_client):\n form_with_captcha_class = create_form_subclass_with_recaptcha(base_class, recaptcha_client)\n\n class FormWithCaptcha(form_with_captcha_class):\n\n def __init__(self, *args, **kwargs):\n super(FormWithCaptcha, self).__init__(request, *args, **kwargs)\n\n return FormWithCaptcha\n", "path": "portal/views/registration.py"}]}
| 2,834 | 488 |
gh_patches_debug_2216
|
rasdani/github-patches
|
git_diff
|
cowrie__cowrie-1551
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
builtins.KeyError: 'log_time' Python error
**Describe the bug**
Cowrie won't log properly, due that output plugins are not working -> output_splunk
Following error occurs:
```
2021-04-28T07:00:17.796991Z [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.virustotal.Output object at 0x7f3a13c9c550>>) due to exception: [Failure instance: Traceback: <class 'KeyError'>: 'log_time'
/home/cowrie/cowrie/src/cowrie/ssh/transport.py:246:connectionLost
/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/python/threadable.py:51:sync
/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/python/log.py:281:msg
/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_legacy.py:147:publishToNewObserver
--- <exception caught here> ---
/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_observer.py:82:__call__
/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_legacy.py:55:__call__
]
Traceback (most recent call last):
File "/home/cowrie/cowrie/src/cowrie/ssh/transport.py", line 246, in connectionLost
log.msg(
File "/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/python/threadable.py", line 51, in sync
return function(self, *args, **kwargs)
File "/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/python/log.py", line 281, in msg
_publishNew(self._publishPublisher, actualEventDict, textFromEventDict)
File "/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_legacy.py", line 147, in publishToNewObserver
observer(eventDict)
--- <exception caught here> ---
File "/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_observer.py", line 82, in __call__
observer(event)
File "/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_legacy.py", line 55, in __call__
event["time"] = event["log_time"]
builtins.KeyError: 'log_time'
```
**To Reproduce**
Steps to reproduce the behavior:
1. git clone cowrie
2. setup venv
3. setup cowrie.cfg
4. include splunk output
5. run cowrie
6. run honeypot session
**Expected behavior**
Cowrie should properly log.
**Server (please complete the following information):**
- OS: `Linux cowrie-1 5.4.103-1-pve #1 SMP PVE 5.4.103-1 (Sun, 07 Mar 2021 15:55:09 +0100) x86_64 x86_64 x86_64 GNU/Linux`
- Python: Python 3.8.6
</issue>
<code>
[start of src/cowrie/core/output.py]
1 # Copyright (c) 2015 Michel Oosterhof <[email protected]>
2 # All rights reserved.
3 #
4 # Redistribution and use in source and binary forms, with or without
5 # modification, are permitted provided that the following conditions
6 # are met:
7 #
8 # 1. Redistributions of source code must retain the above copyright
9 # notice, this list of conditions and the following disclaimer.
10 # 2. Redistributions in binary form must reproduce the above copyright
11 # notice, this list of conditions and the following disclaimer in the
12 # documentation and/or other materials provided with the distribution.
13 # 3. The names of the author(s) may not be used to endorse or promote
14 # products derived from this software without specific prior written
15 # permission.
16 #
17 # THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS OR
18 # IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
19 # OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
20 # IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY DIRECT, INDIRECT,
21 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
22 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
24 # AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
25 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
26 # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
27 # SUCH DAMAGE.
28
29
30 import abc
31 import re
32 import socket
33 import time
34 from os import environ
35 from typing import Any, Dict, Pattern
36
37 from twisted.internet import reactor
38 from twisted.logger import formatTime
39
40 from cowrie.core.config import CowrieConfig
41
42 # Events:
43 # cowrie.client.fingerprint
44 # cowrie.client.size
45 # cowrie.client.var
46 # cowrie.client.version
47 # cowrie.command.input
48 # cowrie.command.failed
49 # cowrie.command.success (deprecated)
50 # cowrie.direct-tcpip.data
51 # cowrie.direct-tcpip.request
52 # cowrie.log.closed
53 # cowrie.login.failed
54 # cowrie.login.success
55 # cowrie.session.closed
56 # cowrie.session.connect
57 # cowrie.session.file_download
58 # cowrie.session.file_upload
59
60 """
61 The time is available in two formats in each event, as key 'time'
62 in epoch format and in key 'timestamp' as a ISO compliant string
63 in UTC.
64 """
65
66
67 class Output(metaclass=abc.ABCMeta):
68 """
69 This is the abstract base class intended to be inherited by
70 cowrie output plugins. Plugins require the mandatory
71 methods: stop, start and write
72 """
73
74 def __init__(self) -> None:
75 self.sessions: Dict[str, str] = {}
76 self.ips: Dict[str, str] = {}
77
78 # Need these for each individual transport, or else the session numbers overlap
79 self.sshRegex: Pattern[str] = re.compile(".*SSHTransport,([0-9]+),[0-9a-f:.]+$")
80 self.telnetRegex: Pattern[str] = re.compile(
81 ".*TelnetTransport,([0-9]+),[0-9a-f:.]+$"
82 )
83 self.sensor: str = CowrieConfig.get(
84 "honeypot", "sensor_name", fallback=socket.gethostname()
85 )
86 self.timeFormat: str
87
88 # use Z for UTC (Zulu) time, it's shorter.
89 if "TZ" in environ and environ["TZ"] == "UTC":
90 self.timeFormat = "%Y-%m-%dT%H:%M:%S.%fZ"
91 else:
92 self.timeFormat = "%Y-%m-%dT%H:%M:%S.%f%z"
93
94 # Event trigger so that stop() is called by the reactor when stopping
95 reactor.addSystemEventTrigger("before", "shutdown", self.stop) # type: ignore
96
97 self.start()
98
99 def logDispatch(self, **kw: str) -> None:
100 """
101 Use logDispatch when the HoneypotTransport prefix is not available.
102 Here you can explicitly set the sessionIds to tie the sessions together
103 """
104 ev = kw
105 # ev["message"] = msg
106 self.emit(ev)
107
108 @abc.abstractmethod
109 def start(self) -> None:
110 """
111 Abstract method to initialize output plugin
112 """
113 pass
114
115 @abc.abstractmethod
116 def stop(self) -> None:
117 """
118 Abstract method to shut down output plugin
119 """
120 pass
121
122 @abc.abstractmethod
123 def write(self, event: Dict[str, Any]) -> None:
124 """
125 Handle a general event within the output plugin
126 """
127 pass
128
129 def emit(self, event: dict) -> None:
130 """
131 This is the main emit() hook that gets called by the the Twisted logging
132
133 To make this work with Cowrie, the event dictionary needs the following keys:
134 - 'eventid'
135 - 'sessionno' or 'session'
136 - 'message' or 'format'
137 """
138 sessionno: str
139 ev: dict
140
141 # Ignore stdout and stderr in output plugins
142 if "printed" in event:
143 return
144
145 # Ignore anything without eventid
146 if "eventid" not in event:
147 return
148
149 # Ignore anything without session information
150 if (
151 "sessionno" not in event
152 and "session" not in event
153 and "system" not in event
154 ):
155 return
156
157 # Ignore anything without message
158 if "message" not in event and "format" not in event:
159 return
160
161 ev: Dict[str, any] = event # type: ignore
162 ev["sensor"] = self.sensor
163
164 if "isError" in ev:
165 del ev["isError"]
166
167 # Add ISO timestamp and sensor data
168 if "time" not in ev:
169 ev["time"] = time.time()
170 ev["timestamp"] = formatTime(ev["time"], timeFormat=self.timeFormat)
171
172 if "format" in ev and ("message" not in ev or ev["message"] == ()):
173 try:
174 ev["message"] = ev["format"] % ev
175 del ev["format"]
176 except Exception:
177 pass
178
179 # Explicit sessionno (from logDispatch) overrides from 'system'
180 if "sessionno" in ev:
181 sessionno = ev["sessionno"]
182 del ev["sessionno"]
183 # Maybe it's passed explicitly
184 elif "session" in ev:
185 # reverse engineer sessionno
186 try:
187 sessionno = next(
188 key
189 for key, value in self.sessions.items()
190 if value == ev["session"]
191 )
192 except StopIteration:
193 return
194 # Extract session id from the twisted log prefix
195 elif "system" in ev:
196 sessionno = "0"
197 telnetmatch = self.telnetRegex.match(ev["system"])
198 if telnetmatch:
199 sessionno = "T{}".format(telnetmatch.groups()[0])
200 else:
201 sshmatch = self.sshRegex.match(ev["system"])
202 if sshmatch:
203 sessionno = "S{}".format(sshmatch.groups()[0])
204 if sessionno == "0":
205 return
206
207 if sessionno in self.ips:
208 ev["src_ip"] = self.ips[sessionno]
209
210 # Connection event is special. adds to session list
211 if ev["eventid"] == "cowrie.session.connect":
212 self.sessions[sessionno] = ev["session"]
213 self.ips[sessionno] = ev["src_ip"]
214 else:
215 ev["session"] = self.sessions[sessionno]
216
217 self.write(ev)
218
219 # Disconnect is special, remove cached data
220 if ev["eventid"] == "cowrie.session.closed":
221 del self.sessions[sessionno]
222 del self.ips[sessionno]
223
[end of src/cowrie/core/output.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cowrie/core/output.py b/src/cowrie/core/output.py
--- a/src/cowrie/core/output.py
+++ b/src/cowrie/core/output.py
@@ -158,7 +158,7 @@
if "message" not in event and "format" not in event:
return
- ev: Dict[str, any] = event # type: ignore
+ ev: Dict[str, any] = event.copy() # type: ignore
ev["sensor"] = self.sensor
if "isError" in ev:
|
{"golden_diff": "diff --git a/src/cowrie/core/output.py b/src/cowrie/core/output.py\n--- a/src/cowrie/core/output.py\n+++ b/src/cowrie/core/output.py\n@@ -158,7 +158,7 @@\n if \"message\" not in event and \"format\" not in event:\n return\n \n- ev: Dict[str, any] = event # type: ignore\n+ ev: Dict[str, any] = event.copy() # type: ignore\n ev[\"sensor\"] = self.sensor\n \n if \"isError\" in ev:\n", "issue": "builtins.KeyError: 'log_time' Python error\n**Describe the bug**\r\nCowrie won't log properly, due that output plugins are not working -> output_splunk\r\nFollowing error occurs:\r\n```\r\n2021-04-28T07:00:17.796991Z [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.virustotal.Output object at 0x7f3a13c9c550>>) due to exception: [Failure instance: Traceback: <class 'KeyError'>: 'log_time'\r\n /home/cowrie/cowrie/src/cowrie/ssh/transport.py:246:connectionLost\r\n /home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/python/threadable.py:51:sync\r\n /home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/python/log.py:281:msg\r\n /home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_legacy.py:147:publishToNewObserver\r\n --- <exception caught here> ---\r\n /home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_observer.py:82:__call__\r\n /home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_legacy.py:55:__call__\r\n ]\r\n Traceback (most recent call last):\r\n File \"/home/cowrie/cowrie/src/cowrie/ssh/transport.py\", line 246, in connectionLost\r\n log.msg(\r\n File \"/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/python/threadable.py\", line 51, in sync\r\n return function(self, *args, **kwargs)\r\n File \"/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/python/log.py\", line 281, in msg\r\n _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)\r\n File \"/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_legacy.py\", line 147, in publishToNewObserver\r\n observer(eventDict)\r\n --- <exception caught here> ---\r\n File \"/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_observer.py\", line 82, in __call__\r\n observer(event)\r\n File \"/home/cowrie/cowrie/cowrie-env/lib/python3.8/site-packages/twisted/logger/_legacy.py\", line 55, in __call__\r\n event[\"time\"] = event[\"log_time\"]\r\n builtins.KeyError: 'log_time'\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. git clone cowrie\r\n2. setup venv\r\n3. setup cowrie.cfg\r\n4. include splunk output\r\n5. run cowrie\r\n6. run honeypot session\r\n\r\n**Expected behavior**\r\nCowrie should properly log.\r\n\r\n**Server (please complete the following information):**\r\n - OS: `Linux cowrie-1 5.4.103-1-pve #1 SMP PVE 5.4.103-1 (Sun, 07 Mar 2021 15:55:09 +0100) x86_64 x86_64 x86_64 GNU/Linux`\r\n - Python: Python 3.8.6\r\n\n", "before_files": [{"content": "# Copyright (c) 2015 Michel Oosterhof <[email protected]>\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# 1. Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# 2. 
Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# 3. The names of the author(s) may not be used to endorse or promote\n# products derived from this software without specific prior written\n# permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS OR\n# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\n# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.\n# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED\n# AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n# SUCH DAMAGE.\n\n\nimport abc\nimport re\nimport socket\nimport time\nfrom os import environ\nfrom typing import Any, Dict, Pattern\n\nfrom twisted.internet import reactor\nfrom twisted.logger import formatTime\n\nfrom cowrie.core.config import CowrieConfig\n\n# Events:\n# cowrie.client.fingerprint\n# cowrie.client.size\n# cowrie.client.var\n# cowrie.client.version\n# cowrie.command.input\n# cowrie.command.failed\n# cowrie.command.success (deprecated)\n# cowrie.direct-tcpip.data\n# cowrie.direct-tcpip.request\n# cowrie.log.closed\n# cowrie.login.failed\n# cowrie.login.success\n# cowrie.session.closed\n# cowrie.session.connect\n# cowrie.session.file_download\n# cowrie.session.file_upload\n\n\"\"\"\nThe time is available in two formats in each event, as key 'time'\nin epoch format and in key 'timestamp' as a ISO compliant string\nin UTC.\n\"\"\"\n\n\nclass Output(metaclass=abc.ABCMeta):\n \"\"\"\n This is the abstract base class intended to be inherited by\n cowrie output plugins. 
Plugins require the mandatory\n methods: stop, start and write\n \"\"\"\n\n def __init__(self) -> None:\n self.sessions: Dict[str, str] = {}\n self.ips: Dict[str, str] = {}\n\n # Need these for each individual transport, or else the session numbers overlap\n self.sshRegex: Pattern[str] = re.compile(\".*SSHTransport,([0-9]+),[0-9a-f:.]+$\")\n self.telnetRegex: Pattern[str] = re.compile(\n \".*TelnetTransport,([0-9]+),[0-9a-f:.]+$\"\n )\n self.sensor: str = CowrieConfig.get(\n \"honeypot\", \"sensor_name\", fallback=socket.gethostname()\n )\n self.timeFormat: str\n\n # use Z for UTC (Zulu) time, it's shorter.\n if \"TZ\" in environ and environ[\"TZ\"] == \"UTC\":\n self.timeFormat = \"%Y-%m-%dT%H:%M:%S.%fZ\"\n else:\n self.timeFormat = \"%Y-%m-%dT%H:%M:%S.%f%z\"\n\n # Event trigger so that stop() is called by the reactor when stopping\n reactor.addSystemEventTrigger(\"before\", \"shutdown\", self.stop) # type: ignore\n\n self.start()\n\n def logDispatch(self, **kw: str) -> None:\n \"\"\"\n Use logDispatch when the HoneypotTransport prefix is not available.\n Here you can explicitly set the sessionIds to tie the sessions together\n \"\"\"\n ev = kw\n # ev[\"message\"] = msg\n self.emit(ev)\n\n @abc.abstractmethod\n def start(self) -> None:\n \"\"\"\n Abstract method to initialize output plugin\n \"\"\"\n pass\n\n @abc.abstractmethod\n def stop(self) -> None:\n \"\"\"\n Abstract method to shut down output plugin\n \"\"\"\n pass\n\n @abc.abstractmethod\n def write(self, event: Dict[str, Any]) -> None:\n \"\"\"\n Handle a general event within the output plugin\n \"\"\"\n pass\n\n def emit(self, event: dict) -> None:\n \"\"\"\n This is the main emit() hook that gets called by the the Twisted logging\n\n To make this work with Cowrie, the event dictionary needs the following keys:\n - 'eventid'\n - 'sessionno' or 'session'\n - 'message' or 'format'\n \"\"\"\n sessionno: str\n ev: dict\n\n # Ignore stdout and stderr in output plugins\n if \"printed\" in event:\n return\n\n # Ignore anything without eventid\n if \"eventid\" not in event:\n return\n\n # Ignore anything without session information\n if (\n \"sessionno\" not in event\n and \"session\" not in event\n and \"system\" not in event\n ):\n return\n\n # Ignore anything without message\n if \"message\" not in event and \"format\" not in event:\n return\n\n ev: Dict[str, any] = event # type: ignore\n ev[\"sensor\"] = self.sensor\n\n if \"isError\" in ev:\n del ev[\"isError\"]\n\n # Add ISO timestamp and sensor data\n if \"time\" not in ev:\n ev[\"time\"] = time.time()\n ev[\"timestamp\"] = formatTime(ev[\"time\"], timeFormat=self.timeFormat)\n\n if \"format\" in ev and (\"message\" not in ev or ev[\"message\"] == ()):\n try:\n ev[\"message\"] = ev[\"format\"] % ev\n del ev[\"format\"]\n except Exception:\n pass\n\n # Explicit sessionno (from logDispatch) overrides from 'system'\n if \"sessionno\" in ev:\n sessionno = ev[\"sessionno\"]\n del ev[\"sessionno\"]\n # Maybe it's passed explicitly\n elif \"session\" in ev:\n # reverse engineer sessionno\n try:\n sessionno = next(\n key\n for key, value in self.sessions.items()\n if value == ev[\"session\"]\n )\n except StopIteration:\n return\n # Extract session id from the twisted log prefix\n elif \"system\" in ev:\n sessionno = \"0\"\n telnetmatch = self.telnetRegex.match(ev[\"system\"])\n if telnetmatch:\n sessionno = \"T{}\".format(telnetmatch.groups()[0])\n else:\n sshmatch = self.sshRegex.match(ev[\"system\"])\n if sshmatch:\n sessionno = \"S{}\".format(sshmatch.groups()[0])\n if sessionno == 
\"0\":\n return\n\n if sessionno in self.ips:\n ev[\"src_ip\"] = self.ips[sessionno]\n\n # Connection event is special. adds to session list\n if ev[\"eventid\"] == \"cowrie.session.connect\":\n self.sessions[sessionno] = ev[\"session\"]\n self.ips[sessionno] = ev[\"src_ip\"]\n else:\n ev[\"session\"] = self.sessions[sessionno]\n\n self.write(ev)\n\n # Disconnect is special, remove cached data\n if ev[\"eventid\"] == \"cowrie.session.closed\":\n del self.sessions[sessionno]\n del self.ips[sessionno]\n", "path": "src/cowrie/core/output.py"}]}
| 3,648 | 127 |
gh_patches_debug_5283
|
rasdani/github-patches
|
git_diff
|
azavea__raster-vision-469
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move to using master in azavea/models
We've been using a branch of azavea/models, which can cause confusion.
Before release, merge the `upgrade-sept-2018` branch into the main branch and update our install_deps script accordingly.
</issue>
<code>
[start of rastervision/runner/command_dag.py]
1 import networkx as nx
2
3 import rastervision as rv
4 from rastervision.utils.files import file_exists
5
6 import click
7
8
9 class CommandDAG:
10 """ A directed acyclic graph of command definitions.
11 """
12
13 def __init__(self,
14 command_definitions,
15 rerun_commands=False,
16 skip_file_check=False):
17 """Generates a CommandDAG from a list of CommandDefinitions
18
19 This logic checks if there are any non-exsiting URIs that are
20 not produced as outputs by some command in the set. If so,
21 it raises a ConfigError stating the missing files.
22 """
23 # Create a set of edges, from input_uri to command_config and
24 # from command_config to output_uri. Nodes for commands are their
25 # index into command_definitions.
26
27 uri_dag = nx.DiGraph()
28
29 for idx, command_def in enumerate(command_definitions):
30 uri_dag.add_node(idx)
31 for input_uri in command_def.io_def.input_uris:
32 uri_dag.add_edge(input_uri, idx)
33
34 for output_uri in command_def.io_def.output_uris:
35 uri_dag.add_edge(idx, output_uri)
36
37 # Find all source input_uris, and ensure they exist.
38 if not skip_file_check:
39 unsolved_sources = [
40 uri for uri in uri_dag.nodes
41 if (type(uri) == str and len(uri_dag.in_edges(uri)) == 0)
42 ]
43
44 missing_files = []
45
46 with click.progressbar(
47 unsolved_sources,
48 label='Ensuring input files exists ') as uris:
49 for uri in uris:
50 if not file_exists(uri):
51 missing_files.append(uri)
52
53 if any(missing_files):
54 raise rv.ConfigError(
55 'Files do not exist and are not supplied by commands:\n'
56 '\t{}\n'.format(',\b\t'.join(missing_files)))
57
58 # If we are not rerunning, remove commands that have existing outputs.
59 self.skipped_commands = []
60 if not rerun_commands:
61 commands_to_outputs = [(idx, edge[1]) for idx in uri_dag.nodes
62 if type(idx) == int
63 for edge in uri_dag.out_edges(idx)]
64 with click.progressbar(
65 commands_to_outputs,
66 label='Checking for existing output') as lst:
67 for idx, output_uri in lst:
68 if file_exists(output_uri):
69 uri_dag.remove_edge(idx, output_uri)
70
71 for idx in set(map(lambda x: x[0], commands_to_outputs)):
72 if len(uri_dag.out_edges(idx)) == 0:
73 self.skipped_commands.append(command_definitions[idx])
74 uri_dag.remove_node(idx)
75
76 # Collapse the graph to create edges from command to command.
77 command_id_dag = nx.DiGraph()
78
79 for idx in [idx for idx in uri_dag.nodes if (type(idx) == int)]:
80 command_id_dag.add_node(idx)
81 for upstream_idx in [
82 edge2[0] for edge1 in uri_dag.in_edges(idx)
83 for edge2 in uri_dag.in_edges(edge1[0])
84 ]:
85 command_id_dag.add_edge(upstream_idx, idx)
86
87 # Feed this digraph of commands to the child runner.
88 self.command_definitions = command_definitions
89 self.command_id_dag = command_id_dag
90
91 def get_sorted_commands(self):
92 """Return a topologically sorted list of commands configurations.
93
94 Returns a list of command configurations that are sorted such that every
95 command that depends on some other parent command appears later
96 than that parent command.
97 """
98 return [
99 self.command_definitions[idx].command_config
100 for idx in self.get_sorted_command_ids()
101 ]
102
103 def get_sorted_command_ids(self):
104 """Return a topologically sorted list of commands ids.
105
106 Returns a list of command IDs that can be used to retrieve
107 specific commands out of this DAG. These are sorted such that every
108 command that depends on some other parent command appears later
109 than that parent command.
110 """
111 return [idx for idx in nx.topological_sort(self.command_id_dag)]
112
113 def get_command(self, command_id):
114 """Retrieves a command configuration for the given ID"""
115 return self.get_command_definition(command_id).command_config
116
117 def get_command_definition(self, command_id):
118 """Retrieves a command definition for the given ID"""
119 return self.command_definitions[command_id]
120
121 def get_upstream_command_ids(self, command_id):
122 """Returns the command ids for upstream commands for the command
123 with the given id.
124 """
125 return list(
126 map(lambda x: x[0], self.command_id_dag.in_edges(command_id)))
127
128 def get_command_definitions(self):
129 """Returns the command definitions that will be run in this DAG."""
130 return [
131 self.command_definitions[idx] for idx in self.command_id_dag.nodes
132 ]
133
[end of rastervision/runner/command_dag.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rastervision/runner/command_dag.py b/rastervision/runner/command_dag.py
--- a/rastervision/runner/command_dag.py
+++ b/rastervision/runner/command_dag.py
@@ -45,7 +45,7 @@
with click.progressbar(
unsolved_sources,
- label='Ensuring input files exists ') as uris:
+ label='Ensuring input files exist ') as uris:
for uri in uris:
if not file_exists(uri):
missing_files.append(uri)
|
{"golden_diff": "diff --git a/rastervision/runner/command_dag.py b/rastervision/runner/command_dag.py\n--- a/rastervision/runner/command_dag.py\n+++ b/rastervision/runner/command_dag.py\n@@ -45,7 +45,7 @@\n \n with click.progressbar(\n unsolved_sources,\n- label='Ensuring input files exists ') as uris:\n+ label='Ensuring input files exist ') as uris:\n for uri in uris:\n if not file_exists(uri):\n missing_files.append(uri)\n", "issue": "Move to using master in azavea/models\nWe've been using a branch of azavea/models, which can cause confusion.\r\n\r\nBefore release, merge the `upgrade-sept-2018` branch into the main branch and update our install_deps script accordingly.\n", "before_files": [{"content": "import networkx as nx\n\nimport rastervision as rv\nfrom rastervision.utils.files import file_exists\n\nimport click\n\n\nclass CommandDAG:\n \"\"\" A directed acyclic graph of command definitions.\n \"\"\"\n\n def __init__(self,\n command_definitions,\n rerun_commands=False,\n skip_file_check=False):\n \"\"\"Generates a CommandDAG from a list of CommandDefinitions\n\n This logic checks if there are any non-exsiting URIs that are\n not produced as outputs by some command in the set. If so,\n it raises a ConfigError stating the missing files.\n \"\"\"\n # Create a set of edges, from input_uri to command_config and\n # from command_config to output_uri. Nodes for commands are their\n # index into command_definitions.\n\n uri_dag = nx.DiGraph()\n\n for idx, command_def in enumerate(command_definitions):\n uri_dag.add_node(idx)\n for input_uri in command_def.io_def.input_uris:\n uri_dag.add_edge(input_uri, idx)\n\n for output_uri in command_def.io_def.output_uris:\n uri_dag.add_edge(idx, output_uri)\n\n # Find all source input_uris, and ensure they exist.\n if not skip_file_check:\n unsolved_sources = [\n uri for uri in uri_dag.nodes\n if (type(uri) == str and len(uri_dag.in_edges(uri)) == 0)\n ]\n\n missing_files = []\n\n with click.progressbar(\n unsolved_sources,\n label='Ensuring input files exists ') as uris:\n for uri in uris:\n if not file_exists(uri):\n missing_files.append(uri)\n\n if any(missing_files):\n raise rv.ConfigError(\n 'Files do not exist and are not supplied by commands:\\n'\n '\\t{}\\n'.format(',\\b\\t'.join(missing_files)))\n\n # If we are not rerunning, remove commands that have existing outputs.\n self.skipped_commands = []\n if not rerun_commands:\n commands_to_outputs = [(idx, edge[1]) for idx in uri_dag.nodes\n if type(idx) == int\n for edge in uri_dag.out_edges(idx)]\n with click.progressbar(\n commands_to_outputs,\n label='Checking for existing output') as lst:\n for idx, output_uri in lst:\n if file_exists(output_uri):\n uri_dag.remove_edge(idx, output_uri)\n\n for idx in set(map(lambda x: x[0], commands_to_outputs)):\n if len(uri_dag.out_edges(idx)) == 0:\n self.skipped_commands.append(command_definitions[idx])\n uri_dag.remove_node(idx)\n\n # Collapse the graph to create edges from command to command.\n command_id_dag = nx.DiGraph()\n\n for idx in [idx for idx in uri_dag.nodes if (type(idx) == int)]:\n command_id_dag.add_node(idx)\n for upstream_idx in [\n edge2[0] for edge1 in uri_dag.in_edges(idx)\n for edge2 in uri_dag.in_edges(edge1[0])\n ]:\n command_id_dag.add_edge(upstream_idx, idx)\n\n # Feed this digraph of commands to the child runner.\n self.command_definitions = command_definitions\n self.command_id_dag = command_id_dag\n\n def get_sorted_commands(self):\n \"\"\"Return a topologically sorted list of commands configurations.\n\n Returns a list of 
command configurations that are sorted such that every\n command that depends on some other parent command appears later\n than that parent command.\n \"\"\"\n return [\n self.command_definitions[idx].command_config\n for idx in self.get_sorted_command_ids()\n ]\n\n def get_sorted_command_ids(self):\n \"\"\"Return a topologically sorted list of commands ids.\n\n Returns a list of command IDs that can be used to retrieve\n specific commands out of this DAG. These are sorted such that every\n command that depends on some other parent command appears later\n than that parent command.\n \"\"\"\n return [idx for idx in nx.topological_sort(self.command_id_dag)]\n\n def get_command(self, command_id):\n \"\"\"Retrieves a command configuration for the given ID\"\"\"\n return self.get_command_definition(command_id).command_config\n\n def get_command_definition(self, command_id):\n \"\"\"Retrieves a command definition for the given ID\"\"\"\n return self.command_definitions[command_id]\n\n def get_upstream_command_ids(self, command_id):\n \"\"\"Returns the command ids for upstream commands for the command\n with the given id.\n \"\"\"\n return list(\n map(lambda x: x[0], self.command_id_dag.in_edges(command_id)))\n\n def get_command_definitions(self):\n \"\"\"Returns the command definitions that will be run in this DAG.\"\"\"\n return [\n self.command_definitions[idx] for idx in self.command_id_dag.nodes\n ]\n", "path": "rastervision/runner/command_dag.py"}]}
| 1,951 | 125 |
gh_patches_debug_1363
|
rasdani/github-patches
|
git_diff
|
ManageIQ__integration_tests-7728
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cfme.log only showing on first test in a run.
cfme.log link only appears on the first test from a selection but shows all logs from all tests in that run. Expected to have a separate log link for each test specific to that test. See attached

</issue>
<code>
[start of artifactor/plugins/logger.py]
1 """ Logger plugin for Artifactor
2
3 Add a stanza to the artifactor config like this,
4 artifactor:
5 log_dir: /home/username/outdir
6 per_run: test #test, run, None
7 overwrite: True
8 plugins:
9 logger:
10 enabled: True
11 plugin: logger
12 level: DEBUG
13 """
14 import os
15 from logging import makeLogRecord
16 from artifactor import ArtifactorBasePlugin
17 from cfme.utils.log import make_file_handler
18
19
20 class Logger(ArtifactorBasePlugin):
21
22 class Test(object):
23 def __init__(self, ident):
24 self.ident = ident
25 self.in_progress = False
26 self.handler = None
27
28 def close(self):
29 if self.handle is not None:
30 self.handler.close()
31 self.handler = None
32
33 def plugin_initialize(self):
34 self.register_plugin_hook('start_test', self.start_test)
35 self.register_plugin_hook('finish_test', self.finish_test)
36 self.register_plugin_hook('log_message', self.log_message)
37
38 def configure(self):
39 self.configured = True
40 self.level = self.data.get('level', 'DEBUG')
41
42 @ArtifactorBasePlugin.check_configured
43 def start_test(self, artifact_path, test_name, test_location, slaveid):
44 if not slaveid:
45 slaveid = "Master"
46 test_ident = "{}/{}".format(test_location, test_name)
47 if slaveid in self.store:
48 if self.store[slaveid].in_progress:
49 print("Test already running, can't start another, logger")
50 return None
51 self.store[slaveid].close()
52 self.store[slaveid] = self.Test(test_ident)
53 self.store[slaveid].in_progress = True
54 filename = "{ident}-cfme.log".format(ident=self.ident)
55 self.store[slaveid].handler = make_file_handler(
56 filename,
57 root=artifact_path,
58 # we overwrite
59 mode='w',
60 level=self.level)
61
62 self.fire_hook('filedump', test_location=test_location, test_name=test_name,
63 description="cfme.log", slaveid=slaveid, contents="", file_type="log",
64 display_glyph="align-justify", dont_write=True,
65 os_filename=os.path.join(artifact_path, filename),
66 group_id="pytest-logfile")
67
68 @ArtifactorBasePlugin.check_configured
69 def finish_test(self, artifact_path, test_name, test_location, slaveid):
70 if not slaveid:
71 slaveid = "Master"
72 self.store[slaveid].in_progress = False
73 self.store[slaveid].close()
74
75 @ArtifactorBasePlugin.check_configured
76 def log_message(self, log_record, slaveid):
77 # json transport fallout: args must be a dict or a tuple, json makes a tuple into a list
78 args = log_record['args']
79 log_record['args'] = tuple(args) if isinstance(args, list) else args
80 record = makeLogRecord(log_record)
81 if not slaveid:
82 slaveid = "Master"
83 if slaveid in self.store:
84 handler = self.store[slaveid].handler
85 if handler and record.levelno >= handler.level:
86 handler.handle(record)
87
[end of artifactor/plugins/logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/artifactor/plugins/logger.py b/artifactor/plugins/logger.py
--- a/artifactor/plugins/logger.py
+++ b/artifactor/plugins/logger.py
@@ -26,7 +26,7 @@
self.handler = None
def close(self):
- if self.handle is not None:
+ if self.handler is not None:
self.handler.close()
self.handler = None
|
{"golden_diff": "diff --git a/artifactor/plugins/logger.py b/artifactor/plugins/logger.py\n--- a/artifactor/plugins/logger.py\n+++ b/artifactor/plugins/logger.py\n@@ -26,7 +26,7 @@\n self.handler = None\n \n def close(self):\n- if self.handle is not None:\n+ if self.handler is not None:\n self.handler.close()\n self.handler = None\n", "issue": "cfme.log only showing on first test in a run.\ncfme.log link only appears on the first test from a selection but shows all logs from all tests in that run. Expected to have a separate log link for each test specific to that test. See attached\r\n\r\n\n", "before_files": [{"content": "\"\"\" Logger plugin for Artifactor\n\nAdd a stanza to the artifactor config like this,\nartifactor:\n log_dir: /home/username/outdir\n per_run: test #test, run, None\n overwrite: True\n plugins:\n logger:\n enabled: True\n plugin: logger\n level: DEBUG\n\"\"\"\nimport os\nfrom logging import makeLogRecord\nfrom artifactor import ArtifactorBasePlugin\nfrom cfme.utils.log import make_file_handler\n\n\nclass Logger(ArtifactorBasePlugin):\n\n class Test(object):\n def __init__(self, ident):\n self.ident = ident\n self.in_progress = False\n self.handler = None\n\n def close(self):\n if self.handle is not None:\n self.handler.close()\n self.handler = None\n\n def plugin_initialize(self):\n self.register_plugin_hook('start_test', self.start_test)\n self.register_plugin_hook('finish_test', self.finish_test)\n self.register_plugin_hook('log_message', self.log_message)\n\n def configure(self):\n self.configured = True\n self.level = self.data.get('level', 'DEBUG')\n\n @ArtifactorBasePlugin.check_configured\n def start_test(self, artifact_path, test_name, test_location, slaveid):\n if not slaveid:\n slaveid = \"Master\"\n test_ident = \"{}/{}\".format(test_location, test_name)\n if slaveid in self.store:\n if self.store[slaveid].in_progress:\n print(\"Test already running, can't start another, logger\")\n return None\n self.store[slaveid].close()\n self.store[slaveid] = self.Test(test_ident)\n self.store[slaveid].in_progress = True\n filename = \"{ident}-cfme.log\".format(ident=self.ident)\n self.store[slaveid].handler = make_file_handler(\n filename,\n root=artifact_path,\n # we overwrite\n mode='w',\n level=self.level)\n\n self.fire_hook('filedump', test_location=test_location, test_name=test_name,\n description=\"cfme.log\", slaveid=slaveid, contents=\"\", file_type=\"log\",\n display_glyph=\"align-justify\", dont_write=True,\n os_filename=os.path.join(artifact_path, filename),\n group_id=\"pytest-logfile\")\n\n @ArtifactorBasePlugin.check_configured\n def finish_test(self, artifact_path, test_name, test_location, slaveid):\n if not slaveid:\n slaveid = \"Master\"\n self.store[slaveid].in_progress = False\n self.store[slaveid].close()\n\n @ArtifactorBasePlugin.check_configured\n def log_message(self, log_record, slaveid):\n # json transport fallout: args must be a dict or a tuple, json makes a tuple into a list\n args = log_record['args']\n log_record['args'] = tuple(args) if isinstance(args, list) else args\n record = makeLogRecord(log_record)\n if not slaveid:\n slaveid = \"Master\"\n if slaveid in self.store:\n handler = self.store[slaveid].handler\n if handler and record.levelno >= handler.level:\n handler.handle(record)\n", "path": "artifactor/plugins/logger.py"}]}
| 1,534 | 88 |
gh_patches_debug_40711
|
rasdani/github-patches
|
git_diff
|
airctic__icevision-144
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rescale transforms and torchvision RCNNs
## 🐛 Bug
torchvisions `FasterRCNN` and `MaskRCNN` internally rescales the images via `GeneralizedRCNNTransform`.
This means that any resizing transform previously applied to the image (probably on the `Dataset` stage) will be unexpectedly overriden.
It becomes really confusing if we should apply a resize transform via `Dataset` or via the model, ideally we want all transform to be applied at the `Dataset` stage. Going one step further, can this even be said about normalization?
## Solution 1
Changes the default of `min_size` and `max_size` in the model to 2 and 9999 respectively, practically making them non effective.
## Solution 2
Monkey patch `model.transform` to a function that returns the same items it receives (being careful with what are the expected output types)
---
Solution 1 seems like a better idea, it's less magical, we can still have `min/max_size` as parameters, we just change their defaults.
---
Should we do the same with normalize? Changing the mean to 0 and std to 1? It's them clear that if we want to normalize we should do it in the `Dataset` stage.
I'm thinking this because models that we integrate in the future might not have this internal transforms, and it will be very confusing to a new user why sometimes he has to explicity define normalization and sometimes not.
</issue>
<code>
[start of mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py]
1 __all__ = ["MantisMaskRCNN"]
2
3 from mantisshrimp.imports import *
4 from mantisshrimp.core import *
5 from mantisshrimp.models.mantis_rcnn.rcnn_param_groups import *
6 from mantisshrimp.models.mantis_rcnn.mantis_rcnn import *
7 from mantisshrimp.models.mantis_rcnn.mantis_faster_rcnn import *
8 from mantisshrimp.backbones import *
9
10
11 class MantisMaskRCNN(MantisRCNN):
12 @delegates(MaskRCNN.__init__)
13 def __init__(
14 self,
15 num_classes: int,
16 backbone: nn.Module = None,
17 param_groups: List[nn.Module] = None,
18 **kwargs,
19 ):
20 super().__init__()
21 self.num_classes = num_classes
22
23 if backbone is None:
24 # Creates the default fasterrcnn as given in pytorch. Trained on COCO dataset
25 self.model = maskrcnn_resnet50_fpn(pretrained=True, **kwargs)
26 in_features = self.model.roi_heads.box_predictor.cls_score.in_features
27 self.model.roi_heads.box_predictor = FastRCNNPredictor(
28 in_features, num_classes
29 )
30 in_features_mask = (
31 self.model.roi_heads.mask_predictor.conv5_mask.in_channels
32 )
33 self.model.roi_heads.mask_predictor = MaskRCNNPredictor(
34 in_channels=in_features_mask, dim_reduced=256, num_classes=num_classes
35 )
36 param_groups = resnet_fpn_backbone_param_groups(self.model.backbone)
37 else:
38 self.model = MaskRCNN(backbone, num_classes=num_classes, **kwargs)
39 param_groups = param_groups or [backbone]
40
41 self._param_groups = param_groups + [self.model.rpn, self.model.roi_heads]
42 check_all_model_params_in_groups(self.model, self.param_groups)
43
44 def forward(self, images, targets=None):
45 return self.model(images, targets)
46
47 def predict(
48 self,
49 images: List[np.ndarray],
50 detection_threshold: float = 0.5,
51 mask_threshold: float = 0.5,
52 ):
53 convert_raw_prediction = partial(
54 self.convert_raw_prediction,
55 detection_threshold=detection_threshold,
56 mask_threshold=mask_threshold,
57 )
58
59 return self._predict(
60 images=images, convert_raw_prediction=convert_raw_prediction
61 )
62
63 @property
64 def param_groups(self):
65 return self._param_groups
66
67 @staticmethod
68 def convert_raw_prediction(
69 raw_pred: dict, detection_threshold: float, mask_threshold: float
70 ):
71 preds = MantisFasterRCNN.convert_raw_prediction(
72 raw_pred=raw_pred, detection_threshold=detection_threshold
73 )
74
75 above_threshold = preds["above_threshold"]
76 masks_probs = raw_pred["masks"][above_threshold]
77 masks_probs = masks_probs.detach().cpu().numpy()
78 # convert probabilities to 0 or 1 based on mask_threshold
79 masks = masks_probs > mask_threshold
80 masks = MaskArray(masks.squeeze(1))
81
82 return {**preds, "masks": masks}
83
84 @staticmethod
85 def build_training_sample(
86 imageid: int,
87 img: np.ndarray,
88 labels: List[int],
89 bboxes: List[BBox],
90 masks: MaskArray,
91 **kwargs,
92 ):
93 x, y = MantisFasterRCNN.build_training_sample(
94 imageid=imageid, img=img, labels=labels, bboxes=bboxes,
95 )
96 y["masks"] = tensor(masks.data, dtype=torch.uint8)
97 return x, y
98
[end of mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py]
[start of mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py]
1 __all__ = ["MantisFasterRCNN"]
2
3 from mantisshrimp.imports import *
4 from mantisshrimp.core import *
5 from mantisshrimp.models.mantis_rcnn.rcnn_param_groups import *
6 from mantisshrimp.models.mantis_rcnn.mantis_rcnn import *
7 from mantisshrimp.backbones import *
8
9
10 class MantisFasterRCNN(MantisRCNN):
11 """
12 Creates a flexible Faster RCNN implementation based on torchvision library.
13 Args:
14 n_class (int) : number of classes. Do not have class_id "0" it is reserved as background.
15 n_class = number of classes to label + 1 for background.
16 """
17
18 @delegates(FasterRCNN.__init__)
19 def __init__(
20 self,
21 num_classes: int,
22 backbone: nn.Module = None,
23 param_groups: List[nn.Module] = None,
24 metrics=None,
25 **kwargs,
26 ):
27 super().__init__(metrics=metrics)
28 self.num_classes = num_classes
29 self.backbone = backbone
30 if backbone is None:
31 # Creates the default fasterrcnn as given in pytorch. Trained on COCO dataset
32 self.model = fasterrcnn_resnet50_fpn(pretrained=True, **kwargs)
33 in_features = self.model.roi_heads.box_predictor.cls_score.in_features
34 self.model.roi_heads.box_predictor = FastRCNNPredictor(
35 in_features, num_classes
36 )
37 param_groups = resnet_fpn_backbone_param_groups(self.model.backbone)
38 else:
39 self.model = FasterRCNN(backbone, num_classes=num_classes, **kwargs)
40 param_groups = param_groups or [backbone]
41
42 self._param_groups = param_groups + [self.model.rpn, self.model.roi_heads]
43 check_all_model_params_in_groups(self.model, self._param_groups)
44
45 def forward(self, images, targets=None):
46 return self.model(images, targets)
47
48 def predict(self, images: List[np.ndarray], detection_threshold: float = 0.5):
49 convert_raw_prediction = partial(
50 self.convert_raw_prediction, detection_threshold=detection_threshold,
51 )
52
53 return self._predict(
54 images=images, convert_raw_prediction=convert_raw_prediction
55 )
56
57 @property
58 def param_groups(self):
59 return self._param_groups
60
61 @staticmethod
62 def convert_raw_prediction(raw_pred: dict, detection_threshold: float):
63 above_threshold = raw_pred["scores"] >= detection_threshold
64
65 labels = raw_pred["labels"][above_threshold]
66 labels = labels.detach().cpu().numpy()
67
68 scores = raw_pred["scores"][above_threshold]
69 scores = scores.detach().cpu().numpy()
70
71 boxes = raw_pred["boxes"][above_threshold]
72 bboxes = []
73 for box_tensor in boxes:
74 xyxy = box_tensor.cpu().tolist()
75 bbox = BBox.from_xyxy(*xyxy)
76 bboxes.append(bbox)
77
78 return {
79 "labels": labels,
80 "scores": scores,
81 "bboxes": bboxes,
82 "above_threshold": above_threshold,
83 }
84
85 @staticmethod
86 def build_training_sample(
87 imageid: int, img: np.ndarray, labels: List[int], bboxes: List[BBox], **kwargs,
88 ):
89 x = im2tensor(img)
90 # injected values when annotations are empty are disconsidered
91 # because we mark label as 0 (background)
92 _fake_box = [0, 1, 2, 3]
93 y = {
94 "image_id": tensor(imageid, dtype=torch.int64),
95 "labels": tensor(labels or [0], dtype=torch.int64),
96 "boxes": tensor([o.xyxy for o in bboxes] or [_fake_box], dtype=torch.float),
97 }
98 return x, y
99
[end of mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py]
[start of mantisshrimp/models/mantis_rcnn/mantis_rcnn.py]
1 __all__ = ["MantisRCNN"]
2
3 from mantisshrimp.imports import *
4 from mantisshrimp.utils import *
5 from mantisshrimp.models.mantis_module import *
6 from mantisshrimp.backbones import *
7
8
9 class MantisRCNN(MantisModule, ABC):
10 def __init__(self, metrics=None):
11 super().__init__()
12 self.metrics = metrics or []
13
14 @staticmethod
15 @abstractmethod
16 def build_training_sample(*args, **kwargs):
17 """
18 Converts a record to a format understood by the model.
19 """
20
21 def _predict(
22 self, images: List[np.ndarray], convert_raw_prediction,
23 ):
24 self.eval()
25 tensor_images = [im2tensor(img).to(self.device) for img in images]
26 raw_preds = self(tensor_images)
27
28 return [convert_raw_prediction(raw_pred) for raw_pred in raw_preds]
29
30 @staticmethod
31 def loss(preds, targs) -> Tensor:
32 return sum(preds.values())
33
34 @classmethod
35 def collate_fn(cls, data):
36 ts = [cls.build_training_sample(**o) for o in data]
37 xb, yb = zip(*ts)
38 return xb, list(yb)
39
40 @classmethod
41 def dataloader(cls, dataset, **kwargs) -> DataLoader:
42 return DataLoader(dataset=dataset, collate_fn=cls.collate_fn, **kwargs)
43
44 @staticmethod
45 def get_backbone_by_name(
46 name: str, fpn: bool = True, pretrained: bool = True
47 ) -> nn.Module:
48 """
49 Args:
50 backbone (str): If none creates a default resnet50_fpn model trained on MS COCO 2017
51 Supported backones are: "resnet18", "resnet34","resnet50", "resnet101", "resnet152",
52 "resnext50_32x4d", "resnext101_32x8d", "wide_resnet50_2", "wide_resnet101_2", as resnets with fpn backbones.
53 Without fpn backbones supported are: "resnet18", "resnet34", "resnet50","resnet101",
54 "resnet152", "resnext101_32x8d", "mobilenet", "vgg11", "vgg13", "vgg16", "vgg19",
55 pretrained (bool): Creates a pretrained backbone with imagenet weights.
56 """
57 # Giving string as a backbone, which is either supported resnet or backbone
58 if fpn:
59 # Creates a torchvision resnet model with fpn added
60 # It returns BackboneWithFPN model
61 backbone = resnet_fpn_backbone(name, pretrained=pretrained)
62 else:
63 # This does not create fpn backbone, it is supported for all models
64 backbone = create_torchvision_backbone(name, pretrained=pretrained)
65 return backbone
66
[end of mantisshrimp/models/mantis_rcnn/mantis_rcnn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py b/mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py
--- a/mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py
+++ b/mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py
@@ -4,7 +4,6 @@
from mantisshrimp.core import *
from mantisshrimp.models.mantis_rcnn.rcnn_param_groups import *
from mantisshrimp.models.mantis_rcnn.mantis_rcnn import *
-from mantisshrimp.backbones import *
class MantisFasterRCNN(MantisRCNN):
@@ -22,6 +21,7 @@
backbone: nn.Module = None,
param_groups: List[nn.Module] = None,
metrics=None,
+ remove_internal_transforms=True,
**kwargs,
):
super().__init__(metrics=metrics)
@@ -42,6 +42,9 @@
self._param_groups = param_groups + [self.model.rpn, self.model.roi_heads]
check_all_model_params_in_groups(self.model, self._param_groups)
+ if remove_internal_transforms:
+ self._remove_transforms_from_model(self.model)
+
def forward(self, images, targets=None):
return self.model(images, targets)
diff --git a/mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py b/mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py
--- a/mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py
+++ b/mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py
@@ -5,7 +5,6 @@
from mantisshrimp.models.mantis_rcnn.rcnn_param_groups import *
from mantisshrimp.models.mantis_rcnn.mantis_rcnn import *
from mantisshrimp.models.mantis_rcnn.mantis_faster_rcnn import *
-from mantisshrimp.backbones import *
class MantisMaskRCNN(MantisRCNN):
@@ -15,6 +14,7 @@
num_classes: int,
backbone: nn.Module = None,
param_groups: List[nn.Module] = None,
+ remove_internal_transforms: bool = True,
**kwargs,
):
super().__init__()
@@ -41,6 +41,9 @@
self._param_groups = param_groups + [self.model.rpn, self.model.roi_heads]
check_all_model_params_in_groups(self.model, self.param_groups)
+ if remove_internal_transforms:
+ self._remove_transforms_from_model(self.model)
+
def forward(self, images, targets=None):
return self.model(images, targets)
diff --git a/mantisshrimp/models/mantis_rcnn/mantis_rcnn.py b/mantisshrimp/models/mantis_rcnn/mantis_rcnn.py
--- a/mantisshrimp/models/mantis_rcnn/mantis_rcnn.py
+++ b/mantisshrimp/models/mantis_rcnn/mantis_rcnn.py
@@ -27,6 +27,16 @@
return [convert_raw_prediction(raw_pred) for raw_pred in raw_preds]
+ def _remove_transforms_from_model(self, model: GeneralizedRCNN):
+ def noop_normalize(image):
+ return image
+
+ def noop_resize(image, target):
+ return image, target
+
+ model.transform.normalize = noop_normalize
+ model.transform.resize = noop_resize
+
@staticmethod
def loss(preds, targs) -> Tensor:
return sum(preds.values())
|
{"golden_diff": "diff --git a/mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py b/mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py\n--- a/mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py\n+++ b/mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py\n@@ -4,7 +4,6 @@\n from mantisshrimp.core import *\n from mantisshrimp.models.mantis_rcnn.rcnn_param_groups import *\n from mantisshrimp.models.mantis_rcnn.mantis_rcnn import *\n-from mantisshrimp.backbones import *\n \n \n class MantisFasterRCNN(MantisRCNN):\n@@ -22,6 +21,7 @@\n backbone: nn.Module = None,\n param_groups: List[nn.Module] = None,\n metrics=None,\n+ remove_internal_transforms=True,\n **kwargs,\n ):\n super().__init__(metrics=metrics)\n@@ -42,6 +42,9 @@\n self._param_groups = param_groups + [self.model.rpn, self.model.roi_heads]\n check_all_model_params_in_groups(self.model, self._param_groups)\n \n+ if remove_internal_transforms:\n+ self._remove_transforms_from_model(self.model)\n+\n def forward(self, images, targets=None):\n return self.model(images, targets)\n \ndiff --git a/mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py b/mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py\n--- a/mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py\n+++ b/mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py\n@@ -5,7 +5,6 @@\n from mantisshrimp.models.mantis_rcnn.rcnn_param_groups import *\n from mantisshrimp.models.mantis_rcnn.mantis_rcnn import *\n from mantisshrimp.models.mantis_rcnn.mantis_faster_rcnn import *\n-from mantisshrimp.backbones import *\n \n \n class MantisMaskRCNN(MantisRCNN):\n@@ -15,6 +14,7 @@\n num_classes: int,\n backbone: nn.Module = None,\n param_groups: List[nn.Module] = None,\n+ remove_internal_transforms: bool = True,\n **kwargs,\n ):\n super().__init__()\n@@ -41,6 +41,9 @@\n self._param_groups = param_groups + [self.model.rpn, self.model.roi_heads]\n check_all_model_params_in_groups(self.model, self.param_groups)\n \n+ if remove_internal_transforms:\n+ self._remove_transforms_from_model(self.model)\n+\n def forward(self, images, targets=None):\n return self.model(images, targets)\n \ndiff --git a/mantisshrimp/models/mantis_rcnn/mantis_rcnn.py b/mantisshrimp/models/mantis_rcnn/mantis_rcnn.py\n--- a/mantisshrimp/models/mantis_rcnn/mantis_rcnn.py\n+++ b/mantisshrimp/models/mantis_rcnn/mantis_rcnn.py\n@@ -27,6 +27,16 @@\n \n return [convert_raw_prediction(raw_pred) for raw_pred in raw_preds]\n \n+ def _remove_transforms_from_model(self, model: GeneralizedRCNN):\n+ def noop_normalize(image):\n+ return image\n+\n+ def noop_resize(image, target):\n+ return image, target\n+\n+ model.transform.normalize = noop_normalize\n+ model.transform.resize = noop_resize\n+\n @staticmethod\n def loss(preds, targs) -> Tensor:\n return sum(preds.values())\n", "issue": "Rescale transforms and torchvision RCNNs\n## \ud83d\udc1b Bug\r\ntorchvisions `FasterRCNN` and `MaskRCNN` internally rescales the images via `GeneralizedRCNNTransform`.\r\n\r\nThis means that any resizing transform previously applied to the image (probably on the `Dataset` stage) will be unexpectedly overriden.\r\n\r\nIt becomes really confusing if we should apply a resize transform via `Dataset` or via the model, ideally we want all transform to be applied at the `Dataset` stage. 
Going one step further, can this even be said about normalization?\r\n\r\n## Solution 1\r\nChanges the default of `min_size` and `max_size` in the model to 2 and 9999 respectively, practically making them non effective.\r\n\r\n## Solution 2\r\nMonkey patch `model.transform` to a function that returns the same items it receives (being careful with what are the expected output types)\r\n\r\n---\r\nSolution 1 seems like a better idea, it's less magical, we can still have `min/max_size` as parameters, we just change their defaults.\r\n\r\n---\r\nShould we do the same with normalize? Changing the mean to 0 and std to 1? It's them clear that if we want to normalize we should do it in the `Dataset` stage.\r\n\r\nI'm thinking this because models that we integrate in the future might not have this internal transforms, and it will be very confusing to a new user why sometimes he has to explicity define normalization and sometimes not.\r\n\n", "before_files": [{"content": "__all__ = [\"MantisMaskRCNN\"]\n\nfrom mantisshrimp.imports import *\nfrom mantisshrimp.core import *\nfrom mantisshrimp.models.mantis_rcnn.rcnn_param_groups import *\nfrom mantisshrimp.models.mantis_rcnn.mantis_rcnn import *\nfrom mantisshrimp.models.mantis_rcnn.mantis_faster_rcnn import *\nfrom mantisshrimp.backbones import *\n\n\nclass MantisMaskRCNN(MantisRCNN):\n @delegates(MaskRCNN.__init__)\n def __init__(\n self,\n num_classes: int,\n backbone: nn.Module = None,\n param_groups: List[nn.Module] = None,\n **kwargs,\n ):\n super().__init__()\n self.num_classes = num_classes\n\n if backbone is None:\n # Creates the default fasterrcnn as given in pytorch. Trained on COCO dataset\n self.model = maskrcnn_resnet50_fpn(pretrained=True, **kwargs)\n in_features = self.model.roi_heads.box_predictor.cls_score.in_features\n self.model.roi_heads.box_predictor = FastRCNNPredictor(\n in_features, num_classes\n )\n in_features_mask = (\n self.model.roi_heads.mask_predictor.conv5_mask.in_channels\n )\n self.model.roi_heads.mask_predictor = MaskRCNNPredictor(\n in_channels=in_features_mask, dim_reduced=256, num_classes=num_classes\n )\n param_groups = resnet_fpn_backbone_param_groups(self.model.backbone)\n else:\n self.model = MaskRCNN(backbone, num_classes=num_classes, **kwargs)\n param_groups = param_groups or [backbone]\n\n self._param_groups = param_groups + [self.model.rpn, self.model.roi_heads]\n check_all_model_params_in_groups(self.model, self.param_groups)\n\n def forward(self, images, targets=None):\n return self.model(images, targets)\n\n def predict(\n self,\n images: List[np.ndarray],\n detection_threshold: float = 0.5,\n mask_threshold: float = 0.5,\n ):\n convert_raw_prediction = partial(\n self.convert_raw_prediction,\n detection_threshold=detection_threshold,\n mask_threshold=mask_threshold,\n )\n\n return self._predict(\n images=images, convert_raw_prediction=convert_raw_prediction\n )\n\n @property\n def param_groups(self):\n return self._param_groups\n\n @staticmethod\n def convert_raw_prediction(\n raw_pred: dict, detection_threshold: float, mask_threshold: float\n ):\n preds = MantisFasterRCNN.convert_raw_prediction(\n raw_pred=raw_pred, detection_threshold=detection_threshold\n )\n\n above_threshold = preds[\"above_threshold\"]\n masks_probs = raw_pred[\"masks\"][above_threshold]\n masks_probs = masks_probs.detach().cpu().numpy()\n # convert probabilities to 0 or 1 based on mask_threshold\n masks = masks_probs > mask_threshold\n masks = MaskArray(masks.squeeze(1))\n\n return {**preds, \"masks\": masks}\n\n 
@staticmethod\n def build_training_sample(\n imageid: int,\n img: np.ndarray,\n labels: List[int],\n bboxes: List[BBox],\n masks: MaskArray,\n **kwargs,\n ):\n x, y = MantisFasterRCNN.build_training_sample(\n imageid=imageid, img=img, labels=labels, bboxes=bboxes,\n )\n y[\"masks\"] = tensor(masks.data, dtype=torch.uint8)\n return x, y\n", "path": "mantisshrimp/models/mantis_rcnn/mantis_mask_rcnn.py"}, {"content": "__all__ = [\"MantisFasterRCNN\"]\n\nfrom mantisshrimp.imports import *\nfrom mantisshrimp.core import *\nfrom mantisshrimp.models.mantis_rcnn.rcnn_param_groups import *\nfrom mantisshrimp.models.mantis_rcnn.mantis_rcnn import *\nfrom mantisshrimp.backbones import *\n\n\nclass MantisFasterRCNN(MantisRCNN):\n \"\"\"\n Creates a flexible Faster RCNN implementation based on torchvision library.\n Args: \n n_class (int) : number of classes. Do not have class_id \"0\" it is reserved as background.\n n_class = number of classes to label + 1 for background.\n \"\"\"\n\n @delegates(FasterRCNN.__init__)\n def __init__(\n self,\n num_classes: int,\n backbone: nn.Module = None,\n param_groups: List[nn.Module] = None,\n metrics=None,\n **kwargs,\n ):\n super().__init__(metrics=metrics)\n self.num_classes = num_classes\n self.backbone = backbone\n if backbone is None:\n # Creates the default fasterrcnn as given in pytorch. Trained on COCO dataset\n self.model = fasterrcnn_resnet50_fpn(pretrained=True, **kwargs)\n in_features = self.model.roi_heads.box_predictor.cls_score.in_features\n self.model.roi_heads.box_predictor = FastRCNNPredictor(\n in_features, num_classes\n )\n param_groups = resnet_fpn_backbone_param_groups(self.model.backbone)\n else:\n self.model = FasterRCNN(backbone, num_classes=num_classes, **kwargs)\n param_groups = param_groups or [backbone]\n\n self._param_groups = param_groups + [self.model.rpn, self.model.roi_heads]\n check_all_model_params_in_groups(self.model, self._param_groups)\n\n def forward(self, images, targets=None):\n return self.model(images, targets)\n\n def predict(self, images: List[np.ndarray], detection_threshold: float = 0.5):\n convert_raw_prediction = partial(\n self.convert_raw_prediction, detection_threshold=detection_threshold,\n )\n\n return self._predict(\n images=images, convert_raw_prediction=convert_raw_prediction\n )\n\n @property\n def param_groups(self):\n return self._param_groups\n\n @staticmethod\n def convert_raw_prediction(raw_pred: dict, detection_threshold: float):\n above_threshold = raw_pred[\"scores\"] >= detection_threshold\n\n labels = raw_pred[\"labels\"][above_threshold]\n labels = labels.detach().cpu().numpy()\n\n scores = raw_pred[\"scores\"][above_threshold]\n scores = scores.detach().cpu().numpy()\n\n boxes = raw_pred[\"boxes\"][above_threshold]\n bboxes = []\n for box_tensor in boxes:\n xyxy = box_tensor.cpu().tolist()\n bbox = BBox.from_xyxy(*xyxy)\n bboxes.append(bbox)\n\n return {\n \"labels\": labels,\n \"scores\": scores,\n \"bboxes\": bboxes,\n \"above_threshold\": above_threshold,\n }\n\n @staticmethod\n def build_training_sample(\n imageid: int, img: np.ndarray, labels: List[int], bboxes: List[BBox], **kwargs,\n ):\n x = im2tensor(img)\n # injected values when annotations are empty are disconsidered\n # because we mark label as 0 (background)\n _fake_box = [0, 1, 2, 3]\n y = {\n \"image_id\": tensor(imageid, dtype=torch.int64),\n \"labels\": tensor(labels or [0], dtype=torch.int64),\n \"boxes\": tensor([o.xyxy for o in bboxes] or [_fake_box], dtype=torch.float),\n }\n return x, y\n", "path": 
"mantisshrimp/models/mantis_rcnn/mantis_faster_rcnn.py"}, {"content": "__all__ = [\"MantisRCNN\"]\n\nfrom mantisshrimp.imports import *\nfrom mantisshrimp.utils import *\nfrom mantisshrimp.models.mantis_module import *\nfrom mantisshrimp.backbones import *\n\n\nclass MantisRCNN(MantisModule, ABC):\n def __init__(self, metrics=None):\n super().__init__()\n self.metrics = metrics or []\n\n @staticmethod\n @abstractmethod\n def build_training_sample(*args, **kwargs):\n \"\"\"\n Converts a record to a format understood by the model.\n \"\"\"\n\n def _predict(\n self, images: List[np.ndarray], convert_raw_prediction,\n ):\n self.eval()\n tensor_images = [im2tensor(img).to(self.device) for img in images]\n raw_preds = self(tensor_images)\n\n return [convert_raw_prediction(raw_pred) for raw_pred in raw_preds]\n\n @staticmethod\n def loss(preds, targs) -> Tensor:\n return sum(preds.values())\n\n @classmethod\n def collate_fn(cls, data):\n ts = [cls.build_training_sample(**o) for o in data]\n xb, yb = zip(*ts)\n return xb, list(yb)\n\n @classmethod\n def dataloader(cls, dataset, **kwargs) -> DataLoader:\n return DataLoader(dataset=dataset, collate_fn=cls.collate_fn, **kwargs)\n\n @staticmethod\n def get_backbone_by_name(\n name: str, fpn: bool = True, pretrained: bool = True\n ) -> nn.Module:\n \"\"\"\n Args:\n backbone (str): If none creates a default resnet50_fpn model trained on MS COCO 2017\n Supported backones are: \"resnet18\", \"resnet34\",\"resnet50\", \"resnet101\", \"resnet152\",\n \"resnext50_32x4d\", \"resnext101_32x8d\", \"wide_resnet50_2\", \"wide_resnet101_2\", as resnets with fpn backbones.\n Without fpn backbones supported are: \"resnet18\", \"resnet34\", \"resnet50\",\"resnet101\",\n \"resnet152\", \"resnext101_32x8d\", \"mobilenet\", \"vgg11\", \"vgg13\", \"vgg16\", \"vgg19\",\n pretrained (bool): Creates a pretrained backbone with imagenet weights.\n \"\"\"\n # Giving string as a backbone, which is either supported resnet or backbone\n if fpn:\n # Creates a torchvision resnet model with fpn added\n # It returns BackboneWithFPN model\n backbone = resnet_fpn_backbone(name, pretrained=pretrained)\n else:\n # This does not create fpn backbone, it is supported for all models\n backbone = create_torchvision_backbone(name, pretrained=pretrained)\n return backbone\n", "path": "mantisshrimp/models/mantis_rcnn/mantis_rcnn.py"}]}
| 3,698 | 786 |
gh_patches_debug_613
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1314
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.38
On the docket:
+ [ ] PEX direct requirement metadata for resolves via Pip is incorrect. #1311
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.37"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.37"
+__version__ = "2.1.38"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.37\"\n+__version__ = \"2.1.38\"\n", "issue": "Release 2.1.38\nOn the docket:\r\n+ [ ] PEX direct requirement metadata for resolves via Pip is incorrect. #1311\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.37\"\n", "path": "pex/version.py"}]}
| 617 | 96 |
gh_patches_debug_30596
|
rasdani/github-patches
|
git_diff
|
celery__celery-6298
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Database backend race condition during table creation
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close, without a response,
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue. **N/A**
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug. **N/A**
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- https://github.com/celery/celery/issues/4653
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 4.4.7 (cliffs)
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.4.7 (cliffs) kombu:4.6.11 py:3.8.2
billiard:3.6.3.0 py-amqp:2.6.1
platform -> system:Darwin arch:64bit
kernel version:16.7.0 imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:db+postgresql://<redacted>
include: ['redacted']
accept_content: ['redacted-custom']
database_table_names: {
'group': 'celery_group', 'task': 'celery_task'}
result_serializer: 'redacted-custom'
task_serializer: 'redacted-custom'
task_track_started: True
broker_url: 'amqp://<redacted>'
result_backend: 'db+postgresql://<redacted>'
```
</p>
</details>
# Steps to Reproduce
When celery uses a database result backend, the following line can be called multiple times from different processes:
https://github.com/celery/celery/blob/9a6c2923e859b6993227605610255bd632c1ae68/celery/backends/database/session.py#L56
This is a race condition because SQLAlchemy first checks if the tables/sequences exist and then tries to create them. It causes errors like this (at least on PostgreSQL):
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/redacted.py", line 168, in _redacted
result = async_result.get()
File "/usr/local/lib/python3.7/site-packages/celery/result.py", line 226, in get
self.maybe_throw(callback=callback)
File "/usr/local/lib/python3.7/site-packages/celery/result.py", line 342, in maybe_throw
self.throw(value, self._to_remote_traceback(tb))
File "/usr/local/lib/python3.7/site-packages/celery/result.py", line 335, in throw
self.on_ready.throw(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/vine/promises.py", line 244, in throw
reraise(type(exc), exc, tb)
File "/usr/local/lib/python3.7/site-packages/vine/five.py", line 195, in reraise
raise value
Exception: <class 'sqlalchemy.exc.IntegrityError'>(('(psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "pg_type_typname_nsp_index"\nDETAIL: Key (typname, typnamespace)=(taskset_id_sequence, 2200) already exists.\n',))
```
One workaround is to force the table creation ahead of time as was proposed by a user in the issue I linked: https://github.com/celery/celery/issues/4653#issuecomment-400029147.
I think Celery should handle this itself. A possible solution would catch `IntegrityError` and try again until `create_all` succeeds, perhaps with a limited number of retries and short sleeps between attempts, unlike this unbounded snippet:
```python
def prepare_models(self, engine):
from sqlalchemy.exc import IntegrityError
if not self.prepared:
while True:
try:
ResultModelBase.metadata.create_all(engine)
except IntegrityError:
continue
else:
break
self.prepared = True
```
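
A bounded variant of that loop might look like the sketch below. It is only an illustration: the `max_retries` and `delay` values are arbitrary, and it assumes the same surrounding class as the snippet above (`self.prepared`, `ResultModelBase`).

```python
import time

from sqlalchemy.exc import IntegrityError


def prepare_models(self, engine, max_retries=10, delay=0.1):
    if not self.prepared:
        for attempt in range(max_retries):
            try:
                ResultModelBase.metadata.create_all(engine)
                break
            except IntegrityError:
                # Another process is probably creating the same tables;
                # re-raise only once the retry budget is exhausted.
                if attempt == max_retries - 1:
                    raise
                time.sleep(delay * (attempt + 1))
        self.prepared = True
```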
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
This example doesn't use celery at all, but shows that calling create_all in multiple processes can cause the error. It's a race condition, so you might need to try it multiple times or play around with the number of processes:
<details>
<p>
Requires a local postgres, and this database must be created:
```
createdb racetest
```
```python
from concurrent.futures import ProcessPoolExecutor, as_completed
from sqlalchemy import Column, Integer, Table, MetaData, create_engine
metadata = MetaData()
tbl1 = Table('tbl1', metadata, Column('id', Integer, primary_key=True))
def create_all(url):
engine = create_engine(url)
metadata.create_all(bind=engine)
def main():
url = 'postgresql:///racetest'
engine = create_engine(url)
# Make sure schema is empty before we start
metadata.drop_all(bind=engine)
with ProcessPoolExecutor(max_workers=50) as executor:
futures = []
for _ in range(50):
future = executor.submit(create_all, url)
futures.append(future)
for fut in as_completed(futures):
fut.result()
if __name__ == '__main__':
main()
```
</p>
</details>
</issue>
<code>
[start of celery/backends/database/session.py]
1 """SQLAlchemy session."""
2 from kombu.utils.compat import register_after_fork
3 from sqlalchemy import create_engine
4 from sqlalchemy.ext.declarative import declarative_base
5 from sqlalchemy.orm import sessionmaker
6 from sqlalchemy.pool import NullPool
7
8 ResultModelBase = declarative_base()
9
10 __all__ = ('SessionManager',)
11
12
13 def _after_fork_cleanup_session(session):
14 session._after_fork()
15
16
17 class SessionManager:
18 """Manage SQLAlchemy sessions."""
19
20 def __init__(self):
21 self._engines = {}
22 self._sessions = {}
23 self.forked = False
24 self.prepared = False
25 if register_after_fork is not None:
26 register_after_fork(self, _after_fork_cleanup_session)
27
28 def _after_fork(self):
29 self.forked = True
30
31 def get_engine(self, dburi, **kwargs):
32 if self.forked:
33 try:
34 return self._engines[dburi]
35 except KeyError:
36 engine = self._engines[dburi] = create_engine(dburi, **kwargs)
37 return engine
38 else:
39 kwargs = {k: v for k, v in kwargs.items() if
40 not k.startswith('pool')}
41 return create_engine(dburi, poolclass=NullPool, **kwargs)
42
43 def create_session(self, dburi, short_lived_sessions=False, **kwargs):
44 engine = self.get_engine(dburi, **kwargs)
45 if self.forked:
46 if short_lived_sessions or dburi not in self._sessions:
47 self._sessions[dburi] = sessionmaker(bind=engine)
48 return engine, self._sessions[dburi]
49 return engine, sessionmaker(bind=engine)
50
51 def prepare_models(self, engine):
52 if not self.prepared:
53 ResultModelBase.metadata.create_all(engine)
54 self.prepared = True
55
56 def session_factory(self, dburi, **kwargs):
57 engine, session = self.create_session(dburi, **kwargs)
58 self.prepare_models(engine)
59 return session()
60
[end of celery/backends/database/session.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/celery/backends/database/session.py b/celery/backends/database/session.py
--- a/celery/backends/database/session.py
+++ b/celery/backends/database/session.py
@@ -1,14 +1,21 @@
"""SQLAlchemy session."""
+import time
+
from kombu.utils.compat import register_after_fork
from sqlalchemy import create_engine
+from sqlalchemy.exc import DatabaseError
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool
+from celery.utils.time import get_exponential_backoff_interval
+
ResultModelBase = declarative_base()
__all__ = ('SessionManager',)
+PREPARE_MODELS_MAX_RETRIES = 10
+
def _after_fork_cleanup_session(session):
session._after_fork()
@@ -50,7 +57,25 @@
def prepare_models(self, engine):
if not self.prepared:
- ResultModelBase.metadata.create_all(engine)
+ # SQLAlchemy will check if the items exist before trying to
+ # create them, which is a race condition. If it raises an error
+ # in one iteration, the next may pass all the existence checks
+ # and the call will succeed.
+ retries = 0
+ while True:
+ try:
+ ResultModelBase.metadata.create_all(engine)
+ except DatabaseError:
+ if retries < PREPARE_MODELS_MAX_RETRIES:
+ sleep_amount_ms = get_exponential_backoff_interval(
+ 10, retries, 1000, True
+ )
+ time.sleep(sleep_amount_ms / 1000)
+ retries += 1
+ else:
+ raise
+ else:
+ break
self.prepared = True
def session_factory(self, dburi, **kwargs):
|
{"golden_diff": "diff --git a/celery/backends/database/session.py b/celery/backends/database/session.py\n--- a/celery/backends/database/session.py\n+++ b/celery/backends/database/session.py\n@@ -1,14 +1,21 @@\n \"\"\"SQLAlchemy session.\"\"\"\n+import time\n+\n from kombu.utils.compat import register_after_fork\n from sqlalchemy import create_engine\n+from sqlalchemy.exc import DatabaseError\n from sqlalchemy.ext.declarative import declarative_base\n from sqlalchemy.orm import sessionmaker\n from sqlalchemy.pool import NullPool\n \n+from celery.utils.time import get_exponential_backoff_interval\n+\n ResultModelBase = declarative_base()\n \n __all__ = ('SessionManager',)\n \n+PREPARE_MODELS_MAX_RETRIES = 10\n+\n \n def _after_fork_cleanup_session(session):\n session._after_fork()\n@@ -50,7 +57,25 @@\n \n def prepare_models(self, engine):\n if not self.prepared:\n- ResultModelBase.metadata.create_all(engine)\n+ # SQLAlchemy will check if the items exist before trying to\n+ # create them, which is a race condition. If it raises an error\n+ # in one iteration, the next may pass all the existence checks\n+ # and the call will succeed.\n+ retries = 0\n+ while True:\n+ try:\n+ ResultModelBase.metadata.create_all(engine)\n+ except DatabaseError:\n+ if retries < PREPARE_MODELS_MAX_RETRIES:\n+ sleep_amount_ms = get_exponential_backoff_interval(\n+ 10, retries, 1000, True\n+ )\n+ time.sleep(sleep_amount_ms / 1000)\n+ retries += 1\n+ else:\n+ raise\n+ else:\n+ break\n self.prepared = True\n \n def session_factory(self, dburi, **kwargs):\n", "issue": "Database backend race condition during table creation\n<!--\r\nPlease fill this template entirely and do not erase parts of it.\r\nWe reserve the right to close without a response\r\nbug reports which are incomplete.\r\n-->\r\n# Checklist\r\n<!--\r\nTo check an item on the list replace [ ] with [x].\r\n-->\r\n- [x] I have verified that the issue exists against the `master` branch of Celery.\r\n- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.\r\n- [x] I have read the relevant section in the\r\n [contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)\r\n on reporting bugs.\r\n- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)\r\n for similar or identical bug reports.\r\n- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)\r\n for existing proposed fixes.\r\n- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)\r\n to find out if the bug was already fixed in the master branch.\r\n- [x] I have included all related issues and possible duplicate issues\r\n in this issue (If there are none, check this box anyway).\r\n\r\n## Mandatory Debugging Information\r\n\r\n- [x] I have included the output of ``celery -A proj report`` in the issue.\r\n (if you are not able to do this, then at least specify the Celery\r\n version affected).\r\n- [x] I have verified that the issue exists against the `master` branch of Celery.\r\n- [ ] I have included the contents of ``pip freeze`` in the issue. **N/A**\r\n- [ ] I have included all the versions of all the external dependencies required\r\n to reproduce this bug. 
**N/A**\r\n\r\n## Optional Debugging Information\r\n<!--\r\nTry some of the below if you think they are relevant.\r\nIt will help us figure out the scope of the bug and how many users it affects.\r\n-->\r\n- [ ] I have tried reproducing the issue on more than one Python version\r\n and/or implementation.\r\n- [ ] I have tried reproducing the issue on more than one message broker and/or\r\n result backend.\r\n- [ ] I have tried reproducing the issue on more than one version of the message\r\n broker and/or result backend.\r\n- [ ] I have tried reproducing the issue on more than one operating system.\r\n- [ ] I have tried reproducing the issue on more than one workers pool.\r\n- [ ] I have tried reproducing the issue with autoscaling, retries,\r\n ETA/Countdown & rate limits disabled.\r\n- [ ] I have tried reproducing the issue after downgrading\r\n and/or upgrading Celery and its dependencies.\r\n\r\n## Related Issues and Possible Duplicates\r\n<!--\r\nPlease make sure to search and mention any related issues\r\nor possible duplicates to this issue as requested by the checklist above.\r\n\r\nThis may or may not include issues in other repositories that the Celery project\r\nmaintains or other repositories that are dependencies of Celery.\r\n\r\nIf you don't know how to mention issues, please refer to Github's documentation\r\non the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests\r\n-->\r\n\r\n#### Related Issues\r\n\r\n- https://github.com/celery/celery/issues/4653\r\n\r\n#### Possible Duplicates\r\n\r\n- None\r\n\r\n## Environment & Settings\r\n<!-- Include the contents of celery --version below -->\r\n**Celery version**: 4.4.7 (cliffs)\r\n<!-- Include the output of celery -A proj report below -->\r\n<details>\r\n<summary><b><code>celery report</code> Output:</b></summary>\r\n<p>\r\n\r\n```\r\nsoftware -> celery:4.4.7 (cliffs) kombu:4.6.11 py:3.8.2\r\n billiard:3.6.3.0 py-amqp:2.6.1\r\nplatform -> system:Darwin arch:64bit\r\n kernel version:16.7.0 imp:CPython\r\nloader -> celery.loaders.app.AppLoader\r\nsettings -> transport:amqp results:db+postgresql://<redacted>\r\n\r\ninclude: ['redacted']\r\naccept_content: ['redacted-custom']\r\ndatabase_table_names: {\r\n 'group': 'celery_group', 'task': 'celery_task'}\r\nresult_serializer: 'redacted-custom'\r\ntask_serializer: 'redacted-custom'\r\ntask_track_started: True\r\nbroker_url: 'amqp://<redacted>'\r\nresult_backend: 'db+postgresql://<redacted>'\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n# Steps to Reproduce\r\n\r\nWhen celery uses a database result backend, the following line can be called multiple times from different processes:\r\n\r\nhttps://github.com/celery/celery/blob/9a6c2923e859b6993227605610255bd632c1ae68/celery/backends/database/session.py#L56\r\n\r\nThis is a race condition because SQLAlchemy first checks if the tables/sequences exist and then tries to create them. 
It causes errors like this (at least on PostgreSQL):\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/redacted.py\", line 168, in _redacted\r\n result = async_result.get()\r\n File \"/usr/local/lib/python3.7/site-packages/celery/result.py\", line 226, in get\r\n self.maybe_throw(callback=callback)\r\n File \"/usr/local/lib/python3.7/site-packages/celery/result.py\", line 342, in maybe_throw\r\n self.throw(value, self._to_remote_traceback(tb))\r\n File \"/usr/local/lib/python3.7/site-packages/celery/result.py\", line 335, in throw\r\n self.on_ready.throw(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/vine/promises.py\", line 244, in throw\r\n reraise(type(exc), exc, tb)\r\n File \"/usr/local/lib/python3.7/site-packages/vine/five.py\", line 195, in reraise\r\n raise value\r\nException: <class 'sqlalchemy.exc.IntegrityError'>(('(psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint \"pg_type_typname_nsp_index\"\\nDETAIL: Key (typname, typnamespace)=(taskset_id_sequence, 2200) already exists.\\n',))\r\n```\r\n\r\nOne workaround is to force the table creation ahead of time as was proposed by a user in the issue I linked: https://github.com/celery/celery/issues/4653#issuecomment-400029147.\r\n\r\nI think Celery should handle this itself. A possible solution would catch `IntegrityError` and try again until `create_all` succeeds. (Perhaps with a limited number of retries and with sleeps compared to this snippet):\r\n\r\n```python\r\n def prepare_models(self, engine):\r\n from sqlalchemy.exc import IntegrityError\r\n if not self.prepared:\r\n while True:\r\n try:\r\n ResultModelBase.metadata.create_all(engine)\r\n except IntegrityError:\r\n continue\r\n else:\r\n break\r\n self.prepared = True\r\n```\r\n\r\n## Minimally Reproducible Test Case\r\n<!--\r\nPlease provide a reproducible test case.\r\nRefer to the Reporting Bugs section in our contribution guide.\r\n\r\nWe prefer submitting test cases in the form of a PR to our integration test suite.\r\nIf you can provide one, please mention the PR number below.\r\nIf not, please attach the most minimal code example required to reproduce the issue below.\r\nIf the test case is too large, please include a link to a gist or a repository below.\r\n-->\r\n\r\nThis example doesn't use celery at all, but shows that calling create_all in multiple processes can cause the error. 
It's a race condition, so you might need to try it multiple times or play around with the number of processes:\r\n\r\n<details>\r\n<p>\r\n\r\n\r\nRequires a local postgres, and this database must be created:\r\n\r\n```\r\ncreatedb racetest\r\n```\r\n\r\n```python\r\nfrom concurrent.futures import ProcessPoolExecutor, as_completed\r\n\r\nfrom sqlalchemy import Column, Integer, Table, MetaData, create_engine\r\n\r\nmetadata = MetaData()\r\n\r\ntbl1 = Table('tbl1', metadata, Column('id', Integer, primary_key=True))\r\n\r\n\r\ndef create_all(url):\r\n engine = create_engine(url)\r\n metadata.create_all(bind=engine)\r\n\r\n\r\ndef main():\r\n url = 'postgresql:///racetest'\r\n engine = create_engine(url)\r\n \r\n # Make sure schema is empty before we start\r\n metadata.drop_all(bind=engine)\r\n\r\n with ProcessPoolExecutor(max_workers=50) as executor:\r\n futures = []\r\n for _ in range(50):\r\n future = executor.submit(create_all, url)\r\n futures.append(future)\r\n\r\n for fut in as_completed(futures):\r\n fut.result()\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n\r\n```\r\n\r\n</p>\r\n</details>\r\n\n", "before_files": [{"content": "\"\"\"SQLAlchemy session.\"\"\"\nfrom kombu.utils.compat import register_after_fork\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import sessionmaker\nfrom sqlalchemy.pool import NullPool\n\nResultModelBase = declarative_base()\n\n__all__ = ('SessionManager',)\n\n\ndef _after_fork_cleanup_session(session):\n session._after_fork()\n\n\nclass SessionManager:\n \"\"\"Manage SQLAlchemy sessions.\"\"\"\n\n def __init__(self):\n self._engines = {}\n self._sessions = {}\n self.forked = False\n self.prepared = False\n if register_after_fork is not None:\n register_after_fork(self, _after_fork_cleanup_session)\n\n def _after_fork(self):\n self.forked = True\n\n def get_engine(self, dburi, **kwargs):\n if self.forked:\n try:\n return self._engines[dburi]\n except KeyError:\n engine = self._engines[dburi] = create_engine(dburi, **kwargs)\n return engine\n else:\n kwargs = {k: v for k, v in kwargs.items() if\n not k.startswith('pool')}\n return create_engine(dburi, poolclass=NullPool, **kwargs)\n\n def create_session(self, dburi, short_lived_sessions=False, **kwargs):\n engine = self.get_engine(dburi, **kwargs)\n if self.forked:\n if short_lived_sessions or dburi not in self._sessions:\n self._sessions[dburi] = sessionmaker(bind=engine)\n return engine, self._sessions[dburi]\n return engine, sessionmaker(bind=engine)\n\n def prepare_models(self, engine):\n if not self.prepared:\n ResultModelBase.metadata.create_all(engine)\n self.prepared = True\n\n def session_factory(self, dburi, **kwargs):\n engine, session = self.create_session(dburi, **kwargs)\n self.prepare_models(engine)\n return session()\n", "path": "celery/backends/database/session.py"}]}
| 3,124 | 411 |
gh_patches_debug_8086
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-1904
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Higher resolution icons are still saved in 32x32 directory
Despite Lutris bumping its icon size to 128x128 (currently it's still 64x64, as the bump to 128x128 hasn't been deployed yet), it still saves the icons into `icons/hicolor/32x32`.
It should probably not do that and save them in the proper 128x128 location instead.
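
For illustration only (this is not how Lutris defines it today), the destination could be derived from a single size constant so the saved location cannot silently drift from the rendered size again:

```python
import os

from gi.repository import GLib

ICON_SIZE = "128x128"  # one place to update if the size is bumped again

ICON_PATH = os.path.join(
    GLib.get_user_data_dir(), "icons", "hicolor", ICON_SIZE, "apps"
)
```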
</issue>
<code>
[start of lutris/settings.py]
1 """Internal settings."""
2 import os
3 from gi.repository import GLib
4 from lutris.util.settings import SettingsIO
5 from lutris import __version__
6
7 PROJECT = "Lutris"
8 VERSION = __version__
9 COPYRIGHT = "(c) 2010-2019 Lutris Gaming Platform"
10 AUTHORS = [
11 "The Lutris team"
12 ]
13
14 # Paths
15 CONFIG_DIR = os.path.join(GLib.get_user_config_dir(), "lutris")
16 CONFIG_FILE = os.path.join(CONFIG_DIR, "lutris.conf")
17 DATA_DIR = os.path.join(GLib.get_user_data_dir(), "lutris")
18 RUNNER_DIR = os.path.join(DATA_DIR, "runners")
19 RUNTIME_DIR = os.path.join(DATA_DIR, "runtime")
20 CACHE_DIR = os.path.join(GLib.get_user_cache_dir(), "lutris")
21 GAME_CONFIG_DIR = os.path.join(CONFIG_DIR, "games")
22
23 TMP_PATH = os.path.join(CACHE_DIR, "tmp")
24 BANNER_PATH = os.path.join(DATA_DIR, "banners")
25 COVERART_PATH = os.path.join(DATA_DIR, "coverart")
26 ICON_PATH = os.path.join(GLib.get_user_data_dir(), "icons", "hicolor", "32x32", "apps")
27
28 sio = SettingsIO(CONFIG_FILE)
29 PGA_DB = sio.read_setting("pga_path") or os.path.join(DATA_DIR, "pga.db")
30 SITE_URL = sio.read_setting("website") or "https://lutris.net"
31
32 INSTALLER_URL = SITE_URL + "/api/installers/%s"
33 # XXX change this, should query on the installer, not the game.
34 INSTALLER_REVISION_URL = SITE_URL + "/api/installers/games/%s/revisions/%s"
35 GAME_URL = SITE_URL + "/games/%s/"
36 ICON_URL = SITE_URL + "/games/icon/%s.png"
37 BANNER_URL = SITE_URL + "/games/banner/%s.jpg"
38 RUNTIME_URL = "https://lutris.net/api/runtime"
39
40 read_setting = sio.read_setting
41 write_setting = sio.write_setting
42
[end of lutris/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/settings.py b/lutris/settings.py
--- a/lutris/settings.py
+++ b/lutris/settings.py
@@ -23,7 +23,7 @@
TMP_PATH = os.path.join(CACHE_DIR, "tmp")
BANNER_PATH = os.path.join(DATA_DIR, "banners")
COVERART_PATH = os.path.join(DATA_DIR, "coverart")
-ICON_PATH = os.path.join(GLib.get_user_data_dir(), "icons", "hicolor", "32x32", "apps")
+ICON_PATH = os.path.join(GLib.get_user_data_dir(), "icons", "hicolor", "128x128", "apps")
sio = SettingsIO(CONFIG_FILE)
PGA_DB = sio.read_setting("pga_path") or os.path.join(DATA_DIR, "pga.db")
|
{"golden_diff": "diff --git a/lutris/settings.py b/lutris/settings.py\n--- a/lutris/settings.py\n+++ b/lutris/settings.py\n@@ -23,7 +23,7 @@\n TMP_PATH = os.path.join(CACHE_DIR, \"tmp\")\n BANNER_PATH = os.path.join(DATA_DIR, \"banners\")\n COVERART_PATH = os.path.join(DATA_DIR, \"coverart\")\n-ICON_PATH = os.path.join(GLib.get_user_data_dir(), \"icons\", \"hicolor\", \"32x32\", \"apps\")\n+ICON_PATH = os.path.join(GLib.get_user_data_dir(), \"icons\", \"hicolor\", \"128x128\", \"apps\")\n \n sio = SettingsIO(CONFIG_FILE)\n PGA_DB = sio.read_setting(\"pga_path\") or os.path.join(DATA_DIR, \"pga.db\")\n", "issue": "Higher resolution icons are still saved in 32x32 directory\nDespite Lutris bumping its icon size to 128x128 (currently it's still 64x64 as bump to 128x128 hasn't been deployed yet), it still saves the icons into `icons/hicolor/32x32`.\r\nIt should probably not do that and save it in proper 128x128 location instead.\n", "before_files": [{"content": "\"\"\"Internal settings.\"\"\"\nimport os\nfrom gi.repository import GLib\nfrom lutris.util.settings import SettingsIO\nfrom lutris import __version__\n\nPROJECT = \"Lutris\"\nVERSION = __version__\nCOPYRIGHT = \"(c) 2010-2019 Lutris Gaming Platform\"\nAUTHORS = [\n \"The Lutris team\"\n]\n\n# Paths\nCONFIG_DIR = os.path.join(GLib.get_user_config_dir(), \"lutris\")\nCONFIG_FILE = os.path.join(CONFIG_DIR, \"lutris.conf\")\nDATA_DIR = os.path.join(GLib.get_user_data_dir(), \"lutris\")\nRUNNER_DIR = os.path.join(DATA_DIR, \"runners\")\nRUNTIME_DIR = os.path.join(DATA_DIR, \"runtime\")\nCACHE_DIR = os.path.join(GLib.get_user_cache_dir(), \"lutris\")\nGAME_CONFIG_DIR = os.path.join(CONFIG_DIR, \"games\")\n\nTMP_PATH = os.path.join(CACHE_DIR, \"tmp\")\nBANNER_PATH = os.path.join(DATA_DIR, \"banners\")\nCOVERART_PATH = os.path.join(DATA_DIR, \"coverart\")\nICON_PATH = os.path.join(GLib.get_user_data_dir(), \"icons\", \"hicolor\", \"32x32\", \"apps\")\n\nsio = SettingsIO(CONFIG_FILE)\nPGA_DB = sio.read_setting(\"pga_path\") or os.path.join(DATA_DIR, \"pga.db\")\nSITE_URL = sio.read_setting(\"website\") or \"https://lutris.net\"\n\nINSTALLER_URL = SITE_URL + \"/api/installers/%s\"\n# XXX change this, should query on the installer, not the game.\nINSTALLER_REVISION_URL = SITE_URL + \"/api/installers/games/%s/revisions/%s\"\nGAME_URL = SITE_URL + \"/games/%s/\"\nICON_URL = SITE_URL + \"/games/icon/%s.png\"\nBANNER_URL = SITE_URL + \"/games/banner/%s.jpg\"\nRUNTIME_URL = \"https://lutris.net/api/runtime\"\n\nread_setting = sio.read_setting\nwrite_setting = sio.write_setting\n", "path": "lutris/settings.py"}]}
| 1,134 | 180 |
gh_patches_debug_3319
|
rasdani/github-patches
|
git_diff
|
spack__spack-3825
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`scorep` does not work on Darwin
The Score-P package requires a case-sensitive file system. This is described in the install notes, and I confirmed it with the developers. I suggest disabling Score-P on Darwin to avoid others having to track down this problem the same way I had to. Alternatively, we can add an install-time test of whether the build or install directories are on a case-insensitive file system.
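
For reference, such a probe could be as small as the sketch below. This is not code from Spack, just an illustration: it creates a lowercase temporary file in the directory under test and checks whether the upper-cased spelling resolves to the same file.

```python
import os
import tempfile


def is_case_sensitive(path):
    """Best-effort check that the filesystem holding `path` honours case."""
    with tempfile.NamedTemporaryFile(prefix="casecheck", dir=path) as probe:
        upper = os.path.join(
            os.path.dirname(probe.name), os.path.basename(probe.name).upper()
        )
        # On a case-insensitive filesystem the upper-cased name resolves
        # to the probe file that was just created.
        return not os.path.exists(upper)
```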
</issue>
<code>
[start of var/spack/repos/builtin/packages/scorep/package.py]
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the LICENSE file for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 from spack import *
26
27
28 class Scorep(AutotoolsPackage):
29 """The Score-P measurement infrastructure is a highly scalable and
30 easy-to-use tool suite for profiling, event tracing, and online analysis
31 of HPC applications.
32 """
33
34 homepage = "http://www.vi-hps.org/projects/score-p"
35 url = "http://www.vi-hps.org/upload/packages/scorep/scorep-2.0.2.tar.gz"
36
37 version('3.0', '44da8beaa3f71436a5f6fe51938aab2f')
38 version('2.0.2', '8f00e79e1b5b96e511c5ebecd10b2888')
39 version('1.4.2', '3b9a042b13bdd5836452354e6567f71e')
40 version('1.3', '9db6f957b7f51fa01377a9537867a55c')
41
42 ##########
43 # Dependencies for SCORE-P are quite tight. See the homepage for more
44 # information.
45 # SCOREP 3
46 depends_on('otf2@2:', when='@3:')
47 depends_on('opari2@2:', when='@3:')
48 depends_on('[email protected]:', when='@3:')
49 # SCOREP 2.0.2
50 depends_on('[email protected]', when='@2.0.2')
51 depends_on('[email protected]', when='@2.0.2')
52 depends_on('[email protected]:4.4', when='@2.0.2')
53 # SCOREP 1.4.2
54 depends_on('[email protected]:1.6', when='@1.4.2')
55 depends_on('[email protected]', when='@1.4.2')
56 depends_on('[email protected]:4.4', when='@1.4.2')
57 # SCOREP 1.3
58 depends_on("[email protected]", when='@1.3')
59 depends_on("[email protected]", when='@1.3')
60 depends_on("[email protected]", when='@1.3')
61 ##########
62
63 depends_on("mpi")
64 depends_on("papi")
65
66 variant('shmem', default=False, description='Enable shmem tracing')
67
68 def configure_args(self):
69 spec = self.spec
70
71 config_args = [
72 "--with-otf2=%s" % spec['otf2'].prefix.bin,
73 "--with-opari2=%s" % spec['opari2'].prefix.bin,
74 "--with-cube=%s" % spec['cube'].prefix.bin,
75 "--with-papi-header=%s" % spec['papi'].prefix.include,
76 "--with-papi-lib=%s" % spec['papi'].prefix.lib,
77 "--enable-shared",
78 ]
79
80 if '~shmem' in spec:
81 config_args.append("--without-shmem")
82
83 config_args.extend(["CFLAGS=-fPIC", "CXXFLAGS=-fPIC"])
84 return config_args
85
[end of var/spack/repos/builtin/packages/scorep/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/scorep/package.py b/var/spack/repos/builtin/packages/scorep/package.py
--- a/var/spack/repos/builtin/packages/scorep/package.py
+++ b/var/spack/repos/builtin/packages/scorep/package.py
@@ -65,6 +65,11 @@
variant('shmem', default=False, description='Enable shmem tracing')
+ # Score-P requires a case-sensitive file system, and therefore
+ # does not work on macOS
+ # https://github.com/LLNL/spack/issues/1609
+ conflicts('platform=darwin')
+
def configure_args(self):
spec = self.spec
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/scorep/package.py b/var/spack/repos/builtin/packages/scorep/package.py\n--- a/var/spack/repos/builtin/packages/scorep/package.py\n+++ b/var/spack/repos/builtin/packages/scorep/package.py\n@@ -65,6 +65,11 @@\n \n variant('shmem', default=False, description='Enable shmem tracing')\n \n+ # Score-P requires a case-sensitive file system, and therefore\n+ # does not work on macOS\n+ # https://github.com/LLNL/spack/issues/1609\n+ conflicts('platform=darwin')\n+\n def configure_args(self):\n spec = self.spec\n", "issue": "`scorep` does not work on Darwin\nThe Score-P package requires a case-sensitive file system. This is described in the install notes, and I confirmed with the developers. I suggest to disable Score-P on Darwin to avoid others having to track down this problem in the same way I had to. Alternatively, we can add an install-time test whether the build or install directories are on a case-insensitive file system.\n\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass Scorep(AutotoolsPackage):\n \"\"\"The Score-P measurement infrastructure is a highly scalable and\n easy-to-use tool suite for profiling, event tracing, and online analysis\n of HPC applications.\n \"\"\"\n\n homepage = \"http://www.vi-hps.org/projects/score-p\"\n url = \"http://www.vi-hps.org/upload/packages/scorep/scorep-2.0.2.tar.gz\"\n\n version('3.0', '44da8beaa3f71436a5f6fe51938aab2f')\n version('2.0.2', '8f00e79e1b5b96e511c5ebecd10b2888')\n version('1.4.2', '3b9a042b13bdd5836452354e6567f71e')\n version('1.3', '9db6f957b7f51fa01377a9537867a55c')\n\n ##########\n # Dependencies for SCORE-P are quite tight. 
See the homepage for more\n # information.\n # SCOREP 3\n depends_on('otf2@2:', when='@3:')\n depends_on('opari2@2:', when='@3:')\n depends_on('[email protected]:', when='@3:')\n # SCOREP 2.0.2\n depends_on('[email protected]', when='@2.0.2')\n depends_on('[email protected]', when='@2.0.2')\n depends_on('[email protected]:4.4', when='@2.0.2')\n # SCOREP 1.4.2\n depends_on('[email protected]:1.6', when='@1.4.2')\n depends_on('[email protected]', when='@1.4.2')\n depends_on('[email protected]:4.4', when='@1.4.2')\n # SCOREP 1.3\n depends_on(\"[email protected]\", when='@1.3')\n depends_on(\"[email protected]\", when='@1.3')\n depends_on(\"[email protected]\", when='@1.3')\n ##########\n\n depends_on(\"mpi\")\n depends_on(\"papi\")\n\n variant('shmem', default=False, description='Enable shmem tracing')\n\n def configure_args(self):\n spec = self.spec\n\n config_args = [\n \"--with-otf2=%s\" % spec['otf2'].prefix.bin,\n \"--with-opari2=%s\" % spec['opari2'].prefix.bin,\n \"--with-cube=%s\" % spec['cube'].prefix.bin,\n \"--with-papi-header=%s\" % spec['papi'].prefix.include,\n \"--with-papi-lib=%s\" % spec['papi'].prefix.lib,\n \"--enable-shared\",\n ]\n\n if '~shmem' in spec:\n config_args.append(\"--without-shmem\")\n\n config_args.extend([\"CFLAGS=-fPIC\", \"CXXFLAGS=-fPIC\"])\n return config_args\n", "path": "var/spack/repos/builtin/packages/scorep/package.py"}]}
| 1,835 | 156 |
gh_patches_debug_26643
|
rasdani/github-patches
|
git_diff
|
cocotb__cocotb-2502
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GPI: Address DEBUG log spam to cocotb.gpi logger
There are two places that produce lots of DEBUG log statements in the GPI:
https://github.com/cocotb/cocotb/blob/a9cd7ad2625cd8cbfcde3fb5959fb7316b33b4a2/cocotb/share/lib/gpi/GpiCbHdl.cpp#L94-L100
https://github.com/cocotb/cocotb/blob/6eb637ae1ec4110fb3aa10ef04051fe8ef163108/cocotb/share/include/cocotb_utils.h#L57-L73
They are both related to tracing the program flow between GPI and Python.
I'd like to be able to filter out these messages without losing other DEBUG info on the `cocotb.gpi` logger. I would prefer to have them disabled by default with a way to enable them when debugging cocotb issues.
I don't know what the best approach is. New lower log level (#1226)? New distinct logger with a filter?
</issue>
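For illustration, the "distinct logger with a filter" option floated in the issue could look roughly like the sketch below. Only the `cocotb.gpi` logger name comes from the issue itself; the child-logger name and the idea of silencing it separately are assumptions, not cocotb's actual design.

```python
import logging

# Hypothetical: route the chatty GPI call-tracing through a child logger
# (e.g. "cocotb.gpi.trace") so it can be silenced independently.
trace_log = logging.getLogger("cocotb.gpi.trace")
trace_log.setLevel(logging.INFO)      # hide the trace spam by default

gpi_log = logging.getLogger("cocotb.gpi")
gpi_log.setLevel(logging.DEBUG)       # other GPI DEBUG records still show

# When debugging cocotb internals, re-enable the trace messages:
# trace_log.setLevel(logging.DEBUG)
```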
<code>
[start of cocotb/log.py]
1 # Copyright (c) 2013, 2018 Potential Ventures Ltd
2 # Copyright (c) 2013 SolarFlare Communications Inc
3 # All rights reserved.
4 #
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions are met:
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of Potential Ventures Ltd,
13 # SolarFlare Communications Inc nor the
14 # names of its contributors may be used to endorse or promote products
15 # derived from this software without specific prior written permission.
16 #
17 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Everything related to logging
30 """
31
32 import os
33 import sys
34 import logging
35 import warnings
36
37 from cocotb.utils import (
38 get_sim_time, get_time_from_sim_steps, want_color_output
39 )
40
41 import cocotb.ANSI as ANSI
42
43 if "COCOTB_REDUCED_LOG_FMT" in os.environ:
44 _suppress = True
45 else:
46 _suppress = False
47
48 # Column alignment
49 _LEVEL_CHARS = len("CRITICAL") # noqa
50 _RECORD_CHARS = 35 # noqa
51 _FILENAME_CHARS = 20 # noqa
52 _LINENO_CHARS = 4 # noqa
53 _FUNCNAME_CHARS = 31 # noqa
54
55 # Default log level if not overwritten by the user.
56 _COCOTB_LOG_LEVEL_DEFAULT = "INFO"
57
58
59 def default_config():
60 """ Apply the default cocotb log formatting to the root logger.
61
62 This hooks up the logger to write to stdout, using either
63 :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending
64 on whether colored output is requested. It also adds a
65 :class:`SimTimeContextFilter` filter so that
66 :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.
67
68 The logging level for cocotb logs is set based on the
69 :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.
70
71 If desired, this logging configuration can be overwritten by calling
72 ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by
73 manually resetting the root logger instance.
74 An example of this can be found in the section on :ref:`rotating-logger`.
75
76 .. versionadded:: 1.4
77 """
78 # construct an appropriate handler
79 hdlr = logging.StreamHandler(sys.stdout)
80 hdlr.addFilter(SimTimeContextFilter())
81 if want_color_output():
82 hdlr.setFormatter(SimColourLogFormatter())
83 else:
84 hdlr.setFormatter(SimLogFormatter())
85
86 logging.setLoggerClass(SimBaseLog) # For backwards compatibility
87 logging.basicConfig()
88 logging.getLogger().handlers = [hdlr] # overwrite default handlers
89
90 # apply level settings for cocotb
91 log = logging.getLogger('cocotb')
92
93 try:
94 # All log levels are upper case, convert the user input for convenience.
95 level = os.environ["COCOTB_LOG_LEVEL"].upper()
96 except KeyError:
97 level = _COCOTB_LOG_LEVEL_DEFAULT
98
99 try:
100 log.setLevel(level)
101 except ValueError:
102 valid_levels = ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG')
103 raise ValueError("Invalid log level %r passed through the "
104 "COCOTB_LOG_LEVEL environment variable. Valid log "
105 "levels: %s" % (level, ', '.join(valid_levels)))
106
107 # Notify GPI of log level, which it uses as an optimization to avoid
108 # calling into Python.
109 from cocotb import simulator
110 simulator.log_level(log.getEffectiveLevel())
111
112
113 class SimBaseLog(logging.getLoggerClass()):
114 """ This class only exists for backwards compatibility """
115
116 @property
117 def logger(self):
118 warnings.warn(
119 "the .logger attribute should not be used now that `SimLog` "
120 "returns a native logger instance directly.",
121 DeprecationWarning, stacklevel=2)
122 return self
123
124 @property
125 def colour(self):
126 warnings.warn(
127 "the .colour attribute may be removed in future, use the "
128 "equivalent `cocotb.utils.want_color_output()` instead",
129 DeprecationWarning, stacklevel=2)
130 return want_color_output()
131
132
133 # this used to be a class, hence the unusual capitalization
134 def SimLog(name, ident=None):
135 """ Like logging.getLogger, but append a numeric identifier to the name """
136 if ident is not None:
137 name = f"{name}.0x{ident:x}"
138 return logging.getLogger(name)
139
140
141 class SimTimeContextFilter(logging.Filter):
142 """
143 A filter to inject simulator times into the log records.
144
145 This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.
146
147 This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.
148
149 .. versionadded:: 1.4
150 """
151
152 # needed to make our docs render well
153 def __init__(self):
154 """"""
155 super().__init__()
156
157 def filter(self, record):
158 try:
159 record.created_sim_time = get_sim_time()
160 except RecursionError:
161 # get_sim_time may try to log - if that happens, we can't
162 # attach a simulator time to this message.
163 record.created_sim_time = None
164 return True
165
166
167 class SimLogFormatter(logging.Formatter):
168 """Log formatter to provide consistent log message handling.
169
170 This will only add simulator timestamps if the handler object this
171 formatter is attached to has a :class:`SimTimeContextFilter` filter
172 attached, which cocotb ensures by default.
173 """
174
175 # Removes the arguments from the base class. Docstring needed to make
176 # sphinx happy.
177 def __init__(self):
178 """ Takes no arguments. """
179 super().__init__()
180
181 # Justify and truncate
182 @staticmethod
183 def ljust(string, chars):
184 if len(string) > chars:
185 return ".." + string[(chars - 2) * -1:]
186 return string.ljust(chars)
187
188 @staticmethod
189 def rjust(string, chars):
190 if len(string) > chars:
191 return ".." + string[(chars - 2) * -1:]
192 return string.rjust(chars)
193
194 def _format(self, level, record, msg, coloured=False):
195 sim_time = getattr(record, 'created_sim_time', None)
196 if sim_time is None:
197 sim_time_str = " -.--ns"
198 else:
199 time_ns = get_time_from_sim_steps(sim_time, 'ns')
200 sim_time_str = f"{time_ns:6.2f}ns"
201 prefix = sim_time_str.rjust(11) + ' ' + level + ' '
202 if not _suppress:
203 prefix += self.ljust(record.name, _RECORD_CHARS) + \
204 self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS) + \
205 ':' + self.ljust(str(record.lineno), _LINENO_CHARS) + \
206 ' in ' + self.ljust(str(record.funcName), _FUNCNAME_CHARS) + ' '
207
208 # these lines are copied from the builtin logger
209 if record.exc_info:
210 # Cache the traceback text to avoid converting it multiple times
211 # (it's constant anyway)
212 if not record.exc_text:
213 record.exc_text = self.formatException(record.exc_info)
214 if record.exc_text:
215 if msg[-1:] != "\n":
216 msg = msg + "\n"
217 msg = msg + record.exc_text
218
219 prefix_len = len(prefix)
220 if coloured:
221 prefix_len -= (len(level) - _LEVEL_CHARS)
222 pad = "\n" + " " * (prefix_len)
223 return prefix + pad.join(msg.split('\n'))
224
225 def format(self, record):
226 """Prettify the log output, annotate with simulation time"""
227
228 msg = record.getMessage()
229 level = record.levelname.ljust(_LEVEL_CHARS)
230
231 return self._format(level, record, msg)
232
233
234 class SimColourLogFormatter(SimLogFormatter):
235 """Log formatter to provide consistent log message handling."""
236
237 loglevel2colour = {
238 logging.DEBUG : "%s",
239 logging.INFO : "%s",
240 logging.WARNING : ANSI.COLOR_WARNING + "%s" + ANSI.COLOR_DEFAULT,
241 logging.ERROR : ANSI.COLOR_ERROR + "%s" + ANSI.COLOR_DEFAULT,
242 logging.CRITICAL: ANSI.COLOR_CRITICAL + "%s" + ANSI.COLOR_DEFAULT,
243 }
244
245 def format(self, record):
246 """Prettify the log output, annotate with simulation time"""
247
248 msg = record.getMessage()
249
250 # Need to colour each line in case coloring is applied in the message
251 msg = '\n'.join([SimColourLogFormatter.loglevel2colour.get(record.levelno,"%s") % line for line in msg.split('\n')])
252 level = (SimColourLogFormatter.loglevel2colour.get(record.levelno, "%s") %
253 record.levelname.ljust(_LEVEL_CHARS))
254
255 return self._format(level, record, msg, coloured=True)
256
257
258 def _filter_from_c(logger_name, level):
259 return logging.getLogger(logger_name).isEnabledFor(level)
260
261
262 def _log_from_c(logger_name, level, filename, lineno, msg, function_name):
263 """
264 This is for use from the C world, and allows us to insert C stack
265 information.
266 """
267 logger = logging.getLogger(logger_name)
268 if logger.isEnabledFor(level):
269 record = logger.makeRecord(
270 logger.name,
271 level,
272 filename,
273 lineno,
274 msg,
275 None,
276 None,
277 function_name
278 )
279 logger.handle(record)
280
[end of cocotb/log.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cocotb/log.py b/cocotb/log.py
--- a/cocotb/log.py
+++ b/cocotb/log.py
@@ -52,6 +52,10 @@
_LINENO_CHARS = 4 # noqa
_FUNCNAME_CHARS = 31 # noqa
+# Custom log level
+logging.TRACE = 5
+logging.addLevelName(5, "TRACE")
+
# Default log level if not overwritten by the user.
_COCOTB_LOG_LEVEL_DEFAULT = "INFO"
@@ -99,7 +103,7 @@
try:
log.setLevel(level)
except ValueError:
- valid_levels = ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG')
+ valid_levels = ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG', 'TRACE')
raise ValueError("Invalid log level %r passed through the "
"COCOTB_LOG_LEVEL environment variable. Valid log "
"levels: %s" % (level, ', '.join(valid_levels)))
@@ -235,6 +239,7 @@
"""Log formatter to provide consistent log message handling."""
loglevel2colour = {
+ logging.TRACE : "%s",
logging.DEBUG : "%s",
logging.INFO : "%s",
logging.WARNING : ANSI.COLOR_WARNING + "%s" + ANSI.COLOR_DEFAULT,
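A quick sketch of how the new level might be used once this diff is applied; the level value `5`, the `"TRACE"` name, and the `COCOTB_LOG_LEVEL` variable come from the patch itself, while the logger calls are illustrative only.

```python
import logging

TRACE = 5  # the value the patch registers via logging.addLevelName(5, "TRACE")

gpi_log = logging.getLogger("cocotb.gpi")
gpi_log.log(TRACE, "GPI call tracing")      # dropped while the level is DEBUG or higher

# Re-enable when debugging cocotb internals, e.g. by exporting
# COCOTB_LOG_LEVEL=TRACE, or programmatically:
logging.getLogger("cocotb").setLevel(TRACE)
```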
|
{"golden_diff": "diff --git a/cocotb/log.py b/cocotb/log.py\n--- a/cocotb/log.py\n+++ b/cocotb/log.py\n@@ -52,6 +52,10 @@\n _LINENO_CHARS = 4 # noqa\n _FUNCNAME_CHARS = 31 # noqa\n \n+# Custom log level\n+logging.TRACE = 5\n+logging.addLevelName(5, \"TRACE\")\n+\n # Default log level if not overwritten by the user.\n _COCOTB_LOG_LEVEL_DEFAULT = \"INFO\"\n \n@@ -99,7 +103,7 @@\n try:\n log.setLevel(level)\n except ValueError:\n- valid_levels = ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG')\n+ valid_levels = ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG', 'TRACE')\n raise ValueError(\"Invalid log level %r passed through the \"\n \"COCOTB_LOG_LEVEL environment variable. Valid log \"\n \"levels: %s\" % (level, ', '.join(valid_levels)))\n@@ -235,6 +239,7 @@\n \"\"\"Log formatter to provide consistent log message handling.\"\"\"\n \n loglevel2colour = {\n+ logging.TRACE : \"%s\",\n logging.DEBUG : \"%s\",\n logging.INFO : \"%s\",\n logging.WARNING : ANSI.COLOR_WARNING + \"%s\" + ANSI.COLOR_DEFAULT,\n", "issue": "GPI: Address DEBUG log spam to cocotb.gpi logger\nThere are two places that produce lots of DEBUG log statements in the GPI:\r\n\r\nhttps://github.com/cocotb/cocotb/blob/a9cd7ad2625cd8cbfcde3fb5959fb7316b33b4a2/cocotb/share/lib/gpi/GpiCbHdl.cpp#L94-L100\r\n\r\nhttps://github.com/cocotb/cocotb/blob/6eb637ae1ec4110fb3aa10ef04051fe8ef163108/cocotb/share/include/cocotb_utils.h#L57-L73\r\n\r\nThey are both related to tracing the program flow between GPI and Python.\r\n\r\nI'd like to be able to filter out these messages without losing other DEBUG info on the `cocotb.gpi` logger. I would prefer to have them disabled by default with a way to enable them when debugging cocotb issues.\r\n\r\nI don't know what the best approach is. New lower log level (#1226)? New distinct logger with a filter?\n", "before_files": [{"content": "# Copyright (c) 2013, 2018 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nEverything related to logging\n\"\"\"\n\nimport os\nimport sys\nimport logging\nimport warnings\n\nfrom cocotb.utils import (\n get_sim_time, get_time_from_sim_steps, want_color_output\n)\n\nimport cocotb.ANSI as ANSI\n\nif \"COCOTB_REDUCED_LOG_FMT\" in os.environ:\n _suppress = True\nelse:\n _suppress = False\n\n# Column alignment\n_LEVEL_CHARS = len(\"CRITICAL\") # noqa\n_RECORD_CHARS = 35 # noqa\n_FILENAME_CHARS = 20 # noqa\n_LINENO_CHARS = 4 # noqa\n_FUNCNAME_CHARS = 31 # noqa\n\n# Default log level if not overwritten by the user.\n_COCOTB_LOG_LEVEL_DEFAULT = \"INFO\"\n\n\ndef default_config():\n \"\"\" Apply the default cocotb log formatting to the root logger.\n\n This hooks up the logger to write to stdout, using either\n :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending\n on whether colored output is requested. It also adds a\n :class:`SimTimeContextFilter` filter so that\n :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.\n\n The logging level for cocotb logs is set based on the\n :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.\n\n If desired, this logging configuration can be overwritten by calling\n ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by\n manually resetting the root logger instance.\n An example of this can be found in the section on :ref:`rotating-logger`.\n\n .. versionadded:: 1.4\n \"\"\"\n # construct an appropriate handler\n hdlr = logging.StreamHandler(sys.stdout)\n hdlr.addFilter(SimTimeContextFilter())\n if want_color_output():\n hdlr.setFormatter(SimColourLogFormatter())\n else:\n hdlr.setFormatter(SimLogFormatter())\n\n logging.setLoggerClass(SimBaseLog) # For backwards compatibility\n logging.basicConfig()\n logging.getLogger().handlers = [hdlr] # overwrite default handlers\n\n # apply level settings for cocotb\n log = logging.getLogger('cocotb')\n\n try:\n # All log levels are upper case, convert the user input for convenience.\n level = os.environ[\"COCOTB_LOG_LEVEL\"].upper()\n except KeyError:\n level = _COCOTB_LOG_LEVEL_DEFAULT\n\n try:\n log.setLevel(level)\n except ValueError:\n valid_levels = ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG')\n raise ValueError(\"Invalid log level %r passed through the \"\n \"COCOTB_LOG_LEVEL environment variable. 
Valid log \"\n \"levels: %s\" % (level, ', '.join(valid_levels)))\n\n # Notify GPI of log level, which it uses as an optimization to avoid\n # calling into Python.\n from cocotb import simulator\n simulator.log_level(log.getEffectiveLevel())\n\n\nclass SimBaseLog(logging.getLoggerClass()):\n \"\"\" This class only exists for backwards compatibility \"\"\"\n\n @property\n def logger(self):\n warnings.warn(\n \"the .logger attribute should not be used now that `SimLog` \"\n \"returns a native logger instance directly.\",\n DeprecationWarning, stacklevel=2)\n return self\n\n @property\n def colour(self):\n warnings.warn(\n \"the .colour attribute may be removed in future, use the \"\n \"equivalent `cocotb.utils.want_color_output()` instead\",\n DeprecationWarning, stacklevel=2)\n return want_color_output()\n\n\n# this used to be a class, hence the unusual capitalization\ndef SimLog(name, ident=None):\n \"\"\" Like logging.getLogger, but append a numeric identifier to the name \"\"\"\n if ident is not None:\n name = f\"{name}.0x{ident:x}\"\n return logging.getLogger(name)\n\n\nclass SimTimeContextFilter(logging.Filter):\n \"\"\"\n A filter to inject simulator times into the log records.\n\n This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.\n\n This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.\n\n .. versionadded:: 1.4\n \"\"\"\n\n # needed to make our docs render well\n def __init__(self):\n \"\"\"\"\"\"\n super().__init__()\n\n def filter(self, record):\n try:\n record.created_sim_time = get_sim_time()\n except RecursionError:\n # get_sim_time may try to log - if that happens, we can't\n # attach a simulator time to this message.\n record.created_sim_time = None\n return True\n\n\nclass SimLogFormatter(logging.Formatter):\n \"\"\"Log formatter to provide consistent log message handling.\n\n This will only add simulator timestamps if the handler object this\n formatter is attached to has a :class:`SimTimeContextFilter` filter\n attached, which cocotb ensures by default.\n \"\"\"\n\n # Removes the arguments from the base class. Docstring needed to make\n # sphinx happy.\n def __init__(self):\n \"\"\" Takes no arguments. 
\"\"\"\n super().__init__()\n\n # Justify and truncate\n @staticmethod\n def ljust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1:]\n return string.ljust(chars)\n\n @staticmethod\n def rjust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1:]\n return string.rjust(chars)\n\n def _format(self, level, record, msg, coloured=False):\n sim_time = getattr(record, 'created_sim_time', None)\n if sim_time is None:\n sim_time_str = \" -.--ns\"\n else:\n time_ns = get_time_from_sim_steps(sim_time, 'ns')\n sim_time_str = f\"{time_ns:6.2f}ns\"\n prefix = sim_time_str.rjust(11) + ' ' + level + ' '\n if not _suppress:\n prefix += self.ljust(record.name, _RECORD_CHARS) + \\\n self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS) + \\\n ':' + self.ljust(str(record.lineno), _LINENO_CHARS) + \\\n ' in ' + self.ljust(str(record.funcName), _FUNCNAME_CHARS) + ' '\n\n # these lines are copied from the builtin logger\n if record.exc_info:\n # Cache the traceback text to avoid converting it multiple times\n # (it's constant anyway)\n if not record.exc_text:\n record.exc_text = self.formatException(record.exc_info)\n if record.exc_text:\n if msg[-1:] != \"\\n\":\n msg = msg + \"\\n\"\n msg = msg + record.exc_text\n\n prefix_len = len(prefix)\n if coloured:\n prefix_len -= (len(level) - _LEVEL_CHARS)\n pad = \"\\n\" + \" \" * (prefix_len)\n return prefix + pad.join(msg.split('\\n'))\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n level = record.levelname.ljust(_LEVEL_CHARS)\n\n return self._format(level, record, msg)\n\n\nclass SimColourLogFormatter(SimLogFormatter):\n \"\"\"Log formatter to provide consistent log message handling.\"\"\"\n\n loglevel2colour = {\n logging.DEBUG : \"%s\",\n logging.INFO : \"%s\",\n logging.WARNING : ANSI.COLOR_WARNING + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.ERROR : ANSI.COLOR_ERROR + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.CRITICAL: ANSI.COLOR_CRITICAL + \"%s\" + ANSI.COLOR_DEFAULT,\n }\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n\n # Need to colour each line in case coloring is applied in the message\n msg = '\\n'.join([SimColourLogFormatter.loglevel2colour.get(record.levelno,\"%s\") % line for line in msg.split('\\n')])\n level = (SimColourLogFormatter.loglevel2colour.get(record.levelno, \"%s\") %\n record.levelname.ljust(_LEVEL_CHARS))\n\n return self._format(level, record, msg, coloured=True)\n\n\ndef _filter_from_c(logger_name, level):\n return logging.getLogger(logger_name).isEnabledFor(level)\n\n\ndef _log_from_c(logger_name, level, filename, lineno, msg, function_name):\n \"\"\"\n This is for use from the C world, and allows us to insert C stack\n information.\n \"\"\"\n logger = logging.getLogger(logger_name)\n if logger.isEnabledFor(level):\n record = logger.makeRecord(\n logger.name,\n level,\n filename,\n lineno,\n msg,\n None,\n None,\n function_name\n )\n logger.handle(record)\n", "path": "cocotb/log.py"}]}
| 3,848 | 315 |
gh_patches_debug_4939
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-3960
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: MarkdownConverter not removing code blocks
The `MarkdownConverter` does not remove code blocks in the scenario I have at hand. The first thing that seems to happen is that the markdown gets converted to html and then the code looks for `<pre>` or `<code>` blocks. However, the html produced from our tutorials for example has code blocks as `<p>` ``` ... ``` `</p>`
I am able to fix this by adding the following line [here](https://github.com/deepset-ai/haystack/blob/d962bc0bc95ad1870e37b59f5aef4b6842b2df58/haystack/nodes/file_converter/markdown.py#L90):
`html = re.sub(r"<p>```(.*?)```</p>", " ", html, flags=re.DOTALL)
`
If this is an ok fix, I'm happy to provide a PR
</issue>
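To make the proposed workaround concrete, here is roughly how the extra substitution would behave. The sample `html` string is made up to mimic the `<p>`-wrapped fences the issue describes; it is not actual converter output.

```python
import re

html = "<p>Some prose.</p>\n<p>```python\nprint('hi')\n```</p>"

# The extra rule suggested in the issue, applied alongside the existing
# <pre>/<code> substitutions in MarkdownConverter.convert():
html = re.sub(r"<p>```(.*?)```</p>", " ", html, flags=re.DOTALL)

print(html)  # -> "<p>Some prose.</p>\n "  (the fenced block is gone)
```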
<code>
[start of haystack/nodes/file_converter/markdown.py]
1 import logging
2 import re
3 from pathlib import Path
4 from typing import Dict, List, Optional, Tuple, Any
5
6 try:
7 import frontmatter
8 from bs4 import BeautifulSoup, NavigableString
9 from markdown import markdown
10 except (ImportError, ModuleNotFoundError) as ie:
11 from haystack.utils.import_utils import _optional_component_not_installed
12
13 _optional_component_not_installed(__name__, "preprocessing", ie)
14
15 from haystack.nodes.file_converter.base import BaseConverter
16 from haystack.schema import Document
17
18
19 logger = logging.getLogger(__name__)
20
21
22 class MarkdownConverter(BaseConverter):
23 def __init__(
24 self,
25 remove_numeric_tables: bool = False,
26 valid_languages: Optional[List[str]] = None,
27 id_hash_keys: Optional[List[str]] = None,
28 progress_bar: bool = True,
29 remove_code_snippets: bool = True,
30 extract_headlines: bool = False,
31 add_frontmatter_to_meta: bool = False,
32 ):
33 """
34 :param remove_numeric_tables: Not applicable.
35 :param valid_languages: Not applicable.
36 :param id_hash_keys: Generate the document ID from a custom list of strings that refer to the document's
37 attributes. To make sure you don't have duplicate documents in your DocumentStore if texts are
38 not unique, you can modify the metadata and pass for example, `"meta"` to this field ([`"content"`, `"meta"`]).
39 In this case, the ID is generated by using the content and the defined metadata.
40 :param progress_bar: Show a progress bar for the conversion.
41 :param remove_code_snippets: Whether to remove snippets from the markdown file.
42 :param extract_headlines: Whether to extract headings from the markdown file.
43 :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.
44 """
45 super().__init__(
46 remove_numeric_tables=remove_numeric_tables,
47 valid_languages=valid_languages,
48 id_hash_keys=id_hash_keys,
49 progress_bar=progress_bar,
50 )
51
52 self.remove_code_snippets = remove_code_snippets
53 self.extract_headlines = extract_headlines
54 self.add_frontmatter_to_meta = add_frontmatter_to_meta
55
56 def convert(
57 self,
58 file_path: Path,
59 meta: Optional[Dict[str, Any]] = None,
60 remove_numeric_tables: Optional[bool] = None,
61 valid_languages: Optional[List[str]] = None,
62 encoding: Optional[str] = "utf-8",
63 id_hash_keys: Optional[List[str]] = None,
64 remove_code_snippets: Optional[bool] = None,
65 extract_headlines: Optional[bool] = None,
66 add_frontmatter_to_meta: Optional[bool] = None,
67 ) -> List[Document]:
68 """
69 Reads text from a markdown file and executes optional preprocessing steps.
70
71 :param file_path: path of the file to convert
72 :param meta: dictionary of meta data key-value pairs to append in the returned document.
73 :param encoding: Select the file encoding (default is `utf-8`)
74 :param remove_numeric_tables: Not applicable
75 :param valid_languages: Not applicable
76 :param id_hash_keys: Generate the document id from a custom list of strings that refer to the document's
77 attributes. If you want to ensure you don't have duplicate documents in your DocumentStore but texts are
78 not unique, you can modify the metadata and pass e.g. `"meta"` to this field (e.g. [`"content"`, `"meta"`]).
79 In this case the id will be generated by using the content and the defined metadata.
80 :param remove_code_snippets: Whether to remove snippets from the markdown file.
81 :param extract_headlines: Whether to extract headings from the markdown file.
82 :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.
83 """
84
85 id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys
86 remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets
87 extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines
88 add_frontmatter_to_meta = (
89 add_frontmatter_to_meta if add_frontmatter_to_meta is not None else self.add_frontmatter_to_meta
90 )
91
92 with open(file_path, encoding=encoding, errors="ignore") as f:
93 metadata, markdown_text = frontmatter.parse(f.read())
94
95 # md -> html -> text since BeautifulSoup can extract text cleanly
96 html = markdown(markdown_text)
97
98 # remove code snippets
99 if remove_code_snippets:
100 html = re.sub(r"<pre>(.*?)</pre>", " ", html, flags=re.DOTALL)
101 html = re.sub(r"<code>(.*?)</code>", " ", html, flags=re.DOTALL)
102 soup = BeautifulSoup(html, "html.parser")
103
104 if add_frontmatter_to_meta:
105 if meta is None:
106 meta = metadata
107 else:
108 meta.update(metadata)
109
110 if extract_headlines:
111 text, headlines = self._extract_text_and_headlines(soup)
112 if meta is None:
113 meta = {}
114 meta["headlines"] = headlines
115 else:
116 text = soup.get_text()
117
118 document = Document(content=text, meta=meta, id_hash_keys=id_hash_keys)
119 return [document]
120
121 @staticmethod
122 def _extract_text_and_headlines(soup: BeautifulSoup) -> Tuple[str, List[Dict]]:
123 """
124 Extracts text and headings from a soup object.
125 """
126 headline_tags = {"h1", "h2", "h3", "h4", "h5", "h6"}
127 headlines = []
128 text = ""
129 for desc in soup.descendants:
130 if desc.name in headline_tags:
131 current_headline = desc.get_text()
132 current_start_idx = len(text)
133 current_level = int(desc.name[-1]) - 1
134 headlines.append({"headline": current_headline, "start_idx": current_start_idx, "level": current_level})
135
136 if isinstance(desc, NavigableString):
137 text += desc.get_text()
138
139 return text, headlines
140
[end of haystack/nodes/file_converter/markdown.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/haystack/nodes/file_converter/markdown.py b/haystack/nodes/file_converter/markdown.py
--- a/haystack/nodes/file_converter/markdown.py
+++ b/haystack/nodes/file_converter/markdown.py
@@ -93,7 +93,7 @@
metadata, markdown_text = frontmatter.parse(f.read())
# md -> html -> text since BeautifulSoup can extract text cleanly
- html = markdown(markdown_text)
+ html = markdown(markdown_text, extensions=["fenced_code"])
# remove code snippets
if remove_code_snippets:
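The accepted fix instead enables Python-Markdown's `fenced_code` extension, so triple-backtick blocks are rendered as `<pre><code>…</code></pre>` and the converter's existing `<pre>`/`<code>` regexes already strip them. A minimal check (the exact attributes on the generated tag can vary between markdown versions):

```python
from markdown import markdown

text = "Intro\n\n```python\nprint('hi')\n```\n"

# Without the extension the backticks are not treated as a fence
# (the issue above shows them surviving inside <p> tags).
print(markdown(text))

# With fenced_code the block becomes <pre><code ...>print('hi')</code></pre>,
# which remove_code_snippets then replaces with a space.
print(markdown(text, extensions=["fenced_code"]))
```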
|
{"golden_diff": "diff --git a/haystack/nodes/file_converter/markdown.py b/haystack/nodes/file_converter/markdown.py\n--- a/haystack/nodes/file_converter/markdown.py\n+++ b/haystack/nodes/file_converter/markdown.py\n@@ -93,7 +93,7 @@\n metadata, markdown_text = frontmatter.parse(f.read())\n \n # md -> html -> text since BeautifulSoup can extract text cleanly\n- html = markdown(markdown_text)\n+ html = markdown(markdown_text, extensions=[\"fenced_code\"])\n \n # remove code snippets\n if remove_code_snippets:\n", "issue": "bug: MarkdownConverter not removing code blocks\nThe `MarkdownConverter` does not remove code blocks in the scenario I have at hand. The first thing that seems to happen is that the markdown gets converter to html and then the code looks for `<pre>` or `<code>` blocks. However, the html produced from our tutorials for example has code blocks as `<p>` ``` ... ``` `</p>`\r\n\r\nI am able to fix this by adding the following line [here](https://github.com/deepset-ai/haystack/blob/d962bc0bc95ad1870e37b59f5aef4b6842b2df58/haystack/nodes/file_converter/markdown.py#L90):\r\n`html = re.sub(r\"<p>```(.*?)```</p>\", \" \", html, flags=re.DOTALL)\r\n` \r\n\r\nIf this is an ok fix, I'm happy to provide a PR\n", "before_files": [{"content": "import logging\nimport re\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Tuple, Any\n\ntry:\n import frontmatter\n from bs4 import BeautifulSoup, NavigableString\n from markdown import markdown\nexcept (ImportError, ModuleNotFoundError) as ie:\n from haystack.utils.import_utils import _optional_component_not_installed\n\n _optional_component_not_installed(__name__, \"preprocessing\", ie)\n\nfrom haystack.nodes.file_converter.base import BaseConverter\nfrom haystack.schema import Document\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass MarkdownConverter(BaseConverter):\n def __init__(\n self,\n remove_numeric_tables: bool = False,\n valid_languages: Optional[List[str]] = None,\n id_hash_keys: Optional[List[str]] = None,\n progress_bar: bool = True,\n remove_code_snippets: bool = True,\n extract_headlines: bool = False,\n add_frontmatter_to_meta: bool = False,\n ):\n \"\"\"\n :param remove_numeric_tables: Not applicable.\n :param valid_languages: Not applicable.\n :param id_hash_keys: Generate the document ID from a custom list of strings that refer to the document's\n attributes. 
To make sure you don't have duplicate documents in your DocumentStore if texts are\n not unique, you can modify the metadata and pass for example, `\"meta\"` to this field ([`\"content\"`, `\"meta\"`]).\n In this case, the ID is generated by using the content and the defined metadata.\n :param progress_bar: Show a progress bar for the conversion.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n super().__init__(\n remove_numeric_tables=remove_numeric_tables,\n valid_languages=valid_languages,\n id_hash_keys=id_hash_keys,\n progress_bar=progress_bar,\n )\n\n self.remove_code_snippets = remove_code_snippets\n self.extract_headlines = extract_headlines\n self.add_frontmatter_to_meta = add_frontmatter_to_meta\n\n def convert(\n self,\n file_path: Path,\n meta: Optional[Dict[str, Any]] = None,\n remove_numeric_tables: Optional[bool] = None,\n valid_languages: Optional[List[str]] = None,\n encoding: Optional[str] = \"utf-8\",\n id_hash_keys: Optional[List[str]] = None,\n remove_code_snippets: Optional[bool] = None,\n extract_headlines: Optional[bool] = None,\n add_frontmatter_to_meta: Optional[bool] = None,\n ) -> List[Document]:\n \"\"\"\n Reads text from a markdown file and executes optional preprocessing steps.\n\n :param file_path: path of the file to convert\n :param meta: dictionary of meta data key-value pairs to append in the returned document.\n :param encoding: Select the file encoding (default is `utf-8`)\n :param remove_numeric_tables: Not applicable\n :param valid_languages: Not applicable\n :param id_hash_keys: Generate the document id from a custom list of strings that refer to the document's\n attributes. If you want to ensure you don't have duplicate documents in your DocumentStore but texts are\n not unique, you can modify the metadata and pass e.g. `\"meta\"` to this field (e.g. 
[`\"content\"`, `\"meta\"`]).\n In this case the id will be generated by using the content and the defined metadata.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n\n id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys\n remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets\n extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines\n add_frontmatter_to_meta = (\n add_frontmatter_to_meta if add_frontmatter_to_meta is not None else self.add_frontmatter_to_meta\n )\n\n with open(file_path, encoding=encoding, errors=\"ignore\") as f:\n metadata, markdown_text = frontmatter.parse(f.read())\n\n # md -> html -> text since BeautifulSoup can extract text cleanly\n html = markdown(markdown_text)\n\n # remove code snippets\n if remove_code_snippets:\n html = re.sub(r\"<pre>(.*?)</pre>\", \" \", html, flags=re.DOTALL)\n html = re.sub(r\"<code>(.*?)</code>\", \" \", html, flags=re.DOTALL)\n soup = BeautifulSoup(html, \"html.parser\")\n\n if add_frontmatter_to_meta:\n if meta is None:\n meta = metadata\n else:\n meta.update(metadata)\n\n if extract_headlines:\n text, headlines = self._extract_text_and_headlines(soup)\n if meta is None:\n meta = {}\n meta[\"headlines\"] = headlines\n else:\n text = soup.get_text()\n\n document = Document(content=text, meta=meta, id_hash_keys=id_hash_keys)\n return [document]\n\n @staticmethod\n def _extract_text_and_headlines(soup: BeautifulSoup) -> Tuple[str, List[Dict]]:\n \"\"\"\n Extracts text and headings from a soup object.\n \"\"\"\n headline_tags = {\"h1\", \"h2\", \"h3\", \"h4\", \"h5\", \"h6\"}\n headlines = []\n text = \"\"\n for desc in soup.descendants:\n if desc.name in headline_tags:\n current_headline = desc.get_text()\n current_start_idx = len(text)\n current_level = int(desc.name[-1]) - 1\n headlines.append({\"headline\": current_headline, \"start_idx\": current_start_idx, \"level\": current_level})\n\n if isinstance(desc, NavigableString):\n text += desc.get_text()\n\n return text, headlines\n", "path": "haystack/nodes/file_converter/markdown.py"}]}
| 2,376 | 129 |
gh_patches_debug_35151
|
rasdani/github-patches
|
git_diff
|
nltk__nltk-2130
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding Good-Turing and Add-n Smoothing
Apart from the 2 smoothing algorithms already present, I think add-n smoothing and Good-Turing smoothing can be added.
</issue>
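For context, "add-n" (additive / Lidstone) smoothing simply adds a pseudo-count n to every ngram count before normalising. A standalone sketch of the estimate, not written against the `Smoothing` interface shown in the code below:

```python
def add_n_score(word_count, context_count, vocab_size, n=1.0):
    """Additive (add-n / Lidstone) estimate: (c(h, w) + n) / (c(h) + n * V)."""
    return (word_count + n) / (context_count + n * vocab_size)

# e.g. a bigram seen 0 times in a context seen 9 times, with |V| = 1000 and n = 1:
add_n_score(0, 9, 1000)  # ~0.00099
```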
<code>
[start of nltk/lm/api.py]
1 # -*- coding: utf-8 -*-
2 # Natural Language Toolkit: Language Models
3 #
4 # Copyright (C) 2001-2018 NLTK Project
5 # Authors: Ilia Kurenkov <[email protected]>
6 # URL: <http://nltk.org/>
7 # For license information, see LICENSE.TXT
8 """Language Model Interface."""
9 from __future__ import division, unicode_literals
10
11 import random
12 from abc import ABCMeta, abstractmethod
13 from bisect import bisect
14
15 from six import add_metaclass
16
17 from nltk.lm.counter import NgramCounter
18 from nltk.lm.util import log_base2
19 from nltk.lm.vocabulary import Vocabulary
20
21 try:
22 from itertools import accumulate
23 except ImportError:
24 import operator
25
26 def accumulate(iterable, func=operator.add):
27 """Return running totals"""
28 # accumulate([1,2,3,4,5]) --> 1 3 6 10 15
29 # accumulate([1,2,3,4,5], operator.mul) --> 1 2 6 24 120
30 it = iter(iterable)
31 try:
32 total = next(it)
33 except StopIteration:
34 return
35 yield total
36 for element in it:
37 total = func(total, element)
38 yield total
39
40
41 @add_metaclass(ABCMeta)
42 class Smoothing(object):
43 """Ngram Smoothing Interface
44
45 Implements Chen & Goodman 1995's idea that all smoothing algorithms have
46 certain features in common. This should ideally allow smoothing algoritms to
47 work both with Backoff and Interpolation.
48 """
49
50 def __init__(self, vocabulary, counter):
51 self.vocab = vocabulary
52 self.counts = counter
53
54 @abstractmethod
55 def unigram_score(self, word):
56 raise NotImplementedError()
57
58 @abstractmethod
59 def alpha_gamma(self, word, context):
60 raise NotImplementedError()
61
62
63 def _mean(items):
64 """Return average (aka mean) for sequence of items."""
65 return sum(items) / len(items)
66
67
68 def _random_generator(seed_or_generator):
69 if isinstance(seed_or_generator, random.Random):
70 return seed_or_generator
71 return random.Random(seed_or_generator)
72
73
74 def _weighted_choice(population, weights, random_seed=None):
75 """Like random.choice, but with weights.
76
77 Heavily inspired by python 3.6 `random.choices`.
78 """
79 if not population:
80 raise ValueError("Can't choose from empty population")
81 if len(population) != len(weights):
82 raise ValueError("The number of weights does not match the population")
83 cum_weights = list(accumulate(weights))
84 total = cum_weights[-1]
85 threshold = _random_generator(random_seed).random()
86 return population[bisect(cum_weights, total * threshold)]
87
88
89 @add_metaclass(ABCMeta)
90 class LanguageModel(object):
91 """ABC for Language Models.
92
93 Cannot be directly instantiated itself.
94
95 """
96
97 def __init__(self, order, vocabulary=None, counter=None):
98 """Creates new LanguageModel.
99
100 :param vocabulary: If provided, this vocabulary will be used instead
101 of creating a new one when training.
102 :type vocabulary: `nltk.lm.Vocabulary` or None
103 :param counter: If provided, use this object to count ngrams.
104 :type vocabulary: `nltk.lm.NgramCounter` or None
105 :param ngrams_fn: If given, defines how sentences in training text are turned to ngram
106 sequences.
107 :type ngrams_fn: function or None
108 :param pad_fn: If given, defines how senteces in training text are padded.
109 :type pad_fn: function or None
110
111 """
112 self.order = order
113 self.vocab = Vocabulary() if vocabulary is None else vocabulary
114 self.counts = NgramCounter() if counter is None else counter
115
116 def fit(self, text, vocabulary_text=None):
117 """Trains the model on a text.
118
119 :param text: Training text as a sequence of sentences.
120
121 """
122 if not self.vocab:
123 if vocabulary_text is None:
124 raise ValueError(
125 "Cannot fit without a vocabulary or text to " "create it from."
126 )
127 self.vocab.update(vocabulary_text)
128 self.counts.update(self.vocab.lookup(sent) for sent in text)
129
130 def score(self, word, context=None):
131 """Masks out of vocab (OOV) words and computes their model score.
132
133 For model-specific logic of calculating scores, see the `unmasked_score`
134 method.
135 """
136 return self.unmasked_score(
137 self.vocab.lookup(word), self.vocab.lookup(context) if context else None
138 )
139
140 @abstractmethod
141 def unmasked_score(self, word, context=None):
142 """Score a word given some optional context.
143
144 Concrete models are expected to provide an implementation.
145 Note that this method does not mask its arguments with the OOV label.
146 Use the `score` method for that.
147
148 :param str word: Word for which we want the score
149 :param tuple(str) context: Context the word is in.
150 If `None`, compute unigram score.
151 :param context: tuple(str) or None
152 :rtype: float
153
154 """
155 raise NotImplementedError()
156
157 def logscore(self, word, context=None):
158 """Evaluate the log score of this word in this context.
159
160 The arguments are the same as for `score` and `unmasked_score`.
161
162 """
163 return log_base2(self.score(word, context))
164
165 def context_counts(self, context):
166 """Helper method for retrieving counts for a given context.
167
168 Assumes context has been checked and oov words in it masked.
169 :type context: tuple(str) or None
170
171 """
172 return (
173 self.counts[len(context) + 1][context] if context else self.counts.unigrams
174 )
175
176 def entropy(self, text_ngrams):
177 """Calculate cross-entropy of model for given evaluation text.
178
179 :param Iterable(tuple(str)) text_ngrams: A sequence of ngram tuples.
180 :rtype: float
181
182 """
183 return -1 * _mean(
184 [self.logscore(ngram[-1], ngram[:-1]) for ngram in text_ngrams]
185 )
186
187 def perplexity(self, text_ngrams):
188 """Calculates the perplexity of the given text.
189
190 This is simply 2 ** cross-entropy for the text, so the arguments are the same.
191
192 """
193 return pow(2.0, self.entropy(text_ngrams))
194
195 def generate(self, num_words=1, text_seed=None, random_seed=None):
196 """Generate words from the model.
197
198 :param int num_words: How many words to generate. By default 1.
199 :param text_seed: Generation can be conditioned on preceding context.
200 :param random_seed: If provided, makes the random sampling part of
201 generation reproducible.
202 :return: One (str) word or a list of words generated from model.
203
204 Examples:
205
206 >>> from nltk.lm import MLE
207 >>> lm = MLE(2)
208 >>> lm.fit([[("a", "b"), ("b", "c")]], vocabulary_text=['a', 'b', 'c'])
209 >>> lm.fit([[("a",), ("b",), ("c",)]])
210 >>> lm.generate(random_seed=3)
211 'a'
212 >>> lm.generate(text_seed=['a'])
213 'b'
214
215 """
216 text_seed = [] if text_seed is None else list(text_seed)
217 # base recursion case
218 if num_words == 1:
219 context = (
220 text_seed[-self.order + 1 :]
221 if len(text_seed) >= self.order
222 else text_seed
223 )
224 samples = self.context_counts(self.vocab.lookup(context))
225 while context and not samples:
226 context = context[1:] if len(context) > 1 else []
227 samples = self.context_counts(self.vocab.lookup(context))
228 # sorting achieves two things:
229 # - reproducible randomness when sampling
230 # - turning Mapping into Sequence which _weighted_choice expects
231 samples = sorted(samples)
232 return _weighted_choice(
233 samples, tuple(self.score(w, context) for w in samples), random_seed
234 )
235 # build up text one word at a time
236 generated = []
237 for _ in range(num_words):
238 generated.append(
239 self.generate(
240 num_words=1,
241 text_seed=text_seed + generated,
242 random_seed=random_seed,
243 )
244 )
245 return generated
246
[end of nltk/lm/api.py]
[start of nltk/lm/smoothing.py]
1 # -*- coding: utf-8 -*-
2 # Natural Language Toolkit: Language Model Unit Tests
3 #
4 # Copyright (C) 2001-2018 NLTK Project
5 # Author: Ilia Kurenkov <[email protected]>
6 # URL: <http://nltk.org/>
7 # For license information, see LICENSE.TXT
8 """Smoothing algorithms for language modeling.
9
10 According to Chen & Goodman 1995 these should work with both Backoff and
11 Interpolation.
12 """
13
14 from nltk.lm.api import Smoothing
15
16
17 def _count_non_zero_vals(dictionary):
18 return sum(1.0 for c in dictionary.values() if c > 0)
19
20
21 class WittenBell(Smoothing):
22 """Witten-Bell smoothing."""
23
24 def alpha_gamma(self, word, context):
25 gamma = self.gamma(context)
26 return (1.0 - gamma) * self.alpha(word, context), gamma
27
28 def unigram_score(self, word):
29 return self.counts.unigrams.freq(word)
30
31 def alpha(self, word, context):
32 return self.counts[context].freq(word)
33
34 def gamma(self, context):
35 n_plus = _count_non_zero_vals(self.counts[context])
36 return n_plus / (n_plus + self.counts[len(context) + 1].N())
37
38
39 class KneserNey(Smoothing):
40 """Kneser-Ney Smoothing."""
41
42 def __init__(self, vocabulary, counter, discount=0.1, **kwargs):
43 super(KneserNey, self).__init__(vocabulary, counter, *kwargs)
44 self.discount = discount
45
46 def unigram_score(self, word):
47 return 1.0 / len(self.vocab)
48
49 def alpha_gamma(self, word, context):
50 prefix_counts = self.counts[context]
51 return self.alpha(word, prefix_counts), self.gamma(prefix_counts)
52
53 def alpha(self, word, prefix_counts):
54 return max(prefix_counts[word] - self.discount, 0.0) / prefix_counts.N()
55
56 def gamma(self, prefix_counts):
57 return self.discount * _count_non_zero_vals(prefix_counts) / prefix_counts.N()
58
[end of nltk/lm/smoothing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nltk/lm/api.py b/nltk/lm/api.py
--- a/nltk/lm/api.py
+++ b/nltk/lm/api.py
@@ -45,6 +45,8 @@
Implements Chen & Goodman 1995's idea that all smoothing algorithms have
certain features in common. This should ideally allow smoothing algoritms to
work both with Backoff and Interpolation.
+
+ counter represents the number of counts for ngrams
"""
def __init__(self, vocabulary, counter):
diff --git a/nltk/lm/smoothing.py b/nltk/lm/smoothing.py
--- a/nltk/lm/smoothing.py
+++ b/nltk/lm/smoothing.py
@@ -21,6 +21,10 @@
class WittenBell(Smoothing):
"""Witten-Bell smoothing."""
+ def __init__(self, vocabulary, counter, discount=0.1, **kwargs):
+ super(WittenBell, self).__init__(vocabulary, counter, *kwargs)
+ self.counts = counter
+
def alpha_gamma(self, word, context):
gamma = self.gamma(context)
return (1.0 - gamma) * self.alpha(word, context), gamma
@@ -42,9 +46,10 @@
def __init__(self, vocabulary, counter, discount=0.1, **kwargs):
super(KneserNey, self).__init__(vocabulary, counter, *kwargs)
self.discount = discount
+ self.vocabulary = vocabulary
def unigram_score(self, word):
- return 1.0 / len(self.vocab)
+ return 1.0 / len(self.vocabulary)
def alpha_gamma(self, word, context):
prefix_counts = self.counts[context]
@@ -55,3 +60,19 @@
def gamma(self, prefix_counts):
return self.discount * _count_non_zero_vals(prefix_counts) / prefix_counts.N()
+
+
+class GoodTuring(Smoothing):
+ """Good-Turing Smoothing"""
+ def __init__(self, vocabulary, counter, **kwargs):
+ super(GoodTuring, self).__init__(vocabulary, counter, *kwargs)
+ self.counts = counter
+ self.vocabulary = vocabulary
+
+ def unigram_score(self, word):
+ word_count = self.counts[word]
+ count_plus_1 = 0.
+ for everyContext in self.counts.keys():
+ if len(everyContext.split()) == word_count+1:
+ count_plus_1 += 1
+ return count_plus_1 / len(self.vocabulary)
|
{"golden_diff": "diff --git a/nltk/lm/api.py b/nltk/lm/api.py\n--- a/nltk/lm/api.py\n+++ b/nltk/lm/api.py\n@@ -45,6 +45,8 @@\n Implements Chen & Goodman 1995's idea that all smoothing algorithms have\n certain features in common. This should ideally allow smoothing algoritms to\n work both with Backoff and Interpolation.\n+\n+ counter represents the number of counts for ngrams\n \"\"\"\n \n def __init__(self, vocabulary, counter):\ndiff --git a/nltk/lm/smoothing.py b/nltk/lm/smoothing.py\n--- a/nltk/lm/smoothing.py\n+++ b/nltk/lm/smoothing.py\n@@ -21,6 +21,10 @@\n class WittenBell(Smoothing):\n \"\"\"Witten-Bell smoothing.\"\"\"\n \n+ def __init__(self, vocabulary, counter, discount=0.1, **kwargs):\n+ super(WittenBell, self).__init__(vocabulary, counter, *kwargs)\n+ self.counts = counter\n+\n def alpha_gamma(self, word, context):\n gamma = self.gamma(context)\n return (1.0 - gamma) * self.alpha(word, context), gamma\n@@ -42,9 +46,10 @@\n def __init__(self, vocabulary, counter, discount=0.1, **kwargs):\n super(KneserNey, self).__init__(vocabulary, counter, *kwargs)\n self.discount = discount\n+ self.vocabulary = vocabulary\n \n def unigram_score(self, word):\n- return 1.0 / len(self.vocab)\n+ return 1.0 / len(self.vocabulary)\n \n def alpha_gamma(self, word, context):\n prefix_counts = self.counts[context]\n@@ -55,3 +60,19 @@\n \n def gamma(self, prefix_counts):\n return self.discount * _count_non_zero_vals(prefix_counts) / prefix_counts.N()\n+\n+\n+class GoodTuring(Smoothing):\n+ \"\"\"Good-Turing Smoothing\"\"\"\n+ def __init__(self, vocabulary, counter, **kwargs):\n+ super(GoodTuring, self).__init__(vocabulary, counter, *kwargs)\n+ self.counts = counter\n+ self.vocabulary = vocabulary\n+\n+ def unigram_score(self, word):\n+ word_count = self.counts[word]\n+ count_plus_1 = 0.\n+ for everyContext in self.counts.keys():\n+ if len(everyContext.split()) == word_count+1:\n+ count_plus_1 += 1\n+ return count_plus_1 / len(self.vocabulary)\n", "issue": "Adding Good turing and Add -n Smoothing\nApart from the existing 2 smoothing algorithms already present, I think the add -n smoothing and the Good turing smoothing can be added\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Natural Language Toolkit: Language Models\n#\n# Copyright (C) 2001-2018 NLTK Project\n# Authors: Ilia Kurenkov <[email protected]>\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\"\"\"Language Model Interface.\"\"\"\nfrom __future__ import division, unicode_literals\n\nimport random\nfrom abc import ABCMeta, abstractmethod\nfrom bisect import bisect\n\nfrom six import add_metaclass\n\nfrom nltk.lm.counter import NgramCounter\nfrom nltk.lm.util import log_base2\nfrom nltk.lm.vocabulary import Vocabulary\n\ntry:\n from itertools import accumulate\nexcept ImportError:\n import operator\n\n def accumulate(iterable, func=operator.add):\n \"\"\"Return running totals\"\"\"\n # accumulate([1,2,3,4,5]) --> 1 3 6 10 15\n # accumulate([1,2,3,4,5], operator.mul) --> 1 2 6 24 120\n it = iter(iterable)\n try:\n total = next(it)\n except StopIteration:\n return\n yield total\n for element in it:\n total = func(total, element)\n yield total\n\n\n@add_metaclass(ABCMeta)\nclass Smoothing(object):\n \"\"\"Ngram Smoothing Interface\n\n Implements Chen & Goodman 1995's idea that all smoothing algorithms have\n certain features in common. 
This should ideally allow smoothing algoritms to\n work both with Backoff and Interpolation.\n \"\"\"\n\n def __init__(self, vocabulary, counter):\n self.vocab = vocabulary\n self.counts = counter\n\n @abstractmethod\n def unigram_score(self, word):\n raise NotImplementedError()\n\n @abstractmethod\n def alpha_gamma(self, word, context):\n raise NotImplementedError()\n\n\ndef _mean(items):\n \"\"\"Return average (aka mean) for sequence of items.\"\"\"\n return sum(items) / len(items)\n\n\ndef _random_generator(seed_or_generator):\n if isinstance(seed_or_generator, random.Random):\n return seed_or_generator\n return random.Random(seed_or_generator)\n\n\ndef _weighted_choice(population, weights, random_seed=None):\n \"\"\"Like random.choice, but with weights.\n\n Heavily inspired by python 3.6 `random.choices`.\n \"\"\"\n if not population:\n raise ValueError(\"Can't choose from empty population\")\n if len(population) != len(weights):\n raise ValueError(\"The number of weights does not match the population\")\n cum_weights = list(accumulate(weights))\n total = cum_weights[-1]\n threshold = _random_generator(random_seed).random()\n return population[bisect(cum_weights, total * threshold)]\n\n\n@add_metaclass(ABCMeta)\nclass LanguageModel(object):\n \"\"\"ABC for Language Models.\n\n Cannot be directly instantiated itself.\n\n \"\"\"\n\n def __init__(self, order, vocabulary=None, counter=None):\n \"\"\"Creates new LanguageModel.\n\n :param vocabulary: If provided, this vocabulary will be used instead\n of creating a new one when training.\n :type vocabulary: `nltk.lm.Vocabulary` or None\n :param counter: If provided, use this object to count ngrams.\n :type vocabulary: `nltk.lm.NgramCounter` or None\n :param ngrams_fn: If given, defines how sentences in training text are turned to ngram\n sequences.\n :type ngrams_fn: function or None\n :param pad_fn: If given, defines how senteces in training text are padded.\n :type pad_fn: function or None\n\n \"\"\"\n self.order = order\n self.vocab = Vocabulary() if vocabulary is None else vocabulary\n self.counts = NgramCounter() if counter is None else counter\n\n def fit(self, text, vocabulary_text=None):\n \"\"\"Trains the model on a text.\n\n :param text: Training text as a sequence of sentences.\n\n \"\"\"\n if not self.vocab:\n if vocabulary_text is None:\n raise ValueError(\n \"Cannot fit without a vocabulary or text to \" \"create it from.\"\n )\n self.vocab.update(vocabulary_text)\n self.counts.update(self.vocab.lookup(sent) for sent in text)\n\n def score(self, word, context=None):\n \"\"\"Masks out of vocab (OOV) words and computes their model score.\n\n For model-specific logic of calculating scores, see the `unmasked_score`\n method.\n \"\"\"\n return self.unmasked_score(\n self.vocab.lookup(word), self.vocab.lookup(context) if context else None\n )\n\n @abstractmethod\n def unmasked_score(self, word, context=None):\n \"\"\"Score a word given some optional context.\n\n Concrete models are expected to provide an implementation.\n Note that this method does not mask its arguments with the OOV label.\n Use the `score` method for that.\n\n :param str word: Word for which we want the score\n :param tuple(str) context: Context the word is in.\n If `None`, compute unigram score.\n :param context: tuple(str) or None\n :rtype: float\n\n \"\"\"\n raise NotImplementedError()\n\n def logscore(self, word, context=None):\n \"\"\"Evaluate the log score of this word in this context.\n\n The arguments are the same as for `score` and `unmasked_score`.\n\n 
\"\"\"\n return log_base2(self.score(word, context))\n\n def context_counts(self, context):\n \"\"\"Helper method for retrieving counts for a given context.\n\n Assumes context has been checked and oov words in it masked.\n :type context: tuple(str) or None\n\n \"\"\"\n return (\n self.counts[len(context) + 1][context] if context else self.counts.unigrams\n )\n\n def entropy(self, text_ngrams):\n \"\"\"Calculate cross-entropy of model for given evaluation text.\n\n :param Iterable(tuple(str)) text_ngrams: A sequence of ngram tuples.\n :rtype: float\n\n \"\"\"\n return -1 * _mean(\n [self.logscore(ngram[-1], ngram[:-1]) for ngram in text_ngrams]\n )\n\n def perplexity(self, text_ngrams):\n \"\"\"Calculates the perplexity of the given text.\n\n This is simply 2 ** cross-entropy for the text, so the arguments are the same.\n\n \"\"\"\n return pow(2.0, self.entropy(text_ngrams))\n\n def generate(self, num_words=1, text_seed=None, random_seed=None):\n \"\"\"Generate words from the model.\n\n :param int num_words: How many words to generate. By default 1.\n :param text_seed: Generation can be conditioned on preceding context.\n :param random_seed: If provided, makes the random sampling part of\n generation reproducible.\n :return: One (str) word or a list of words generated from model.\n\n Examples:\n\n >>> from nltk.lm import MLE\n >>> lm = MLE(2)\n >>> lm.fit([[(\"a\", \"b\"), (\"b\", \"c\")]], vocabulary_text=['a', 'b', 'c'])\n >>> lm.fit([[(\"a\",), (\"b\",), (\"c\",)]])\n >>> lm.generate(random_seed=3)\n 'a'\n >>> lm.generate(text_seed=['a'])\n 'b'\n\n \"\"\"\n text_seed = [] if text_seed is None else list(text_seed)\n # base recursion case\n if num_words == 1:\n context = (\n text_seed[-self.order + 1 :]\n if len(text_seed) >= self.order\n else text_seed\n )\n samples = self.context_counts(self.vocab.lookup(context))\n while context and not samples:\n context = context[1:] if len(context) > 1 else []\n samples = self.context_counts(self.vocab.lookup(context))\n # sorting achieves two things:\n # - reproducible randomness when sampling\n # - turning Mapping into Sequence which _weighted_choice expects\n samples = sorted(samples)\n return _weighted_choice(\n samples, tuple(self.score(w, context) for w in samples), random_seed\n )\n # build up text one word at a time\n generated = []\n for _ in range(num_words):\n generated.append(\n self.generate(\n num_words=1,\n text_seed=text_seed + generated,\n random_seed=random_seed,\n )\n )\n return generated\n", "path": "nltk/lm/api.py"}, {"content": "# -*- coding: utf-8 -*-\n# Natural Language Toolkit: Language Model Unit Tests\n#\n# Copyright (C) 2001-2018 NLTK Project\n# Author: Ilia Kurenkov <[email protected]>\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\"\"\"Smoothing algorithms for language modeling.\n\nAccording to Chen & Goodman 1995 these should work with both Backoff and\nInterpolation.\n\"\"\"\n\nfrom nltk.lm.api import Smoothing\n\n\ndef _count_non_zero_vals(dictionary):\n return sum(1.0 for c in dictionary.values() if c > 0)\n\n\nclass WittenBell(Smoothing):\n \"\"\"Witten-Bell smoothing.\"\"\"\n\n def alpha_gamma(self, word, context):\n gamma = self.gamma(context)\n return (1.0 - gamma) * self.alpha(word, context), gamma\n\n def unigram_score(self, word):\n return self.counts.unigrams.freq(word)\n\n def alpha(self, word, context):\n return self.counts[context].freq(word)\n\n def gamma(self, context):\n n_plus = _count_non_zero_vals(self.counts[context])\n return n_plus / (n_plus + self.counts[len(context) + 
1].N())\n\n\nclass KneserNey(Smoothing):\n \"\"\"Kneser-Ney Smoothing.\"\"\"\n\n def __init__(self, vocabulary, counter, discount=0.1, **kwargs):\n super(KneserNey, self).__init__(vocabulary, counter, *kwargs)\n self.discount = discount\n\n def unigram_score(self, word):\n return 1.0 / len(self.vocab)\n\n def alpha_gamma(self, word, context):\n prefix_counts = self.counts[context]\n return self.alpha(word, prefix_counts), self.gamma(prefix_counts)\n\n def alpha(self, word, prefix_counts):\n return max(prefix_counts[word] - self.discount, 0.0) / prefix_counts.N()\n\n def gamma(self, prefix_counts):\n return self.discount * _count_non_zero_vals(prefix_counts) / prefix_counts.N()\n", "path": "nltk/lm/smoothing.py"}]}
| 3,642 | 586 |
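The record above asks for Good-Turing (and add-n) smoothing. As a point of reference, here is a compact, self-contained sketch of the classical Good-Turing count adjustment, c* = (c + 1) * N_{c+1} / N_c; it is independent of the NLTK `Smoothing` hierarchy in the diff, and falling back to the raw count when the N_{c+1} bucket is empty is a practical assumption rather than part of the patch.

```python
from collections import Counter


def good_turing_adjusted_counts(counts):
    """Classical Good-Turing: c* = (c + 1) * N_{c+1} / N_c.

    `counts` maps each ngram to its observed count. Ngrams whose next
    count-of-counts bucket N_{c+1} is empty keep their raw count (a common
    practical fallback, assumed here for simplicity).
    """
    freq_of_freq = Counter(counts.values())
    adjusted = {}
    for ngram, c in counts.items():
        n_c, n_c_plus_1 = freq_of_freq[c], freq_of_freq[c + 1]
        adjusted[ngram] = (c + 1) * n_c_plus_1 / n_c if n_c_plus_1 else c
    return adjusted


# Singletons are discounted: with 3 ngrams seen once and 1 seen twice, c* = 2 * 1/3.
print(good_turing_adjusted_counts(Counter({"a b": 1, "b c": 1, "c d": 1, "d e": 2})))
```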
gh_patches_debug_29093
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-1093
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
decimal_to_binary() should return identical values as bin()
https://github.com/TheAlgorithms/Python/blob/7b267e5e4f8ccb72dd58fcf0057642fd62a36bdf/conversions/decimal_to_binary.py#L4
Please change __decimal_to_binary()__ to return values identical to the Python builtin [__bin()__](https://docs.python.org/3/library/functions.html#bin). With doctests to prove it please.
@PatOnTheBack @Corruption13
</issue>
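A minimal sketch of a `bin()`-compatible conversion, written independently of the accepted patch further down this record: zero and negative inputs follow the builtin's output format, while explicit type checking is left out for brevity.

```python
def decimal_to_binary(num):
    """Return the same string as the builtin bin() for any int.

    >>> all(decimal_to_binary(n) == bin(n) for n in (-35, -1, 0, 1, 2, 7, 35))
    True
    """
    if num == 0:
        return "0b0"
    sign, num = ("-", -num) if num < 0 else ("", num)
    bits = []
    while num > 0:
        bits.append(str(num % 2))
        num >>= 1
    return sign + "0b" + "".join(reversed(bits))


if __name__ == "__main__":
    import doctest

    doctest.testmod()
```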
<code>
[start of conversions/decimal_to_binary.py]
1 """Convert a Decimal Number to a Binary Number."""
2
3
4 def decimal_to_binary(num):
5 """Convert a Decimal Number to a Binary Number."""
6 binary = []
7 while num > 0:
8 binary.insert(0, num % 2)
9 num >>= 1
10 return "".join(str(e) for e in binary)
11
12
13 def main():
14 """Print binary equivelents of decimal numbers."""
15 print("\n2 in binary is:")
16 print(decimal_to_binary(2)) # = 10
17 print("\n7 in binary is:")
18 print(decimal_to_binary(7)) # = 111
19 print("\n35 in binary is:")
20 print(decimal_to_binary(35)) # = 100011
21 print("\n")
22
23
24 if __name__ == '__main__':
25 main()
26
[end of conversions/decimal_to_binary.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conversions/decimal_to_binary.py b/conversions/decimal_to_binary.py
--- a/conversions/decimal_to_binary.py
+++ b/conversions/decimal_to_binary.py
@@ -2,24 +2,57 @@
def decimal_to_binary(num):
- """Convert a Decimal Number to a Binary Number."""
+
+ """
+ Convert a Integer Decimal Number to a Binary Number as str.
+ >>> decimal_to_binary(0)
+ '0b0'
+ >>> decimal_to_binary(2)
+ '0b10'
+ >>> decimal_to_binary(7)
+ '0b111'
+ >>> decimal_to_binary(35)
+ '0b100011'
+ >>> # negatives work too
+ >>> decimal_to_binary(-2)
+ '-0b10'
+ >>> # other floats will error
+ >>> decimal_to_binary(16.16) # doctest: +ELLIPSIS
+ Traceback (most recent call last):
+ ...
+ TypeError: 'float' object cannot be interpreted as an integer
+ >>> # strings will error as well
+ >>> decimal_to_binary('0xfffff') # doctest: +ELLIPSIS
+ Traceback (most recent call last):
+ ...
+ TypeError: 'str' object cannot be interpreted as an integer
+ """
+
+ if type(num) == float:
+ raise TypeError("'float' object cannot be interpreted as an integer")
+ if type(num) == str:
+ raise TypeError("'str' object cannot be interpreted as an integer")
+
+ if num == 0:
+ return "0b0"
+
+ negative = False
+
+ if num < 0:
+ negative = True
+ num = -num
+
binary = []
while num > 0:
binary.insert(0, num % 2)
num >>= 1
- return "".join(str(e) for e in binary)
+ if negative:
+ return "-0b" + "".join(str(e) for e in binary)
-def main():
- """Print binary equivelents of decimal numbers."""
- print("\n2 in binary is:")
- print(decimal_to_binary(2)) # = 10
- print("\n7 in binary is:")
- print(decimal_to_binary(7)) # = 111
- print("\n35 in binary is:")
- print(decimal_to_binary(35)) # = 100011
- print("\n")
+ return "0b" + "".join(str(e) for e in binary)
-if __name__ == '__main__':
- main()
+if __name__ == "__main__":
+ import doctest
+ doctest.testmod()
|
{"golden_diff": "diff --git a/conversions/decimal_to_binary.py b/conversions/decimal_to_binary.py\n--- a/conversions/decimal_to_binary.py\n+++ b/conversions/decimal_to_binary.py\n@@ -2,24 +2,57 @@\n \n \n def decimal_to_binary(num):\n- \"\"\"Convert a Decimal Number to a Binary Number.\"\"\"\n+\n+ \"\"\"\n+ Convert a Integer Decimal Number to a Binary Number as str.\n+ >>> decimal_to_binary(0)\n+ '0b0'\n+ >>> decimal_to_binary(2)\n+ '0b10'\n+ >>> decimal_to_binary(7)\n+ '0b111'\n+ >>> decimal_to_binary(35)\n+ '0b100011'\n+ >>> # negatives work too\n+ >>> decimal_to_binary(-2)\n+ '-0b10'\n+ >>> # other floats will error\n+ >>> decimal_to_binary(16.16) # doctest: +ELLIPSIS\n+ Traceback (most recent call last):\n+ ...\n+ TypeError: 'float' object cannot be interpreted as an integer\n+ >>> # strings will error as well\n+ >>> decimal_to_binary('0xfffff') # doctest: +ELLIPSIS\n+ Traceback (most recent call last):\n+ ...\n+ TypeError: 'str' object cannot be interpreted as an integer\n+ \"\"\"\n+\n+ if type(num) == float:\n+ raise TypeError(\"'float' object cannot be interpreted as an integer\")\n+ if type(num) == str:\n+ raise TypeError(\"'str' object cannot be interpreted as an integer\")\n+\n+ if num == 0:\n+ return \"0b0\"\n+\n+ negative = False\n+\n+ if num < 0:\n+ negative = True\n+ num = -num\n+\n binary = []\n while num > 0:\n binary.insert(0, num % 2)\n num >>= 1\n- return \"\".join(str(e) for e in binary)\n \n+ if negative:\n+ return \"-0b\" + \"\".join(str(e) for e in binary)\n \n-def main():\n- \"\"\"Print binary equivelents of decimal numbers.\"\"\"\n- print(\"\\n2 in binary is:\")\n- print(decimal_to_binary(2)) # = 10\n- print(\"\\n7 in binary is:\")\n- print(decimal_to_binary(7)) # = 111\n- print(\"\\n35 in binary is:\")\n- print(decimal_to_binary(35)) # = 100011\n- print(\"\\n\")\n+ return \"0b\" + \"\".join(str(e) for e in binary)\n \n \n-if __name__ == '__main__':\n- main()\n+if __name__ == \"__main__\":\n+ import doctest\n+ doctest.testmod()\n", "issue": "decimal_to_binary() should return identical values as bin()\nhttps://github.com/TheAlgorithms/Python/blob/7b267e5e4f8ccb72dd58fcf0057642fd62a36bdf/conversions/decimal_to_binary.py#L4\r\n\r\nPlease change __decimal_to_binary()__ to return identical values as the Python builtin [__bin()__](https://docs.python.org/3/library/functions.html#bin). With doctests to prove it please.\r\n\r\n@PatOnTheBack @Corruption13\n", "before_files": [{"content": "\"\"\"Convert a Decimal Number to a Binary Number.\"\"\"\n\n\ndef decimal_to_binary(num):\n \"\"\"Convert a Decimal Number to a Binary Number.\"\"\"\n binary = []\n while num > 0:\n binary.insert(0, num % 2)\n num >>= 1\n return \"\".join(str(e) for e in binary)\n\n\ndef main():\n \"\"\"Print binary equivelents of decimal numbers.\"\"\"\n print(\"\\n2 in binary is:\")\n print(decimal_to_binary(2)) # = 10\n print(\"\\n7 in binary is:\")\n print(decimal_to_binary(7)) # = 111\n print(\"\\n35 in binary is:\")\n print(decimal_to_binary(35)) # = 100011\n print(\"\\n\")\n\n\nif __name__ == '__main__':\n main()\n", "path": "conversions/decimal_to_binary.py"}]}
| 885 | 628 |
gh_patches_debug_1651
|
rasdani/github-patches
|
git_diff
|
deeppavlov__DeepPavlov-76
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
What is "'Chainer' object has no attribute 'infer'
2018-03-04 14:09:23,638 (util.py:64 WorkerThread2) ERROR - TeleBot: "AttributeError occurred, args=("'Chainer' object has no attribute 'infer'",)
Traceback (most recent call last):
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py", line 58, in run
task(*args, **kwargs)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 48, in handle_inference
pred = model.infer(context)
AttributeError: 'Chainer' object has no attribute 'infer'
"
2018-03-04 14:09:23.638 ERROR in 'TeleBot'['util'] at line 64: AttributeError occurred, args=("'Chainer' object has no attribute 'infer'",)
Traceback (most recent call last):
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py", line 58, in run
task(*args, **kwargs)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 48, in handle_inference
pred = model.infer(context)
AttributeError: 'Chainer' object has no attribute 'infer'
Traceback (most recent call last):
File "deep.py", line 60, in <module>
main()
File "deep.py", line 56, in main
interact_model_by_telegram(pipeline_config_path, token)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 58, in interact_model_by_telegram
init_bot_for_model(token, model)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 52, in init_bot_for_model
bot.polling()
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/__init__.py", line 264, in polling
self.__threaded_polling(none_stop, interval, timeout)
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/__init__.py", line 288, in __threaded_polling
self.worker_pool.raise_exceptions()
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py", line 107, in raise_exceptions
six.reraise(self.exc_info[0], self.exc_info[1], self.exc_info[2])
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py", line 58, in run
task(*args, **kwargs)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 48, in handle_inference
pred = model.infer(context)
AttributeError: 'Chainer' object has no attribute 'infer'
</issue>
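The traceback above points at the call style rather than the pipeline itself: the object returned by `build_model_from_config()` is callable and does not expose `.infer()`. A minimal sketch of the handler wiring, mirroring the file shown next, is given below; the `token`/`model` plumbing is illustrative only, not a complete DeepPavlov integration.

```python
import telebot


def init_bot_for_model(token, model):
    """Sketch: hook a callable pipeline into a Telegram bot without .infer()."""
    bot = telebot.TeleBot(token)

    @bot.message_handler()
    def handle_inference(message):
        # The pipeline is invoked directly with the input text.
        pred = model(message.text)
        bot.send_message(message.chat.id, str(pred))

    bot.polling()
```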
<code>
[start of telegram_utils/telegram_ui.py]
1 """
2 Copyright 2017 Neural Networks and Deep Learning lab, MIPT
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 """
16 import telebot
17
18 from deeppavlov.core.common.file import read_json
19 from deeppavlov.core.commands.infer import build_model_from_config
20
21
22 def init_bot_for_model(token, model):
23 bot = telebot.TeleBot(token)
24
25 model_name = type(model).__name__
26 models_info = read_json('../telegram_utils/models_info.json')
27 model_info = models_info[model_name] if model_name in models_info else models_info['@default']
28
29 @bot.message_handler(commands=['start'])
30 def send_start_message(message):
31 chat_id = message.chat.id
32 out_message = model_info['start_message']
33 if hasattr(model, 'reset'):
34 model.reset()
35 bot.send_message(chat_id, out_message)
36
37 @bot.message_handler(commands=['help'])
38 def send_help_message(message):
39 chat_id = message.chat.id
40 out_message = model_info['help_message']
41 bot.send_message(chat_id, out_message)
42
43 @bot.message_handler()
44 def handle_inference(message):
45 chat_id = message.chat.id
46 context = message.text
47
48 pred = model.infer(context)
49 reply_message = str(pred)
50 bot.send_message(chat_id, reply_message)
51
52 bot.polling()
53
54
55 def interact_model_by_telegram(config_path, token):
56 config = read_json(config_path)
57 model = build_model_from_config(config)
58 init_bot_for_model(token, model)
59
[end of telegram_utils/telegram_ui.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/telegram_utils/telegram_ui.py b/telegram_utils/telegram_ui.py
--- a/telegram_utils/telegram_ui.py
+++ b/telegram_utils/telegram_ui.py
@@ -45,7 +45,7 @@
chat_id = message.chat.id
context = message.text
- pred = model.infer(context)
+ pred = model(context)
reply_message = str(pred)
bot.send_message(chat_id, reply_message)
|
{"golden_diff": "diff --git a/telegram_utils/telegram_ui.py b/telegram_utils/telegram_ui.py\n--- a/telegram_utils/telegram_ui.py\n+++ b/telegram_utils/telegram_ui.py\n@@ -45,7 +45,7 @@\n chat_id = message.chat.id\n context = message.text\n \n- pred = model.infer(context)\n+ pred = model(context)\n reply_message = str(pred)\n bot.send_message(chat_id, reply_message)\n", "issue": "What is \"'Chainer' object has no attribute 'infer'\n2018-03-04 14:09:23,638 (util.py:64 WorkerThread2) ERROR - TeleBot: \"AttributeError occurred, args=(\"'Chainer' object has no attribute 'infer'\",)\r\nTraceback (most recent call last):\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py\", line 58, in run\r\n task(*args, **kwargs)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 48, in handle_inference\r\n pred = model.infer(context)\r\nAttributeError: 'Chainer' object has no attribute 'infer'\r\n\"\r\n2018-03-04 14:09:23.638 ERROR in 'TeleBot'['util'] at line 64: AttributeError occurred, args=(\"'Chainer' object has no attribute 'infer'\",)\r\nTraceback (most recent call last):\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py\", line 58, in run\r\n task(*args, **kwargs)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 48, in handle_inference\r\n pred = model.infer(context)\r\nAttributeError: 'Chainer' object has no attribute 'infer'\r\n\r\nTraceback (most recent call last):\r\n File \"deep.py\", line 60, in <module>\r\n main()\r\n File \"deep.py\", line 56, in main\r\n interact_model_by_telegram(pipeline_config_path, token)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 58, in interact_model_by_telegram\r\n init_bot_for_model(token, model)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 52, in init_bot_for_model\r\n bot.polling()\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/__init__.py\", line 264, in polling\r\n self.__threaded_polling(none_stop, interval, timeout)\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/__init__.py\", line 288, in __threaded_polling\r\n self.worker_pool.raise_exceptions()\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py\", line 107, in raise_exceptions\r\n six.reraise(self.exc_info[0], self.exc_info[1], self.exc_info[2])\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/six.py\", line 693, in reraise\r\n raise value\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py\", line 58, in run\r\n task(*args, **kwargs)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 48, in handle_inference\r\n pred = model.infer(context)\r\nAttributeError: 'Chainer' object has no attribute 'infer'\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright 2017 Neural Networks and Deep Learning lab, MIPT\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the 
License.\n\"\"\"\nimport telebot\n\nfrom deeppavlov.core.common.file import read_json\nfrom deeppavlov.core.commands.infer import build_model_from_config\n\n\ndef init_bot_for_model(token, model):\n bot = telebot.TeleBot(token)\n\n model_name = type(model).__name__\n models_info = read_json('../telegram_utils/models_info.json')\n model_info = models_info[model_name] if model_name in models_info else models_info['@default']\n\n @bot.message_handler(commands=['start'])\n def send_start_message(message):\n chat_id = message.chat.id\n out_message = model_info['start_message']\n if hasattr(model, 'reset'):\n model.reset()\n bot.send_message(chat_id, out_message)\n\n @bot.message_handler(commands=['help'])\n def send_help_message(message):\n chat_id = message.chat.id\n out_message = model_info['help_message']\n bot.send_message(chat_id, out_message)\n\n @bot.message_handler()\n def handle_inference(message):\n chat_id = message.chat.id\n context = message.text\n\n pred = model.infer(context)\n reply_message = str(pred)\n bot.send_message(chat_id, reply_message)\n\n bot.polling()\n\n\ndef interact_model_by_telegram(config_path, token):\n config = read_json(config_path)\n model = build_model_from_config(config)\n init_bot_for_model(token, model)\n", "path": "telegram_utils/telegram_ui.py"}]}
| 1,818 | 99 |
gh_patches_debug_9384
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-4743
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BadHeaderError: Header values can't contain newlines (got 'attachment; filename="vendor.js.map\r\n\n ...
https://sentry.io/sentry/sentry/issues/202980928/
```
BadHeaderError: Header values can't contain newlines (got 'attachment; filename="vendor.js.map\r\n\n "')
(1 additional frame(s) were not displayed)
...
File "sentry/api/base.py", line 86, in handle_exception
return super(Endpoint, self).handle_exception(exc)
File "sentry/api/base.py", line 180, in dispatch
response = handler(request, *args, **kwargs)
File "sentry/api/endpoints/release_file_details.py", line 123, in get
return self.download(releasefile)
File "sentry/api/endpoints/release_file_details.py", line 83, in download
response['Content-Disposition'] = 'attachment; filename="%s"' % posixpath.basename(releasefile.name)
BadHeaderError: Header values can't contain newlines (got 'attachment; filename="vendor.js.map\r\n\n "')
```
</issue>
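The header value breaks because the stored release file name carries CR/LF; keeping the value on one line only requires collapsing the whitespace before the header is built. The helper below is a sketch that mirrors the whitespace-normalisation idea in the accepted patch; it is not a general RFC 6266 filename encoder, and the function name is invented for illustration.

```python
import posixpath


def safe_attachment_filename(name):
    """Collapse embedded whitespace (including \\r\\n) so the header stays single-line."""
    return posixpath.basename(" ".join(name.split()))


assert safe_attachment_filename('vendor.js.map\r\n\n   ') == 'vendor.js.map'
# response['Content-Disposition'] = 'attachment; filename="%s"' % safe_attachment_filename(releasefile.name)
```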
<code>
[start of src/sentry/api/endpoints/release_file_details.py]
1 from __future__ import absolute_import
2 import posixpath
3
4 from rest_framework import serializers
5 from rest_framework.response import Response
6
7 from sentry.api.base import DocSection
8 from sentry.api.bases.project import ProjectEndpoint, ProjectReleasePermission
9 from sentry.api.exceptions import ResourceDoesNotExist
10 from sentry.api.serializers import serialize
11 from sentry.models import Release, ReleaseFile
12 from sentry.utils.apidocs import scenario, attach_scenarios
13 from django.http import CompatibleStreamingHttpResponse
14
15
16 @scenario('RetrieveReleaseFile')
17 def retrieve_file_scenario(runner):
18 rf = runner.utils.create_release_file(
19 project=runner.default_project,
20 release=runner.default_release,
21 path='/demo/readme.txt',
22 contents='Hello World!'
23 )
24 runner.request(
25 method='GET',
26 path='/projects/%s/%s/releases/%s/files/%s/' % (
27 runner.org.slug, runner.default_project.slug,
28 runner.default_release.version, rf.id)
29 )
30
31
32 @scenario('UpdateReleaseFile')
33 def update_file_scenario(runner):
34 rf = runner.utils.create_release_file(
35 project=runner.default_project,
36 release=runner.default_release,
37 path='/demo/hello.txt',
38 contents='Good bye World!'
39 )
40 runner.request(
41 method='PUT',
42 path='/projects/%s/%s/releases/%s/files/%s/' % (
43 runner.org.slug, runner.default_project.slug,
44 runner.default_release.version, rf.id),
45 data={
46 'name': '/demo/goodbye.txt'
47 }
48 )
49
50
51 @scenario('DeleteReleaseFile')
52 def delete_file_scenario(runner):
53 rf = runner.utils.create_release_file(
54 project=runner.default_project,
55 release=runner.default_release,
56 path='/demo/badfile.txt',
57 contents='Whatever!'
58 )
59 runner.request(
60 method='DELETE',
61 path='/projects/%s/%s/releases/%s/files/%s/' % (
62 runner.org.slug, runner.default_project.slug,
63 runner.default_release.version, rf.id)
64 )
65
66
67 class ReleaseFileSerializer(serializers.Serializer):
68 name = serializers.CharField(max_length=200, required=True)
69
70
71 class ReleaseFileDetailsEndpoint(ProjectEndpoint):
72 doc_section = DocSection.RELEASES
73 permission_classes = (ProjectReleasePermission,)
74
75 def download(self, releasefile):
76 file = releasefile.file
77 fp = file.getfile()
78 response = CompatibleStreamingHttpResponse(
79 iter(lambda: fp.read(4096), b''),
80 content_type=file.headers.get('content-type', 'application/octet-stream'),
81 )
82 response['Content-Length'] = file.size
83 response['Content-Disposition'] = 'attachment; filename="%s"' % posixpath.basename(releasefile.name)
84 return response
85
86 @attach_scenarios([retrieve_file_scenario])
87 def get(self, request, project, version, file_id):
88 """
89 Retrieve a File
90 ```````````````
91
92 Return details on an individual file within a release. This does
93 not actually return the contents of the file, just the associated
94 metadata.
95
96 :pparam string organization_slug: the slug of the organization the
97 release belongs to.
98 :pparam string project_slug: the slug of the project to retrieve the
99 file of.
100 :pparam string version: the version identifier of the release.
101 :pparam string file_id: the ID of the file to retrieve.
102 :auth: required
103 """
104 try:
105 release = Release.objects.get(
106 project=project,
107 version=version,
108 )
109 except Release.DoesNotExist:
110 raise ResourceDoesNotExist
111
112 try:
113 releasefile = ReleaseFile.objects.get(
114 release=release,
115 id=file_id,
116 )
117 except ReleaseFile.DoesNotExist:
118 raise ResourceDoesNotExist
119
120 download_requested = request.GET.get('download') is not None
121 if download_requested and (
122 request.access.has_scope('project:write')):
123 return self.download(releasefile)
124 elif download_requested:
125 return Response(status=403)
126 return Response(serialize(releasefile, request.user))
127
128 @attach_scenarios([update_file_scenario])
129 def put(self, request, project, version, file_id):
130 """
131 Update a File
132 `````````````
133
134 Update metadata of an existing file. Currently only the name of
135 the file can be changed.
136
137 :pparam string organization_slug: the slug of the organization the
138 release belongs to.
139 :pparam string project_slug: the slug of the project to update the
140 file of.
141 :pparam string version: the version identifier of the release.
142 :pparam string file_id: the ID of the file to update.
143 :param string name: the new name of the file.
144 :auth: required
145 """
146 try:
147 release = Release.objects.get(
148 project=project,
149 version=version,
150 )
151 except Release.DoesNotExist:
152 raise ResourceDoesNotExist
153
154 try:
155 releasefile = ReleaseFile.objects.get(
156 release=release,
157 id=file_id,
158 )
159 except ReleaseFile.DoesNotExist:
160 raise ResourceDoesNotExist
161
162 serializer = ReleaseFileSerializer(data=request.DATA)
163
164 if not serializer.is_valid():
165 return Response(serializer.errors, status=400)
166
167 result = serializer.object
168
169 releasefile.update(
170 name=result['name'],
171 )
172
173 return Response(serialize(releasefile, request.user))
174
175 @attach_scenarios([delete_file_scenario])
176 def delete(self, request, project, version, file_id):
177 """
178 Delete a File
179 `````````````
180
181 Permanently remove a file from a release.
182
183 This will also remove the physical file from storage.
184
185 :pparam string organization_slug: the slug of the organization the
186 release belongs to.
187 :pparam string project_slug: the slug of the project to delete the
188 file of.
189 :pparam string version: the version identifier of the release.
190 :pparam string file_id: the ID of the file to delete.
191 :auth: required
192 """
193 try:
194 release = Release.objects.get(
195 project=project,
196 version=version,
197 )
198 except Release.DoesNotExist:
199 raise ResourceDoesNotExist
200
201 try:
202 releasefile = ReleaseFile.objects.get(
203 release=release,
204 id=file_id,
205 )
206 except ReleaseFile.DoesNotExist:
207 raise ResourceDoesNotExist
208
209 file = releasefile.file
210
211 # TODO(dcramer): this doesnt handle a failure from file.deletefile() to
212 # the actual deletion of the db row
213 releasefile.delete()
214 file.delete()
215
216 return Response(status=204)
217
[end of src/sentry/api/endpoints/release_file_details.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sentry/api/endpoints/release_file_details.py b/src/sentry/api/endpoints/release_file_details.py
--- a/src/sentry/api/endpoints/release_file_details.py
+++ b/src/sentry/api/endpoints/release_file_details.py
@@ -80,7 +80,7 @@
content_type=file.headers.get('content-type', 'application/octet-stream'),
)
response['Content-Length'] = file.size
- response['Content-Disposition'] = 'attachment; filename="%s"' % posixpath.basename(releasefile.name)
+ response['Content-Disposition'] = 'attachment; filename="%s"' % posixpath.basename(" ".join(releasefile.name.split()))
return response
@attach_scenarios([retrieve_file_scenario])
|
{"golden_diff": "diff --git a/src/sentry/api/endpoints/release_file_details.py b/src/sentry/api/endpoints/release_file_details.py\n--- a/src/sentry/api/endpoints/release_file_details.py\n+++ b/src/sentry/api/endpoints/release_file_details.py\n@@ -80,7 +80,7 @@\n content_type=file.headers.get('content-type', 'application/octet-stream'),\n )\n response['Content-Length'] = file.size\n- response['Content-Disposition'] = 'attachment; filename=\"%s\"' % posixpath.basename(releasefile.name)\n+ response['Content-Disposition'] = 'attachment; filename=\"%s\"' % posixpath.basename(\" \".join(releasefile.name.split()))\n return response\n \n @attach_scenarios([retrieve_file_scenario])\n", "issue": "BadHeaderError: Header values can't contain newlines (got 'attachment; filename=\"vendor.js.map\\r\\n\\n ...\nhttps://sentry.io/sentry/sentry/issues/202980928/\n\n```\nBadHeaderError: Header values can't contain newlines (got 'attachment; filename=\"vendor.js.map\\r\\n\\n \"')\n(1 additional frame(s) were not displayed)\n...\n File \"sentry/api/base.py\", line 86, in handle_exception\n return super(Endpoint, self).handle_exception(exc)\n File \"sentry/api/base.py\", line 180, in dispatch\n response = handler(request, *args, **kwargs)\n File \"sentry/api/endpoints/release_file_details.py\", line 123, in get\n return self.download(releasefile)\n File \"sentry/api/endpoints/release_file_details.py\", line 83, in download\n response['Content-Disposition'] = 'attachment; filename=\"%s\"' % posixpath.basename(releasefile.name)\n\nBadHeaderError: Header values can't contain newlines (got 'attachment; filename=\"vendor.js.map\\r\\n\\n \"')\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nimport posixpath\n\nfrom rest_framework import serializers\nfrom rest_framework.response import Response\n\nfrom sentry.api.base import DocSection\nfrom sentry.api.bases.project import ProjectEndpoint, ProjectReleasePermission\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.api.serializers import serialize\nfrom sentry.models import Release, ReleaseFile\nfrom sentry.utils.apidocs import scenario, attach_scenarios\nfrom django.http import CompatibleStreamingHttpResponse\n\n\n@scenario('RetrieveReleaseFile')\ndef retrieve_file_scenario(runner):\n rf = runner.utils.create_release_file(\n project=runner.default_project,\n release=runner.default_release,\n path='/demo/readme.txt',\n contents='Hello World!'\n )\n runner.request(\n method='GET',\n path='/projects/%s/%s/releases/%s/files/%s/' % (\n runner.org.slug, runner.default_project.slug,\n runner.default_release.version, rf.id)\n )\n\n\n@scenario('UpdateReleaseFile')\ndef update_file_scenario(runner):\n rf = runner.utils.create_release_file(\n project=runner.default_project,\n release=runner.default_release,\n path='/demo/hello.txt',\n contents='Good bye World!'\n )\n runner.request(\n method='PUT',\n path='/projects/%s/%s/releases/%s/files/%s/' % (\n runner.org.slug, runner.default_project.slug,\n runner.default_release.version, rf.id),\n data={\n 'name': '/demo/goodbye.txt'\n }\n )\n\n\n@scenario('DeleteReleaseFile')\ndef delete_file_scenario(runner):\n rf = runner.utils.create_release_file(\n project=runner.default_project,\n release=runner.default_release,\n path='/demo/badfile.txt',\n contents='Whatever!'\n )\n runner.request(\n method='DELETE',\n path='/projects/%s/%s/releases/%s/files/%s/' % (\n runner.org.slug, runner.default_project.slug,\n runner.default_release.version, rf.id)\n )\n\n\nclass 
ReleaseFileSerializer(serializers.Serializer):\n name = serializers.CharField(max_length=200, required=True)\n\n\nclass ReleaseFileDetailsEndpoint(ProjectEndpoint):\n doc_section = DocSection.RELEASES\n permission_classes = (ProjectReleasePermission,)\n\n def download(self, releasefile):\n file = releasefile.file\n fp = file.getfile()\n response = CompatibleStreamingHttpResponse(\n iter(lambda: fp.read(4096), b''),\n content_type=file.headers.get('content-type', 'application/octet-stream'),\n )\n response['Content-Length'] = file.size\n response['Content-Disposition'] = 'attachment; filename=\"%s\"' % posixpath.basename(releasefile.name)\n return response\n\n @attach_scenarios([retrieve_file_scenario])\n def get(self, request, project, version, file_id):\n \"\"\"\n Retrieve a File\n ```````````````\n\n Return details on an individual file within a release. This does\n not actually return the contents of the file, just the associated\n metadata.\n\n :pparam string organization_slug: the slug of the organization the\n release belongs to.\n :pparam string project_slug: the slug of the project to retrieve the\n file of.\n :pparam string version: the version identifier of the release.\n :pparam string file_id: the ID of the file to retrieve.\n :auth: required\n \"\"\"\n try:\n release = Release.objects.get(\n project=project,\n version=version,\n )\n except Release.DoesNotExist:\n raise ResourceDoesNotExist\n\n try:\n releasefile = ReleaseFile.objects.get(\n release=release,\n id=file_id,\n )\n except ReleaseFile.DoesNotExist:\n raise ResourceDoesNotExist\n\n download_requested = request.GET.get('download') is not None\n if download_requested and (\n request.access.has_scope('project:write')):\n return self.download(releasefile)\n elif download_requested:\n return Response(status=403)\n return Response(serialize(releasefile, request.user))\n\n @attach_scenarios([update_file_scenario])\n def put(self, request, project, version, file_id):\n \"\"\"\n Update a File\n `````````````\n\n Update metadata of an existing file. 
Currently only the name of\n the file can be changed.\n\n :pparam string organization_slug: the slug of the organization the\n release belongs to.\n :pparam string project_slug: the slug of the project to update the\n file of.\n :pparam string version: the version identifier of the release.\n :pparam string file_id: the ID of the file to update.\n :param string name: the new name of the file.\n :auth: required\n \"\"\"\n try:\n release = Release.objects.get(\n project=project,\n version=version,\n )\n except Release.DoesNotExist:\n raise ResourceDoesNotExist\n\n try:\n releasefile = ReleaseFile.objects.get(\n release=release,\n id=file_id,\n )\n except ReleaseFile.DoesNotExist:\n raise ResourceDoesNotExist\n\n serializer = ReleaseFileSerializer(data=request.DATA)\n\n if not serializer.is_valid():\n return Response(serializer.errors, status=400)\n\n result = serializer.object\n\n releasefile.update(\n name=result['name'],\n )\n\n return Response(serialize(releasefile, request.user))\n\n @attach_scenarios([delete_file_scenario])\n def delete(self, request, project, version, file_id):\n \"\"\"\n Delete a File\n `````````````\n\n Permanently remove a file from a release.\n\n This will also remove the physical file from storage.\n\n :pparam string organization_slug: the slug of the organization the\n release belongs to.\n :pparam string project_slug: the slug of the project to delete the\n file of.\n :pparam string version: the version identifier of the release.\n :pparam string file_id: the ID of the file to delete.\n :auth: required\n \"\"\"\n try:\n release = Release.objects.get(\n project=project,\n version=version,\n )\n except Release.DoesNotExist:\n raise ResourceDoesNotExist\n\n try:\n releasefile = ReleaseFile.objects.get(\n release=release,\n id=file_id,\n )\n except ReleaseFile.DoesNotExist:\n raise ResourceDoesNotExist\n\n file = releasefile.file\n\n # TODO(dcramer): this doesnt handle a failure from file.deletefile() to\n # the actual deletion of the db row\n releasefile.delete()\n file.delete()\n\n return Response(status=204)\n", "path": "src/sentry/api/endpoints/release_file_details.py"}]}
| 2,765 | 157 |
gh_patches_debug_14730
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-4775
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document link extractor usage outside CrawlSpider rules
https://docs.scrapy.org/en/latest/topics/link-extractors.html mentions that link extractors may be used outside `CrawlSpider`, but it does not go into detail on how to do that.
Also, there are broken references to `scrapy.link.Link`, we should provide reference documentation for that class.
</issue>
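The sort of example the documentation could gain, sketched here with a placeholder start URL and an assumed `allow` pattern: a `LinkExtractor` used from a plain `Spider` callback, where `extract_links(response)` returns the `scrapy.link.Link` objects defined in the file below.

```python
import scrapy
from scrapy.linkextractors import LinkExtractor


class DocsSpider(scrapy.Spider):
    name = "docs"
    start_urls = ["https://example.com/"]

    def parse(self, response):
        # A link extractor can be applied to any Response, not only CrawlSpider rules.
        for link in LinkExtractor(allow=r"/docs/").extract_links(response):
            # Each result is a Link with url, text, fragment and nofollow attributes.
            yield response.follow(link.url, callback=self.parse)
```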
<code>
[start of scrapy/link.py]
1 """
2 This module defines the Link object used in Link extractors.
3
4 For actual link extractors implementation see scrapy.linkextractors, or
5 its documentation in: docs/topics/link-extractors.rst
6 """
7
8
9 class Link:
10 """Link objects represent an extracted link by the LinkExtractor."""
11
12 __slots__ = ['url', 'text', 'fragment', 'nofollow']
13
14 def __init__(self, url, text='', fragment='', nofollow=False):
15 if not isinstance(url, str):
16 got = url.__class__.__name__
17 raise TypeError(f"Link urls must be str objects, got {got}")
18 self.url = url
19 self.text = text
20 self.fragment = fragment
21 self.nofollow = nofollow
22
23 def __eq__(self, other):
24 return (
25 self.url == other.url
26 and self.text == other.text
27 and self.fragment == other.fragment
28 and self.nofollow == other.nofollow
29 )
30
31 def __hash__(self):
32 return hash(self.url) ^ hash(self.text) ^ hash(self.fragment) ^ hash(self.nofollow)
33
34 def __repr__(self):
35 return (
36 f'Link(url={self.url!r}, text={self.text!r}, '
37 f'fragment={self.fragment!r}, nofollow={self.nofollow!r})'
38 )
39
[end of scrapy/link.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/link.py b/scrapy/link.py
--- a/scrapy/link.py
+++ b/scrapy/link.py
@@ -7,7 +7,22 @@
class Link:
- """Link objects represent an extracted link by the LinkExtractor."""
+ """Link objects represent an extracted link by the LinkExtractor.
+
+ Using the anchor tag sample below to illustrate the parameters::
+
+ <a href="https://example.com/nofollow.html#foo" rel="nofollow">Dont follow this one</a>
+
+ :param url: the absolute url being linked to in the anchor tag.
+ From the sample, this is ``https://example.com/nofollow.html``.
+
+ :param text: the text in the anchor tag. From the sample, this is ``Dont follow this one``.
+
+ :param fragment: the part of the url after the hash symbol. From the sample, this is ``foo``.
+
+ :param nofollow: an indication of the presence or absence of a nofollow value in the ``rel`` attribute
+ of the anchor tag.
+ """
__slots__ = ['url', 'text', 'fragment', 'nofollow']
|
{"golden_diff": "diff --git a/scrapy/link.py b/scrapy/link.py\n--- a/scrapy/link.py\n+++ b/scrapy/link.py\n@@ -7,7 +7,22 @@\n \n \n class Link:\n- \"\"\"Link objects represent an extracted link by the LinkExtractor.\"\"\"\n+ \"\"\"Link objects represent an extracted link by the LinkExtractor.\n+\n+ Using the anchor tag sample below to illustrate the parameters::\n+\n+ <a href=\"https://example.com/nofollow.html#foo\" rel=\"nofollow\">Dont follow this one</a>\n+\n+ :param url: the absolute url being linked to in the anchor tag.\n+ From the sample, this is ``https://example.com/nofollow.html``.\n+\n+ :param text: the text in the anchor tag. From the sample, this is ``Dont follow this one``.\n+\n+ :param fragment: the part of the url after the hash symbol. From the sample, this is ``foo``.\n+\n+ :param nofollow: an indication of the presence or absence of a nofollow value in the ``rel`` attribute\n+ of the anchor tag.\n+ \"\"\"\n \n __slots__ = ['url', 'text', 'fragment', 'nofollow']\n", "issue": "Document link extractor usage outside CrawlSpider rules\nhttps://docs.scrapy.org/en/latest/topics/link-extractors.html mentions that link extractors may be used outside `CrawlSpider`, but it does not go into detail on how to do that.\r\n\r\nAlso, there are broken references to `scrapy.link.Link`, we should provide reference documentation for that class.\n", "before_files": [{"content": "\"\"\"\nThis module defines the Link object used in Link extractors.\n\nFor actual link extractors implementation see scrapy.linkextractors, or\nits documentation in: docs/topics/link-extractors.rst\n\"\"\"\n\n\nclass Link:\n \"\"\"Link objects represent an extracted link by the LinkExtractor.\"\"\"\n\n __slots__ = ['url', 'text', 'fragment', 'nofollow']\n\n def __init__(self, url, text='', fragment='', nofollow=False):\n if not isinstance(url, str):\n got = url.__class__.__name__\n raise TypeError(f\"Link urls must be str objects, got {got}\")\n self.url = url\n self.text = text\n self.fragment = fragment\n self.nofollow = nofollow\n\n def __eq__(self, other):\n return (\n self.url == other.url\n and self.text == other.text\n and self.fragment == other.fragment\n and self.nofollow == other.nofollow\n )\n\n def __hash__(self):\n return hash(self.url) ^ hash(self.text) ^ hash(self.fragment) ^ hash(self.nofollow)\n\n def __repr__(self):\n return (\n f'Link(url={self.url!r}, text={self.text!r}, '\n f'fragment={self.fragment!r}, nofollow={self.nofollow!r})'\n )\n", "path": "scrapy/link.py"}]}
| 963 | 262 |
gh_patches_debug_41130
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-modules-core-3623
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
junos_template module does not handle the `overwrite` action correctly
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
junos_template
##### ANSIBLE VERSION
```
ansible 2.2.0 (detached HEAD d98bd3551e) last updated 2016/05/06 18:06:05 (GMT +100)
lib/ansible/modules/core: (detached HEAD 1f5cf669dd) last updated 2016/05/06 18:06:07 (GMT +100)
lib/ansible/modules/extras: (detached HEAD 431591c2b4) last updated 2016/05/06 18:06:07 (GMT +100)
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The argument parsing for the `junos_template` module is flawed.
If `overwrite=true` is set, `action==merge` as `merge` defaults to `True`
Trying to set `merge=false` as well, gives `FAILED! => {"changed": false, "failed": true, "msg": "parameters are mutually exclusive: ('merge', 'overwrite')"}`
If you set `merge=false`, `action==replace`.
It's impossible to get `action==overwrite`
##### STEPS TO REPRODUCE
```
---
- name: Install configuration
junos_template: >
host='{{ ansible_host }}'
port={{ ansible_port }}
src='{{ junos_conf }}'
comment='configured by ansible'
timeout=120
overwrite=true
```
##### EXPECTED RESULTS
The config is replaced in its entirety
##### ACTUAL RESULTS
The config is merged
```
<!--- Paste verbatim command output between quotes -->
```
</issue>
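A sketch of the disambiguated interface the report points toward: a single `action` option replacing the two booleans that can never select `overwrite`. The option names follow the accepted patch further down this record; the surrounding module plumbing (`get_module`, `load_config`) is omitted.

```python
argument_spec = dict(
    src=dict(required=True, type='path'),
    comment=dict(default='configured by junos_template'),
    action=dict(default='merge', choices=['merge', 'overwrite', 'replace']),
    config_format=dict(choices=['text', 'set', 'xml']),
)


def resolve_action(params):
    """Return the load_config action, rejecting the one unsupported combination."""
    if params['action'] == 'overwrite' and params.get('config_format') == 'set':
        raise ValueError("overwrite cannot be used when the format is 'set'")
    return params['action']
```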
<code>
[start of network/junos/junos_template.py]
1 #!/usr/bin/python
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 #
18
19 DOCUMENTATION = """
20 ---
21 module: junos_template
22 version_added: "2.1"
23 author: "Peter Sprygada (@privateip)"
24 short_description: Manage configuration on remote devices running Junos
25 description:
26 - The M(junos_template) module will load a candidate configuration
27 from a template file onto a remote device running Junos. The
28 module will return the differences in configuration if the diff
29 option is specified on the Ansible command line
30 extends_documentation_fragment: junos
31 options:
32 src:
33 description:
34 - The path to the config source. The source can be either a
35 file with config or a template that will be merged during
36 runtime. By default the task will search for the source
37 file in role or playbook root folder in templates directory.
38 required: true
39 default: null
40 backup:
41 description:
42 - When this argument is configured true, the module will backup
43 the configuration from the node prior to making any changes.
44 The backup file will be written to backup_{{ hostname }} in
45 the root of the playbook directory.
46 required: false
47 default: false
48 choices: ["true", "false"]
49 confirm:
50 description:
51 - The C(confirm) argument will configure a time out value for
52 the commit to be confirmed before it is automatically
53 rolled back. If the C(confirm) argument is set to False, this
54 argument is silently ignored. If the value for this argument
55 is set to 0, the commit is confirmed immediately.
56 required: false
57 default: 0
58 comment:
59 description:
60 - The C(comment) argument specifies a text string to be used
61 when committing the configuration. If the C(confirm) argument
62 is set to False, this argument is silently ignored.
63 required: false
64 default: configured by junos_template
65 merge:
66 description:
67 - The C(merge) argument instructs the module to merge the contents
68 of C(src) with the configuration running on the remote device. If
69 both C(merge) and C(overwrite) are set to false, the configuration
70 is replaced.
71 required: false
72 default: true
73 overwrite:
74 description:
75 - The C(overwrite) argument will overwrite the entire configuration
76 on the remote device with the contents loaded from C(src). If
77 both C(merge) and C(overwrite) are set to false, the configuration
78 is replaced.
79 required: false
80 default: false
81 config_format:
82 description:
83 - The C(format) argument specifies the format of the configuration
84 template specified in C(src). If the format argument is not
85 specified, the module will attempt to infer the configuration
86 format based of file extension. Files that end in I(xml) will set
87 the format to xml. Files that end in I(set) will set the format
88 to set and all other files will default the format to text.
89 required: false
90 default: null
91 choices: ['text', 'xml', 'set']
92 requirements:
93 - junos-eznc
94 notes:
95 - This module requires the netconf system service be enabled on
96 the remote device being managed
97 """
98
99 EXAMPLES = """
100 - junos_template:
101 src: config.j2
102 comment: update system config
103
104 - name: replace config hierarchy
105 src: config.j2
106 replace: yes
107
108 - name: overwrite the config
109 src: config.j2
110 overwrite: yes
111 """
112
113 DEFAULT_COMMENT = 'configured by junos_template'
114
115 def main():
116
117 argument_spec = dict(
118 src=dict(required=True, type='path'),
119 confirm=dict(default=0, type='int'),
120 comment=dict(default=DEFAULT_COMMENT),
121 merge=dict(default=True, type='bool'),
122 overwrite=dict(default=False, type='bool'),
123 config_format=dict(choices=['text', 'set', 'xml']),
124 backup=dict(default=False, type='bool'),
125 transport=dict(default='netconf', choices=['netconf'])
126 )
127
128 mutually_exclusive = [('merge', 'overwrite')]
129
130 module = get_module(argument_spec=argument_spec,
131 mutually_exclusive=mutually_exclusive,
132 supports_check_mode=True)
133
134 comment = module.params['comment']
135 confirm = module.params['confirm']
136 commit = not module.check_mode
137
138 merge = module.params['merge']
139 overwrite = module.params['overwrite']
140
141 src = module.params['src']
142 fmt = module.params['config_format']
143
144 if overwrite and fmt == 'set':
145 module.fail_json(msg="overwrite cannot be used when format is "
146 "set per junos documentation")
147
148 if merge:
149 action = 'merge'
150 elif overwrite:
151 action = 'overwrite'
152 else:
153 action = 'replace'
154
155 results = dict(changed=False)
156 results['_backup'] = str(module.get_config()).strip()
157
158 diff = module.load_config(src, action=action, comment=comment,
159 format=fmt, commit=commit, confirm=confirm)
160
161 if diff:
162 results['changed'] = True
163 results['diff'] = dict(prepared=diff)
164
165 module.exit_json(**results)
166
167
168 from ansible.module_utils.basic import *
169 from ansible.module_utils.junos import *
170
171 if __name__ == '__main__':
172 main()
173
[end of network/junos/junos_template.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/network/junos/junos_template.py b/network/junos/junos_template.py
--- a/network/junos/junos_template.py
+++ b/network/junos/junos_template.py
@@ -62,22 +62,13 @@
is set to False, this argument is silently ignored.
required: false
default: configured by junos_template
- merge:
+ action:
description:
- - The C(merge) argument instructs the module to merge the contents
- of C(src) with the configuration running on the remote device. If
- both C(merge) and C(overwrite) are set to false, the configuration
- is replaced.
+ - The C(action) argument specifies how the module will apply changes.
required: false
- default: true
- overwrite:
- description:
- - The C(overwrite) argument will overwrite the entire configuration
- on the remote device with the contents loaded from C(src). If
- both C(merge) and C(overwrite) are set to false, the configuration
- is replaced.
- required: false
- default: false
+ default: merge
+ choices: ['merge', 'overwrite', 'replace']
+ version_added: "2.2"
config_format:
description:
- The C(format) argument specifies the format of the configuration
@@ -103,11 +94,11 @@
- name: replace config hierarchy
src: config.j2
- replace: yes
+ action: replace
- name: overwrite the config
src: config.j2
- overwrite: yes
+ action: overwrite
"""
DEFAULT_COMMENT = 'configured by junos_template'
@@ -118,40 +109,28 @@
src=dict(required=True, type='path'),
confirm=dict(default=0, type='int'),
comment=dict(default=DEFAULT_COMMENT),
- merge=dict(default=True, type='bool'),
- overwrite=dict(default=False, type='bool'),
+ action=dict(default='merge', choices=['merge', 'overwrite', 'replace']),
config_format=dict(choices=['text', 'set', 'xml']),
backup=dict(default=False, type='bool'),
transport=dict(default='netconf', choices=['netconf'])
)
- mutually_exclusive = [('merge', 'overwrite')]
-
module = get_module(argument_spec=argument_spec,
- mutually_exclusive=mutually_exclusive,
supports_check_mode=True)
comment = module.params['comment']
confirm = module.params['confirm']
commit = not module.check_mode
- merge = module.params['merge']
- overwrite = module.params['overwrite']
+ action = module.params['action']
src = module.params['src']
fmt = module.params['config_format']
- if overwrite and fmt == 'set':
+ if action == 'overwrite' and fmt == 'set':
module.fail_json(msg="overwrite cannot be used when format is "
"set per junos documentation")
- if merge:
- action = 'merge'
- elif overwrite:
- action = 'overwrite'
- else:
- action = 'replace'
-
results = dict(changed=False)
results['_backup'] = str(module.get_config()).strip()
|
{"golden_diff": "diff --git a/network/junos/junos_template.py b/network/junos/junos_template.py\n--- a/network/junos/junos_template.py\n+++ b/network/junos/junos_template.py\n@@ -62,22 +62,13 @@\n is set to False, this argument is silently ignored.\n required: false\n default: configured by junos_template\n- merge:\n+ action:\n description:\n- - The C(merge) argument instructs the module to merge the contents\n- of C(src) with the configuration running on the remote device. If\n- both C(merge) and C(overwrite) are set to false, the configuration\n- is replaced.\n+ - The C(action) argument specifies how the module will apply changes.\n required: false\n- default: true\n- overwrite:\n- description:\n- - The C(overwrite) argument will overwrite the entire configuration\n- on the remote device with the contents loaded from C(src). If\n- both C(merge) and C(overwrite) are set to false, the configuration\n- is replaced.\n- required: false\n- default: false\n+ default: merge\n+ choices: ['merge', 'overwrite', 'replace']\n+ version_added: \"2.2\"\n config_format:\n description:\n - The C(format) argument specifies the format of the configuration\n@@ -103,11 +94,11 @@\n \n - name: replace config hierarchy\n src: config.j2\n- replace: yes\n+ action: replace\n \n - name: overwrite the config\n src: config.j2\n- overwrite: yes\n+ action: overwrite\n \"\"\"\n \n DEFAULT_COMMENT = 'configured by junos_template'\n@@ -118,40 +109,28 @@\n src=dict(required=True, type='path'),\n confirm=dict(default=0, type='int'),\n comment=dict(default=DEFAULT_COMMENT),\n- merge=dict(default=True, type='bool'),\n- overwrite=dict(default=False, type='bool'),\n+ action=dict(default='merge', choices=['merge', 'overwrite', 'replace']),\n config_format=dict(choices=['text', 'set', 'xml']),\n backup=dict(default=False, type='bool'),\n transport=dict(default='netconf', choices=['netconf'])\n )\n \n- mutually_exclusive = [('merge', 'overwrite')]\n-\n module = get_module(argument_spec=argument_spec,\n- mutually_exclusive=mutually_exclusive,\n supports_check_mode=True)\n \n comment = module.params['comment']\n confirm = module.params['confirm']\n commit = not module.check_mode\n \n- merge = module.params['merge']\n- overwrite = module.params['overwrite']\n+ action = module.params['action']\n \n src = module.params['src']\n fmt = module.params['config_format']\n \n- if overwrite and fmt == 'set':\n+ if action == 'overwrite' and fmt == 'set':\n module.fail_json(msg=\"overwrite cannot be used when format is \"\n \"set per junos documentation\")\n \n- if merge:\n- action = 'merge'\n- elif overwrite:\n- action = 'overwrite'\n- else:\n- action = 'replace'\n-\n results = dict(changed=False)\n results['_backup'] = str(module.get_config()).strip()\n", "issue": "junos_template module does not handle the `overwrite` action correctly\n<!--- Verify first that your issue/request is not already reported in GitHub -->\n##### ISSUE TYPE\n\n<!--- Pick one below and delete the rest: -->\n- Bug Report\n##### COMPONENT NAME\n\njunos_template\n##### ANSIBLE VERSION\n\n```\nansible 2.2.0 (detached HEAD d98bd3551e) last updated 2016/05/06 18:06:05 (GMT +100)\n lib/ansible/modules/core: (detached HEAD 1f5cf669dd) last updated 2016/05/06 18:06:07 (GMT +100)\n lib/ansible/modules/extras: (detached HEAD 431591c2b4) last updated 2016/05/06 18:06:07 (GMT +100)\n```\n##### OS / ENVIRONMENT\n\nN/A\n##### SUMMARY\n\nThe argument parsing for the `junos_template` module is flawed.\nIf `overwrite=true` is set, `action==merge` as `merge` defaults to `True`\nTrying to 
set `merge=false` as well, gives `FAILED! => {\"changed\": false, \"failed\": true, \"msg\": \"parameters are mutually exclusive: ('merge', 'overwrite')\"}`\n\nIf you set `merge=false`, `action==replace`.\n\nIt's impossible to get `action==overwrite`\n##### STEPS TO REPRODUCE\n\n```\n\n---\n- name: Install configuration\n junos_template: >\n host='{{ ansible_host }}'\n port={{ ansible_port }}\n src='{{ junos_conf }}'\n comment='configured by ansible'\n timeout=120\n overwrite=true\n\n```\n##### EXPECTED RESULTS\n\nThe config is replaced in it's entirety\n##### ACTUAL RESULTS\n\nThe config is merged\n\n```\n<!--- Paste verbatim command output between quotes -->\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = \"\"\"\n---\nmodule: junos_template\nversion_added: \"2.1\"\nauthor: \"Peter Sprygada (@privateip)\"\nshort_description: Manage configuration on remote devices running Junos\ndescription:\n - The M(junos_template) module will load a candidate configuration\n from a template file onto a remote device running Junos. The\n module will return the differences in configuration if the diff\n option is specified on the Ansible command line\nextends_documentation_fragment: junos\noptions:\n src:\n description:\n - The path to the config source. The source can be either a\n file with config or a template that will be merged during\n runtime. By default the task will search for the source\n file in role or playbook root folder in templates directory.\n required: true\n default: null\n backup:\n description:\n - When this argument is configured true, the module will backup\n the configuration from the node prior to making any changes.\n The backup file will be written to backup_{{ hostname }} in\n the root of the playbook directory.\n required: false\n default: false\n choices: [\"true\", \"false\"]\n confirm:\n description:\n - The C(confirm) argument will configure a time out value for\n the commit to be confirmed before it is automatically\n rolled back. If the C(confirm) argument is set to False, this\n argument is silently ignored. If the value for this argument\n is set to 0, the commit is confirmed immediately.\n required: false\n default: 0\n comment:\n description:\n - The C(comment) argument specifies a text string to be used\n when committing the configuration. If the C(confirm) argument\n is set to False, this argument is silently ignored.\n required: false\n default: configured by junos_template\n merge:\n description:\n - The C(merge) argument instructs the module to merge the contents\n of C(src) with the configuration running on the remote device. 
If\n both C(merge) and C(overwrite) are set to false, the configuration\n is replaced.\n required: false\n default: true\n overwrite:\n description:\n - The C(overwrite) argument will overwrite the entire configuration\n on the remote device with the contents loaded from C(src). If\n both C(merge) and C(overwrite) are set to false, the configuration\n is replaced.\n required: false\n default: false\n config_format:\n description:\n - The C(format) argument specifies the format of the configuration\n template specified in C(src). If the format argument is not\n specified, the module will attempt to infer the configuration\n format based of file extension. Files that end in I(xml) will set\n the format to xml. Files that end in I(set) will set the format\n to set and all other files will default the format to text.\n required: false\n default: null\n choices: ['text', 'xml', 'set']\nrequirements:\n - junos-eznc\nnotes:\n - This module requires the netconf system service be enabled on\n the remote device being managed\n\"\"\"\n\nEXAMPLES = \"\"\"\n- junos_template:\n src: config.j2\n comment: update system config\n\n- name: replace config hierarchy\n src: config.j2\n replace: yes\n\n- name: overwrite the config\n src: config.j2\n overwrite: yes\n\"\"\"\n\nDEFAULT_COMMENT = 'configured by junos_template'\n\ndef main():\n\n argument_spec = dict(\n src=dict(required=True, type='path'),\n confirm=dict(default=0, type='int'),\n comment=dict(default=DEFAULT_COMMENT),\n merge=dict(default=True, type='bool'),\n overwrite=dict(default=False, type='bool'),\n config_format=dict(choices=['text', 'set', 'xml']),\n backup=dict(default=False, type='bool'),\n transport=dict(default='netconf', choices=['netconf'])\n )\n\n mutually_exclusive = [('merge', 'overwrite')]\n\n module = get_module(argument_spec=argument_spec,\n mutually_exclusive=mutually_exclusive,\n supports_check_mode=True)\n\n comment = module.params['comment']\n confirm = module.params['confirm']\n commit = not module.check_mode\n\n merge = module.params['merge']\n overwrite = module.params['overwrite']\n\n src = module.params['src']\n fmt = module.params['config_format']\n\n if overwrite and fmt == 'set':\n module.fail_json(msg=\"overwrite cannot be used when format is \"\n \"set per junos documentation\")\n\n if merge:\n action = 'merge'\n elif overwrite:\n action = 'overwrite'\n else:\n action = 'replace'\n\n results = dict(changed=False)\n results['_backup'] = str(module.get_config()).strip()\n\n diff = module.load_config(src, action=action, comment=comment,\n format=fmt, commit=commit, confirm=confirm)\n\n if diff:\n results['changed'] = True\n results['diff'] = dict(prepared=diff)\n\n module.exit_json(**results)\n\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.junos import *\n\nif __name__ == '__main__':\n main()\n", "path": "network/junos/junos_template.py"}]}
| 2,696 | 727 |
gh_patches_debug_4975
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-19598
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
imdb.load_data function returns a Python list instead of an ndarray object
In Keras 3.2.1 the imdb.load_data function returns a Python list instead of the ndarray object described in the function documentation.
In Keras 2.15 the train and test data are converted to ndarrays using the following statements before returning the tuple:
x_train, y_train = np.array(xs[:idx], dtype="object"), labels[:idx]
x_test, y_test = np.array(xs[idx:], dtype="object"), labels[idx:]
In Keras 3.2.1 the conversion is not applied, i.e.,
x_train, y_train = xs[:idx], labels[:idx]
x_test, y_test = xs[idx:], labels[idx:]
</issue>
<code>
[start of keras/src/datasets/imdb.py]
1 """IMDB sentiment classification dataset."""
2
3 import json
4
5 import numpy as np
6
7 from keras.src.api_export import keras_export
8 from keras.src.utils.file_utils import get_file
9 from keras.src.utils.python_utils import remove_long_seq
10
11
12 @keras_export("keras.datasets.imdb.load_data")
13 def load_data(
14 path="imdb.npz",
15 num_words=None,
16 skip_top=0,
17 maxlen=None,
18 seed=113,
19 start_char=1,
20 oov_char=2,
21 index_from=3,
22 **kwargs,
23 ):
24 """Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).
25
26 This is a dataset of 25,000 movies reviews from IMDB, labeled by sentiment
27 (positive/negative). Reviews have been preprocessed, and each review is
28 encoded as a list of word indexes (integers).
29 For convenience, words are indexed by overall frequency in the dataset,
30 so that for instance the integer "3" encodes the 3rd most frequent word in
31 the data. This allows for quick filtering operations such as:
32 "only consider the top 10,000 most
33 common words, but eliminate the top 20 most common words".
34
35 As a convention, "0" does not stand for a specific word, but instead is used
36 to encode the pad token.
37
38 Args:
39 path: where to cache the data (relative to `~/.keras/dataset`).
40 num_words: integer or None. Words are
41 ranked by how often they occur (in the training set) and only
42 the `num_words` most frequent words are kept. Any less frequent word
43 will appear as `oov_char` value in the sequence data. If None,
44 all words are kept. Defaults to `None`.
45 skip_top: skip the top N most frequently occurring words
46 (which may not be informative). These words will appear as
47 `oov_char` value in the dataset. When 0, no words are
48 skipped. Defaults to `0`.
49 maxlen: int or None. Maximum sequence length.
50 Any longer sequence will be truncated. None, means no truncation.
51 Defaults to `None`.
52 seed: int. Seed for reproducible data shuffling.
53 start_char: int. The start of a sequence will be marked with this
54 character. 0 is usually the padding character. Defaults to `1`.
55 oov_char: int. The out-of-vocabulary character.
56 Words that were cut out because of the `num_words` or
57 `skip_top` limits will be replaced with this character.
58 index_from: int. Index actual words with this index and higher.
59
60 Returns:
61 Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
62
63 **`x_train`, `x_test`**: lists of sequences, which are lists of indexes
64 (integers). If the num_words argument was specific, the maximum
65 possible index value is `num_words - 1`. If the `maxlen` argument was
66 specified, the largest possible sequence length is `maxlen`.
67
68 **`y_train`, `y_test`**: lists of integer labels (1 or 0).
69
70 **Note**: The 'out of vocabulary' character is only used for
71 words that were present in the training set but are not included
72 because they're not making the `num_words` cut here.
73 Words that were not seen in the training set but are in the test set
74 have simply been skipped.
75 """
76 origin_folder = (
77 "https://storage.googleapis.com/tensorflow/tf-keras-datasets/"
78 )
79 path = get_file(
80 fname=path,
81 origin=origin_folder + "imdb.npz",
82 file_hash=( # noqa: E501
83 "69664113be75683a8fe16e3ed0ab59fda8886cb3cd7ada244f7d9544e4676b9f"
84 ),
85 )
86 with np.load(path, allow_pickle=True) as f:
87 x_train, labels_train = f["x_train"], f["y_train"]
88 x_test, labels_test = f["x_test"], f["y_test"]
89
90 rng = np.random.RandomState(seed)
91 indices = np.arange(len(x_train))
92 rng.shuffle(indices)
93 x_train = x_train[indices]
94 labels_train = labels_train[indices]
95
96 indices = np.arange(len(x_test))
97 rng.shuffle(indices)
98 x_test = x_test[indices]
99 labels_test = labels_test[indices]
100
101 if start_char is not None:
102 x_train = [[start_char] + [w + index_from for w in x] for x in x_train]
103 x_test = [[start_char] + [w + index_from for w in x] for x in x_test]
104 elif index_from:
105 x_train = [[w + index_from for w in x] for x in x_train]
106 x_test = [[w + index_from for w in x] for x in x_test]
107 else:
108 x_train = [[w for w in x] for x in x_train]
109 x_test = [[w for w in x] for x in x_test]
110
111 if maxlen:
112 x_train, labels_train = remove_long_seq(maxlen, x_train, labels_train)
113 x_test, labels_test = remove_long_seq(maxlen, x_test, labels_test)
114 if not x_train or not x_test:
115 raise ValueError(
116 "After filtering for sequences shorter than maxlen="
117 f"{str(maxlen)}, no sequence was kept. Increase maxlen."
118 )
119
120 xs = x_train + x_test
121 labels = np.concatenate([labels_train, labels_test])
122
123 if not num_words:
124 num_words = max(max(x) for x in xs)
125
126 # by convention, use 2 as OOV word
127 # reserve 'index_from' (=3 by default) characters:
128 # 0 (padding), 1 (start), 2 (OOV)
129 if oov_char is not None:
130 xs = [
131 [w if (skip_top <= w < num_words) else oov_char for w in x]
132 for x in xs
133 ]
134 else:
135 xs = [[w for w in x if skip_top <= w < num_words] for x in xs]
136
137 idx = len(x_train)
138 x_train, y_train = xs[:idx], labels[:idx]
139 x_test, y_test = xs[idx:], labels[idx:]
140 return (x_train, y_train), (x_test, y_test)
141
142
143 @keras_export("keras.datasets.imdb.get_word_index")
144 def get_word_index(path="imdb_word_index.json"):
145 """Retrieves a dict mapping words to their index in the IMDB dataset.
146
147 Args:
148 path: where to cache the data (relative to `~/.keras/dataset`).
149
150 Returns:
151 The word index dictionary. Keys are word strings, values are their
152 index.
153
154 Example:
155
156 ```python
157 # Use the default parameters to keras.datasets.imdb.load_data
158 start_char = 1
159 oov_char = 2
160 index_from = 3
161 # Retrieve the training sequences.
162 (x_train, _), _ = keras.datasets.imdb.load_data(
163 start_char=start_char, oov_char=oov_char, index_from=index_from
164 )
165 # Retrieve the word index file mapping words to indices
166 word_index = keras.datasets.imdb.get_word_index()
167 # Reverse the word index to obtain a dict mapping indices to words
168 # And add `index_from` to indices to sync with `x_train`
169 inverted_word_index = dict(
170 (i + index_from, word) for (word, i) in word_index.items()
171 )
172 # Update `inverted_word_index` to include `start_char` and `oov_char`
173 inverted_word_index[start_char] = "[START]"
174 inverted_word_index[oov_char] = "[OOV]"
175 # Decode the first sequence in the dataset
176 decoded_sequence = " ".join(inverted_word_index[i] for i in x_train[0])
177 ```
178 """
179 origin_folder = (
180 "https://storage.googleapis.com/tensorflow/tf-keras-datasets/"
181 )
182 path = get_file(
183 fname=path,
184 origin=origin_folder + "imdb_word_index.json",
185 file_hash="bfafd718b763782e994055a2d397834f",
186 )
187 with open(path) as f:
188 return json.load(f)
189
[end of keras/src/datasets/imdb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras/src/datasets/imdb.py b/keras/src/datasets/imdb.py
--- a/keras/src/datasets/imdb.py
+++ b/keras/src/datasets/imdb.py
@@ -135,8 +135,8 @@
xs = [[w for w in x if skip_top <= w < num_words] for x in xs]
idx = len(x_train)
- x_train, y_train = xs[:idx], labels[:idx]
- x_test, y_test = xs[idx:], labels[idx:]
+ x_train, y_train = np.array(xs[:idx], dtype="object"), labels[:idx]
+ x_test, y_test = np.array(xs[idx:], dtype="object"), labels[idx:]
return (x_train, y_train), (x_test, y_test)
|
{"golden_diff": "diff --git a/keras/src/datasets/imdb.py b/keras/src/datasets/imdb.py\n--- a/keras/src/datasets/imdb.py\n+++ b/keras/src/datasets/imdb.py\n@@ -135,8 +135,8 @@\n xs = [[w for w in x if skip_top <= w < num_words] for x in xs]\n \n idx = len(x_train)\n- x_train, y_train = xs[:idx], labels[:idx]\n- x_test, y_test = xs[idx:], labels[idx:]\n+ x_train, y_train = np.array(xs[:idx], dtype=\"object\"), labels[:idx]\n+ x_test, y_test = np.array(xs[idx:], dtype=\"object\"), labels[idx:]\n return (x_train, y_train), (x_test, y_test)\n", "issue": "imdb.load_data function returns a python list instead of ndarray object\nIn Keras 3.2.1 the imdb.load_data function returns a Python list instead of a ndarray object as described in the function documentation. \r\nIn Keras 2.15 the train and test data are converted to a ndarray using the following statements before returning the tuple.\r\nx_train, y_train = np.array(xs[:idx], dtype=\"object\"), labels[:idx]\r\nx_test, y_test = np.array(xs[idx:], dtype=\"object\"), labels[idx:]\r\n\r\nIn Keras 3.2.1 the conversion is not applied, i.e., \r\nx_train, y_train = xs[:idx], labels[:idx]\r\nx_test, y_test = xs[idx:], labels[idx:]\r\n\n", "before_files": [{"content": "\"\"\"IMDB sentiment classification dataset.\"\"\"\n\nimport json\n\nimport numpy as np\n\nfrom keras.src.api_export import keras_export\nfrom keras.src.utils.file_utils import get_file\nfrom keras.src.utils.python_utils import remove_long_seq\n\n\n@keras_export(\"keras.datasets.imdb.load_data\")\ndef load_data(\n path=\"imdb.npz\",\n num_words=None,\n skip_top=0,\n maxlen=None,\n seed=113,\n start_char=1,\n oov_char=2,\n index_from=3,\n **kwargs,\n):\n \"\"\"Loads the [IMDB dataset](https://ai.stanford.edu/~amaas/data/sentiment/).\n\n This is a dataset of 25,000 movies reviews from IMDB, labeled by sentiment\n (positive/negative). Reviews have been preprocessed, and each review is\n encoded as a list of word indexes (integers).\n For convenience, words are indexed by overall frequency in the dataset,\n so that for instance the integer \"3\" encodes the 3rd most frequent word in\n the data. This allows for quick filtering operations such as:\n \"only consider the top 10,000 most\n common words, but eliminate the top 20 most common words\".\n\n As a convention, \"0\" does not stand for a specific word, but instead is used\n to encode the pad token.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n num_words: integer or None. Words are\n ranked by how often they occur (in the training set) and only\n the `num_words` most frequent words are kept. Any less frequent word\n will appear as `oov_char` value in the sequence data. If None,\n all words are kept. Defaults to `None`.\n skip_top: skip the top N most frequently occurring words\n (which may not be informative). These words will appear as\n `oov_char` value in the dataset. When 0, no words are\n skipped. Defaults to `0`.\n maxlen: int or None. Maximum sequence length.\n Any longer sequence will be truncated. None, means no truncation.\n Defaults to `None`.\n seed: int. Seed for reproducible data shuffling.\n start_char: int. The start of a sequence will be marked with this\n character. 0 is usually the padding character. Defaults to `1`.\n oov_char: int. The out-of-vocabulary character.\n Words that were cut out because of the `num_words` or\n `skip_top` limits will be replaced with this character.\n index_from: int. 
Index actual words with this index and higher.\n\n Returns:\n Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.\n\n **`x_train`, `x_test`**: lists of sequences, which are lists of indexes\n (integers). If the num_words argument was specific, the maximum\n possible index value is `num_words - 1`. If the `maxlen` argument was\n specified, the largest possible sequence length is `maxlen`.\n\n **`y_train`, `y_test`**: lists of integer labels (1 or 0).\n\n **Note**: The 'out of vocabulary' character is only used for\n words that were present in the training set but are not included\n because they're not making the `num_words` cut here.\n Words that were not seen in the training set but are in the test set\n have simply been skipped.\n \"\"\"\n origin_folder = (\n \"https://storage.googleapis.com/tensorflow/tf-keras-datasets/\"\n )\n path = get_file(\n fname=path,\n origin=origin_folder + \"imdb.npz\",\n file_hash=( # noqa: E501\n \"69664113be75683a8fe16e3ed0ab59fda8886cb3cd7ada244f7d9544e4676b9f\"\n ),\n )\n with np.load(path, allow_pickle=True) as f:\n x_train, labels_train = f[\"x_train\"], f[\"y_train\"]\n x_test, labels_test = f[\"x_test\"], f[\"y_test\"]\n\n rng = np.random.RandomState(seed)\n indices = np.arange(len(x_train))\n rng.shuffle(indices)\n x_train = x_train[indices]\n labels_train = labels_train[indices]\n\n indices = np.arange(len(x_test))\n rng.shuffle(indices)\n x_test = x_test[indices]\n labels_test = labels_test[indices]\n\n if start_char is not None:\n x_train = [[start_char] + [w + index_from for w in x] for x in x_train]\n x_test = [[start_char] + [w + index_from for w in x] for x in x_test]\n elif index_from:\n x_train = [[w + index_from for w in x] for x in x_train]\n x_test = [[w + index_from for w in x] for x in x_test]\n else:\n x_train = [[w for w in x] for x in x_train]\n x_test = [[w for w in x] for x in x_test]\n\n if maxlen:\n x_train, labels_train = remove_long_seq(maxlen, x_train, labels_train)\n x_test, labels_test = remove_long_seq(maxlen, x_test, labels_test)\n if not x_train or not x_test:\n raise ValueError(\n \"After filtering for sequences shorter than maxlen=\"\n f\"{str(maxlen)}, no sequence was kept. Increase maxlen.\"\n )\n\n xs = x_train + x_test\n labels = np.concatenate([labels_train, labels_test])\n\n if not num_words:\n num_words = max(max(x) for x in xs)\n\n # by convention, use 2 as OOV word\n # reserve 'index_from' (=3 by default) characters:\n # 0 (padding), 1 (start), 2 (OOV)\n if oov_char is not None:\n xs = [\n [w if (skip_top <= w < num_words) else oov_char for w in x]\n for x in xs\n ]\n else:\n xs = [[w for w in x if skip_top <= w < num_words] for x in xs]\n\n idx = len(x_train)\n x_train, y_train = xs[:idx], labels[:idx]\n x_test, y_test = xs[idx:], labels[idx:]\n return (x_train, y_train), (x_test, y_test)\n\n\n@keras_export(\"keras.datasets.imdb.get_word_index\")\ndef get_word_index(path=\"imdb_word_index.json\"):\n \"\"\"Retrieves a dict mapping words to their index in the IMDB dataset.\n\n Args:\n path: where to cache the data (relative to `~/.keras/dataset`).\n\n Returns:\n The word index dictionary. 
Keys are word strings, values are their\n index.\n\n Example:\n\n ```python\n # Use the default parameters to keras.datasets.imdb.load_data\n start_char = 1\n oov_char = 2\n index_from = 3\n # Retrieve the training sequences.\n (x_train, _), _ = keras.datasets.imdb.load_data(\n start_char=start_char, oov_char=oov_char, index_from=index_from\n )\n # Retrieve the word index file mapping words to indices\n word_index = keras.datasets.imdb.get_word_index()\n # Reverse the word index to obtain a dict mapping indices to words\n # And add `index_from` to indices to sync with `x_train`\n inverted_word_index = dict(\n (i + index_from, word) for (word, i) in word_index.items()\n )\n # Update `inverted_word_index` to include `start_char` and `oov_char`\n inverted_word_index[start_char] = \"[START]\"\n inverted_word_index[oov_char] = \"[OOV]\"\n # Decode the first sequence in the dataset\n decoded_sequence = \" \".join(inverted_word_index[i] for i in x_train[0])\n ```\n \"\"\"\n origin_folder = (\n \"https://storage.googleapis.com/tensorflow/tf-keras-datasets/\"\n )\n path = get_file(\n fname=path,\n origin=origin_folder + \"imdb_word_index.json\",\n file_hash=\"bfafd718b763782e994055a2d397834f\",\n )\n with open(path) as f:\n return json.load(f)\n", "path": "keras/src/datasets/imdb.py"}]}
| 3,058 | 181 |
gh_patches_debug_34049
|
rasdani/github-patches
|
git_diff
|
huggingface__text-generation-inference-1022
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
missing 2 required positional arguments: 'top_n_tokens' and 'top_n_tokens_tensor' for the galactica model
### System Info
v1.0.3
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
A local galactica model in a folder called `facebook-galactica-30b-gptq` won't be detected, since it fails this check: https://github.com/huggingface/text-generation-inference/blob/main/server/text_generation_server/models/__init__.py#L88
I suggest making it check `if "galactica" in model_id` instead.
### Expected behavior
expected to detect a galactica model
</issue>
<code>
[start of server/text_generation_server/models/galactica.py]
1 import re
2 import torch
3 import torch.distributed
4
5 from typing import List, Optional, Type
6
7 from transformers import (
8 AutoTokenizer,
9 AutoConfig,
10 PreTrainedTokenizerBase,
11 )
12 from text_generation_server.models import CausalLM
13 from text_generation_server.models.causal_lm import CausalLMBatch
14 from text_generation_server.pb import generate_pb2
15 from text_generation_server.models.custom_modeling.opt_modeling import OPTForCausalLM
16 from text_generation_server.utils import (
17 NextTokenChooser,
18 StoppingCriteria,
19 initialize_torch_distributed,
20 weight_files,
21 Weights,
22 )
23
24 # CREDIT: Papers with code => https://github.com/paperswithcode/galai/blob/main/galai/utils.py
25
26 # we split individual characters inside special tokens like [START_DNA]
27 CUSTOM_SEQ_RE = re.compile(r"(\[START_(DNA|SMILES|I_SMILES|AMINO)])(.*?)(\[END_\2])")
28
29 # token added to implement a custom sequence tokenization. This token is added at
30 # corpus cleaning step and removed in pretokenization. The digits are added to increase the chance
31 # that they do not occur in the corpus. The digits are escaped so that the token does not appear
32 # literally in the source code in case we ever include it in the training data.
33 SPLIT_MARKER = f"SPL{1}T-TH{1}S-Pl3A5E"
34
35
36 def _insert_split_marker(m: re.Match):
37 """
38 Applies split marker based on a regex match of special tokens such as
39 [START_DNA].
40 Parameters
41 ----------
42 n : str
43 Input text to split
44 Returns
45 ----------
46 str - the text with the split token added
47 """
48 start_token, _, sequence, end_token = m.groups()
49 sequence = re.sub(r"(.)", rf"{SPLIT_MARKER}\1", sequence, flags=re.DOTALL)
50 return f"{start_token}{sequence}{SPLIT_MARKER}{end_token}"
51
52
53 def escape_custom_split_sequence(text):
54 """
55 Applies custom splitting to the text for GALILEO's tokenization
56 Parameters
57 ----------
58 text : str
59 Input text to split
60 Returns
61 ----------
62 str - the text with the split token added
63 """
64 return CUSTOM_SEQ_RE.sub(_insert_split_marker, text)
65
66
67 # END CREDIT
68
69
70 class GalacticaCausalLMBatch(CausalLMBatch):
71 @classmethod
72 def from_pb(
73 cls,
74 pb: generate_pb2.Batch,
75 tokenizer: PreTrainedTokenizerBase,
76 dtype: torch.dtype,
77 device: torch.device,
78 ) -> "GalacticaCausalLMBatch":
79 inputs = []
80 next_token_choosers = []
81 stopping_criterias = []
82 prefix_offsets = []
83 read_offsets = []
84 requests_idx_mapping = {}
85
86 # Parse batch
87 max_truncation = 0
88 padding_right_offset = 0
89 max_decode_tokens = 0
90 for i, r in enumerate(pb.requests):
91 requests_idx_mapping[r.id] = i
92 # Add escape_custom_split_sequence to the CausalLMBatch logic
93 inputs.append(escape_custom_split_sequence(r.inputs))
94 next_token_choosers.append(NextTokenChooser.from_pb(r.parameters, device))
95 stopping_criteria = StoppingCriteria.from_pb(
96 r.stopping_parameters, tokenizer
97 )
98 stopping_criterias.append(stopping_criteria)
99 max_truncation = max(max_truncation, r.truncate)
100 max_decode_tokens += stopping_criteria.max_new_tokens
101 padding_right_offset = max(
102 padding_right_offset, stopping_criteria.max_new_tokens
103 )
104
105 tokenized_inputs = tokenizer(
106 inputs,
107 return_tensors="pt",
108 padding=True,
109 return_token_type_ids=False,
110 truncation=True,
111 max_length=max_truncation,
112 ).to(device)
113 for _ in pb.requests:
114 input_len = tokenized_inputs["input_ids"].shape[1]
115 prefix_offsets.append(0)
116 read_offsets.append(input_len)
117
118 input_lengths = tokenized_inputs["attention_mask"].sum(1)
119 max_input_length = input_lengths.max()
120
121 input_ids = tokenized_inputs["input_ids"]
122 # Allocate maximum attention_mask
123 attention_mask = input_ids.new_zeros(
124 (pb.size, max_input_length + padding_right_offset)
125 )
126 # Copy tokenizer attention_mask into fully allocated attention_mask
127 attention_mask[:, :max_input_length] = tokenized_inputs["attention_mask"]
128
129 position_ids = tokenized_inputs["attention_mask"].long().cumsum(-1) - 1
130 position_ids.masked_fill_(tokenized_inputs["attention_mask"] == 0, 1)
131 all_input_ids = tokenized_inputs["input_ids"].T.split(1, dim=1)
132
133 max_tokens = len(inputs) * max_input_length + max_decode_tokens
134
135 return cls(
136 batch_id=pb.id,
137 requests=pb.requests,
138 requests_idx_mapping=requests_idx_mapping,
139 input_ids=input_ids,
140 attention_mask=attention_mask,
141 position_ids=position_ids,
142 past_key_values=None,
143 all_input_ids=list(all_input_ids),
144 input_lengths=input_lengths.tolist(),
145 prefix_offsets=prefix_offsets,
146 read_offsets=read_offsets,
147 next_token_choosers=next_token_choosers,
148 stopping_criterias=stopping_criterias,
149 max_input_length=max_input_length.item(),
150 padding_right_offset=padding_right_offset,
151 max_tokens=max_tokens,
152 )
153
154
155 class GalacticaSharded(CausalLM):
156 def __init__(
157 self,
158 model_id: str,
159 revision: Optional[str] = None,
160 quantize: Optional[str] = None,
161 dtype: Optional[torch.dtype] = None,
162 trust_remote_code: bool = False,
163 ):
164 self.process_group, rank, world_size = initialize_torch_distributed()
165 if torch.cuda.is_available():
166 device = torch.device(f"cuda:{rank}")
167 dtype = torch.float16 if dtype is None else dtype
168 else:
169 device = torch.device("cpu")
170 dtype = torch.float32
171
172 tokenizer = AutoTokenizer.from_pretrained(
173 model_id,
174 revision=revision,
175 padding_side="left",
176 truncation_side="left",
177 trust_remote_code=trust_remote_code,
178 )
179
180 config = AutoConfig.from_pretrained(
181 model_id,
182 revision=revision,
183 tp_parallel=True,
184 trust_remote_code=trust_remote_code,
185 )
186 config.quantize = quantize
187 tokenizer.pad_token_id = config.pad_token_id
188
189 torch.distributed.barrier(group=self.process_group)
190 filenames = weight_files(model_id, revision=revision, extension=".safetensors")
191 weights = Weights(
192 filenames, device=device, dtype=dtype, process_group=self.process_group
193 )
194 if config.quantize == "gptq":
195 weights._set_gptq_params(model_id)
196
197 model = OPTForCausalLM(config, weights)
198
199 torch.distributed.barrier(group=self.process_group)
200 super(CausalLM, self).__init__(
201 model=model,
202 tokenizer=tokenizer,
203 requires_padding=True,
204 dtype=dtype,
205 device=device,
206 rank=rank,
207 world_size=world_size,
208 )
209
210 @property
211 def batch_type(self) -> Type[CausalLMBatch]:
212 return GalacticaCausalLMBatch
213
214 def decode(self, generated_ids: List[int]) -> str:
215 # Do not skip special tokens as they are used for custom parsing rules of the generated text
216 return self.tokenizer.decode(
217 generated_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False
218 )
219
220 def forward(
221 self, input_ids, attention_mask, position_ids, past_key_values: Optional = None
222 ):
223 outputs = self.model.forward(
224 input_ids=input_ids,
225 attention_mask=attention_mask,
226 past_key_values=past_key_values,
227 use_cache=True,
228 )
229 return outputs.logits, outputs.past_key_values
230
[end of server/text_generation_server/models/galactica.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/server/text_generation_server/models/galactica.py b/server/text_generation_server/models/galactica.py
--- a/server/text_generation_server/models/galactica.py
+++ b/server/text_generation_server/models/galactica.py
@@ -80,6 +80,7 @@
next_token_choosers = []
stopping_criterias = []
prefix_offsets = []
+ top_n_tokens = []
read_offsets = []
requests_idx_mapping = {}
@@ -96,6 +97,7 @@
r.stopping_parameters, tokenizer
)
stopping_criterias.append(stopping_criteria)
+ top_n_tokens.append(r.top_n_tokens)
max_truncation = max(max_truncation, r.truncate)
max_decode_tokens += stopping_criteria.max_new_tokens
padding_right_offset = max(
@@ -129,6 +131,9 @@
position_ids = tokenized_inputs["attention_mask"].long().cumsum(-1) - 1
position_ids.masked_fill_(tokenized_inputs["attention_mask"] == 0, 1)
all_input_ids = tokenized_inputs["input_ids"].T.split(1, dim=1)
+ top_n_tokens_tensor = torch.tensor(
+ top_n_tokens, device=device, dtype=torch.int64
+ )
max_tokens = len(inputs) * max_input_length + max_decode_tokens
@@ -146,6 +151,8 @@
read_offsets=read_offsets,
next_token_choosers=next_token_choosers,
stopping_criterias=stopping_criterias,
+ top_n_tokens=top_n_tokens,
+ top_n_tokens_tensor=top_n_tokens_tensor,
max_input_length=max_input_length.item(),
padding_right_offset=padding_right_offset,
max_tokens=max_tokens,
|
{"golden_diff": "diff --git a/server/text_generation_server/models/galactica.py b/server/text_generation_server/models/galactica.py\n--- a/server/text_generation_server/models/galactica.py\n+++ b/server/text_generation_server/models/galactica.py\n@@ -80,6 +80,7 @@\n next_token_choosers = []\n stopping_criterias = []\n prefix_offsets = []\n+ top_n_tokens = []\n read_offsets = []\n requests_idx_mapping = {}\n \n@@ -96,6 +97,7 @@\n r.stopping_parameters, tokenizer\n )\n stopping_criterias.append(stopping_criteria)\n+ top_n_tokens.append(r.top_n_tokens)\n max_truncation = max(max_truncation, r.truncate)\n max_decode_tokens += stopping_criteria.max_new_tokens\n padding_right_offset = max(\n@@ -129,6 +131,9 @@\n position_ids = tokenized_inputs[\"attention_mask\"].long().cumsum(-1) - 1\n position_ids.masked_fill_(tokenized_inputs[\"attention_mask\"] == 0, 1)\n all_input_ids = tokenized_inputs[\"input_ids\"].T.split(1, dim=1)\n+ top_n_tokens_tensor = torch.tensor(\n+ top_n_tokens, device=device, dtype=torch.int64\n+ )\n \n max_tokens = len(inputs) * max_input_length + max_decode_tokens\n \n@@ -146,6 +151,8 @@\n read_offsets=read_offsets,\n next_token_choosers=next_token_choosers,\n stopping_criterias=stopping_criterias,\n+ top_n_tokens=top_n_tokens,\n+ top_n_tokens_tensor=top_n_tokens_tensor,\n max_input_length=max_input_length.item(),\n padding_right_offset=padding_right_offset,\n max_tokens=max_tokens,\n", "issue": "missing 2 required positional arguments: 'top_n_tokens' and 'top_n_tokens_tensor' for the galactica model\n### System Info\r\n\r\nv1.0.3\r\n\r\n### Information\r\n\r\n- [ ] Docker\r\n- [ ] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported command\r\n- [ ] My own modifications\r\n\r\n### Reproduction\r\n\r\nhaving a local galactica model in a folder called `facebook-galactica-30b-gptq` won't be detected since it will fail this check https://github.com/huggingface/text-generation-inference/blob/main/server/text_generation_server/models/__init__.py#L88\r\n\r\nI suggest making it check `if \"galactica\" in in model_id` instead.\r\n\r\n### Expected behavior\r\n\r\nexpected to detect a galactica model\n", "before_files": [{"content": "import re\nimport torch\nimport torch.distributed\n\nfrom typing import List, Optional, Type\n\nfrom transformers import (\n AutoTokenizer,\n AutoConfig,\n PreTrainedTokenizerBase,\n)\nfrom text_generation_server.models import CausalLM\nfrom text_generation_server.models.causal_lm import CausalLMBatch\nfrom text_generation_server.pb import generate_pb2\nfrom text_generation_server.models.custom_modeling.opt_modeling import OPTForCausalLM\nfrom text_generation_server.utils import (\n NextTokenChooser,\n StoppingCriteria,\n initialize_torch_distributed,\n weight_files,\n Weights,\n)\n\n# CREDIT: Papers with code => https://github.com/paperswithcode/galai/blob/main/galai/utils.py\n\n# we split individual characters inside special tokens like [START_DNA]\nCUSTOM_SEQ_RE = re.compile(r\"(\\[START_(DNA|SMILES|I_SMILES|AMINO)])(.*?)(\\[END_\\2])\")\n\n# token added to implement a custom sequence tokenization. This token is added at\n# corpus cleaning step and removed in pretokenization. The digits are added to increase the chance\n# that they do not occur in the corpus. 
The digits are escaped so that the token does not appear\n# literally in the source code in case we ever include it in the training data.\nSPLIT_MARKER = f\"SPL{1}T-TH{1}S-Pl3A5E\"\n\n\ndef _insert_split_marker(m: re.Match):\n \"\"\"\n Applies split marker based on a regex match of special tokens such as\n [START_DNA].\n Parameters\n ----------\n n : str\n Input text to split\n Returns\n ----------\n str - the text with the split token added\n \"\"\"\n start_token, _, sequence, end_token = m.groups()\n sequence = re.sub(r\"(.)\", rf\"{SPLIT_MARKER}\\1\", sequence, flags=re.DOTALL)\n return f\"{start_token}{sequence}{SPLIT_MARKER}{end_token}\"\n\n\ndef escape_custom_split_sequence(text):\n \"\"\"\n Applies custom splitting to the text for GALILEO's tokenization\n Parameters\n ----------\n text : str\n Input text to split\n Returns\n ----------\n str - the text with the split token added\n \"\"\"\n return CUSTOM_SEQ_RE.sub(_insert_split_marker, text)\n\n\n# END CREDIT\n\n\nclass GalacticaCausalLMBatch(CausalLMBatch):\n @classmethod\n def from_pb(\n cls,\n pb: generate_pb2.Batch,\n tokenizer: PreTrainedTokenizerBase,\n dtype: torch.dtype,\n device: torch.device,\n ) -> \"GalacticaCausalLMBatch\":\n inputs = []\n next_token_choosers = []\n stopping_criterias = []\n prefix_offsets = []\n read_offsets = []\n requests_idx_mapping = {}\n\n # Parse batch\n max_truncation = 0\n padding_right_offset = 0\n max_decode_tokens = 0\n for i, r in enumerate(pb.requests):\n requests_idx_mapping[r.id] = i\n # Add escape_custom_split_sequence to the CausalLMBatch logic\n inputs.append(escape_custom_split_sequence(r.inputs))\n next_token_choosers.append(NextTokenChooser.from_pb(r.parameters, device))\n stopping_criteria = StoppingCriteria.from_pb(\n r.stopping_parameters, tokenizer\n )\n stopping_criterias.append(stopping_criteria)\n max_truncation = max(max_truncation, r.truncate)\n max_decode_tokens += stopping_criteria.max_new_tokens\n padding_right_offset = max(\n padding_right_offset, stopping_criteria.max_new_tokens\n )\n\n tokenized_inputs = tokenizer(\n inputs,\n return_tensors=\"pt\",\n padding=True,\n return_token_type_ids=False,\n truncation=True,\n max_length=max_truncation,\n ).to(device)\n for _ in pb.requests:\n input_len = tokenized_inputs[\"input_ids\"].shape[1]\n prefix_offsets.append(0)\n read_offsets.append(input_len)\n\n input_lengths = tokenized_inputs[\"attention_mask\"].sum(1)\n max_input_length = input_lengths.max()\n\n input_ids = tokenized_inputs[\"input_ids\"]\n # Allocate maximum attention_mask\n attention_mask = input_ids.new_zeros(\n (pb.size, max_input_length + padding_right_offset)\n )\n # Copy tokenizer attention_mask into fully allocated attention_mask\n attention_mask[:, :max_input_length] = tokenized_inputs[\"attention_mask\"]\n\n position_ids = tokenized_inputs[\"attention_mask\"].long().cumsum(-1) - 1\n position_ids.masked_fill_(tokenized_inputs[\"attention_mask\"] == 0, 1)\n all_input_ids = tokenized_inputs[\"input_ids\"].T.split(1, dim=1)\n\n max_tokens = len(inputs) * max_input_length + max_decode_tokens\n\n return cls(\n batch_id=pb.id,\n requests=pb.requests,\n requests_idx_mapping=requests_idx_mapping,\n input_ids=input_ids,\n attention_mask=attention_mask,\n position_ids=position_ids,\n past_key_values=None,\n all_input_ids=list(all_input_ids),\n input_lengths=input_lengths.tolist(),\n prefix_offsets=prefix_offsets,\n read_offsets=read_offsets,\n next_token_choosers=next_token_choosers,\n stopping_criterias=stopping_criterias,\n 
max_input_length=max_input_length.item(),\n padding_right_offset=padding_right_offset,\n max_tokens=max_tokens,\n )\n\n\nclass GalacticaSharded(CausalLM):\n def __init__(\n self,\n model_id: str,\n revision: Optional[str] = None,\n quantize: Optional[str] = None,\n dtype: Optional[torch.dtype] = None,\n trust_remote_code: bool = False,\n ):\n self.process_group, rank, world_size = initialize_torch_distributed()\n if torch.cuda.is_available():\n device = torch.device(f\"cuda:{rank}\")\n dtype = torch.float16 if dtype is None else dtype\n else:\n device = torch.device(\"cpu\")\n dtype = torch.float32\n\n tokenizer = AutoTokenizer.from_pretrained(\n model_id,\n revision=revision,\n padding_side=\"left\",\n truncation_side=\"left\",\n trust_remote_code=trust_remote_code,\n )\n\n config = AutoConfig.from_pretrained(\n model_id,\n revision=revision,\n tp_parallel=True,\n trust_remote_code=trust_remote_code,\n )\n config.quantize = quantize\n tokenizer.pad_token_id = config.pad_token_id\n\n torch.distributed.barrier(group=self.process_group)\n filenames = weight_files(model_id, revision=revision, extension=\".safetensors\")\n weights = Weights(\n filenames, device=device, dtype=dtype, process_group=self.process_group\n )\n if config.quantize == \"gptq\":\n weights._set_gptq_params(model_id)\n\n model = OPTForCausalLM(config, weights)\n\n torch.distributed.barrier(group=self.process_group)\n super(CausalLM, self).__init__(\n model=model,\n tokenizer=tokenizer,\n requires_padding=True,\n dtype=dtype,\n device=device,\n rank=rank,\n world_size=world_size,\n )\n\n @property\n def batch_type(self) -> Type[CausalLMBatch]:\n return GalacticaCausalLMBatch\n\n def decode(self, generated_ids: List[int]) -> str:\n # Do not skip special tokens as they are used for custom parsing rules of the generated text\n return self.tokenizer.decode(\n generated_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False\n )\n\n def forward(\n self, input_ids, attention_mask, position_ids, past_key_values: Optional = None\n ):\n outputs = self.model.forward(\n input_ids=input_ids,\n attention_mask=attention_mask,\n past_key_values=past_key_values,\n use_cache=True,\n )\n return outputs.logits, outputs.past_key_values\n", "path": "server/text_generation_server/models/galactica.py"}]}
| 3,027 | 395 |
gh_patches_debug_10940
|
rasdani/github-patches
|
git_diff
|
ciudadanointeligente__votainteligente-portal-electoral-697
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Propuesta] When a proposal is published, it is not automatically sent to the candidates.
</issue>
<code>
[start of popular_proposal/models.py]
1 # coding=utf-8
2 from __future__ import unicode_literals
3
4 from django.db import models
5 from picklefield.fields import PickledObjectField
6 from django.contrib.auth.models import User
7 from djchoices import DjangoChoices, ChoiceItem
8 from votainteligente.send_mails import send_mail
9 from django.utils.encoding import python_2_unicode_compatible
10 from django.contrib.sites.models import Site
11 from autoslug import AutoSlugField
12 from django.core.urlresolvers import reverse
13 from backend_citizen.models import Organization
14 from votainteligente.open_graph import OGPMixin
15 from elections.models import Candidate, Area
16 from django.db.models import Count
17 from django.utils.translation import ugettext_lazy as _
18 from django.conf import settings
19 from django.core.mail import mail_admins
20
21
22 class NeedingModerationManager(models.Manager):
23 def get_queryset(self):
24 qs = super(NeedingModerationManager, self).get_queryset()
25 qs = qs.filter(status=ProposalTemporaryData.Statuses.InOurSide)
26 return qs
27
28
29 class ProposalCreationMixin(object):
30 def determine_kwargs(self, **kwargs):
31 model = kwargs.pop('model_class', self.__class__)
32 for f in model._meta.fields:
33 if f.name in kwargs['data'].keys():
34 kwargs[f.name] = kwargs['data'].pop(f.name)
35 return kwargs
36
37
38 @python_2_unicode_compatible
39 class ProposalTemporaryData(models.Model, ProposalCreationMixin):
40 class Statuses(DjangoChoices):
41 InOurSide = ChoiceItem('in_our_side')
42 InTheirSide = ChoiceItem('in_their_side')
43 Rejected = ChoiceItem('rejected')
44 Accepted = ChoiceItem('accepted')
45 proposer = models.ForeignKey(User, related_name='temporary_proposals')
46 area = models.ForeignKey(Area, related_name='temporary_proposals', null=True, blank=True)
47 join_advocacy_url = models.URLField(null=True, blank=True)
48 data = PickledObjectField()
49 rejected = models.BooleanField(default=False)
50 rejected_reason = models.TextField(null=True,
51 blank=True)
52 organization = models.ForeignKey(Organization,
53 related_name='temporary_proposals',
54 null=True,
55 blank=True,
56 default=None)
57 comments = PickledObjectField()
58 status = models.CharField(max_length=16,
59 choices=Statuses.choices,
60 validators=[Statuses.validator],
61 default=Statuses.InOurSide)
62 overall_comments = models.CharField(max_length=512,
63 blank=True,
64 null=True,
65 default="")
66 created = models.DateTimeField(auto_now_add=True,
67 blank=True,
68 null=True)
69 updated = models.DateTimeField(auto_now=True,
70 blank=True,
71 null=True)
72
73 needing_moderation = NeedingModerationManager()
74 objects = models.Manager()
75
76 def save(self, *args, **kwargs):
77 creating = self.id is None
78 if not self.comments:
79 self.comments = {}
80 for key in self.data.keys():
81 if key not in self.comments.keys():
82 self.comments[key] = ''
83 return super(ProposalTemporaryData, self).save(*args, **kwargs)
84
85 def notify_new(self):
86 site = Site.objects.get_current()
87 mail_context = {
88 'area': self.area,
89 'temporary_data': self,
90 'site': site,
91 }
92 if self.proposer.email:
93 send_mail(mail_context, 'new_temporary_proposal',
94 to=[self.proposer.email])
95
96 def create_proposal(self, moderator=None):
97 self.status = ProposalTemporaryData.Statuses.Accepted
98 self.save()
99 title = self.get_title()
100 clasification = self.data.get('clasification', '')
101 org_id = self.data.pop('organization', None)
102
103 creation_kwargs = self.determine_kwargs(title=title,
104 clasification=clasification,
105 area=self.area,
106 proposer=self.proposer,
107 data=self.data,
108 temporary=self)
109 popular_proposal = PopularProposal(**creation_kwargs)
110 if org_id:
111 enrollment = self.proposer.enrollments.get(organization__id=org_id)
112 popular_proposal.organization = enrollment.organization
113 popular_proposal.save()
114 site = Site.objects.get_current()
115 mail_context = {
116 'area': self.area,
117 'temporary_data': self,
118 'moderator': moderator,
119 'site': site,
120 }
121 send_mail(mail_context, 'popular_proposal_accepted', to=[self.proposer.email])
122 return popular_proposal
123
124 def reject(self, reason, moderator=None):
125 self.rejected_reason = reason
126 self.status = ProposalTemporaryData.Statuses.Rejected
127 self.save()
128 site = Site.objects.get_current()
129 mail_context = {
130 'area': self.area,
131 'temporary_data': self,
132 'moderator': moderator,
133 'site': site,
134 }
135 send_mail(mail_context, 'popular_proposal_rejected',
136 to=[self.proposer.email])
137
138 def get_title(self):
139 return self.data.get('title', u'')
140
141 def __str__(self):
142 return self.get_title()
143
144 class ProposalsOrderedManager(models.Manager):
145 def by_likers(self, *args, **kwargs):
146 qs = self.get_queryset()
147 qs = qs.annotate(num_likers=Count('likers')).order_by('-num_likers')
148 return qs
149
150
151 @python_2_unicode_compatible
152 class PopularProposal(models.Model, OGPMixin):
153 title = models.CharField(max_length=255, default='')
154 slug = AutoSlugField(populate_from='title', unique=True)
155 proposer = models.ForeignKey(User, related_name='proposals')
156 area = models.ForeignKey(Area, related_name='proposals', null=True, blank=True)
157 join_advocacy_url = models.URLField(null=True, blank=True)
158 data = PickledObjectField()
159 created = models.DateTimeField(auto_now_add=True)
160 updated = models.DateTimeField(auto_now_add=True)
161 temporary = models.OneToOneField(ProposalTemporaryData,
162 related_name='created_proposal',
163 blank=True,
164 null=True,
165 default=None)
166 likers = models.ManyToManyField(User, through='ProposalLike')
167 organization = models.ForeignKey(Organization,
168 related_name='popular_proposals',
169 null=True)
170 background = models.TextField(null=True, blank=True, help_text=_(u"Antecedentes sobre tu propuesta"))
171 contact_details = models.TextField(null=True,
172 blank=True,
173 help_text=_(u'¿Cómo te puede contactar un candidato?'))
174 document = models.FileField(upload_to='uploads/proposal/backgrounds/%Y/%m/%d/',
175 help_text=_(u'¿Tienes algún documento para complementar tu propuesta?'),
176 null=True,
177 blank=True)
178 image = models.ImageField(upload_to='proposals/image/',
179 max_length=512,
180 null=True,
181 blank=True)
182 clasification = models.CharField(blank=True, null=True, max_length=255)
183 for_all_areas = models.BooleanField(default=False)
184
185 ogp_enabled = True
186
187 ordered = ProposalsOrderedManager()
188 objects = models.Manager()
189
190 class Meta:
191 ordering = ['for_all_areas', '-created']
192
193 def __str__(self):
194 return self.title
195
196 def get_absolute_url(self):
197 return reverse('popular_proposals:detail', kwargs={'slug': self.slug})
198
199 def save(self, *args, **kwargs):
200 creating = self.pk is None
201 super(PopularProposal, self).save(*args, **kwargs)
202 if self.pk is not None and creating:
203 self.notify_candidates_of_new()
204
205 def notify_candidates_of_new(self):
206 if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):
207 return
208 template = 'notification_for_candidates_of_new_proposal'
209 context = {'proposal': self}
210 area = Area.objects.get(id=self.area.id)
211 for election in area.elections.all():
212 for candidate in election.candidates.all():
213 for contact in candidate.contacts.all():
214 context.update({'candidate': candidate})
215 send_mail(context,
216 template,
217 to=[contact.mail])
218
219 class ProposalLike(models.Model):
220 user = models.ForeignKey(User)
221 proposal = models.ForeignKey(PopularProposal)
222 created = models.DateTimeField(auto_now_add=True)
223 updated = models.DateTimeField(auto_now_add=True)
224
225 def save(self, *args, **kwargs):
226 super(ProposalLike, self).save(*args, **kwargs)
227 created = self.pk is not None
228 if created:
229 self.numerical_notification()
230
231 def numerical_notification(self):
232 the_number = ProposalLike.objects.filter(proposal=self.proposal).count()
233 if the_number in settings.WHEN_TO_NOTIFY:
234 from popular_proposal.subscriptions import YouAreAHeroNotification, ManyCitizensSupportingNotification
235 notifier = YouAreAHeroNotification(proposal=self.proposal,
236 number=the_number)
237 notifier.notify()
238 notifier = ManyCitizensSupportingNotification(proposal=self.proposal,
239 number=the_number)
240 notifier.notify()
241
242
243 class Commitment(models.Model):
244 proposal = models.ForeignKey(PopularProposal,
245 related_name='commitments')
246 candidate = models.ForeignKey(Candidate,
247 related_name='commitments')
248 detail = models.CharField(max_length=12288,
249 null=True,
250 blank=True)
251 commited = models.NullBooleanField(default=None)
252
253 def save(self, *args, **kwargs):
254 instance = super(Commitment, self).save(*args, **kwargs)
255 from popular_proposal.subscriptions import notification_trigger
256 notification_trigger('new-commitment',
257 proposal=self.proposal,
258 commitment=self)
259 return instance
260
261 def get_absolute_url(self):
262 url = reverse('popular_proposals:commitment', kwargs={'candidate_slug': self.candidate.id,
263 'proposal_slug': self.proposal.slug})
264 return url
265
[end of popular_proposal/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/popular_proposal/models.py b/popular_proposal/models.py
--- a/popular_proposal/models.py
+++ b/popular_proposal/models.py
@@ -196,12 +196,6 @@
def get_absolute_url(self):
return reverse('popular_proposals:detail', kwargs={'slug': self.slug})
- def save(self, *args, **kwargs):
- creating = self.pk is None
- super(PopularProposal, self).save(*args, **kwargs)
- if self.pk is not None and creating:
- self.notify_candidates_of_new()
-
def notify_candidates_of_new(self):
if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):
return
|
{"golden_diff": "diff --git a/popular_proposal/models.py b/popular_proposal/models.py\n--- a/popular_proposal/models.py\n+++ b/popular_proposal/models.py\n@@ -196,12 +196,6 @@\n def get_absolute_url(self):\n return reverse('popular_proposals:detail', kwargs={'slug': self.slug})\n \n- def save(self, *args, **kwargs):\n- creating = self.pk is None\n- super(PopularProposal, self).save(*args, **kwargs)\n- if self.pk is not None and creating:\n- self.notify_candidates_of_new()\n-\n def notify_candidates_of_new(self):\n if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):\n return\n", "issue": "[Propuesta] Al momento de ser publicada no se env\u00ed\u00e1 autom\u00e1ticamente a los candidatos.\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals\n\nfrom django.db import models\nfrom picklefield.fields import PickledObjectField\nfrom django.contrib.auth.models import User\nfrom djchoices import DjangoChoices, ChoiceItem\nfrom votainteligente.send_mails import send_mail\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.contrib.sites.models import Site\nfrom autoslug import AutoSlugField\nfrom django.core.urlresolvers import reverse\nfrom backend_citizen.models import Organization\nfrom votainteligente.open_graph import OGPMixin\nfrom elections.models import Candidate, Area\nfrom django.db.models import Count\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.conf import settings\nfrom django.core.mail import mail_admins\n\n\nclass NeedingModerationManager(models.Manager):\n def get_queryset(self):\n qs = super(NeedingModerationManager, self).get_queryset()\n qs = qs.filter(status=ProposalTemporaryData.Statuses.InOurSide)\n return qs\n\n\nclass ProposalCreationMixin(object):\n def determine_kwargs(self, **kwargs):\n model = kwargs.pop('model_class', self.__class__)\n for f in model._meta.fields:\n if f.name in kwargs['data'].keys():\n kwargs[f.name] = kwargs['data'].pop(f.name)\n return kwargs\n\n\n@python_2_unicode_compatible\nclass ProposalTemporaryData(models.Model, ProposalCreationMixin):\n class Statuses(DjangoChoices):\n InOurSide = ChoiceItem('in_our_side')\n InTheirSide = ChoiceItem('in_their_side')\n Rejected = ChoiceItem('rejected')\n Accepted = ChoiceItem('accepted')\n proposer = models.ForeignKey(User, related_name='temporary_proposals')\n area = models.ForeignKey(Area, related_name='temporary_proposals', null=True, blank=True)\n join_advocacy_url = models.URLField(null=True, blank=True)\n data = PickledObjectField()\n rejected = models.BooleanField(default=False)\n rejected_reason = models.TextField(null=True,\n blank=True)\n organization = models.ForeignKey(Organization,\n related_name='temporary_proposals',\n null=True,\n blank=True,\n default=None)\n comments = PickledObjectField()\n status = models.CharField(max_length=16,\n choices=Statuses.choices,\n validators=[Statuses.validator],\n default=Statuses.InOurSide)\n overall_comments = models.CharField(max_length=512,\n blank=True,\n null=True,\n default=\"\")\n created = models.DateTimeField(auto_now_add=True,\n blank=True,\n null=True)\n updated = models.DateTimeField(auto_now=True,\n blank=True,\n null=True)\n\n needing_moderation = NeedingModerationManager()\n objects = models.Manager()\n\n def save(self, *args, **kwargs):\n creating = self.id is None\n if not self.comments:\n self.comments = {}\n for key in self.data.keys():\n if key not in self.comments.keys():\n self.comments[key] = ''\n return 
super(ProposalTemporaryData, self).save(*args, **kwargs)\n\n def notify_new(self):\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'site': site,\n }\n if self.proposer.email:\n send_mail(mail_context, 'new_temporary_proposal',\n to=[self.proposer.email])\n\n def create_proposal(self, moderator=None):\n self.status = ProposalTemporaryData.Statuses.Accepted\n self.save()\n title = self.get_title()\n clasification = self.data.get('clasification', '')\n org_id = self.data.pop('organization', None)\n\n creation_kwargs = self.determine_kwargs(title=title,\n clasification=clasification,\n area=self.area,\n proposer=self.proposer,\n data=self.data,\n temporary=self)\n popular_proposal = PopularProposal(**creation_kwargs)\n if org_id:\n enrollment = self.proposer.enrollments.get(organization__id=org_id)\n popular_proposal.organization = enrollment.organization\n popular_proposal.save()\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'moderator': moderator,\n 'site': site,\n }\n send_mail(mail_context, 'popular_proposal_accepted', to=[self.proposer.email])\n return popular_proposal\n\n def reject(self, reason, moderator=None):\n self.rejected_reason = reason\n self.status = ProposalTemporaryData.Statuses.Rejected\n self.save()\n site = Site.objects.get_current()\n mail_context = {\n 'area': self.area,\n 'temporary_data': self,\n 'moderator': moderator,\n 'site': site,\n }\n send_mail(mail_context, 'popular_proposal_rejected',\n to=[self.proposer.email])\n\n def get_title(self):\n return self.data.get('title', u'')\n\n def __str__(self):\n return self.get_title()\n\nclass ProposalsOrderedManager(models.Manager):\n def by_likers(self, *args, **kwargs):\n qs = self.get_queryset()\n qs = qs.annotate(num_likers=Count('likers')).order_by('-num_likers')\n return qs\n\n\n@python_2_unicode_compatible\nclass PopularProposal(models.Model, OGPMixin):\n title = models.CharField(max_length=255, default='')\n slug = AutoSlugField(populate_from='title', unique=True)\n proposer = models.ForeignKey(User, related_name='proposals')\n area = models.ForeignKey(Area, related_name='proposals', null=True, blank=True)\n join_advocacy_url = models.URLField(null=True, blank=True)\n data = PickledObjectField()\n created = models.DateTimeField(auto_now_add=True)\n updated = models.DateTimeField(auto_now_add=True)\n temporary = models.OneToOneField(ProposalTemporaryData,\n related_name='created_proposal',\n blank=True,\n null=True,\n default=None)\n likers = models.ManyToManyField(User, through='ProposalLike')\n organization = models.ForeignKey(Organization,\n related_name='popular_proposals',\n null=True)\n background = models.TextField(null=True, blank=True, help_text=_(u\"Antecedentes sobre tu propuesta\"))\n contact_details = models.TextField(null=True,\n blank=True,\n help_text=_(u'\u00bfC\u00f3mo te puede contactar un candidato?'))\n document = models.FileField(upload_to='uploads/proposal/backgrounds/%Y/%m/%d/',\n help_text=_(u'\u00bfTienes alg\u00fan documento para complementar tu propuesta?'),\n null=True,\n blank=True)\n image = models.ImageField(upload_to='proposals/image/',\n max_length=512,\n null=True,\n blank=True)\n clasification = models.CharField(blank=True, null=True, max_length=255)\n for_all_areas = models.BooleanField(default=False)\n\n ogp_enabled = True\n\n ordered = ProposalsOrderedManager()\n objects = models.Manager()\n\n class Meta:\n ordering = ['for_all_areas', '-created']\n\n def 
__str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse('popular_proposals:detail', kwargs={'slug': self.slug})\n\n def save(self, *args, **kwargs):\n creating = self.pk is None\n super(PopularProposal, self).save(*args, **kwargs)\n if self.pk is not None and creating:\n self.notify_candidates_of_new()\n\n def notify_candidates_of_new(self):\n if not (settings.NOTIFY_CANDIDATES and settings.NOTIFY_CANDIDATES_OF_NEW_PROPOSAL):\n return\n template = 'notification_for_candidates_of_new_proposal'\n context = {'proposal': self}\n area = Area.objects.get(id=self.area.id)\n for election in area.elections.all():\n for candidate in election.candidates.all():\n for contact in candidate.contacts.all():\n context.update({'candidate': candidate})\n send_mail(context,\n template,\n to=[contact.mail])\n\nclass ProposalLike(models.Model):\n user = models.ForeignKey(User)\n proposal = models.ForeignKey(PopularProposal)\n created = models.DateTimeField(auto_now_add=True)\n updated = models.DateTimeField(auto_now_add=True)\n\n def save(self, *args, **kwargs):\n super(ProposalLike, self).save(*args, **kwargs)\n created = self.pk is not None\n if created:\n self.numerical_notification()\n\n def numerical_notification(self):\n the_number = ProposalLike.objects.filter(proposal=self.proposal).count()\n if the_number in settings.WHEN_TO_NOTIFY:\n from popular_proposal.subscriptions import YouAreAHeroNotification, ManyCitizensSupportingNotification\n notifier = YouAreAHeroNotification(proposal=self.proposal,\n number=the_number)\n notifier.notify()\n notifier = ManyCitizensSupportingNotification(proposal=self.proposal,\n number=the_number)\n notifier.notify()\n\n\nclass Commitment(models.Model):\n proposal = models.ForeignKey(PopularProposal,\n related_name='commitments')\n candidate = models.ForeignKey(Candidate,\n related_name='commitments')\n detail = models.CharField(max_length=12288,\n null=True,\n blank=True)\n commited = models.NullBooleanField(default=None)\n\n def save(self, *args, **kwargs):\n instance = super(Commitment, self).save(*args, **kwargs)\n from popular_proposal.subscriptions import notification_trigger\n notification_trigger('new-commitment',\n proposal=self.proposal,\n commitment=self)\n return instance\n\n def get_absolute_url(self):\n url = reverse('popular_proposals:commitment', kwargs={'candidate_slug': self.candidate.id,\n 'proposal_slug': self.proposal.slug})\n return url\n", "path": "popular_proposal/models.py"}]}
| 3,341 | 170 |
gh_patches_debug_38824
|
rasdani/github-patches
|
git_diff
|
liqd__a4-opin-399
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use GenericRelation everywhere generic foreign keys are used
</issue>
<code>
[start of euth/documents/serializers.py]
1 from rest_framework import serializers
2
3 from .models import Document, Paragraph
4
5
6 class ParagraphSerializer(serializers.Serializer):
7 id = serializers.IntegerField(required=False)
8 name = serializers.CharField(
9 required=False,
10 max_length=Paragraph._meta.get_field('name').max_length
11 )
12 weight = serializers.IntegerField()
13 text = serializers.CharField()
14
15
16 class DocumentSerializer(serializers.ModelSerializer):
17 paragraphs = ParagraphSerializer(many=True, partial=True)
18
19 class Meta:
20 model = Document
21 exclude = ('creator',)
22
23 def create(self, validated_data):
24 paragraphs = validated_data.pop('paragraphs')
25 user = self.context['request'].user
26 document = Document.objects.create(creator=user, **validated_data)
27
28 for paragraph in paragraphs:
29 Paragraph.objects.create(document=document, **paragraph)
30
31 return document
32
33 def update(self, instance, validated_data):
34 instance.name = validated_data['name']
35 instance.save()
36 paragraphs = validated_data.pop('paragraphs')
37
38 paragraph_ids = [item['id'] for item in paragraphs if 'id' in item]
39 instance.paragraphs.exclude(id__in=paragraph_ids).delete()
40
41 for paragraph in paragraphs:
42 paragraph['document'] = instance
43 if 'id' in paragraph:
44 instance.paragraphs.filter(id=paragraph['id'])\
45 .update(**paragraph)
46 else:
47 instance.paragraphs.create(**paragraph)
48
49 return instance
50
[end of euth/documents/serializers.py]
[start of euth/documents/models.py]
1 from ckeditor.fields import RichTextField
2 from django.contrib.contenttypes.models import ContentType
3 from django.core.exceptions import ObjectDoesNotExist, ValidationError
4 from django.db import models
5 from django.utils.functional import cached_property
6 from django.utils.translation import ugettext_lazy as _
7
8 from contrib.transforms import html_transforms
9 from euth.comments import models as comment_models
10 from euth.contrib import base_models
11 from euth.modules import models as module_models
12
13
14 class Document(module_models.Item):
15 name = models.CharField(max_length=120)
16
17 def __str__(self):
18 return "{}_document_{}".format(str(self.module), self.pk)
19
20 def clean(self, *args, **kwargs):
21 if not self.pk:
22 try:
23 Document.objects.get(module=self.module)
24 raise ValidationError(
25 _('Document for that module already exists'))
26 except ObjectDoesNotExist:
27 super().clean(*args, **kwargs)
28 super().clean(*args, **kwargs)
29
30 @cached_property
31 def paragraphs_sorted(self):
32 return self.paragraphs.all().order_by('weight')
33
34 @cached_property
35 def comments(self):
36 contenttype = ContentType.objects.get_for_model(self)
37 pk = self.id
38 comments = comment_models.Comment.objects.all().filter(
39 content_type=contenttype, object_pk=pk)
40 return comments
41
42
43 class Paragraph(base_models.TimeStampedModel):
44 name = models.CharField(max_length=120, blank=True)
45 text = RichTextField()
46 weight = models.PositiveIntegerField()
47 document = models.ForeignKey(Document,
48 on_delete=models.CASCADE,
49 related_name='paragraphs')
50
51 class Meta:
52 ordering = ('weight',)
53
54 def __str__(self):
55 return "{}_paragraph_{}".format(str(self.document), self.weight)
56
57 def save(self, *args, **kwargs):
58 self.text = html_transforms.clean_html_field(
59 self.text)
60 super().save(*args, **kwargs)
61
62 @cached_property
63 def comments(self):
64 contenttype = ContentType.objects.get_for_model(self)
65 pk = self.id
66 comments = comment_models.Comment.objects.all().filter(
67 content_type=contenttype, object_pk=pk)
68 return comments
69
70 @cached_property
71 def project(self):
72 return self.document.project
73
[end of euth/documents/models.py]
[start of euth/documents/phases.py]
1 from django.utils.translation import ugettext_lazy as _
2
3 from euth import phases
4
5 from . import apps, models, views
6
7
8 class CreateDocumentPhase(phases.PhaseContent):
9 """
10 Allows no interaction for participants, only
11 creation for moderators.
12 """
13 app = apps.DocumentConfig.label
14 phase = 'create_document'
15 view = views.DocumentCreateView
16 weight = 30
17
18 name = _('Create document phase')
19 module_name = _('commenting text')
20 description = _('Create text for the project.')
21
22 features = {}
23
24
25 phases.content.register(CreateDocumentPhase())
26
27
28 class CommentPhase(phases.PhaseContent):
29 """
30 Allows only commenting of paragraphs.
31 """
32 app = apps.DocumentConfig.label
33 phase = 'comment'
34 view = views.DocumentDetailView
35 weight = 40
36
37 name = _('Comment phase')
38 module_name = _('commenting text')
39 description = _('Collect comments for the text.')
40
41 features = {
42 'comment': (models.Paragraph,),
43 }
44
45
46 phases.content.register(CommentPhase())
47
[end of euth/documents/phases.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/euth/documents/models.py b/euth/documents/models.py
--- a/euth/documents/models.py
+++ b/euth/documents/models.py
@@ -1,5 +1,5 @@
from ckeditor.fields import RichTextField
-from django.contrib.contenttypes.models import ContentType
+from django.contrib.contenttypes.fields import GenericRelation
from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.db import models
from django.utils.functional import cached_property
@@ -27,18 +27,6 @@
super().clean(*args, **kwargs)
super().clean(*args, **kwargs)
- @cached_property
- def paragraphs_sorted(self):
- return self.paragraphs.all().order_by('weight')
-
- @cached_property
- def comments(self):
- contenttype = ContentType.objects.get_for_model(self)
- pk = self.id
- comments = comment_models.Comment.objects.all().filter(
- content_type=contenttype, object_pk=pk)
- return comments
-
class Paragraph(base_models.TimeStampedModel):
name = models.CharField(max_length=120, blank=True)
@@ -47,6 +35,9 @@
document = models.ForeignKey(Document,
on_delete=models.CASCADE,
related_name='paragraphs')
+ comments = GenericRelation(comment_models.Comment,
+ related_query_name='paragraph',
+ object_id_field='object_pk')
class Meta:
ordering = ('weight',)
@@ -59,14 +50,6 @@
self.text)
super().save(*args, **kwargs)
- @cached_property
- def comments(self):
- contenttype = ContentType.objects.get_for_model(self)
- pk = self.id
- comments = comment_models.Comment.objects.all().filter(
- content_type=contenttype, object_pk=pk)
- return comments
-
@cached_property
def project(self):
return self.document.project
diff --git a/euth/documents/phases.py b/euth/documents/phases.py
--- a/euth/documents/phases.py
+++ b/euth/documents/phases.py
@@ -39,7 +39,7 @@
description = _('Collect comments for the text.')
features = {
- 'comment': (models.Paragraph,),
+ 'comment': (models.Paragraph, models.Document),
}
diff --git a/euth/documents/serializers.py b/euth/documents/serializers.py
--- a/euth/documents/serializers.py
+++ b/euth/documents/serializers.py
@@ -7,6 +7,7 @@
id = serializers.IntegerField(required=False)
name = serializers.CharField(
required=False,
+ allow_blank=True,
max_length=Paragraph._meta.get_field('name').max_length
)
weight = serializers.IntegerField()
|
{"golden_diff": "diff --git a/euth/documents/models.py b/euth/documents/models.py\n--- a/euth/documents/models.py\n+++ b/euth/documents/models.py\n@@ -1,5 +1,5 @@\n from ckeditor.fields import RichTextField\n-from django.contrib.contenttypes.models import ContentType\n+from django.contrib.contenttypes.fields import GenericRelation\n from django.core.exceptions import ObjectDoesNotExist, ValidationError\n from django.db import models\n from django.utils.functional import cached_property\n@@ -27,18 +27,6 @@\n super().clean(*args, **kwargs)\n super().clean(*args, **kwargs)\n \n- @cached_property\n- def paragraphs_sorted(self):\n- return self.paragraphs.all().order_by('weight')\n-\n- @cached_property\n- def comments(self):\n- contenttype = ContentType.objects.get_for_model(self)\n- pk = self.id\n- comments = comment_models.Comment.objects.all().filter(\n- content_type=contenttype, object_pk=pk)\n- return comments\n-\n \n class Paragraph(base_models.TimeStampedModel):\n name = models.CharField(max_length=120, blank=True)\n@@ -47,6 +35,9 @@\n document = models.ForeignKey(Document,\n on_delete=models.CASCADE,\n related_name='paragraphs')\n+ comments = GenericRelation(comment_models.Comment,\n+ related_query_name='paragraph',\n+ object_id_field='object_pk')\n \n class Meta:\n ordering = ('weight',)\n@@ -59,14 +50,6 @@\n self.text)\n super().save(*args, **kwargs)\n \n- @cached_property\n- def comments(self):\n- contenttype = ContentType.objects.get_for_model(self)\n- pk = self.id\n- comments = comment_models.Comment.objects.all().filter(\n- content_type=contenttype, object_pk=pk)\n- return comments\n-\n @cached_property\n def project(self):\n return self.document.project\ndiff --git a/euth/documents/phases.py b/euth/documents/phases.py\n--- a/euth/documents/phases.py\n+++ b/euth/documents/phases.py\n@@ -39,7 +39,7 @@\n description = _('Collect comments for the text.')\n \n features = {\n- 'comment': (models.Paragraph,),\n+ 'comment': (models.Paragraph, models.Document),\n }\n \n \ndiff --git a/euth/documents/serializers.py b/euth/documents/serializers.py\n--- a/euth/documents/serializers.py\n+++ b/euth/documents/serializers.py\n@@ -7,6 +7,7 @@\n id = serializers.IntegerField(required=False)\n name = serializers.CharField(\n required=False,\n+ allow_blank=True,\n max_length=Paragraph._meta.get_field('name').max_length\n )\n weight = serializers.IntegerField()\n", "issue": "Use Generic Relation everywhere where generic fks are used\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom .models import Document, Paragraph\n\n\nclass ParagraphSerializer(serializers.Serializer):\n id = serializers.IntegerField(required=False)\n name = serializers.CharField(\n required=False,\n max_length=Paragraph._meta.get_field('name').max_length\n )\n weight = serializers.IntegerField()\n text = serializers.CharField()\n\n\nclass DocumentSerializer(serializers.ModelSerializer):\n paragraphs = ParagraphSerializer(many=True, partial=True)\n\n class Meta:\n model = Document\n exclude = ('creator',)\n\n def create(self, validated_data):\n paragraphs = validated_data.pop('paragraphs')\n user = self.context['request'].user\n document = Document.objects.create(creator=user, **validated_data)\n\n for paragraph in paragraphs:\n Paragraph.objects.create(document=document, **paragraph)\n\n return document\n\n def update(self, instance, validated_data):\n instance.name = validated_data['name']\n instance.save()\n paragraphs = validated_data.pop('paragraphs')\n\n paragraph_ids = [item['id'] for item in 
paragraphs if 'id' in item]\n instance.paragraphs.exclude(id__in=paragraph_ids).delete()\n\n for paragraph in paragraphs:\n paragraph['document'] = instance\n if 'id' in paragraph:\n instance.paragraphs.filter(id=paragraph['id'])\\\n .update(**paragraph)\n else:\n instance.paragraphs.create(**paragraph)\n\n return instance\n", "path": "euth/documents/serializers.py"}, {"content": "from ckeditor.fields import RichTextField\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.db import models\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom contrib.transforms import html_transforms\nfrom euth.comments import models as comment_models\nfrom euth.contrib import base_models\nfrom euth.modules import models as module_models\n\n\nclass Document(module_models.Item):\n name = models.CharField(max_length=120)\n\n def __str__(self):\n return \"{}_document_{}\".format(str(self.module), self.pk)\n\n def clean(self, *args, **kwargs):\n if not self.pk:\n try:\n Document.objects.get(module=self.module)\n raise ValidationError(\n _('Document for that module already exists'))\n except ObjectDoesNotExist:\n super().clean(*args, **kwargs)\n super().clean(*args, **kwargs)\n\n @cached_property\n def paragraphs_sorted(self):\n return self.paragraphs.all().order_by('weight')\n\n @cached_property\n def comments(self):\n contenttype = ContentType.objects.get_for_model(self)\n pk = self.id\n comments = comment_models.Comment.objects.all().filter(\n content_type=contenttype, object_pk=pk)\n return comments\n\n\nclass Paragraph(base_models.TimeStampedModel):\n name = models.CharField(max_length=120, blank=True)\n text = RichTextField()\n weight = models.PositiveIntegerField()\n document = models.ForeignKey(Document,\n on_delete=models.CASCADE,\n related_name='paragraphs')\n\n class Meta:\n ordering = ('weight',)\n\n def __str__(self):\n return \"{}_paragraph_{}\".format(str(self.document), self.weight)\n\n def save(self, *args, **kwargs):\n self.text = html_transforms.clean_html_field(\n self.text)\n super().save(*args, **kwargs)\n\n @cached_property\n def comments(self):\n contenttype = ContentType.objects.get_for_model(self)\n pk = self.id\n comments = comment_models.Comment.objects.all().filter(\n content_type=contenttype, object_pk=pk)\n return comments\n\n @cached_property\n def project(self):\n return self.document.project\n", "path": "euth/documents/models.py"}, {"content": "from django.utils.translation import ugettext_lazy as _\n\nfrom euth import phases\n\nfrom . 
import apps, models, views\n\n\nclass CreateDocumentPhase(phases.PhaseContent):\n \"\"\"\n Allows no interaction for participants, only\n creation for moderators.\n \"\"\"\n app = apps.DocumentConfig.label\n phase = 'create_document'\n view = views.DocumentCreateView\n weight = 30\n\n name = _('Create document phase')\n module_name = _('commenting text')\n description = _('Create text for the project.')\n\n features = {}\n\n\nphases.content.register(CreateDocumentPhase())\n\n\nclass CommentPhase(phases.PhaseContent):\n \"\"\"\n Allows only commenting of paragraphs.\n \"\"\"\n app = apps.DocumentConfig.label\n phase = 'comment'\n view = views.DocumentDetailView\n weight = 40\n\n name = _('Comment phase')\n module_name = _('commenting text')\n description = _('Collect comments for the text.')\n\n features = {\n 'comment': (models.Paragraph,),\n }\n\n\nphases.content.register(CommentPhase())\n", "path": "euth/documents/phases.py"}]}
| 1,899 | 603 |
gh_patches_debug_22070
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-2290
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inviting existing users gives an IntegrityError
See http://sentry.support.akvo-ops.org/rsr/live/group/797/.
</issue>
<code>
[start of akvo/rest/views/employment.py]
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3
4 See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 """
7
8 from django.contrib.auth.models import Group
9 from rest_framework.decorators import api_view, permission_classes
10 from rest_framework.exceptions import PermissionDenied
11 from rest_framework.permissions import IsAuthenticated
12 from rest_framework.response import Response
13 from akvo.rsr.models import Employment
14 from ..serializers import EmploymentSerializer
15 from ..viewsets import BaseRSRViewSet
16
17
18 class EmploymentViewSet(BaseRSRViewSet):
19
20 """Employment resource."""
21
22 queryset = Employment.objects.select_related('organisation')
23 serializer_class = EmploymentSerializer
24
25
26 @api_view(['POST'])
27 @permission_classes((IsAuthenticated, ))
28 def approve_employment(request, pk=None):
29 employment = Employment.objects.get(pk=pk)
30 user = request.user
31
32 if not user.has_perm('rsr.change_employment', employment):
33 raise PermissionDenied
34
35 employment.approve(user)
36
37 return Response({'status': 'employment approved'})
38
39
40 @api_view(['POST'])
41 @permission_classes((IsAuthenticated, ))
42 def set_group(request, pk=None, group_id=None):
43 employment = Employment.objects.get(pk=pk)
44 group = Group.objects.get(pk=group_id)
45 user = request.user
46
47 if not user.has_perm('rsr.change_employment', employment):
48 raise PermissionDenied
49
50 employment.group = group
51 employment.save()
52
53 return Response({'status': 'group set'})
54
[end of akvo/rest/views/employment.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rest/views/employment.py b/akvo/rest/views/employment.py
--- a/akvo/rest/views/employment.py
+++ b/akvo/rest/views/employment.py
@@ -6,10 +6,12 @@
"""
from django.contrib.auth.models import Group
+from django.db import IntegrityError
from rest_framework.decorators import api_view, permission_classes
from rest_framework.exceptions import PermissionDenied
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
+from rest_framework import status
from akvo.rsr.models import Employment
from ..serializers import EmploymentSerializer
from ..viewsets import BaseRSRViewSet
@@ -48,6 +50,10 @@
raise PermissionDenied
employment.group = group
- employment.save()
+ try:
+ employment.save()
+ except IntegrityError:
+ return Response({'status': 'group not set', 'error': 'Employment already exists.'},
+ status=status.HTTP_400_BAD_REQUEST)
return Response({'status': 'group set'})
|
{"golden_diff": "diff --git a/akvo/rest/views/employment.py b/akvo/rest/views/employment.py\n--- a/akvo/rest/views/employment.py\n+++ b/akvo/rest/views/employment.py\n@@ -6,10 +6,12 @@\n \"\"\"\n \n from django.contrib.auth.models import Group\n+from django.db import IntegrityError\n from rest_framework.decorators import api_view, permission_classes\n from rest_framework.exceptions import PermissionDenied\n from rest_framework.permissions import IsAuthenticated\n from rest_framework.response import Response\n+from rest_framework import status\n from akvo.rsr.models import Employment\n from ..serializers import EmploymentSerializer\n from ..viewsets import BaseRSRViewSet\n@@ -48,6 +50,10 @@\n raise PermissionDenied\n \n employment.group = group\n- employment.save()\n+ try:\n+ employment.save()\n+ except IntegrityError:\n+ return Response({'status': 'group not set', 'error': 'Employment already exists.'},\n+ status=status.HTTP_400_BAD_REQUEST)\n \n return Response({'status': 'group set'})\n", "issue": "Invite existing users gives an IntegrityError\nSee http://sentry.support.akvo-ops.org/rsr/live/group/797/.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django.contrib.auth.models import Group\nfrom rest_framework.decorators import api_view, permission_classes\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom akvo.rsr.models import Employment\nfrom ..serializers import EmploymentSerializer\nfrom ..viewsets import BaseRSRViewSet\n\n\nclass EmploymentViewSet(BaseRSRViewSet):\n\n \"\"\"Employment resource.\"\"\"\n\n queryset = Employment.objects.select_related('organisation')\n serializer_class = EmploymentSerializer\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef approve_employment(request, pk=None):\n employment = Employment.objects.get(pk=pk)\n user = request.user\n\n if not user.has_perm('rsr.change_employment', employment):\n raise PermissionDenied\n\n employment.approve(user)\n\n return Response({'status': 'employment approved'})\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef set_group(request, pk=None, group_id=None):\n employment = Employment.objects.get(pk=pk)\n group = Group.objects.get(pk=group_id)\n user = request.user\n\n if not user.has_perm('rsr.change_employment', employment):\n raise PermissionDenied\n\n employment.group = group\n employment.save()\n\n return Response({'status': 'group set'})\n", "path": "akvo/rest/views/employment.py"}]}
| 1,015 | 231 |
gh_patches_debug_26325
|
rasdani/github-patches
|
git_diff
|
getredash__redash-6652
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
For a self-hosted instance, the Google Sheets connection test failed but I can't find any logs; how do I troubleshoot such cases?
When the connection test times out, I can't find any logs in the worker/scheduler/server docker logs.
Even after making sure the log level is debug, I can't find any related logs. How do I troubleshoot this?
</issue>
<code>
[start of redash/query_runner/google_spreadsheets.py]
1 import logging
2 import re
3 from base64 import b64decode
4
5 from dateutil import parser
6 from requests import Session
7 from xlsxwriter.utility import xl_col_to_name
8
9 from redash.query_runner import (
10 TYPE_BOOLEAN,
11 TYPE_DATETIME,
12 TYPE_FLOAT,
13 TYPE_INTEGER,
14 TYPE_STRING,
15 BaseQueryRunner,
16 guess_type,
17 register,
18 )
19 from redash.utils import json_dumps, json_loads
20
21 logger = logging.getLogger(__name__)
22
23 try:
24 import google.auth
25 import gspread
26 from google.oauth2.service_account import Credentials
27 from gspread.exceptions import APIError
28 from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound
29
30 enabled = True
31 except ImportError:
32 enabled = False
33
34
35 def _load_key(filename):
36 with open(filename, "rb") as f:
37 return json_loads(f.read())
38
39
40 def _get_columns_and_column_names(row):
41 column_names = []
42 columns = []
43 duplicate_counter = 1
44
45 for i, column_name in enumerate(row):
46 if not column_name:
47 column_name = "column_{}".format(xl_col_to_name(i))
48
49 if column_name in column_names:
50 column_name = "{}{}".format(column_name, duplicate_counter)
51 duplicate_counter += 1
52
53 column_names.append(column_name)
54 columns.append({"name": column_name, "friendly_name": column_name, "type": TYPE_STRING})
55
56 return columns, column_names
57
58
59 def _value_eval_list(row_values, col_types):
60 value_list = []
61 raw_values = zip(col_types, row_values)
62 for typ, rval in raw_values:
63 try:
64 if rval is None or rval == "":
65 val = None
66 elif typ == TYPE_BOOLEAN:
67 val = True if str(rval).lower() == "true" else False
68 elif typ == TYPE_DATETIME:
69 val = parser.parse(rval)
70 elif typ == TYPE_FLOAT:
71 val = float(rval)
72 elif typ == TYPE_INTEGER:
73 val = int(rval)
74 else:
75 # for TYPE_STRING and default
76 val = str(rval)
77 value_list.append(val)
78 except (ValueError, OverflowError):
79 value_list.append(rval)
80 return value_list
81
82
83 HEADER_INDEX = 0
84
85
86 class WorksheetNotFoundError(Exception):
87 def __init__(self, worksheet_num, worksheet_count):
88 message = "Worksheet number {} not found. Spreadsheet has {} worksheets. Note that the worksheet count is zero based.".format(
89 worksheet_num, worksheet_count
90 )
91 super(WorksheetNotFoundError, self).__init__(message)
92
93
94 class WorksheetNotFoundByTitleError(Exception):
95 def __init__(self, worksheet_title):
96 message = "Worksheet title '{}' not found.".format(worksheet_title)
97 super(WorksheetNotFoundByTitleError, self).__init__(message)
98
99
100 def parse_query(query):
101 values = query.split("|")
102 key = values[0] # key of the spreadsheet
103 worksheet_num_or_title = 0 # A default value for when a number of inputs is invalid
104 if len(values) == 2:
105 s = values[1].strip()
106 if len(s) > 0:
107 if re.match(r"^\"(.*?)\"$", s):
108 # A string quoted by " means a title of worksheet
109 worksheet_num_or_title = s[1:-1]
110 else:
111 # if spreadsheet contains more than one worksheet - this is the number of it
112 worksheet_num_or_title = int(s)
113
114 return key, worksheet_num_or_title
115
116
117 def parse_worksheet(worksheet):
118 if not worksheet:
119 return {"columns": [], "rows": []}
120
121 columns, column_names = _get_columns_and_column_names(worksheet[HEADER_INDEX])
122
123 if len(worksheet) > 1:
124 for j, value in enumerate(worksheet[HEADER_INDEX + 1]):
125 columns[j]["type"] = guess_type(value)
126
127 column_types = [c["type"] for c in columns]
128 rows = [dict(zip(column_names, _value_eval_list(row, column_types))) for row in worksheet[HEADER_INDEX + 1 :]]
129 data = {"columns": columns, "rows": rows}
130
131 return data
132
133
134 def parse_spreadsheet(spreadsheet, worksheet_num_or_title):
135 worksheet = None
136 if isinstance(worksheet_num_or_title, int):
137 worksheet = spreadsheet.get_worksheet_by_index(worksheet_num_or_title)
138 if worksheet is None:
139 worksheet_count = len(spreadsheet.worksheets())
140 raise WorksheetNotFoundError(worksheet_num_or_title, worksheet_count)
141 elif isinstance(worksheet_num_or_title, str):
142 worksheet = spreadsheet.get_worksheet_by_title(worksheet_num_or_title)
143 if worksheet is None:
144 raise WorksheetNotFoundByTitleError(worksheet_num_or_title)
145
146 worksheet_values = worksheet.get_all_values()
147
148 return parse_worksheet(worksheet_values)
149
150
151 def is_url_key(key):
152 return key.startswith("https://")
153
154
155 def parse_api_error(error):
156 error_data = error.response.json()
157
158 if "error" in error_data and "message" in error_data["error"]:
159 message = error_data["error"]["message"]
160 else:
161 message = str(error)
162
163 return message
164
165
166 class SpreadsheetWrapper:
167 def __init__(self, spreadsheet):
168 self.spreadsheet = spreadsheet
169
170 def worksheets(self):
171 return self.spreadsheet.worksheets()
172
173 def get_worksheet_by_index(self, index):
174 return self.spreadsheet.get_worksheet(index)
175
176 def get_worksheet_by_title(self, title):
177 try:
178 return self.spreadsheet.worksheet(title)
179 except GSWorksheetNotFound:
180 return None
181
182
183 class TimeoutSession(Session):
184 def request(self, *args, **kwargs):
185 kwargs.setdefault("timeout", 300)
186 return super(TimeoutSession, self).request(*args, **kwargs)
187
188
189 class GoogleSpreadsheet(BaseQueryRunner):
190 should_annotate_query = False
191
192 def __init__(self, configuration):
193 super(GoogleSpreadsheet, self).__init__(configuration)
194 self.syntax = "custom"
195
196 @classmethod
197 def name(cls):
198 return "Google Sheets"
199
200 @classmethod
201 def type(cls):
202 return "google_spreadsheets"
203
204 @classmethod
205 def enabled(cls):
206 return enabled
207
208 @classmethod
209 def configuration_schema(cls):
210 return {
211 "type": "object",
212 "properties": {"jsonKeyFile": {"type": "string", "title": "JSON Key File (ADC is used if omitted)"}},
213 "required": [],
214 "secret": ["jsonKeyFile"],
215 }
216
217 def _get_spreadsheet_service(self):
218 scopes = ["https://spreadsheets.google.com/feeds"]
219
220 try:
221 key = json_loads(b64decode(self.configuration["jsonKeyFile"]))
222 creds = Credentials.from_service_account_info(key, scopes=scopes)
223 except KeyError:
224 creds = google.auth.default(scopes=scopes)[0]
225
226 timeout_session = Session()
227 timeout_session.requests_session = TimeoutSession()
228 spreadsheetservice = gspread.Client(auth=creds, session=timeout_session)
229 spreadsheetservice.login()
230 return spreadsheetservice
231
232 def test_connection(self):
233 service = self._get_spreadsheet_service()
234 test_spreadsheet_key = "1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis"
235 try:
236 service.open_by_key(test_spreadsheet_key).worksheets()
237 except APIError as e:
238 message = parse_api_error(e)
239 raise Exception(message)
240
241 def run_query(self, query, user):
242 logger.debug("Spreadsheet is about to execute query: %s", query)
243 key, worksheet_num_or_title = parse_query(query)
244
245 try:
246 spreadsheet_service = self._get_spreadsheet_service()
247
248 if is_url_key(key):
249 spreadsheet = spreadsheet_service.open_by_url(key)
250 else:
251 spreadsheet = spreadsheet_service.open_by_key(key)
252
253 data = parse_spreadsheet(SpreadsheetWrapper(spreadsheet), worksheet_num_or_title)
254
255 return json_dumps(data), None
256 except gspread.SpreadsheetNotFound:
257 return (
258 None,
259 "Spreadsheet ({}) not found. Make sure you used correct id.".format(key),
260 )
261 except APIError as e:
262 return None, parse_api_error(e)
263
264
265 register(GoogleSpreadsheet)
266
[end of redash/query_runner/google_spreadsheets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/redash/query_runner/google_spreadsheets.py b/redash/query_runner/google_spreadsheets.py
--- a/redash/query_runner/google_spreadsheets.py
+++ b/redash/query_runner/google_spreadsheets.py
@@ -23,6 +23,7 @@
try:
import google.auth
import gspread
+ from google.auth.exceptions import GoogleAuthError
from google.oauth2.service_account import Credentials
from gspread.exceptions import APIError
from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound
@@ -230,13 +231,17 @@
return spreadsheetservice
def test_connection(self):
- service = self._get_spreadsheet_service()
test_spreadsheet_key = "1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis"
try:
+ service = self._get_spreadsheet_service()
service.open_by_key(test_spreadsheet_key).worksheets()
except APIError as e:
+ logger.exception(e)
message = parse_api_error(e)
raise Exception(message)
+ except GoogleAuthError as e:
+ logger.exception(e)
+ raise Exception(str(e))
def run_query(self, query, user):
logger.debug("Spreadsheet is about to execute query: %s", query)
|
{"golden_diff": "diff --git a/redash/query_runner/google_spreadsheets.py b/redash/query_runner/google_spreadsheets.py\n--- a/redash/query_runner/google_spreadsheets.py\n+++ b/redash/query_runner/google_spreadsheets.py\n@@ -23,6 +23,7 @@\n try:\n import google.auth\n import gspread\n+ from google.auth.exceptions import GoogleAuthError\n from google.oauth2.service_account import Credentials\n from gspread.exceptions import APIError\n from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound\n@@ -230,13 +231,17 @@\n return spreadsheetservice\n \n def test_connection(self):\n- service = self._get_spreadsheet_service()\n test_spreadsheet_key = \"1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis\"\n try:\n+ service = self._get_spreadsheet_service()\n service.open_by_key(test_spreadsheet_key).worksheets()\n except APIError as e:\n+ logger.exception(e)\n message = parse_api_error(e)\n raise Exception(message)\n+ except GoogleAuthError as e:\n+ logger.exception(e)\n+ raise Exception(str(e))\n \n def run_query(self, query, user):\n logger.debug(\"Spreadsheet is about to execute query: %s\", query)\n", "issue": "For self-hosted instance, Google sheet connection test failed, but can't find any logs, how to trouble shooting such cases\n\r\n\r\n\r\n\r\nWhen test connections timeout, can't find any logs on worker/scheduler/server docker logs.\r\neven make sure the log level is debug, can't find any related logs. how to trouble shooting this\n", "before_files": [{"content": "import logging\nimport re\nfrom base64 import b64decode\n\nfrom dateutil import parser\nfrom requests import Session\nfrom xlsxwriter.utility import xl_col_to_name\n\nfrom redash.query_runner import (\n TYPE_BOOLEAN,\n TYPE_DATETIME,\n TYPE_FLOAT,\n TYPE_INTEGER,\n TYPE_STRING,\n BaseQueryRunner,\n guess_type,\n register,\n)\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\ntry:\n import google.auth\n import gspread\n from google.oauth2.service_account import Credentials\n from gspread.exceptions import APIError\n from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound\n\n enabled = True\nexcept ImportError:\n enabled = False\n\n\ndef _load_key(filename):\n with open(filename, \"rb\") as f:\n return json_loads(f.read())\n\n\ndef _get_columns_and_column_names(row):\n column_names = []\n columns = []\n duplicate_counter = 1\n\n for i, column_name in enumerate(row):\n if not column_name:\n column_name = \"column_{}\".format(xl_col_to_name(i))\n\n if column_name in column_names:\n column_name = \"{}{}\".format(column_name, duplicate_counter)\n duplicate_counter += 1\n\n column_names.append(column_name)\n columns.append({\"name\": column_name, \"friendly_name\": column_name, \"type\": TYPE_STRING})\n\n return columns, column_names\n\n\ndef _value_eval_list(row_values, col_types):\n value_list = []\n raw_values = zip(col_types, row_values)\n for typ, rval in raw_values:\n try:\n if rval is None or rval == \"\":\n val = None\n elif typ == TYPE_BOOLEAN:\n val = True if str(rval).lower() == \"true\" else False\n elif typ == TYPE_DATETIME:\n val = parser.parse(rval)\n elif typ == TYPE_FLOAT:\n val = float(rval)\n elif typ == TYPE_INTEGER:\n val = int(rval)\n else:\n # for TYPE_STRING and default\n val = str(rval)\n value_list.append(val)\n except (ValueError, OverflowError):\n value_list.append(rval)\n return value_list\n\n\nHEADER_INDEX = 0\n\n\nclass WorksheetNotFoundError(Exception):\n def __init__(self, worksheet_num, worksheet_count):\n message = \"Worksheet number {} not 
found. Spreadsheet has {} worksheets. Note that the worksheet count is zero based.\".format(\n worksheet_num, worksheet_count\n )\n super(WorksheetNotFoundError, self).__init__(message)\n\n\nclass WorksheetNotFoundByTitleError(Exception):\n def __init__(self, worksheet_title):\n message = \"Worksheet title '{}' not found.\".format(worksheet_title)\n super(WorksheetNotFoundByTitleError, self).__init__(message)\n\n\ndef parse_query(query):\n values = query.split(\"|\")\n key = values[0] # key of the spreadsheet\n worksheet_num_or_title = 0 # A default value for when a number of inputs is invalid\n if len(values) == 2:\n s = values[1].strip()\n if len(s) > 0:\n if re.match(r\"^\\\"(.*?)\\\"$\", s):\n # A string quoted by \" means a title of worksheet\n worksheet_num_or_title = s[1:-1]\n else:\n # if spreadsheet contains more than one worksheet - this is the number of it\n worksheet_num_or_title = int(s)\n\n return key, worksheet_num_or_title\n\n\ndef parse_worksheet(worksheet):\n if not worksheet:\n return {\"columns\": [], \"rows\": []}\n\n columns, column_names = _get_columns_and_column_names(worksheet[HEADER_INDEX])\n\n if len(worksheet) > 1:\n for j, value in enumerate(worksheet[HEADER_INDEX + 1]):\n columns[j][\"type\"] = guess_type(value)\n\n column_types = [c[\"type\"] for c in columns]\n rows = [dict(zip(column_names, _value_eval_list(row, column_types))) for row in worksheet[HEADER_INDEX + 1 :]]\n data = {\"columns\": columns, \"rows\": rows}\n\n return data\n\n\ndef parse_spreadsheet(spreadsheet, worksheet_num_or_title):\n worksheet = None\n if isinstance(worksheet_num_or_title, int):\n worksheet = spreadsheet.get_worksheet_by_index(worksheet_num_or_title)\n if worksheet is None:\n worksheet_count = len(spreadsheet.worksheets())\n raise WorksheetNotFoundError(worksheet_num_or_title, worksheet_count)\n elif isinstance(worksheet_num_or_title, str):\n worksheet = spreadsheet.get_worksheet_by_title(worksheet_num_or_title)\n if worksheet is None:\n raise WorksheetNotFoundByTitleError(worksheet_num_or_title)\n\n worksheet_values = worksheet.get_all_values()\n\n return parse_worksheet(worksheet_values)\n\n\ndef is_url_key(key):\n return key.startswith(\"https://\")\n\n\ndef parse_api_error(error):\n error_data = error.response.json()\n\n if \"error\" in error_data and \"message\" in error_data[\"error\"]:\n message = error_data[\"error\"][\"message\"]\n else:\n message = str(error)\n\n return message\n\n\nclass SpreadsheetWrapper:\n def __init__(self, spreadsheet):\n self.spreadsheet = spreadsheet\n\n def worksheets(self):\n return self.spreadsheet.worksheets()\n\n def get_worksheet_by_index(self, index):\n return self.spreadsheet.get_worksheet(index)\n\n def get_worksheet_by_title(self, title):\n try:\n return self.spreadsheet.worksheet(title)\n except GSWorksheetNotFound:\n return None\n\n\nclass TimeoutSession(Session):\n def request(self, *args, **kwargs):\n kwargs.setdefault(\"timeout\", 300)\n return super(TimeoutSession, self).request(*args, **kwargs)\n\n\nclass GoogleSpreadsheet(BaseQueryRunner):\n should_annotate_query = False\n\n def __init__(self, configuration):\n super(GoogleSpreadsheet, self).__init__(configuration)\n self.syntax = \"custom\"\n\n @classmethod\n def name(cls):\n return \"Google Sheets\"\n\n @classmethod\n def type(cls):\n return \"google_spreadsheets\"\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def configuration_schema(cls):\n return {\n \"type\": \"object\",\n \"properties\": {\"jsonKeyFile\": {\"type\": \"string\", \"title\": 
\"JSON Key File (ADC is used if omitted)\"}},\n \"required\": [],\n \"secret\": [\"jsonKeyFile\"],\n }\n\n def _get_spreadsheet_service(self):\n scopes = [\"https://spreadsheets.google.com/feeds\"]\n\n try:\n key = json_loads(b64decode(self.configuration[\"jsonKeyFile\"]))\n creds = Credentials.from_service_account_info(key, scopes=scopes)\n except KeyError:\n creds = google.auth.default(scopes=scopes)[0]\n\n timeout_session = Session()\n timeout_session.requests_session = TimeoutSession()\n spreadsheetservice = gspread.Client(auth=creds, session=timeout_session)\n spreadsheetservice.login()\n return spreadsheetservice\n\n def test_connection(self):\n service = self._get_spreadsheet_service()\n test_spreadsheet_key = \"1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis\"\n try:\n service.open_by_key(test_spreadsheet_key).worksheets()\n except APIError as e:\n message = parse_api_error(e)\n raise Exception(message)\n\n def run_query(self, query, user):\n logger.debug(\"Spreadsheet is about to execute query: %s\", query)\n key, worksheet_num_or_title = parse_query(query)\n\n try:\n spreadsheet_service = self._get_spreadsheet_service()\n\n if is_url_key(key):\n spreadsheet = spreadsheet_service.open_by_url(key)\n else:\n spreadsheet = spreadsheet_service.open_by_key(key)\n\n data = parse_spreadsheet(SpreadsheetWrapper(spreadsheet), worksheet_num_or_title)\n\n return json_dumps(data), None\n except gspread.SpreadsheetNotFound:\n return (\n None,\n \"Spreadsheet ({}) not found. Make sure you used correct id.\".format(key),\n )\n except APIError as e:\n return None, parse_api_error(e)\n\n\nregister(GoogleSpreadsheet)\n", "path": "redash/query_runner/google_spreadsheets.py"}]}
| 3,241 | 304 |
gh_patches_debug_22466
|
rasdani/github-patches
|
git_diff
|
encode__starlette-1262
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test_custom_middleware[trio] fails on 1-core-VM
### Checklist
<!-- Please make sure you check all these items before submitting your bug report. -->
- [X] The bug is reproducible against the latest release and/or `master`.
- [ ] There are no similar issues or pull requests to fix it yet.
### Describe the bug
While working on [reproducible builds](https://reproducible-builds.org/) for [openSUSE](https://en.opensuse.org/openSUSE:Reproducible_Builds), I found that our `python-starlette-0.16.0` package failed 1 test when running in a 1-core-VM (usually this happens due to differences in scheduling/timing)
### To reproduce
maybe run tests as `taskset 1 pytest`
or on Debian or openSUSE run
```
osc checkout openSUSE:Factory/python-starlette && cd $_
osc build --vm-type=kvm --noservice --clean -j1 standard
```
<!-- Provide a *minimal* example with steps to reproduce the bug locally.
NOTE: try to keep any external dependencies *at an absolute minimum*
(middleware, servers, proxies, certificates...).
In other words, remove anything that doesn't make the bug go away.
-->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
tests should pass
### Actual behavior
`test_custom_middleware[trio]` fails
<!-- A clear and concise description of what actually happens. -->
### Debugging material
```
=================================== FAILURES ===================================
_________________________ test_custom_middleware[trio] _________________________
test_client_factory = functools.partial(<class 'starlette.testclient.TestClient'>, backend='trio', backend_options={})
def test_custom_middleware(test_client_factory):
client = test_client_factory(app)
response = client.get("/")
assert response.headers["Custom-Header"] == "Example"
with pytest.raises(Exception):
> response = client.get("/exc")
tests/middleware/test_base.py:56:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.9/site-packages/requests/sessions.py:555: in get
return self.request('GET', url, **kwargs)
starlette/testclient.py:468: in request
return super().request(
/usr/lib/python3.9/site-packages/requests/sessions.py:542: in request
resp = self.send(prep, **send_kwargs)
/usr/lib/python3.9/site-packages/requests/sessions.py:655: in send
r = adapter.send(request, **kwargs)
starlette/testclient.py:266: in send
raise exc
starlette/testclient.py:263: in send
portal.call(self.app, scope, receive, send)
/usr/lib/python3.9/site-packages/anyio/from_thread.py:229: in call
return self.start_task_soon(func, *args).result()
/usr/lib64/python3.9/concurrent/futures/_base.py:445: in result
return self.__get_result()
/usr/lib64/python3.9/concurrent/futures/_base.py:390: in __get_result
raise self._exception
/usr/lib/python3.9/site-packages/anyio/from_thread.py:176: in _call_func
retval = await retval
starlette/applications.py:112: in __call__
await self.middleware_stack(scope, receive, send)
starlette/middleware/errors.py:159: in __call__
await self.app(scope, receive, _send)
starlette/middleware/base.py:57: in __call__
task_group.cancel_scope.cancel()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <anyio._backends._trio.TaskGroup object at 0x7f1d7c50a5b0>
exc_type = <class 'RuntimeError'>
exc_val = RuntimeError('No response returned.')
exc_tb = <traceback object at 0x7f1d7c5294c0>
async def __aexit__(self, exc_type: Optional[Type[BaseException]],
exc_val: Optional[BaseException],
exc_tb: Optional[TracebackType]) -> Optional[bool]:
try:
return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb)
except trio.MultiError as exc:
> raise ExceptionGroup(exc.exceptions) from None
E anyio._backends._trio.ExceptionGroup: 2 exceptions were raised in the task group:
E ----------------------------
E Traceback (most recent call last):
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py", line 30, in coro
E await self.app(scope, request.receive, send_stream.send)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/exceptions.py", line 82, in __call__
E raise exc
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/exceptions.py", line 71, in __call__
E await self.app(scope, receive, sender)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py", line 656, in __call__
E await route.handle(scope, receive, send)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py", line 259, in handle
E await self.app(scope, receive, send)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/routing.py", line 63, in app
E response = await run_in_threadpool(func, request)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/concurrency.py", line 39, in run_in_threadpool
E return await anyio.to_thread.run_sync(func, *args)
E File "/usr/lib/python3.9/site-packages/anyio/to_thread.py", line 28, in run_sync
E return await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable,
E File "/usr/lib/python3.9/site-packages/anyio/_backends/_trio.py", line 170, in run_sync_in_worker_thread
E return await run_sync(wrapper, cancellable=cancellable, limiter=limiter)
E File "/usr/lib/python3.9/site-packages/trio/_threads.py", line 205, in to_thread_run_sync
E return await trio.lowlevel.wait_task_rescheduled(abort)
E File "/usr/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
E return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
E File "/usr/lib/python3.9/site-packages/outcome/_sync.py", line 111, in unwrap
E raise captured_error
E File "/usr/lib/python3.9/site-packages/trio/_threads.py", line 155, in do_release_then_return_result
E return result.unwrap()
E File "/usr/lib/python3.9/site-packages/outcome/_sync.py", line 111, in unwrap
E raise captured_error
E File "/usr/lib/python3.9/site-packages/trio/_threads.py", line 168, in worker_fn
E ret = sync_fn(*args)
E File "/usr/lib/python3.9/site-packages/anyio/_backends/_trio.py", line 168, in wrapper
E return func(*args)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/tests/middleware/test_base.py", line 28, in exc
E raise Exception()
E Exception
E ----------------------------
E Traceback (most recent call last):
E File "/usr/lib/python3.9/site-packages/anyio/streams/memory.py", line 78, in receive
E return self.receive_nowait()
E File "/usr/lib/python3.9/site-packages/anyio/streams/memory.py", line 73, in receive_nowait
E raise WouldBlock
E anyio.WouldBlock
E
E During handling of the above exception, another exception occurred:
E
E Traceback (most recent call last):
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py", line 35, in call_next
E message = await recv_stream.receive()
E File "/usr/lib/python3.9/site-packages/anyio/streams/memory.py", line 98, in receive
E raise EndOfStream
E anyio.EndOfStream
E
E During handling of the above exception, another exception occurred:
E
E Traceback (most recent call last):
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py", line 55, in __call__
E response = await self.dispatch_func(request, call_next)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/tests/middleware/test_base.py", line 12, in dispatch
E response = await call_next(request)
E File "/home/abuild/rpmbuild/BUILD/starlette-0.16.0/starlette/middleware/base.py", line 37, in call_next
E raise RuntimeError("No response returned.")
E RuntimeError: No response returned.
/usr/lib/python3.9/site-packages/anyio/_backends/_trio.py:141: ExceptionGroup
=========================== short test summary info ============================
SKIPPED [3] tests/conftest.py:14: Trio not supported (yet!)
=================== 1 failed, 464 passed, 3 skipped in 4.95s ===================
```
<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.
NOTE:
- Please list tracebacks in full (don't truncate them).
- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).
-->
### Environment
- OS: Linux = openSUSE Tumbleweed 20210726
- Python version: 3.9
- Starlette version: 0.16.0
### Additional context
This bug was found while working on [reproducible builds for openSUSE](https://en.opensuse.org/openSUSE:Reproducible_Builds).
</issue>
<code>
[start of starlette/middleware/base.py]
1 import typing
2
3 import anyio
4
5 from starlette.requests import Request
6 from starlette.responses import Response, StreamingResponse
7 from starlette.types import ASGIApp, Receive, Scope, Send
8
9 RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
10 DispatchFunction = typing.Callable[
11 [Request, RequestResponseEndpoint], typing.Awaitable[Response]
12 ]
13
14
15 class BaseHTTPMiddleware:
16 def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:
17 self.app = app
18 self.dispatch_func = self.dispatch if dispatch is None else dispatch
19
20 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
21 if scope["type"] != "http":
22 await self.app(scope, receive, send)
23 return
24
25 async def call_next(request: Request) -> Response:
26 send_stream, recv_stream = anyio.create_memory_object_stream()
27
28 async def coro() -> None:
29 async with send_stream:
30 await self.app(scope, request.receive, send_stream.send)
31
32 task_group.start_soon(coro)
33
34 try:
35 message = await recv_stream.receive()
36 except anyio.EndOfStream:
37 raise RuntimeError("No response returned.")
38
39 assert message["type"] == "http.response.start"
40
41 async def body_stream() -> typing.AsyncGenerator[bytes, None]:
42 async with recv_stream:
43 async for message in recv_stream:
44 assert message["type"] == "http.response.body"
45 yield message.get("body", b"")
46
47 response = StreamingResponse(
48 status_code=message["status"], content=body_stream()
49 )
50 response.raw_headers = message["headers"]
51 return response
52
53 async with anyio.create_task_group() as task_group:
54 request = Request(scope, receive=receive)
55 response = await self.dispatch_func(request, call_next)
56 await response(scope, receive, send)
57 task_group.cancel_scope.cancel()
58
59 async def dispatch(
60 self, request: Request, call_next: RequestResponseEndpoint
61 ) -> Response:
62 raise NotImplementedError() # pragma: no cover
63
[end of starlette/middleware/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py
--- a/starlette/middleware/base.py
+++ b/starlette/middleware/base.py
@@ -23,17 +23,25 @@
return
async def call_next(request: Request) -> Response:
+ app_exc: typing.Optional[Exception] = None
send_stream, recv_stream = anyio.create_memory_object_stream()
async def coro() -> None:
+ nonlocal app_exc
+
async with send_stream:
- await self.app(scope, request.receive, send_stream.send)
+ try:
+ await self.app(scope, request.receive, send_stream.send)
+ except Exception as exc:
+ app_exc = exc
task_group.start_soon(coro)
try:
message = await recv_stream.receive()
except anyio.EndOfStream:
+ if app_exc is not None:
+ raise app_exc
raise RuntimeError("No response returned.")
assert message["type"] == "http.response.start"
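The diff above changes `call_next` to re-raise whatever exception the wrapped app produced instead of masking it as `RuntimeError("No response returned.")`. A minimal sketch of a regression test for that behaviour, assuming only the public Starlette APIs shown in this record (the middleware, endpoint, and test names are made up for illustration):

```python
# Sketch of a regression test for exception propagation through BaseHTTPMiddleware.
# Assumes starlette>=0.16 style APIs; the names below are illustrative, not from the repo.
import pytest
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.routing import Route
from starlette.testclient import TestClient


class PassThroughMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        return await call_next(request)


async def boom(request):
    # The original exception should surface as-is, not as RuntimeError.
    raise ValueError("original error")


app = Starlette(
    routes=[Route("/exc", boom)],
    middleware=[Middleware(PassThroughMiddleware)],
)


def test_original_exception_propagates():
    client = TestClient(app)
    with pytest.raises(ValueError):
        client.get("/exc")
```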
gh_patches_debug_3886 | rasdani/github-patches | git_diff | nilearn__nilearn-1169
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dropping scikit-learn dependency < 0.14.1
It makes it easier to lay the groundwork for the decoder object in nilearn, which currently requires a lot of version-specific backports. #1148
I don't have great justifications though. Let me know if I'm missing something important in nilearn that we need to take into account before dropping 0.13.
FYI: https://packages.debian.org/jessie/python-sklearn
Discussions are welcome.
</issue>
<code>
[start of nilearn/version.py]
1 # *- encoding: utf-8 -*-
2 """
3 nilearn version, required package versions, and utilities for checking
4 """
5 # Author: Loïc Estève, Ben Cipollini
6 # License: simplified BSD
7
8 # PEP0440 compatible formatted version, see:
9 # https://www.python.org/dev/peps/pep-0440/
10 #
11 # Generic release markers:
12 # X.Y
13 # X.Y.Z # For bugfix releases
14 #
15 # Admissible pre-release markers:
16 # X.YaN # Alpha release
17 # X.YbN # Beta release
18 # X.YrcN # Release Candidate
19 # X.Y # Final release
20 #
21 # Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
22 # 'X.Y.dev0' is the canonical version of 'X.Y.dev'
23 #
24 __version__ = '0.2.5'
25
26 _NILEARN_INSTALL_MSG = 'See %s for installation information.' % (
27 'http://nilearn.github.io/introduction.html#installation')
28
29 # This is a tuple to preserve order, so that dependencies are checked
30 # in some meaningful order (more => less 'core'). We avoid using
31 # collections.OrderedDict to preserve Python 2.6 compatibility.
32 REQUIRED_MODULE_METADATA = (
33 ('numpy', {
34 'min_version': '1.6.1',
35 'required_at_installation': True,
36 'install_info': _NILEARN_INSTALL_MSG}),
37 ('scipy', {
38 'min_version': '0.9.0',
39 'required_at_installation': True,
40 'install_info': _NILEARN_INSTALL_MSG}),
41 ('sklearn', {
42 'min_version': '0.13',
43 'required_at_installation': True,
44 'install_info': _NILEARN_INSTALL_MSG}),
45 ('nibabel', {
46 'min_version': '1.1.0',
47 'required_at_installation': False}))
48
49 OPTIONAL_MATPLOTLIB_MIN_VERSION = '1.1.1'
50
51
52 def _import_module_with_version_check(
53 module_name,
54 minimum_version,
55 install_info=None):
56 """Check that module is installed with a recent enough version
57 """
58 from distutils.version import LooseVersion
59
60 try:
61 module = __import__(module_name)
62 except ImportError as exc:
63 user_friendly_info = ('Module "{0}" could not be found. {1}').format(
64 module_name,
65 install_info or 'Please install it properly to use nilearn.')
66 exc.args += (user_friendly_info,)
67 raise
68
69 # Avoid choking on modules with no __version__ attribute
70 module_version = getattr(module, '__version__', '0.0.0')
71
72 version_too_old = (not LooseVersion(module_version) >=
73 LooseVersion(minimum_version))
74
75 if version_too_old:
76 message = (
77 'A {module_name} version of at least {minimum_version} '
78 'is required to use nilearn. {module_version} was found. '
79 'Please upgrade {module_name}').format(
80 module_name=module_name,
81 minimum_version=minimum_version,
82 module_version=module_version)
83
84 raise ImportError(message)
85
86 return module
87
88
89 def _check_module_dependencies(is_nilearn_installing=False):
90 """Throw an exception if nilearn dependencies are not installed.
91
92 Parameters
93 ----------
94 is_nilearn_installing: boolean
95 if True, only error on missing packages that cannot be auto-installed.
96 if False, error on any missing package.
97
98 Throws
99 -------
100 ImportError
101 """
102
103 for (module_name, module_metadata) in REQUIRED_MODULE_METADATA:
104 if not (is_nilearn_installing and
105 not module_metadata['required_at_installation']):
106 # Skip check only when installing and it's a module that
107 # will be auto-installed.
108 _import_module_with_version_check(
109 module_name=module_name,
110 minimum_version=module_metadata['min_version'],
111 install_info=module_metadata.get('install_info'))
112
[end of nilearn/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/nilearn/version.py b/nilearn/version.py
--- a/nilearn/version.py
+++ b/nilearn/version.py
@@ -39,7 +39,7 @@
'required_at_installation': True,
'install_info': _NILEARN_INSTALL_MSG}),
('sklearn', {
- 'min_version': '0.13',
+ 'min_version': '0.14.1',
'required_at_installation': True,
'install_info': _NILEARN_INSTALL_MSG}),
('nibabel', {
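The version gate in `_import_module_with_version_check` above relies on `distutils.version.LooseVersion` ordering. A quick, standalone sanity check of that comparison with the old and new minimums (illustrative only, mirroring the module's own import):

```python
# Illustrative check of the ordering used by the version gate above.
from distutils.version import LooseVersion

assert LooseVersion("0.14.1") > LooseVersion("0.13")            # new minimum is strictly higher
assert not (LooseVersion("0.13.1") >= LooseVersion("0.14.1"))   # older sklearn would now be rejected
```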
gh_patches_debug_19787 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-538
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pages endpoint: skip pages if parent is not visible
If a parent page in the page tree is in any state that makes it invisible in the API, we do not want its children to appear in the pages endpoint. This should allow for easily deactivating "full chapters".
</issue>
<code>
[start of src/api/v3/pages.py]
1 from django.http import JsonResponse
2
3 from cms.models import Region
4
5
6 def transform_page(page_translation):
7 if page_translation.page.parent:
8 parent = {
9 "id": page_translation.page.parent.id,
10 "url": page_translation.page.parent.get_translation(
11 page_translation.language.code
12 ).permalink,
13 "path": page_translation.page.parent.get_translation(
14 page_translation.language.code
15 ).slug,
16 }
17 else:
18 parent = None
19 return {
20 "id": page_translation.id,
21 "url": page_translation.permalink,
22 "path": page_translation.slug,
23 "title": page_translation.title,
24 "modified_gmt": page_translation.last_updated,
25 "excerpt": page_translation.text,
26 "content": page_translation.combined_text,
27 "parent": parent,
28 "order": page_translation.page.lft, # use left edge indicator of mptt model for order
29 "available_languages": page_translation.available_languages,
30 "thumbnail": None,
31 "hash": None,
32 }
33
34
35 # pylint: disable=unused-argument
36 def pages(request, region_slug, language_code):
37 region = Region.get_current_region(request)
38 result = []
39 for page in region.pages.all():
40 page_translation = page.get_public_translation(language_code)
41 if page_translation:
42 result.append(transform_page(page_translation))
43 return JsonResponse(
44 result, safe=False
45 ) # Turn off Safe-Mode to allow serializing arrays
46
[end of src/api/v3/pages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/src/api/v3/pages.py b/src/api/v3/pages.py
--- a/src/api/v3/pages.py
+++ b/src/api/v3/pages.py
@@ -36,10 +36,20 @@
def pages(request, region_slug, language_code):
region = Region.get_current_region(request)
result = []
- for page in region.pages.all():
+ for page in region.pages.filter(archived=False, parent=None): # get main level
page_translation = page.get_public_translation(language_code)
if page_translation:
result.append(transform_page(page_translation))
+ result = get_children(page, language_code, result)
return JsonResponse(
result, safe=False
) # Turn off Safe-Mode to allow serializing arrays
+
+
+def get_children(parent, language_code, result):
+ for page in parent.children.filter(archived=False):
+ page_translation = page.get_public_translation(language_code)
+ if page_translation:
+ result.append(transform_page(page_translation))
+ result = get_children(page, language_code, result)
+ return result
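The shape of the fix above is a depth-first walk that only descends into pages that are themselves visible, so an invisible parent prunes its whole subtree. A plain-Python analogue of that pruning, with no Django involved and purely illustrative field names:

```python
# Plain-Python analogue of the pruning recursion in the diff above (illustrative only).
def visible_pages(page, result=None):
    if result is None:
        result = []
    for child in page["children"]:
        if not child["archived"] and child["public"]:
            result.append(child["slug"])
            visible_pages(child, result)  # only recurse into children that are visible
    return result


tree = {
    "children": [
        {"slug": "chapter-1", "archived": False, "public": True, "children": [
            {"slug": "section-1-1", "archived": False, "public": True, "children": []},
        ]},
        {"slug": "chapter-2", "archived": False, "public": False, "children": [
            {"slug": "section-2-1", "archived": False, "public": True, "children": []},
        ]},
    ],
}

# chapter-2 is not public, so its visible child section-2-1 is never reached.
assert visible_pages(tree) == ["chapter-1", "section-1-1"]
```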
gh_patches_debug_45013 | rasdani/github-patches | git_diff | OpenMined__PySyft-5169
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix all darglint docs warnings
## Description
Currently there are 419 warnings:
```
$ flake8 src tests | wc -l
419
```
We can progressively improve this with multiple PRs.
If you are interested, please run the checker on the `src` and `tests` folders and try to fix the warnings until there are none.
Some will require adding docs that don't exist while others are just fixes.
This will likely require multiple PRs so if you are interested open a PR with a few fixes and we will go from there.
## Definition of Done
All docstring warnings are fixed and CI has the darglint checker turned on to fail the build.
</issue>
<code>
[start of src/syft/core/common/object.py]
1 # stdlib
2 from typing import Any
3 from typing import Optional
4
5 # third party
6 from google.protobuf.reflection import GeneratedProtocolMessageType
7
8 # syft relative
9 from ...proto.core.common.common_object_pb2 import ObjectWithID as ObjectWithID_PB
10 from ...util import validate_type
11 from ..common.serde.deserialize import _deserialize
12 from ..common.serde.serializable import Serializable
13 from .uid import UID
14
15
16 class ObjectWithID(Serializable):
17 """This object is the superclass for nearly all Syft objects. Subclassing
18 from this object will cause an object to be initialized with a unique id
19 using the process specified in the UID class.
20
21 .. note::
22 At the time of writing, the only class in Syft which doesn't have an ID
23 of some kind is the Client class because it's job is to point to another
24 object (which has an ID).
25
26 .. note::
27 Be aware of performance choices in this class because it is used so
28 heavily across the entire codebase. Assume every method is going to
29 be called thousands of times during the working day of an average
30 data scientist using syft (and millions of times in the context of a
31 machine learning job).
32
33 """
34
35 def __init__(self, id: Optional[UID] = None):
36 """This initializer only exists to set the id attribute, which is the
37 primary purpose of this class. It also sets the 'as_wrapper' flag
38 for the 'Serializable' superclass.
39
40 :param id: an override which can be used to set an ID for this object
41 manually. This is probably only used for deserialization.
42 :type id: UID
43
44 """
45
46 if id is None:
47 id = UID()
48
49 self._id: UID = id
50
51 # while this class is never used as a simple wrapper,
52 # it's possible that sub-classes of this class will be.
53 super().__init__()
54
55 @property
56 def id(self) -> UID:
57 """We reveal ObjectWithID.id as a property to discourage users and
58 developers of Syft from modifying .id attributes after an object
59 has been initialized.
60
61 :return: returns the unique id of the object
62 :rtype: UID
63 """
64 return self._id
65
66 def __eq__(self, other: Any) -> bool:
67 """Checks to see if two ObjectWithIDs are actually the same object.
68
69 This checks to see whether this ObjectWithIDs is equal to another by
70 comparing whether they have the same .id objects. These objects
71 come with their own __eq__ function which we assume to be correct.
72
73 :param other: this is the other ObjectWithIDs to be compared with
74 :type other: Any (note this must be Any or __eq__ fails on other types)
75 :return: returns True/False based on whether the objects are the same
76 :rtype: bool
77 """
78
79 try:
80 return self.id == other.id
81 except Exception:
82 return False
83
84 def __repr__(self) -> str:
85 """Returns a human-readable version of the ObjectWithID
86
87 Return a human-readable representation of the ObjectWithID with brackets
88 so that it can be easily spotted when nested inside of the human-
89 readable representations of other objects."""
90
91 no_dash = str(self.id.value).replace("-", "")
92 return f"<{type(self).__name__}: {no_dash}>"
93
94 def repr_short(self) -> str:
95 """Returns a SHORT human-readable version of SpecificLocation
96
97 Return a SHORT human-readable version of the ID which
98 makes it print nicer when embedded (often alongside other
99 UID objects) within other object __repr__ methods."""
100
101 return f"<{type(self).__name__}:{self.id.repr_short()}>"
102
103 def _object2proto(self) -> ObjectWithID_PB:
104 """Returns a protobuf serialization of self.
105
106 As a requirement of all objects which inherit from Serializable,
107 this method transforms the current object into the corresponding
108 Protobuf object so that it can be further serialized.
109
110 :return: returns a protobuf object
111 :rtype: ObjectWithID_PB
112
113 .. note::
114 This method is purely an internal method. Please use object.serialize() or one of
115 the other public serialization methods if you wish to serialize an
116 object.
117 """
118 return ObjectWithID_PB(id=self.id.serialize())
119
120 @staticmethod
121 def _proto2object(proto: ObjectWithID_PB) -> "ObjectWithID":
122 """Creates a ObjectWithID from a protobuf
123
124 As a requirement of all objects which inherit from Serializable,
125 this method transforms a protobuf object into an instance of this class.
126
127 :return: returns an instance of ObjectWithID
128 :rtype: ObjectWithID
129
130 .. note::
131 This method is purely an internal method. Please use syft.deserialize()
132 if you wish to deserialize an object.
133 """
134 _id = validate_type(_object=_deserialize(proto.id), _type=UID, optional=True)
135 return ObjectWithID(id=_id)
136
137 @staticmethod
138 def get_protobuf_schema() -> GeneratedProtocolMessageType:
139 """Return the type of protobuf object which stores a class of this type
140
141 As a part of serialization and deserialization, we need the ability to
142 lookup the protobuf object type directly from the object type. This
143 static method allows us to do this.
144
145 Importantly, this method is also used to create the reverse lookup ability within
146 the metaclass of Serializable. In the metaclass, it calls this method and then
147 it takes whatever type is returned from this method and adds an attribute to it
148 with the type of this class attached to it. See the MetaSerializable class for details.
149
150 :return: the type of protobuf object which corresponds to this class.
151 :rtype: GeneratedProtocolMessageType
152
153 """
154
155 return ObjectWithID_PB
156
[end of src/syft/core/common/object.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/src/syft/core/common/object.py b/src/syft/core/common/object.py
--- a/src/syft/core/common/object.py
+++ b/src/syft/core/common/object.py
@@ -37,9 +37,8 @@
primary purpose of this class. It also sets the 'as_wrapper' flag
for the 'Serializable' superclass.
- :param id: an override which can be used to set an ID for this object
- manually. This is probably only used for deserialization.
- :type id: UID
+ Args:
+ id: an override which can be used to set an ID for this object
"""
@@ -58,8 +57,8 @@
developers of Syft from modifying .id attributes after an object
has been initialized.
- :return: returns the unique id of the object
- :rtype: UID
+ Returns:
+ returns the unique id of the object
"""
return self._id
@@ -70,10 +69,11 @@
comparing whether they have the same .id objects. These objects
come with their own __eq__ function which we assume to be correct.
- :param other: this is the other ObjectWithIDs to be compared with
- :type other: Any (note this must be Any or __eq__ fails on other types)
- :return: returns True/False based on whether the objects are the same
- :rtype: bool
+ Args:
+ other: this is the other ObjectWithIDs to be compared with
+
+ Returns:
+ True/False based on whether the objects are the same
"""
try:
@@ -82,33 +82,39 @@
return False
def __repr__(self) -> str:
- """Returns a human-readable version of the ObjectWithID
-
+ """
Return a human-readable representation of the ObjectWithID with brackets
so that it can be easily spotted when nested inside of the human-
- readable representations of other objects."""
+ readable representations of other objects.
+
+ Returns:
+ a human-readable version of the ObjectWithID
+
+ """
no_dash = str(self.id.value).replace("-", "")
return f"<{type(self).__name__}: {no_dash}>"
def repr_short(self) -> str:
- """Returns a SHORT human-readable version of SpecificLocation
-
+ """
Return a SHORT human-readable version of the ID which
makes it print nicer when embedded (often alongside other
- UID objects) within other object __repr__ methods."""
+ UID objects) within other object __repr__ methods.
+
+ Returns:
+ a SHORT human-readable version of SpecificLocation
+ """
return f"<{type(self).__name__}:{self.id.repr_short()}>"
def _object2proto(self) -> ObjectWithID_PB:
- """Returns a protobuf serialization of self.
-
+ """
As a requirement of all objects which inherit from Serializable,
this method transforms the current object into the corresponding
Protobuf object so that it can be further serialized.
- :return: returns a protobuf object
- :rtype: ObjectWithID_PB
+ Returns:
+ a protobuf object that is the serialization of self.
.. note::
This method is purely an internal method. Please use object.serialize() or one of
@@ -124,8 +130,11 @@
As a requirement of all objects which inherit from Serializable,
this method transforms a protobuf object into an instance of this class.
- :return: returns an instance of ObjectWithID
- :rtype: ObjectWithID
+ Args:
+ proto: a protobuf object that we wish to convert to instance of this class
+
+ Returns:
+ an instance of ObjectWithID
.. note::
This method is purely an internal method. Please use syft.deserialize()
@@ -147,8 +156,8 @@
it takes whatever type is returned from this method and adds an attribute to it
with the type of this class attached to it. See the MetaSerializable class for details.
- :return: the type of protobuf object which corresponds to this class.
- :rtype: GeneratedProtocolMessageType
+ Returns:
+ the type of protobuf object which corresponds to this class.
"""
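As a compact illustration of the docstring convention the diff above moves toward (Google style `Args:`/`Returns:` sections, which darglint can check against the signature), here is a made-up function in that style; it is not from the PySyft codebase:

```python
def scale(value: float, factor: float = 2.0) -> float:
    """Multiply a value by a constant factor.

    Args:
        value: the number to scale.
        factor: the multiplier applied to ``value``.

    Returns:
        the scaled result.
    """
    return value * factor
```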
gh_patches_debug_15105 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-2355
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fix: elasticsearch `bulk` API produces lists
# Description
Fixes an issue seen in production where the elasticsearch bulk API would end up sending a list through this code path and not a dictionary.
For example:
```
File \"/code/platform_be/core/logic/ingest.py\", line 378, in update_source_doc_version_history
bulk(es_client, bulk_es_updates)
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\", line 521, in bulk
for ok, item in streaming_bulk(
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\", line 436, in streaming_bulk
for data, (ok, info) in zip(
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\", line 339, in _process_bulk_chunk
resp = client.bulk(*args, operations=bulk_actions, **kwargs) # type: ignore[arg-type]
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/utils.py\", line 414, in wrapped
return api(*args, **kwargs)
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/__init__.py\", line 704, in bulk
return self.perform_request( # type: ignore[return-value]
File \"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/_base.py\", line 285, in perform_request
meta, resp_body = self.transport.perform_request(
File \"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/__init__.py\", line 242, in wrapper
attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(
File \"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/utils.py\", line 58, in sanitize_body
flatten_body = _flatten_dict(body)
File \"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/utils.py\", line 31, in _flatten_dict
for k, v in d.items():
AttributeError: 'list' object has no attribute 'items'"
```
## Type of change
Please delete options that are not relevant.
- [x] Bug fix (non-breaking change which fixes an issue)
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [x] Unit tests
# Does This PR Require a Core Repo Change?
- [x] No.
# Checklist:
See [contributing.md](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/CONTRIBUTING.md) for styleguide, changelog guidelines, and more.
- [ ] Followed the style guidelines of this project
- [ ] Changelogs have been updated
- [ ] Unit tests have been added
- [ ] Documentation has been updated
</issue>
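The repository's actual patch is not shown in this excerpt. Purely to illustrate the failure mode described in the traceback, a flatten helper that tolerates both dict bodies and the list bodies produced by the elasticsearch `bulk` helpers could look roughly like the sketch below; the function names echo the traceback but the bodies are assumptions, not the project's code:

```python
# Sketch only: a body-flattening helper that accepts lists as well as dicts.
from typing import Any, Dict


def _flatten_value(value: Any, parent: str, out: Dict[str, Any]) -> None:
    if isinstance(value, dict):
        for key, child in value.items():
            _flatten_value(child, f"{parent}.{key}" if parent else str(key), out)
    elif isinstance(value, list):
        # bulk() sends a list of action/document dicts, so index into it instead
        # of assuming .items() exists.
        for index, child in enumerate(value):
            _flatten_value(child, f"{parent}.{index}" if parent else str(index), out)
    else:
        out[parent] = value


def flatten_body(body: Any) -> Dict[str, Any]:
    flat: Dict[str, Any] = {}
    _flatten_value(body, "", flat)
    return flat
```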
<code>
[start of instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This library allows tracing HTTP elasticsearch made by the
17 `elasticsearch <https://elasticsearch-py.readthedocs.io/en/master/>`_ library.
18
19 Usage
20 -----
21
22 .. code-block:: python
23
24 from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor
25 import elasticsearch
26
27
28 # instrument elasticsearch
29 ElasticsearchInstrumentor().instrument()
30
31 # Using elasticsearch as normal now will automatically generate spans
32 es = elasticsearch.Elasticsearch()
33 es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})
34 es.get(index='my-index', doc_type='my-type', id=1)
35
36 Elasticsearch instrumentation prefixes operation names with the string "Elasticsearch". This
37 can be changed to a different string by either setting the OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX
38 environment variable or by passing the prefix as an argument to the instrumentor. For example,
39
40
41 .. code-block:: python
42
43 ElasticsearchInstrumentor("my-custom-prefix").instrument()
44
45 The instrument() method accepts the following keyword args:
46 tracer_provider (TracerProvider) - an optional tracer provider
47 request_hook (Callable) - a function with extra user-defined logic to be performed before performing the request
48 this function signature is:
49 def request_hook(span: Span, method: str, url: str, kwargs)
50
51 response_hook (Callable) - a function with extra user-defined logic to be performed after performing the request
52 this function signature is:
53 def response_hook(span: Span, response: dict)
54
55 for example:
56
57 .. code: python
58
59 from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor
60 import elasticsearch
61
62 def request_hook(span, method, url, kwargs):
63 if span and span.is_recording():
64 span.set_attribute("custom_user_attribute_from_request_hook", "some-value")
65
66 def response_hook(span, response):
67 if span and span.is_recording():
68 span.set_attribute("custom_user_attribute_from_response_hook", "some-value")
69
70 # instrument elasticsearch with request and response hooks
71 ElasticsearchInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook)
72
73 # Using elasticsearch as normal now will automatically generate spans,
74 # including user custom attributes added from the hooks
75 es = elasticsearch.Elasticsearch()
76 es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})
77 es.get(index='my-index', doc_type='my-type', id=1)
78
79 API
80 ---
81 """
82
83 import re
84 from logging import getLogger
85 from os import environ
86 from typing import Collection
87
88 import elasticsearch
89 import elasticsearch.exceptions
90 from wrapt import wrap_function_wrapper as _wrap
91
92 from opentelemetry.instrumentation.elasticsearch.package import _instruments
93 from opentelemetry.instrumentation.elasticsearch.version import __version__
94 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
95 from opentelemetry.instrumentation.utils import unwrap
96 from opentelemetry.semconv.trace import SpanAttributes
97 from opentelemetry.trace import SpanKind, get_tracer
98
99 from .utils import sanitize_body
100
101 # Split of elasticsearch and elastic_transport in 8.0.0+
102 # https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/release-notes.html#rn-8-0-0
103 es_transport_split = elasticsearch.VERSION[0] > 7
104 if es_transport_split:
105 import elastic_transport
106
107 logger = getLogger(__name__)
108
109
110 # Values to add as tags from the actual
111 # payload returned by Elasticsearch, if any.
112 _ATTRIBUTES_FROM_RESULT = [
113 "found",
114 "timed_out",
115 "took",
116 ]
117
118 _DEFAULT_OP_NAME = "request"
119
120
121 class ElasticsearchInstrumentor(BaseInstrumentor):
122 """An instrumentor for elasticsearch
123 See `BaseInstrumentor`
124 """
125
126 def __init__(self, span_name_prefix=None):
127 if not span_name_prefix:
128 span_name_prefix = environ.get(
129 "OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX",
130 "Elasticsearch",
131 )
132 self._span_name_prefix = span_name_prefix.strip()
133 super().__init__()
134
135 def instrumentation_dependencies(self) -> Collection[str]:
136 return _instruments
137
138 def _instrument(self, **kwargs):
139 """
140 Instruments Elasticsearch module
141 """
142 tracer_provider = kwargs.get("tracer_provider")
143 tracer = get_tracer(
144 __name__,
145 __version__,
146 tracer_provider,
147 schema_url="https://opentelemetry.io/schemas/1.11.0",
148 )
149 request_hook = kwargs.get("request_hook")
150 response_hook = kwargs.get("response_hook")
151 if es_transport_split:
152 _wrap(
153 elastic_transport,
154 "Transport.perform_request",
155 _wrap_perform_request(
156 tracer,
157 self._span_name_prefix,
158 request_hook,
159 response_hook,
160 ),
161 )
162 else:
163 _wrap(
164 elasticsearch,
165 "Transport.perform_request",
166 _wrap_perform_request(
167 tracer,
168 self._span_name_prefix,
169 request_hook,
170 response_hook,
171 ),
172 )
173
174 def _uninstrument(self, **kwargs):
175 # pylint: disable=no-member
176 unwrap(elasticsearch.Transport, "perform_request")
177
178
179 _regex_doc_url = re.compile(r"/_doc/([^/]+)")
180
181 # search api https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html
182 _regex_search_url = re.compile(r"/([^/]+)/_search[/]?")
183
184
185 def _wrap_perform_request(
186 tracer,
187 span_name_prefix,
188 request_hook=None,
189 response_hook=None,
190 ):
191 # pylint: disable=R0912,R0914
192 def wrapper(wrapped, _, args, kwargs):
193 method = url = None
194 try:
195 method, url, *_ = args
196 except IndexError:
197 logger.warning(
198 "expected perform_request to receive two positional arguments. "
199 "Got %d",
200 len(args),
201 )
202
203 op_name = span_name_prefix + (url or method or _DEFAULT_OP_NAME)
204
205 doc_id = None
206 search_target = None
207
208 if url:
209 # TODO: This regex-based solution avoids creating an unbounded number of span names, but should be replaced by instrumenting individual Elasticsearch methods instead of Transport.perform_request()
210 # A limitation of the regex is that only the '_doc' mapping type is supported. Mapping types are deprecated since Elasticsearch 7
211 # https://github.com/open-telemetry/opentelemetry-python-contrib/issues/708
212 match = _regex_doc_url.search(url)
213 if match is not None:
214 # Remove the full document ID from the URL
215 doc_span = match.span()
216 op_name = (
217 span_name_prefix
218 + url[: doc_span[0]]
219 + "/_doc/:id"
220 + url[doc_span[1] :]
221 )
222 # Put the document ID in attributes
223 doc_id = match.group(1)
224 match = _regex_search_url.search(url)
225 if match is not None:
226 op_name = span_name_prefix + "/<target>/_search"
227 search_target = match.group(1)
228
229 params = kwargs.get("params", {})
230 body = kwargs.get("body", None)
231
232 with tracer.start_as_current_span(
233 op_name,
234 kind=SpanKind.CLIENT,
235 ) as span:
236 if callable(request_hook):
237 request_hook(span, method, url, kwargs)
238
239 if span.is_recording():
240 attributes = {
241 SpanAttributes.DB_SYSTEM: "elasticsearch",
242 }
243 if url:
244 attributes["elasticsearch.url"] = url
245 if method:
246 attributes["elasticsearch.method"] = method
247 if body:
248 attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(
249 body
250 )
251 if params:
252 attributes["elasticsearch.params"] = str(params)
253 if doc_id:
254 attributes["elasticsearch.id"] = doc_id
255 if search_target:
256 attributes["elasticsearch.target"] = search_target
257 for key, value in attributes.items():
258 span.set_attribute(key, value)
259
260 rv = wrapped(*args, **kwargs)
261 if isinstance(rv, dict) and span.is_recording():
262 for member in _ATTRIBUTES_FROM_RESULT:
263 if member in rv:
264 span.set_attribute(
265 f"elasticsearch.{member}",
266 str(rv[member]),
267 )
268
269 if callable(response_hook):
270 response_hook(span, rv)
271 return rv
272
273 return wrapper
274
[end of instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py b/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py
@@ -245,9 +245,11 @@
if method:
attributes["elasticsearch.method"] = method
if body:
- attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(
- body
- )
+ # Don't set db.statement for bulk requests, as it can be very large
+ if isinstance(body, dict):
+ attributes[
+ SpanAttributes.DB_STATEMENT
+ ] = sanitize_body(body)
if params:
attributes["elasticsearch.params"] = str(params)
if doc_id:
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py b/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py\n@@ -245,9 +245,11 @@\n if method:\n attributes[\"elasticsearch.method\"] = method\n if body:\n- attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(\n- body\n- )\n+ # Don't set db.statement for bulk requests, as it can be very large\n+ if isinstance(body, dict):\n+ attributes[\n+ SpanAttributes.DB_STATEMENT\n+ ] = sanitize_body(body)\n if params:\n attributes[\"elasticsearch.params\"] = str(params)\n if doc_id:\n", "issue": "fix: elasticsearch `bulk` API produces lists\n# Description\r\n\r\nFixes an issue seen in production where the elasticsearch bulk API would end up sending a list through this code path and not a dictionary.\r\n\r\nFor example:\r\n\r\n```\r\nFile \\\"/code/platform_be/core/logic/ingest.py\\\", line 378, in update_source_doc_version_history\r\n bulk(es_client, bulk_es_updates)\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\\\", line 521, in bulk\r\n for ok, item in streaming_bulk(\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\\\", line 436, in streaming_bulk\r\n for data, (ok, info) in zip(\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/helpers/actions.py\\\", line 339, in _process_bulk_chunk\r\n resp = client.bulk(*args, operations=bulk_actions, **kwargs) # type: ignore[arg-type]\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/utils.py\\\", line 414, in wrapped\r\n return api(*args, **kwargs)\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/__init__.py\\\", line 704, in bulk\r\n return self.perform_request( # type: ignore[return-value]\r\n File \\\"/opt/python/lib/python3.10/site-packages/elasticsearch/_sync/client/_base.py\\\", line 285, in perform_request\r\n meta, resp_body = self.transport.perform_request(\r\n File \\\"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/__init__.py\\\", line 242, in wrapper\r\n attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(\r\n File \\\"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/utils.py\\\", line 58, in sanitize_body\r\n flatten_body = _flatten_dict(body)\r\n File \\\"/opt/python/lib/python3.10/site-packages/opentelemetry/instrumentation/elasticsearch/utils.py\\\", line 31, in _flatten_dict\r\n for k, v in d.items():\r\n AttributeError: 'list' object has no attribute 'items'\"\r\n```\r\n\r\n## Type of change\r\n\r\nPlease delete options that are not relevant.\r\n\r\n- [x] Bug fix (non-breaking change which fixes an issue)\r\n\r\n# How Has This Been Tested?\r\n\r\nPlease describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. 
Please also list any relevant details for your test configuration\r\n\r\n- [x] Unit tests\r\n\r\n# Does This PR Require a Core Repo Change?\r\n\r\n- [x] No.\r\n\r\n# Checklist:\r\n\r\nSee [contributing.md](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/CONTRIBUTING.md) for styleguide, changelog guidelines, and more.\r\n\r\n- [ ] Followed the style guidelines of this project\r\n- [ ] Changelogs have been updated\r\n- [ ] Unit tests have been added\r\n- [ ] Documentation has been updated\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows tracing HTTP elasticsearch made by the\n`elasticsearch <https://elasticsearch-py.readthedocs.io/en/master/>`_ library.\n\nUsage\n-----\n\n.. code-block:: python\n\n from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor\n import elasticsearch\n\n\n # instrument elasticsearch\n ElasticsearchInstrumentor().instrument()\n\n # Using elasticsearch as normal now will automatically generate spans\n es = elasticsearch.Elasticsearch()\n es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})\n es.get(index='my-index', doc_type='my-type', id=1)\n\nElasticsearch instrumentation prefixes operation names with the string \"Elasticsearch\". This\ncan be changed to a different string by either setting the OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX\nenvironment variable or by passing the prefix as an argument to the instrumentor. For example,\n\n\n.. code-block:: python\n\n ElasticsearchInstrumentor(\"my-custom-prefix\").instrument()\n\nThe instrument() method accepts the following keyword args:\ntracer_provider (TracerProvider) - an optional tracer provider\nrequest_hook (Callable) - a function with extra user-defined logic to be performed before performing the request\nthis function signature is:\ndef request_hook(span: Span, method: str, url: str, kwargs)\n\nresponse_hook (Callable) - a function with extra user-defined logic to be performed after performing the request\nthis function signature is:\ndef response_hook(span: Span, response: dict)\n\nfor example:\n\n.. 
code: python\n\n from opentelemetry.instrumentation.elasticsearch import ElasticsearchInstrumentor\n import elasticsearch\n\n def request_hook(span, method, url, kwargs):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_request_hook\", \"some-value\")\n\n def response_hook(span, response):\n if span and span.is_recording():\n span.set_attribute(\"custom_user_attribute_from_response_hook\", \"some-value\")\n\n # instrument elasticsearch with request and response hooks\n ElasticsearchInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook)\n\n # Using elasticsearch as normal now will automatically generate spans,\n # including user custom attributes added from the hooks\n es = elasticsearch.Elasticsearch()\n es.index(index='my-index', doc_type='my-type', id=1, body={'my': 'data', 'timestamp': datetime.now()})\n es.get(index='my-index', doc_type='my-type', id=1)\n\nAPI\n---\n\"\"\"\n\nimport re\nfrom logging import getLogger\nfrom os import environ\nfrom typing import Collection\n\nimport elasticsearch\nimport elasticsearch.exceptions\nfrom wrapt import wrap_function_wrapper as _wrap\n\nfrom opentelemetry.instrumentation.elasticsearch.package import _instruments\nfrom opentelemetry.instrumentation.elasticsearch.version import __version__\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.utils import unwrap\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace import SpanKind, get_tracer\n\nfrom .utils import sanitize_body\n\n# Split of elasticsearch and elastic_transport in 8.0.0+\n# https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/release-notes.html#rn-8-0-0\nes_transport_split = elasticsearch.VERSION[0] > 7\nif es_transport_split:\n import elastic_transport\n\nlogger = getLogger(__name__)\n\n\n# Values to add as tags from the actual\n# payload returned by Elasticsearch, if any.\n_ATTRIBUTES_FROM_RESULT = [\n \"found\",\n \"timed_out\",\n \"took\",\n]\n\n_DEFAULT_OP_NAME = \"request\"\n\n\nclass ElasticsearchInstrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for elasticsearch\n See `BaseInstrumentor`\n \"\"\"\n\n def __init__(self, span_name_prefix=None):\n if not span_name_prefix:\n span_name_prefix = environ.get(\n \"OTEL_PYTHON_ELASTICSEARCH_NAME_PREFIX\",\n \"Elasticsearch\",\n )\n self._span_name_prefix = span_name_prefix.strip()\n super().__init__()\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"\n Instruments Elasticsearch module\n \"\"\"\n tracer_provider = kwargs.get(\"tracer_provider\")\n tracer = get_tracer(\n __name__,\n __version__,\n tracer_provider,\n schema_url=\"https://opentelemetry.io/schemas/1.11.0\",\n )\n request_hook = kwargs.get(\"request_hook\")\n response_hook = kwargs.get(\"response_hook\")\n if es_transport_split:\n _wrap(\n elastic_transport,\n \"Transport.perform_request\",\n _wrap_perform_request(\n tracer,\n self._span_name_prefix,\n request_hook,\n response_hook,\n ),\n )\n else:\n _wrap(\n elasticsearch,\n \"Transport.perform_request\",\n _wrap_perform_request(\n tracer,\n self._span_name_prefix,\n request_hook,\n response_hook,\n ),\n )\n\n def _uninstrument(self, **kwargs):\n # pylint: disable=no-member\n unwrap(elasticsearch.Transport, \"perform_request\")\n\n\n_regex_doc_url = re.compile(r\"/_doc/([^/]+)\")\n\n# search api 
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html\n_regex_search_url = re.compile(r\"/([^/]+)/_search[/]?\")\n\n\ndef _wrap_perform_request(\n tracer,\n span_name_prefix,\n request_hook=None,\n response_hook=None,\n):\n # pylint: disable=R0912,R0914\n def wrapper(wrapped, _, args, kwargs):\n method = url = None\n try:\n method, url, *_ = args\n except IndexError:\n logger.warning(\n \"expected perform_request to receive two positional arguments. \"\n \"Got %d\",\n len(args),\n )\n\n op_name = span_name_prefix + (url or method or _DEFAULT_OP_NAME)\n\n doc_id = None\n search_target = None\n\n if url:\n # TODO: This regex-based solution avoids creating an unbounded number of span names, but should be replaced by instrumenting individual Elasticsearch methods instead of Transport.perform_request()\n # A limitation of the regex is that only the '_doc' mapping type is supported. Mapping types are deprecated since Elasticsearch 7\n # https://github.com/open-telemetry/opentelemetry-python-contrib/issues/708\n match = _regex_doc_url.search(url)\n if match is not None:\n # Remove the full document ID from the URL\n doc_span = match.span()\n op_name = (\n span_name_prefix\n + url[: doc_span[0]]\n + \"/_doc/:id\"\n + url[doc_span[1] :]\n )\n # Put the document ID in attributes\n doc_id = match.group(1)\n match = _regex_search_url.search(url)\n if match is not None:\n op_name = span_name_prefix + \"/<target>/_search\"\n search_target = match.group(1)\n\n params = kwargs.get(\"params\", {})\n body = kwargs.get(\"body\", None)\n\n with tracer.start_as_current_span(\n op_name,\n kind=SpanKind.CLIENT,\n ) as span:\n if callable(request_hook):\n request_hook(span, method, url, kwargs)\n\n if span.is_recording():\n attributes = {\n SpanAttributes.DB_SYSTEM: \"elasticsearch\",\n }\n if url:\n attributes[\"elasticsearch.url\"] = url\n if method:\n attributes[\"elasticsearch.method\"] = method\n if body:\n attributes[SpanAttributes.DB_STATEMENT] = sanitize_body(\n body\n )\n if params:\n attributes[\"elasticsearch.params\"] = str(params)\n if doc_id:\n attributes[\"elasticsearch.id\"] = doc_id\n if search_target:\n attributes[\"elasticsearch.target\"] = search_target\n for key, value in attributes.items():\n span.set_attribute(key, value)\n\n rv = wrapped(*args, **kwargs)\n if isinstance(rv, dict) and span.is_recording():\n for member in _ATTRIBUTES_FROM_RESULT:\n if member in rv:\n span.set_attribute(\n f\"elasticsearch.{member}\",\n str(rv[member]),\n )\n\n if callable(response_hook):\n response_hook(span, rv)\n return rv\n\n return wrapper\n", "path": "instrumentation/opentelemetry-instrumentation-elasticsearch/src/opentelemetry/instrumentation/elasticsearch/__init__.py"}]}
| 3,906 | 230 |
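The fix in the record above comes down to only sanitizing dict bodies, since the `bulk` helper passes a list of actions rather than a single mapping. The sketch below is a self-contained illustration of that guard; `_flatten_dict` and `db_statement_attribute` are simplified stand-ins written for this example, not the instrumentation's actual helpers.

```python
# Simplified stand-ins showing why the isinstance(body, dict) guard matters:
# bulk payloads arrive as lists, which cannot be flattened like a dict.
import json

def _flatten_dict(d, parent_key="", sep="."):
    # {"query": {"term": {"user": "kim"}}} -> {"query.term.user": "kim"}
    items = {}
    for k, v in d.items():
        key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.update(_flatten_dict(v, key, sep))
        else:
            items[key] = v
    return items

def db_statement_attribute(body):
    # Guarded the same way as the golden diff: skip non-dict bodies entirely.
    if isinstance(body, dict):
        return json.dumps(_flatten_dict(body))
    return None

print(db_statement_attribute({"query": {"term": {"user": "kim"}}}))
# -> {"query.term.user": "kim"}
print(db_statement_attribute([{"index": {"_id": 1}}, {"user": "kim"}]))
# -> None, instead of the AttributeError shown in the traceback above
```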
gh_patches_debug_15390
|
rasdani/github-patches
|
git_diff
|
pyodide__pyodide-1138
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nested attribute access in JS->Python type conversion
Currently the following code fails,
```js
>>> from js import window
>>> window.URL.createObjectURL
Error: Traceback (most recent call last):
File "/lib/python3.7/site-packages/pyodide.py", line 45, in eval_code
return eval(compile(expr, '<eval>', mode='eval'), ns, ns)
File "<eval>", line 1, in <module>
AttributeError: 'JsBoundMethod' object has no attribute 'createObjectURL'
```
(while `window.URL.createObjectURL` is a valid JS object) because nested attributes (i.e. attribute of an attribute) don't seem to be supported. It would have been nice to make it work, though I have not looked at how difficult that would be.
from js import fetch treats fetch as a free function
`fetch` is a member function of `window`.
However, using `from js import fetch` doesn't realize that and leads to the error:
`TypeError: 'fetch' called on an object that does not implement interface Window.`
For Reproducing the Error:
```
%%py
from js import document, Request, fetch, URL
img_tag = document.createElement('img')
req = Request.new('https://i.ibb.co/3f4yJQS/face4.jpg')
def func(response):
return response.blob()
def func2(blob):
objURL = URL.createObjectURL(blob)
img_tag.src = objURL
fetch(req).then(func).then(func2)
document.body.appendChild(img_tag)
```
</issue>
<code>
[start of src/pyodide-py/pyodide/_core.py]
1 # type: ignore
2 import platform
3
4 if platform.system() == "Emscripten":
5 from _pyodide_core import JsProxy, JsBoundMethod, JsException
6 else:
7 # Can add shims here if we are so inclined.
8 class JsException(Exception):
9 """
10 A wrapper around a Javascript Error to allow the Error to be thrown in Python.
11 """
12
13 # Defined in jsproxy.c
14
15 class JsProxy:
16 """A proxy to make a Javascript object behave like a Python object"""
17
18 # Defined in jsproxy.c
19
20 class JsBoundMethod:
21 """A proxy to make it possible to call Javascript bound methods from Python."""
22
23 # Defined in jsproxy.c
24
25
26 __all__ = [JsProxy, JsBoundMethod, JsException]
27
[end of src/pyodide-py/pyodide/_core.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pyodide-py/pyodide/_core.py b/src/pyodide-py/pyodide/_core.py
--- a/src/pyodide-py/pyodide/_core.py
+++ b/src/pyodide-py/pyodide/_core.py
@@ -2,7 +2,7 @@
import platform
if platform.system() == "Emscripten":
- from _pyodide_core import JsProxy, JsBoundMethod, JsException
+ from _pyodide_core import JsProxy, JsMethod, JsException
else:
# Can add shims here if we are so inclined.
class JsException(Exception):
@@ -17,10 +17,10 @@
# Defined in jsproxy.c
- class JsBoundMethod:
+ class JsMethod:
"""A proxy to make it possible to call Javascript bound methods from Python."""
# Defined in jsproxy.c
-__all__ = [JsProxy, JsBoundMethod, JsException]
+__all__ = [JsProxy, JsMethod, JsException]
|
{"golden_diff": "diff --git a/src/pyodide-py/pyodide/_core.py b/src/pyodide-py/pyodide/_core.py\n--- a/src/pyodide-py/pyodide/_core.py\n+++ b/src/pyodide-py/pyodide/_core.py\n@@ -2,7 +2,7 @@\n import platform\n \n if platform.system() == \"Emscripten\":\n- from _pyodide_core import JsProxy, JsBoundMethod, JsException\n+ from _pyodide_core import JsProxy, JsMethod, JsException\n else:\n # Can add shims here if we are so inclined.\n class JsException(Exception):\n@@ -17,10 +17,10 @@\n \n # Defined in jsproxy.c\n \n- class JsBoundMethod:\n+ class JsMethod:\n \"\"\"A proxy to make it possible to call Javascript bound methods from Python.\"\"\"\n \n # Defined in jsproxy.c\n \n \n-__all__ = [JsProxy, JsBoundMethod, JsException]\n+__all__ = [JsProxy, JsMethod, JsException]\n", "issue": "Nested attribute access in JS->Python type conversion\nCurrently the following code fails,\r\n```js\r\n>>> from js import window\r\n>>> window.URL.createObjectURL\r\nError: Traceback (most recent call last):\r\n File \"/lib/python3.7/site-packages/pyodide.py\", line 45, in eval_code\r\n return eval(compile(expr, '<eval>', mode='eval'), ns, ns)\r\n File \"<eval>\", line 1, in <module>\r\nAttributeError: 'JsBoundMethod' object has no attribute 'createObjectURL'\r\n```\r\n(while `window.URL.createObjectURL` is a valid JS object) because nested attributes (i.e. attribute of an attribute) don't seem to be supported. It would have been nice to make it work, though I have not looked at how difficult that would be.\nfrom js import fetch treats fetch as a free function\n`fetch` is a member function of `window`.\r\nHowever, using `from js import fetch` doesn't realize that and leads to the error:\r\n\r\n`TypeError: 'fetch' called on an object that does not implement interface Window.`\r\n\r\nFor Reproducing the Error:\r\n```\r\n%%py\r\n\r\nfrom js import document, Request, fetch, URL\r\nimg_tag = document.createElement('img')\r\nreq = Request.new('https://i.ibb.co/3f4yJQS/face4.jpg')\r\n\r\ndef func(response):\r\n return response.blob()\r\n\r\ndef func2(blob):\r\n objURL = URL.createObjectURL(blob)\r\n img_tag.src = objURL\r\n\r\nfetch(req).then(func).then(func2)\r\n\r\ndocument.body.appendChild(img_tag)\r\n```\n", "before_files": [{"content": "# type: ignore\nimport platform\n\nif platform.system() == \"Emscripten\":\n from _pyodide_core import JsProxy, JsBoundMethod, JsException\nelse:\n # Can add shims here if we are so inclined.\n class JsException(Exception):\n \"\"\"\n A wrapper around a Javascript Error to allow the Error to be thrown in Python.\n \"\"\"\n\n # Defined in jsproxy.c\n\n class JsProxy:\n \"\"\"A proxy to make a Javascript object behave like a Python object\"\"\"\n\n # Defined in jsproxy.c\n\n class JsBoundMethod:\n \"\"\"A proxy to make it possible to call Javascript bound methods from Python.\"\"\"\n\n # Defined in jsproxy.c\n\n\n__all__ = [JsProxy, JsBoundMethod, JsException]\n", "path": "src/pyodide-py/pyodide/_core.py"}]}
| 1,094 | 235 |
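The two pyodide reports above both reduce to whether a method proxy keeps its owner object. The snippet below is only a plain-Python analogue of the `from js import fetch` failure, with no Pyodide APIs involved; the concrete error differs in JavaScript, but the detachment problem is the same.

```python
# Plain-Python analogue of losing the bound `this`: a method detached from its
# owner fails when called as a free function, while the bound form still works.
class Window:
    def fetch(self, url):
        return f"{self.__class__.__name__} fetching {url}"

window = Window()

detached = Window.fetch          # like `from js import fetch`: no owner attached
try:
    detached("https://example.com")
except TypeError as exc:         # the url argument lands in self's slot
    print("detached call failed:", exc)

bound = window.fetch             # like window.fetch: the owner travels along
print(bound("https://example.com"))
```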
gh_patches_debug_5221
|
rasdani/github-patches
|
git_diff
|
saulpw__visidata-1011
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
vds: cannot export sheets containing a date column
**Small description**
It's not possible to save a sheet to a .vds file if said sheet contains a date column.
This results in the error below.
**Expected result**
It should just work(tm).
**Actual result with screenshot**
```stacktrace
Traceback (most recent call last):
File "/nix/store/srkr2wnwq95ylmgiadh28p3jiaadl5yw-visidata-2.4/lib/python3.8/site-packages/visidata/threads.py", line 215, in _toplevelTryFunc
t.status = func(*args, **kwargs)
File "/nix/store/srkr2wnwq95ylmgiadh28p3jiaadl5yw-visidata-2.4/lib/python3.8/site-packages/visidata/loaders/vds.py", line 32, in save_vds
fp.write(json.dumps(d)+NL)
File "/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type date is not JSON serializable
```
**Steps to reproduce with sample data and a .vd**
❯ cat testsheet.vd
```log
sheet col row longname input keystrokes comment
visidata_menu sheets-stack S open Sheets Stack: join or jump between the active sheets on the current stack
sheets キsheets add-row a append a blank row
sheets name キ edit-cell testsheet e edit contents of current cell
sheets キtestsheet open-row ^J open sheet referenced in current row
testsheet 0 rename-col testcol ^ edit name of current column
testsheet testcol type-date @ set type of current column to date
testsheet add-row a append a blank row
testsheet testcol 0 edit-cell 2021-06-14 e edit contents of current cell
testsheet save-all test.vds g^S save all sheets to given file or directory)
```
**Additional context**
Problem is present on v2.4 and on the develop branch (commit 3350d9fd8c9e64ebf409deae4b31085d12efeb7f)
</issue>
<code>
[start of visidata/loaders/vds.py]
1 'Custom VisiData save format'
2
3 import json
4 from visidata import *
5
6 NL='\n'
7
8 @VisiData.api
9 def open_vds(vd, p):
10 return VdsIndexSheet(p.name, source=p)
11
12
13 @VisiData.api
14 def save_vds(vd, p, *sheets):
15 'Save in custom VisiData format, preserving columns and their attributes.'
16
17 with p.open_text(mode='w') as fp:
18 for vs in sheets:
19 # class and attrs for vs
20 d = { 'name': vs.name, }
21 fp.write('#'+json.dumps(d)+NL)
22
23 # class and attrs for each column in vs
24 for col in vs.visibleCols:
25 d = col.__getstate__()
26 d['col'] = type(col).__name__
27 fp.write('#'+json.dumps(d)+NL)
28
29 with Progress(gerund='saving'):
30 for row in vs.iterdispvals(*vs.visibleCols, format=False):
31 d = {col.name:val for col, val in row.items()}
32 fp.write(json.dumps(d)+NL)
33
34
35 class VdsIndexSheet(IndexSheet):
36 def iterload(self):
37 vs = None
38 with self.source.open_text() as fp:
39 line = fp.readline()
40 while line:
41 if line.startswith('#{'):
42 d = json.loads(line[1:])
43 if 'col' not in d:
44 vs = VdsSheet(d.pop('name'), columns=[], source=self.source, source_fpos=fp.tell())
45 yield vs
46 line = fp.readline()
47
48
49 class VdsSheet(Sheet):
50 def newRow(self):
51 return {} # rowdef: dict
52
53 def iterload(self):
54 self.colnames = {}
55 self.columns = []
56
57 with self.source.open_text() as fp:
58 fp.seek(self.source_fpos)
59
60 # consume all metadata, create columns
61 line = fp.readline()
62 while line and line.startswith('#{'):
63 d = json.loads(line[1:])
64 if 'col' not in d:
65 raise Exception(d)
66 classname = d.pop('col')
67 if classname == 'Column':
68 classname = 'ItemColumn'
69 d['expr'] = d['name']
70
71 c = globals()[classname](d.pop('name'))
72 self.colnames[c.name] = c
73 self.addColumn(c)
74 for k, v in d.items():
75 setattr(c, k, v)
76
77 line = fp.readline()
78
79 while line and not line.startswith('#{'):
80 d = json.loads(line)
81 yield d
82 line = fp.readline()
83
[end of visidata/loaders/vds.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/visidata/loaders/vds.py b/visidata/loaders/vds.py
--- a/visidata/loaders/vds.py
+++ b/visidata/loaders/vds.py
@@ -29,7 +29,7 @@
with Progress(gerund='saving'):
for row in vs.iterdispvals(*vs.visibleCols, format=False):
d = {col.name:val for col, val in row.items()}
- fp.write(json.dumps(d)+NL)
+ fp.write(json.dumps(d, default=str)+NL)
class VdsIndexSheet(IndexSheet):
|
{"golden_diff": "diff --git a/visidata/loaders/vds.py b/visidata/loaders/vds.py\n--- a/visidata/loaders/vds.py\n+++ b/visidata/loaders/vds.py\n@@ -29,7 +29,7 @@\n with Progress(gerund='saving'):\n for row in vs.iterdispvals(*vs.visibleCols, format=False):\n d = {col.name:val for col, val in row.items()}\n- fp.write(json.dumps(d)+NL)\n+ fp.write(json.dumps(d, default=str)+NL)\n \n \n class VdsIndexSheet(IndexSheet):\n", "issue": "vds: cannot export sheets containing a date column\n**Small description**\r\n\r\nIt's not possible to save a sheet to a .vds file if said sheet contains a date column.\r\nThis results in the error below.\r\n\r\n**Expected result**\r\n\r\nIt should just work(tm).\r\n\r\n**Actual result with screenshot**\r\n\r\n```stacktrace\r\nTraceback (most recent call last):\r\n File \"/nix/store/srkr2wnwq95ylmgiadh28p3jiaadl5yw-visidata-2.4/lib/python3.8/site-packages/visidata/threads.py\", line 215, in _toplevelTryFunc\r\n t.status = func(*args, **kwargs)\r\n File \"/nix/store/srkr2wnwq95ylmgiadh28p3jiaadl5yw-visidata-2.4/lib/python3.8/site-packages/visidata/loaders/vds.py\", line 32, in save_vds\r\n fp.write(json.dumps(d)+NL)\r\n File \"/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/__init__.py\", line 231, in dumps\r\n return _default_encoder.encode(obj)\r\n File \"/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py\", line 199, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py\", line 257, in iterencode\r\n return _iterencode(o, 0)\r\n File \"/nix/store/4s0h5aawbap3xhldxhcijvl26751qrjr-python3-3.8.9/lib/python3.8/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type date is not JSON serializable\r\n```\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\n\r\n\u276f cat testsheet.vd\r\n```log\r\nsheet\tcol\trow\tlongname\tinput\tkeystrokes\tcomment\r\nvisidata_menu\t\t\tsheets-stack\t\tS\topen Sheets Stack: join or jump between the active sheets on the current stack\r\nsheets\t\t\u30adsheets\tadd-row\t\ta\tappend a blank row\r\nsheets\tname\t\u30ad\tedit-cell\ttestsheet\te\tedit contents of current cell\r\nsheets\t\t\u30adtestsheet\topen-row\t\t^J\topen sheet referenced in current row\r\ntestsheet\t0\t\trename-col\ttestcol\t^\tedit name of current column\r\ntestsheet\ttestcol\t\ttype-date\t\t@\tset type of current column to date\r\ntestsheet\t\t\tadd-row\t\ta\tappend a blank row\r\ntestsheet\ttestcol\t0\tedit-cell\t2021-06-14\te\tedit contents of current cell\r\ntestsheet\t\t\tsave-all\ttest.vds\tg^S\tsave all sheets to given file or directory)\r\n```\r\n\r\n\r\n**Additional context**\r\n\r\nProblem is present on v2.4 and on the develop branch (commit 3350d9fd8c9e64ebf409deae4b31085d12efeb7f)\n", "before_files": [{"content": "'Custom VisiData save format'\n\nimport json\nfrom visidata import *\n\nNL='\\n'\n\[email protected]\ndef open_vds(vd, p):\n return VdsIndexSheet(p.name, source=p)\n\n\[email protected]\ndef save_vds(vd, p, *sheets):\n 'Save in custom VisiData format, preserving columns and their attributes.'\n\n with p.open_text(mode='w') as fp:\n for vs in sheets:\n # class and attrs for vs\n d = { 'name': vs.name, }\n fp.write('#'+json.dumps(d)+NL)\n\n # class and attrs for each column in vs\n for col in vs.visibleCols:\n d = col.__getstate__()\n d['col'] = type(col).__name__\n fp.write('#'+json.dumps(d)+NL)\n\n 
with Progress(gerund='saving'):\n for row in vs.iterdispvals(*vs.visibleCols, format=False):\n d = {col.name:val for col, val in row.items()}\n fp.write(json.dumps(d)+NL)\n\n\nclass VdsIndexSheet(IndexSheet):\n def iterload(self):\n vs = None\n with self.source.open_text() as fp:\n line = fp.readline()\n while line:\n if line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n vs = VdsSheet(d.pop('name'), columns=[], source=self.source, source_fpos=fp.tell())\n yield vs\n line = fp.readline()\n\n\nclass VdsSheet(Sheet):\n def newRow(self):\n return {} # rowdef: dict\n\n def iterload(self):\n self.colnames = {}\n self.columns = []\n\n with self.source.open_text() as fp:\n fp.seek(self.source_fpos)\n\n # consume all metadata, create columns\n line = fp.readline()\n while line and line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n raise Exception(d)\n classname = d.pop('col')\n if classname == 'Column':\n classname = 'ItemColumn'\n d['expr'] = d['name']\n\n c = globals()[classname](d.pop('name'))\n self.colnames[c.name] = c\n self.addColumn(c)\n for k, v in d.items():\n setattr(c, k, v)\n\n line = fp.readline()\n\n while line and not line.startswith('#{'):\n d = json.loads(line)\n yield d\n line = fp.readline()\n", "path": "visidata/loaders/vds.py"}]}
| 2,028 | 129 |
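The one-line fix in the visidata record above relies on the `default=` fallback of `json.dumps`. The standard-library snippet below reproduces both the failure from the traceback and the behaviour the patch depends on; no VisiData imports are needed.

```python
# Reproduces the TypeError from the issue and the default=str fallback used
# by the golden diff above; standard library only.
import json
from datetime import date

row = {"testcol": date(2021, 6, 14)}     # a date-typed cell, as in testsheet.vd

try:
    json.dumps(row)
except TypeError as exc:
    print("without default:", exc)       # Object of type date is not JSON serializable

print(json.dumps(row, default=str))      # {"testcol": "2021-06-14"}
```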
gh_patches_debug_34383
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-11310
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Keras documentation: multi_gpu_model examples do not show properly on the home page keras.io
Some examples of usage of `multi_gpu_model` appear in the documentation of the function in the [source code](https://github.com/keras-team/keras/blob/master/keras/utils/multi_gpu_utils.py). However, they do not display correctly on the [Keras home page](https://keras.io/utils/):
```Example 1 - Training models with weights merge on CPU
$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$0
Example 2 - Training models with weights merge on CPU using cpu_relocation
$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$1
Example 3 - Training models with weights merge on GPU (recommended for NV-link)
$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$2```
Keras documentation of multi_gpu_model: example 2 can be misleading
In the Keras documentation for `multi_gpu_model`, it is stated:
> To save the multi-gpu model, use .save(fname) or .save_weights(fname) with the template model (the argument you passed to multi_gpu_model), rather than the model returned by multi_gpu_model.
However in example 2 the template model is overwritten by the multi-gpu model:
```python
..
# Not needed to change the device scope for model definition:
model = Xception(weights=None, ..)
try:
model = multi_gpu_model(model, cpu_relocation=True)
print("Training using multiple GPUs..")
except:
print("Training using single GPU or CPU..")
model.compile(..)
..
```
This means that in this example it would not be possible to save the weights of the template model. I suggest rewriting to something like:
```python
..
# Not needed to change the device scope for model definition:
model = Xception(weights=None, ..)
try:
parallel_model = multi_gpu_model(model, cpu_relocation=True)
print("Training using multiple GPUs..")
except ValueError:
parallel_model = model
print("Training using single GPU or CPU..")
parallel_model.compile(..)
..
```
(I take this opportunity to except only a specific error)
</issue>
<code>
[start of keras/utils/multi_gpu_utils.py]
1 """Multi-GPU training utilities.
2 """
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6
7 from ..layers.merge import concatenate
8 from .. import backend as K
9 from ..layers.core import Lambda
10 from ..engine.training import Model
11 from ..models import clone_model
12 from ..utils.generic_utils import to_list
13
14
15 def _get_available_devices():
16 return [x.name for x in K.get_session().list_devices()]
17
18
19 def _normalize_device_name(name):
20 name = '/' + ':'.join(name.lower().replace('/', '').split(':')[-2:])
21 return name
22
23
24 def multi_gpu_model(model, gpus=None, cpu_merge=True, cpu_relocation=False):
25 """Replicates a model on different GPUs.
26
27 Specifically, this function implements single-machine
28 multi-GPU data parallelism. It works in the following way:
29
30 - Divide the model's input(s) into multiple sub-batches.
31 - Apply a model copy on each sub-batch. Every model copy
32 is executed on a dedicated GPU.
33 - Concatenate the results (on CPU) into one big batch.
34
35 E.g. if your `batch_size` is 64 and you use `gpus=2`,
36 then we will divide the input into 2 sub-batches of 32 samples,
37 process each sub-batch on one GPU, then return the full
38 batch of 64 processed samples.
39
40 This induces quasi-linear speedup on up to 8 GPUs.
41
42 This function is only available with the TensorFlow backend
43 for the time being.
44
45 # Arguments
46 model: A Keras model instance. To avoid OOM errors,
47 this model could have been built on CPU, for instance
48 (see usage example below).
49 gpus: Integer >= 2 or list of integers, number of GPUs or
50 list of GPU IDs on which to create model replicas.
51 cpu_merge: A boolean value to identify whether to force
52 merging model weights under the scope of the CPU or not.
53 cpu_relocation: A boolean value to identify whether to
54 create the model's weights under the scope of the CPU.
55 If the model is not defined under any preceding device
56 scope, you can still rescue it by activating this option.
57
58 # Returns
59 A Keras `Model` instance which can be used just like the initial
60 `model` argument, but which distributes its workload on multiple GPUs.
61
62 # Example 1 - Training models with weights merge on CPU
63
64 ```python
65 import tensorflow as tf
66 from keras.applications import Xception
67 from keras.utils import multi_gpu_model
68 import numpy as np
69
70 num_samples = 1000
71 height = 224
72 width = 224
73 num_classes = 1000
74
75 # Instantiate the base model (or "template" model).
76 # We recommend doing this with under a CPU device scope,
77 # so that the model's weights are hosted on CPU memory.
78 # Otherwise they may end up hosted on a GPU, which would
79 # complicate weight sharing.
80 with tf.device('/cpu:0'):
81 model = Xception(weights=None,
82 input_shape=(height, width, 3),
83 classes=num_classes)
84
85 # Replicates the model on 8 GPUs.
86 # This assumes that your machine has 8 available GPUs.
87 parallel_model = multi_gpu_model(model, gpus=8)
88 parallel_model.compile(loss='categorical_crossentropy',
89 optimizer='rmsprop')
90
91 # Generate dummy data.
92 x = np.random.random((num_samples, height, width, 3))
93 y = np.random.random((num_samples, num_classes))
94
95 # This `fit` call will be distributed on 8 GPUs.
96 # Since the batch size is 256, each GPU will process 32 samples.
97 parallel_model.fit(x, y, epochs=20, batch_size=256)
98
99 # Save model via the template model (which shares the same weights):
100 model.save('my_model.h5')
101 ```
102
103 # Example 2 - Training models with weights merge on CPU using cpu_relocation
104
105 ```python
106 ..
107 # Not needed to change the device scope for model definition:
108 model = Xception(weights=None, ..)
109
110 try:
111 model = multi_gpu_model(model, cpu_relocation=True)
112 print("Training using multiple GPUs..")
113 except:
114 print("Training using single GPU or CPU..")
115
116 model.compile(..)
117 ..
118 ```
119
120 # Example 3 - Training models with weights merge on GPU (recommended for NV-link)
121
122 ```python
123 ..
124 # Not needed to change the device scope for model definition:
125 model = Xception(weights=None, ..)
126
127 try:
128 model = multi_gpu_model(model, cpu_merge=False)
129 print("Training using multiple GPUs..")
130 except:
131 print("Training using single GPU or CPU..")
132
133 model.compile(..)
134 ..
135 ```
136
137 # On model saving
138
139 To save the multi-gpu model, use `.save(fname)` or `.save_weights(fname)`
140 with the template model (the argument you passed to `multi_gpu_model`),
141 rather than the model returned by `multi_gpu_model`.
142 """
143 if K.backend() != 'tensorflow':
144 raise ValueError('`multi_gpu_model` is only available '
145 'with the TensorFlow backend.')
146
147 available_devices = _get_available_devices()
148 available_devices = [_normalize_device_name(name)
149 for name in available_devices]
150 if not gpus:
151 # Using all visible GPUs when not specifying `gpus`
152 # e.g. CUDA_VISIBLE_DEVICES=0,2 python keras_mgpu.py
153 gpus = len([x for x in available_devices if 'gpu' in x])
154
155 if isinstance(gpus, (list, tuple)):
156 if len(gpus) <= 1:
157 raise ValueError('For multi-gpu usage to be effective, '
158 'call `multi_gpu_model` with `len(gpus) >= 2`. '
159 'Received: `gpus=%s`' % gpus)
160 num_gpus = len(gpus)
161 target_gpu_ids = gpus
162 else:
163 if gpus <= 1:
164 raise ValueError('For multi-gpu usage to be effective, '
165 'call `multi_gpu_model` with `gpus >= 2`. '
166 'Received: `gpus=%d`' % gpus)
167 num_gpus = gpus
168 target_gpu_ids = range(num_gpus)
169
170 import tensorflow as tf
171
172 target_devices = ['/cpu:0'] + ['/gpu:%d' % i for i in target_gpu_ids]
173 for device in target_devices:
174 if device not in available_devices:
175 raise ValueError(
176 'To call `multi_gpu_model` with `gpus=%s`, '
177 'we expect the following devices to be available: %s. '
178 'However this machine only has: %s. '
179 'Try reducing `gpus`.' % (gpus,
180 target_devices,
181 available_devices))
182
183 def get_slice(data, i, parts):
184 shape = K.shape(data)
185 batch_size = shape[:1]
186 input_shape = shape[1:]
187 step = batch_size // parts
188 if i == parts - 1:
189 size = batch_size - step * i
190 else:
191 size = step
192 size = K.concatenate([size, input_shape], axis=0)
193 stride = K.concatenate([step, input_shape * 0], axis=0)
194 start = stride * i
195 return K.slice(data, start, size)
196
197 # Relocate the model definition under CPU device scope if needed
198 if cpu_relocation:
199 with tf.device('/cpu:0'):
200 model = clone_model(model)
201
202 all_outputs = []
203 for i in range(len(model.outputs)):
204 all_outputs.append([])
205
206 # Place a copy of the model on each GPU,
207 # each getting a slice of the inputs.
208 for i, gpu_id in enumerate(target_gpu_ids):
209 with tf.device('/gpu:%d' % gpu_id):
210 with tf.name_scope('replica_%d' % gpu_id):
211 inputs = []
212 # Retrieve a slice of the input.
213 for x in model.inputs:
214 # In-place input splitting which is not only
215 # 5% ~ 12% faster but also less GPU memory
216 # duplication.
217 with tf.device(x.device):
218 input_shape = K.int_shape(x)[1:]
219 slice_i = Lambda(get_slice,
220 output_shape=input_shape,
221 arguments={'i': i,
222 'parts': num_gpus})(x)
223 inputs.append(slice_i)
224
225 # Apply model on slice
226 # (creating a model replica on the target device).
227 outputs = model(inputs)
228 outputs = to_list(outputs)
229
230 # Save the outputs for merging back together later.
231 for o in range(len(outputs)):
232 all_outputs[o].append(outputs[o])
233
234 # Deduplicate output names to handle Siamese networks.
235 occurrences = {}
236 for n in model.output_names:
237 if n not in occurrences:
238 occurrences[n] = 1
239 else:
240 occurrences[n] += 1
241 conflict_counter = {n: 0 for n, count in occurrences.items() if count > 1}
242 output_names = []
243 for n in model.output_names:
244 if n in conflict_counter:
245 conflict_counter[n] += 1
246 n += '_%d' % conflict_counter[n]
247 output_names.append(n)
248
249 # Merge outputs under expected scope.
250 with tf.device('/cpu:0' if cpu_merge else '/gpu:%d' % target_gpu_ids[0]):
251 merged = []
252 for name, outputs in zip(output_names, all_outputs):
253 merged.append(concatenate(outputs,
254 axis=0, name=name))
255 return Model(model.inputs, merged)
256
[end of keras/utils/multi_gpu_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras/utils/multi_gpu_utils.py b/keras/utils/multi_gpu_utils.py
--- a/keras/utils/multi_gpu_utils.py
+++ b/keras/utils/multi_gpu_utils.py
@@ -59,7 +59,9 @@
A Keras `Model` instance which can be used just like the initial
`model` argument, but which distributes its workload on multiple GPUs.
- # Example 1 - Training models with weights merge on CPU
+ # Examples
+
+ Example 1 - Training models with weights merge on CPU
```python
import tensorflow as tf
@@ -100,7 +102,7 @@
model.save('my_model.h5')
```
- # Example 2 - Training models with weights merge on CPU using cpu_relocation
+ Example 2 - Training models with weights merge on CPU using cpu_relocation
```python
..
@@ -108,16 +110,16 @@
model = Xception(weights=None, ..)
try:
- model = multi_gpu_model(model, cpu_relocation=True)
+ parallel_model = multi_gpu_model(model, cpu_relocation=True)
print("Training using multiple GPUs..")
- except:
+ except ValueError:
+ parallel_model = model
print("Training using single GPU or CPU..")
-
- model.compile(..)
+ parallel_model.compile(..)
..
```
- # Example 3 - Training models with weights merge on GPU (recommended for NV-link)
+ Example 3 - Training models with weights merge on GPU (recommended for NV-link)
```python
..
@@ -125,12 +127,13 @@
model = Xception(weights=None, ..)
try:
- model = multi_gpu_model(model, cpu_merge=False)
+ parallel_model = multi_gpu_model(model, cpu_merge=False)
print("Training using multiple GPUs..")
except:
+ parallel_model = model
print("Training using single GPU or CPU..")
- model.compile(..)
+ parallel_model.compile(..)
..
```
|
{"golden_diff": "diff --git a/keras/utils/multi_gpu_utils.py b/keras/utils/multi_gpu_utils.py\n--- a/keras/utils/multi_gpu_utils.py\n+++ b/keras/utils/multi_gpu_utils.py\n@@ -59,7 +59,9 @@\n A Keras `Model` instance which can be used just like the initial\n `model` argument, but which distributes its workload on multiple GPUs.\n \n- # Example 1 - Training models with weights merge on CPU\n+ # Examples\n+\n+ Example 1 - Training models with weights merge on CPU\n \n ```python\n import tensorflow as tf\n@@ -100,7 +102,7 @@\n model.save('my_model.h5')\n ```\n \n- # Example 2 - Training models with weights merge on CPU using cpu_relocation\n+ Example 2 - Training models with weights merge on CPU using cpu_relocation\n \n ```python\n ..\n@@ -108,16 +110,16 @@\n model = Xception(weights=None, ..)\n \n try:\n- model = multi_gpu_model(model, cpu_relocation=True)\n+ parallel_model = multi_gpu_model(model, cpu_relocation=True)\n print(\"Training using multiple GPUs..\")\n- except:\n+ except ValueError:\n+ parallel_model = model\n print(\"Training using single GPU or CPU..\")\n-\n- model.compile(..)\n+ parallel_model.compile(..)\n ..\n ```\n \n- # Example 3 - Training models with weights merge on GPU (recommended for NV-link)\n+ Example 3 - Training models with weights merge on GPU (recommended for NV-link)\n \n ```python\n ..\n@@ -125,12 +127,13 @@\n model = Xception(weights=None, ..)\n \n try:\n- model = multi_gpu_model(model, cpu_merge=False)\n+ parallel_model = multi_gpu_model(model, cpu_merge=False)\n print(\"Training using multiple GPUs..\")\n except:\n+ parallel_model = model\n print(\"Training using single GPU or CPU..\")\n \n- model.compile(..)\n+ parallel_model.compile(..)\n ..\n ```\n", "issue": "Keras documentation: multi_gpu_model examples do not show properly on the home page keras.io\nSome examples of usage of `multi_gpu_model` appear on the documentation of the function in the [source code](https://github.com/keras-team/keras/blob/master/keras/utils/multi_gpu_utils.py). However they do not display correctly on the [Keras home page](https://keras.io/utils/):\r\n\r\n```Example 1 - Training models with weights merge on CPU\r\n\r\n$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$0\r\n\r\nExample 2 - Training models with weights merge on CPU using cpu_relocation\r\n\r\n$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$1\r\n\r\nExample 3 - Training models with weights merge on GPU (recommended for NV-link)\r\n\r\n$Example_2_-_Training_models_with_weights_merge_on_CPU_using_cpu_relocation$2```\nKeras documentation of multi_gpu_model: example 2 can be misleading \nIn the Keras documentation for `multi_gpu_model`, it is stated:\r\n\r\n> To save the multi-gpu model, use .save(fname) or .save_weights(fname) with the template model (the argument you passed to multi_gpu_model), rather than the model returned by multi_gpu_model.\r\n\r\nHowever in example 2 the template model is overwritten by the multi-gpu model:\r\n\r\n```python\r\n ..\r\n # Not needed to change the device scope for model definition:\r\n model = Xception(weights=None, ..)\r\n try:\r\n model = multi_gpu_model(model, cpu_relocation=True)\r\n print(\"Training using multiple GPUs..\")\r\n except:\r\n print(\"Training using single GPU or CPU..\")\r\n model.compile(..)\r\n ..\r\n```\r\n\r\nThis means that in this example it would not be possible to save the weights of the template model. 
I suggest rewritting to something like:\r\n\r\n\r\n```python\r\n ..\r\n # Not needed to change the device scope for model definition:\r\n model = Xception(weights=None, ..)\r\n try:\r\n parallel_model = multi_gpu_model(model, cpu_relocation=True)\r\n print(\"Training using multiple GPUs..\")\r\n except ValueError:\r\n parallel_model = model\r\n print(\"Training using single GPU or CPU..\")\r\n parallel_model.compile(..)\r\n ..\r\n```\r\n\r\n(I take this opportunity to except only a specific error)\r\n\n", "before_files": [{"content": "\"\"\"Multi-GPU training utilities.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom ..layers.merge import concatenate\nfrom .. import backend as K\nfrom ..layers.core import Lambda\nfrom ..engine.training import Model\nfrom ..models import clone_model\nfrom ..utils.generic_utils import to_list\n\n\ndef _get_available_devices():\n return [x.name for x in K.get_session().list_devices()]\n\n\ndef _normalize_device_name(name):\n name = '/' + ':'.join(name.lower().replace('/', '').split(':')[-2:])\n return name\n\n\ndef multi_gpu_model(model, gpus=None, cpu_merge=True, cpu_relocation=False):\n \"\"\"Replicates a model on different GPUs.\n\n Specifically, this function implements single-machine\n multi-GPU data parallelism. It works in the following way:\n\n - Divide the model's input(s) into multiple sub-batches.\n - Apply a model copy on each sub-batch. Every model copy\n is executed on a dedicated GPU.\n - Concatenate the results (on CPU) into one big batch.\n\n E.g. if your `batch_size` is 64 and you use `gpus=2`,\n then we will divide the input into 2 sub-batches of 32 samples,\n process each sub-batch on one GPU, then return the full\n batch of 64 processed samples.\n\n This induces quasi-linear speedup on up to 8 GPUs.\n\n This function is only available with the TensorFlow backend\n for the time being.\n\n # Arguments\n model: A Keras model instance. 
To avoid OOM errors,\n this model could have been built on CPU, for instance\n (see usage example below).\n gpus: Integer >= 2 or list of integers, number of GPUs or\n list of GPU IDs on which to create model replicas.\n cpu_merge: A boolean value to identify whether to force\n merging model weights under the scope of the CPU or not.\n cpu_relocation: A boolean value to identify whether to\n create the model's weights under the scope of the CPU.\n If the model is not defined under any preceding device\n scope, you can still rescue it by activating this option.\n\n # Returns\n A Keras `Model` instance which can be used just like the initial\n `model` argument, but which distributes its workload on multiple GPUs.\n\n # Example 1 - Training models with weights merge on CPU\n\n ```python\n import tensorflow as tf\n from keras.applications import Xception\n from keras.utils import multi_gpu_model\n import numpy as np\n\n num_samples = 1000\n height = 224\n width = 224\n num_classes = 1000\n\n # Instantiate the base model (or \"template\" model).\n # We recommend doing this with under a CPU device scope,\n # so that the model's weights are hosted on CPU memory.\n # Otherwise they may end up hosted on a GPU, which would\n # complicate weight sharing.\n with tf.device('/cpu:0'):\n model = Xception(weights=None,\n input_shape=(height, width, 3),\n classes=num_classes)\n\n # Replicates the model on 8 GPUs.\n # This assumes that your machine has 8 available GPUs.\n parallel_model = multi_gpu_model(model, gpus=8)\n parallel_model.compile(loss='categorical_crossentropy',\n optimizer='rmsprop')\n\n # Generate dummy data.\n x = np.random.random((num_samples, height, width, 3))\n y = np.random.random((num_samples, num_classes))\n\n # This `fit` call will be distributed on 8 GPUs.\n # Since the batch size is 256, each GPU will process 32 samples.\n parallel_model.fit(x, y, epochs=20, batch_size=256)\n\n # Save model via the template model (which shares the same weights):\n model.save('my_model.h5')\n ```\n\n # Example 2 - Training models with weights merge on CPU using cpu_relocation\n\n ```python\n ..\n # Not needed to change the device scope for model definition:\n model = Xception(weights=None, ..)\n\n try:\n model = multi_gpu_model(model, cpu_relocation=True)\n print(\"Training using multiple GPUs..\")\n except:\n print(\"Training using single GPU or CPU..\")\n\n model.compile(..)\n ..\n ```\n\n # Example 3 - Training models with weights merge on GPU (recommended for NV-link)\n\n ```python\n ..\n # Not needed to change the device scope for model definition:\n model = Xception(weights=None, ..)\n\n try:\n model = multi_gpu_model(model, cpu_merge=False)\n print(\"Training using multiple GPUs..\")\n except:\n print(\"Training using single GPU or CPU..\")\n\n model.compile(..)\n ..\n ```\n\n # On model saving\n\n To save the multi-gpu model, use `.save(fname)` or `.save_weights(fname)`\n with the template model (the argument you passed to `multi_gpu_model`),\n rather than the model returned by `multi_gpu_model`.\n \"\"\"\n if K.backend() != 'tensorflow':\n raise ValueError('`multi_gpu_model` is only available '\n 'with the TensorFlow backend.')\n\n available_devices = _get_available_devices()\n available_devices = [_normalize_device_name(name)\n for name in available_devices]\n if not gpus:\n # Using all visible GPUs when not specifying `gpus`\n # e.g. 
CUDA_VISIBLE_DEVICES=0,2 python keras_mgpu.py\n gpus = len([x for x in available_devices if 'gpu' in x])\n\n if isinstance(gpus, (list, tuple)):\n if len(gpus) <= 1:\n raise ValueError('For multi-gpu usage to be effective, '\n 'call `multi_gpu_model` with `len(gpus) >= 2`. '\n 'Received: `gpus=%s`' % gpus)\n num_gpus = len(gpus)\n target_gpu_ids = gpus\n else:\n if gpus <= 1:\n raise ValueError('For multi-gpu usage to be effective, '\n 'call `multi_gpu_model` with `gpus >= 2`. '\n 'Received: `gpus=%d`' % gpus)\n num_gpus = gpus\n target_gpu_ids = range(num_gpus)\n\n import tensorflow as tf\n\n target_devices = ['/cpu:0'] + ['/gpu:%d' % i for i in target_gpu_ids]\n for device in target_devices:\n if device not in available_devices:\n raise ValueError(\n 'To call `multi_gpu_model` with `gpus=%s`, '\n 'we expect the following devices to be available: %s. '\n 'However this machine only has: %s. '\n 'Try reducing `gpus`.' % (gpus,\n target_devices,\n available_devices))\n\n def get_slice(data, i, parts):\n shape = K.shape(data)\n batch_size = shape[:1]\n input_shape = shape[1:]\n step = batch_size // parts\n if i == parts - 1:\n size = batch_size - step * i\n else:\n size = step\n size = K.concatenate([size, input_shape], axis=0)\n stride = K.concatenate([step, input_shape * 0], axis=0)\n start = stride * i\n return K.slice(data, start, size)\n\n # Relocate the model definition under CPU device scope if needed\n if cpu_relocation:\n with tf.device('/cpu:0'):\n model = clone_model(model)\n\n all_outputs = []\n for i in range(len(model.outputs)):\n all_outputs.append([])\n\n # Place a copy of the model on each GPU,\n # each getting a slice of the inputs.\n for i, gpu_id in enumerate(target_gpu_ids):\n with tf.device('/gpu:%d' % gpu_id):\n with tf.name_scope('replica_%d' % gpu_id):\n inputs = []\n # Retrieve a slice of the input.\n for x in model.inputs:\n # In-place input splitting which is not only\n # 5% ~ 12% faster but also less GPU memory\n # duplication.\n with tf.device(x.device):\n input_shape = K.int_shape(x)[1:]\n slice_i = Lambda(get_slice,\n output_shape=input_shape,\n arguments={'i': i,\n 'parts': num_gpus})(x)\n inputs.append(slice_i)\n\n # Apply model on slice\n # (creating a model replica on the target device).\n outputs = model(inputs)\n outputs = to_list(outputs)\n\n # Save the outputs for merging back together later.\n for o in range(len(outputs)):\n all_outputs[o].append(outputs[o])\n\n # Deduplicate output names to handle Siamese networks.\n occurrences = {}\n for n in model.output_names:\n if n not in occurrences:\n occurrences[n] = 1\n else:\n occurrences[n] += 1\n conflict_counter = {n: 0 for n, count in occurrences.items() if count > 1}\n output_names = []\n for n in model.output_names:\n if n in conflict_counter:\n conflict_counter[n] += 1\n n += '_%d' % conflict_counter[n]\n output_names.append(n)\n\n # Merge outputs under expected scope.\n with tf.device('/cpu:0' if cpu_merge else '/gpu:%d' % target_gpu_ids[0]):\n merged = []\n for name, outputs in zip(output_names, all_outputs):\n merged.append(concatenate(outputs,\n axis=0, name=name))\n return Model(model.inputs, merged)\n", "path": "keras/utils/multi_gpu_utils.py"}]}
| 3,829 | 472 |
gh_patches_debug_33630
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-1150
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Possibly incorrect hook names?
Going through the docs, I see two rather unusual hook names: `construct_wagtail_edit_bird` and `construct_whitelister_element_rules`.
The first seems like a placeholder name that accidentally made it out of the alpha stage. Based on the docs, it seems like it should be called `construct_wagtail_userbar`.
The second seems like a straight up typo. I've never heard the word "whitelister" before. I'm pretty sure this hook should be called `construct_whitelisted_element_rules`.
Changing the names of hooks is obviously a major undertaking, since some code bases will have already implemented them. But adding the new names and deprecating the old ones for a few releases should be entirely possible. I'd be happy to do this in a pull request, since it's only a dozen or so lines of code to change, but I don't really know how Wagtail handles deprecating old APIs.
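A minimal sketch of one possible shim, purely illustrative (the helper name and the use of Python's built-in `DeprecationWarning` are assumptions, not Wagtail's actual deprecation machinery):
```python
import warnings

from wagtail.wagtailcore import hooks


def run_renamed_hook(old_name, new_name, *args, **kwargs):
    # Fire callbacks still registered under the old hook name,
    # warning their authors about the rename.
    for fn in hooks.get_hooks(old_name):
        warnings.warn(
            "The '%s' hook has been renamed to '%s'; please register '%s' instead."
            % (old_name, new_name, new_name),
            DeprecationWarning,
        )
        fn(*args, **kwargs)
```
During the deprecation window, callers would run both `hooks.get_hooks('construct_wagtail_userbar')` and this shim for the old name.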
</issue>
<code>
[start of wagtail/wagtailadmin/views/userbar.py]
1 from django.shortcuts import render
2 from django.contrib.auth.decorators import permission_required
3
4 from wagtail.wagtailadmin.userbar import EditPageItem, AddPageItem, ApproveModerationEditPageItem, RejectModerationEditPageItem
5 from wagtail.wagtailcore import hooks
6 from wagtail.wagtailcore.models import Page, PageRevision
7
8
9 @permission_required('wagtailadmin.access_admin', raise_exception=True)
10 def for_frontend(request, page_id):
11 items = [
12 EditPageItem(Page.objects.get(id=page_id)),
13 AddPageItem(Page.objects.get(id=page_id)),
14 ]
15
16 for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
17 fn(request, items)
18
19 # Render the items
20 rendered_items = [item.render(request) for item in items]
21
22 # Remove any unrendered items
23 rendered_items = [item for item in rendered_items if item]
24
25 # Render the edit bird
26 return render(request, 'wagtailadmin/userbar/base.html', {
27 'items': rendered_items,
28 })
29
30
31 @permission_required('wagtailadmin.access_admin', raise_exception=True)
32 def for_moderation(request, revision_id):
33 items = [
34 EditPageItem(PageRevision.objects.get(id=revision_id).page),
35 AddPageItem(PageRevision.objects.get(id=revision_id).page),
36 ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
37 RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
38 ]
39
40 for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
41 fn(request, items)
42
43 # Render the items
44 rendered_items = [item.render(request) for item in items]
45
46 # Remove any unrendered items
47 rendered_items = [item for item in rendered_items if item]
48
49 # Render the edit bird
50 return render(request, 'wagtailadmin/userbar/base.html', {
51 'items': rendered_items,
52 })
53
[end of wagtail/wagtailadmin/views/userbar.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/wagtailadmin/views/userbar.py b/wagtail/wagtailadmin/views/userbar.py
--- a/wagtail/wagtailadmin/views/userbar.py
+++ b/wagtail/wagtailadmin/views/userbar.py
@@ -1,3 +1,5 @@
+import warnings
+
from django.shortcuts import render
from django.contrib.auth.decorators import permission_required
@@ -5,6 +7,8 @@
from wagtail.wagtailcore import hooks
from wagtail.wagtailcore.models import Page, PageRevision
+from wagtail.utils.deprecation import RemovedInWagtail11Warning
+
@permission_required('wagtailadmin.access_admin', raise_exception=True)
def for_frontend(request, page_id):
@@ -13,7 +17,10 @@
AddPageItem(Page.objects.get(id=page_id)),
]
- for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
+ # TODO: Remove in 1.1 release
+ run_deprecated_edit_bird_hook(request, items)
+
+ for fn in hooks.get_hooks('construct_wagtail_userbar'):
fn(request, items)
# Render the items
@@ -37,7 +44,10 @@
RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
]
- for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
+ # TODO: Remove in 1.1 release
+ run_deprecated_edit_bird_hook(request, items)
+
+ for fn in hooks.get_hooks('construct_wagtail_userbar'):
fn(request, items)
# Render the items
@@ -50,3 +60,13 @@
return render(request, 'wagtailadmin/userbar/base.html', {
'items': rendered_items,
})
+
+
+def run_deprecated_edit_bird_hook(request, items):
+ for fn in hooks.get_hooks('construct_wagtail_edit_bird'):
+ fn(request, items)
+
+ warnings.warn(
+ "The 'construct_wagtail_edit_bird' hook has been renamed to 'construct_wagtail_userbar'."
+ "Please update function '%s' in '%s'." % (fn.__name__, fn.__module__), RemovedInWagtail11Warning
+ )
|
{"golden_diff": "diff --git a/wagtail/wagtailadmin/views/userbar.py b/wagtail/wagtailadmin/views/userbar.py\n--- a/wagtail/wagtailadmin/views/userbar.py\n+++ b/wagtail/wagtailadmin/views/userbar.py\n@@ -1,3 +1,5 @@\n+import warnings\n+\n from django.shortcuts import render\n from django.contrib.auth.decorators import permission_required\n \n@@ -5,6 +7,8 @@\n from wagtail.wagtailcore import hooks\n from wagtail.wagtailcore.models import Page, PageRevision\n \n+from wagtail.utils.deprecation import RemovedInWagtail11Warning\n+\n \n @permission_required('wagtailadmin.access_admin', raise_exception=True)\n def for_frontend(request, page_id):\n@@ -13,7 +17,10 @@\n AddPageItem(Page.objects.get(id=page_id)),\n ]\n \n- for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n+ # TODO: Remove in 1.1 release\n+ run_deprecated_edit_bird_hook(request, items)\n+\n+ for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n \n # Render the items\n@@ -37,7 +44,10 @@\n RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n ]\n \n- for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n+ # TODO: Remove in 1.1 release\n+ run_deprecated_edit_bird_hook(request, items)\n+\n+ for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n \n # Render the items\n@@ -50,3 +60,13 @@\n return render(request, 'wagtailadmin/userbar/base.html', {\n 'items': rendered_items,\n })\n+\n+\n+def run_deprecated_edit_bird_hook(request, items):\n+ for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n+ fn(request, items)\n+\n+ warnings.warn(\n+ \"The 'construct_wagtail_edit_bird' hook has been renamed to 'construct_wagtail_userbar'.\"\n+ \"Please update function '%s' in '%s'.\" % (fn.__name__, fn.__module__), RemovedInWagtail11Warning\n+ )\n", "issue": "Possibly incorrect hook names?\nGoing through the docs, I see two rather unusual hook names: `construct_wagtail_edit_bird` and `construct_whitelister_element_rules`. \n\nThe first seems like a placeholder name that accidentally made it out of the alpha stage. Based on the docs, it seems like it should be called `construct_wagtail_userbar`.\n\nThe second seems like a straight up typo. I've never heard the word \"whitelister\" before. I'm pretty sure this hook should be called `construct_whitelisted_element_rules`.\n\nChanging the names of hooks is obviously a major undertaking, since some code bases will have already implemented them. But adding the new names and deprecating the old ones for a few releases should be entirely possible. I'd be happy to do this in a pull request, since it's only a dozen or lines of code to change, but I don't really know how wagtail handles deprecating old APIs.\n\nPossibly incorrect hook names?\nGoing through the docs, I see two rather unusual hook names: `construct_wagtail_edit_bird` and `construct_whitelister_element_rules`. \n\nThe first seems like a placeholder name that accidentally made it out of the alpha stage. Based on the docs, it seems like it should be called `construct_wagtail_userbar`.\n\nThe second seems like a straight up typo. I've never heard the word \"whitelister\" before. I'm pretty sure this hook should be called `construct_whitelisted_element_rules`.\n\nChanging the names of hooks is obviously a major undertaking, since some code bases will have already implemented them. But adding the new names and deprecating the old ones for a few releases should be entirely possible. 
I'd be happy to do this in a pull request, since it's only a dozen or lines of code to change, but I don't really know how wagtail handles deprecating old APIs.\n\n", "before_files": [{"content": "from django.shortcuts import render\nfrom django.contrib.auth.decorators import permission_required\n\nfrom wagtail.wagtailadmin.userbar import EditPageItem, AddPageItem, ApproveModerationEditPageItem, RejectModerationEditPageItem\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import Page, PageRevision\n\n\n@permission_required('wagtailadmin.access_admin', raise_exception=True)\ndef for_frontend(request, page_id):\n items = [\n EditPageItem(Page.objects.get(id=page_id)),\n AddPageItem(Page.objects.get(id=page_id)),\n ]\n\n for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the edit bird\n return render(request, 'wagtailadmin/userbar/base.html', {\n 'items': rendered_items,\n })\n\n\n@permission_required('wagtailadmin.access_admin', raise_exception=True)\ndef for_moderation(request, revision_id):\n items = [\n EditPageItem(PageRevision.objects.get(id=revision_id).page),\n AddPageItem(PageRevision.objects.get(id=revision_id).page),\n ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n ]\n\n for fn in hooks.get_hooks('construct_wagtail_edit_bird'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the edit bird\n return render(request, 'wagtailadmin/userbar/base.html', {\n 'items': rendered_items,\n })\n", "path": "wagtail/wagtailadmin/views/userbar.py"}]}
| 1,462 | 514 |
gh_patches_debug_3533
|
rasdani/github-patches
|
git_diff
|
OpenNMT__OpenNMT-tf-189
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash loading parallel inputs with --data_dir
I found the next issue if I follow the tutorial and try to do
data:
train_features_file:
- train_source_1.records
- train_source_2.txt
- train_source_3.txt
In main.py, in the method `_prefix_paths`, the line `new_path = os.path.join(prefix, path)` will crash because `paths` is a list and `os.path.join` cannot be applied to a list.
The fix should simply be to check the instance type of `paths` and iterate over it when it is a list, as sketched below.
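A minimal sketch of that check, assuming the rest of `_prefix_paths` in opennmt/bin/main.py stays as it is:
```python
import os


def _prefix_paths(prefix, paths):
    """Recursively prefix paths, handling dicts, lists, and plain strings."""
    if isinstance(paths, dict):
        return {key: _prefix_paths(prefix, path) for key, path in paths.items()}
    if isinstance(paths, list):
        return [_prefix_paths(prefix, path) for path in paths]
    new_path = os.path.join(prefix, paths)
    return new_path if os.path.isfile(new_path) else paths
```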
</issue>
<code>
[start of opennmt/bin/main.py]
1 """Main script."""
2
3 import argparse
4 import json
5 import os
6 import six
7
8 import tensorflow as tf
9
10 from opennmt.models import catalog
11 from opennmt.runner import Runner
12 from opennmt.config import load_model, load_config
13 from opennmt.utils.misc import classes_in_module
14
15
16 def _prefix_paths(prefix, paths):
17 """Recursively prefix paths.
18
19 Args:
20 prefix: The prefix to apply.
21 data: A dict of relative paths.
22
23 Returns:
24 The updated dict.
25 """
26 if isinstance(paths, dict):
27 for key, path in six.iteritems(paths):
28 paths[key] = _prefix_paths(prefix, path)
29 return paths
30 else:
31 path = paths
32 new_path = os.path.join(prefix, path)
33 if os.path.isfile(new_path):
34 return new_path
35 else:
36 return path
37
38 def main():
39 parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
40 parser.add_argument("run",
41 choices=["train_and_eval", "train", "eval", "infer", "export", "score"],
42 help="Run type.")
43 parser.add_argument("--config", required=True, nargs="+",
44 help="List of configuration files.")
45 parser.add_argument("--model_type", default="", choices=list(classes_in_module(catalog)),
46 help="Model type from the catalog.")
47 parser.add_argument("--model", default="",
48 help="Custom model configuration file.")
49 parser.add_argument("--run_dir", default="",
50 help="If set, model_dir will be created relative to this location.")
51 parser.add_argument("--data_dir", default="",
52 help="If set, data files are expected to be relative to this location.")
53 parser.add_argument("--features_file", default=[], nargs="+",
54 help="Run inference on this file.")
55 parser.add_argument("--predictions_file", default="",
56 help=("File used to save predictions. If not set, predictions are printed "
57 "on the standard output."))
58 parser.add_argument("--log_prediction_time", default=False, action="store_true",
59 help="Logs some prediction time metrics.")
60 parser.add_argument("--checkpoint_path", default=None,
61 help=("Checkpoint or directory to use for inference or export "
62 "(when a directory is set, the latest checkpoint is used)."))
63 parser.add_argument("--num_gpus", type=int, default=1,
64 help="Number of GPUs to use for in-graph replication.")
65 parser.add_argument("--chief_host", default="",
66 help="hostname:port of the chief worker (for distributed training).")
67 parser.add_argument("--worker_hosts", default="",
68 help=("Comma-separated list of hostname:port of workers "
69 "(for distributed training)."))
70 parser.add_argument("--ps_hosts", default="",
71 help=("Comma-separated list of hostname:port of parameter servers "
72 "(for distributed training)."))
73 parser.add_argument("--task_type", default="chief",
74 choices=["chief", "worker", "ps", "evaluator"],
75 help="Type of the task to run (for distributed training).")
76 parser.add_argument("--task_index", type=int, default=0,
77 help="ID of the task (for distributed training).")
78 parser.add_argument("--log_level", default="INFO",
79 choices=["DEBUG", "ERROR", "FATAL", "INFO", "WARN"],
80 help="Logs verbosity.")
81 parser.add_argument("--seed", type=int, default=None,
82 help="Random seed.")
83 parser.add_argument("--gpu_allow_growth", default=False, action="store_true",
84 help="Allocate GPU memory dynamically.")
85 parser.add_argument("--intra_op_parallelism_threads", type=int, default=0,
86 help=("Number of intra op threads (0 means the system picks "
87 "an appropriate number)."))
88 parser.add_argument("--inter_op_parallelism_threads", type=int, default=0,
89 help=("Number of inter op threads (0 means the system picks "
90 "an appropriate number)."))
91 args = parser.parse_args()
92
93 tf.logging.set_verbosity(getattr(tf.logging, args.log_level))
94
95 # Setup cluster if defined.
96 if args.chief_host:
97 os.environ["TF_CONFIG"] = json.dumps({
98 "cluster": {
99 "chief": [args.chief_host],
100 "worker": args.worker_hosts.split(","),
101 "ps": args.ps_hosts.split(",")
102 },
103 "task": {
104 "type": args.task_type,
105 "index": args.task_index
106 }
107 })
108
109 # Load and merge run configurations.
110 config = load_config(args.config)
111 if args.run_dir:
112 config["model_dir"] = os.path.join(args.run_dir, config["model_dir"])
113 if args.data_dir:
114 config["data"] = _prefix_paths(args.data_dir, config["data"])
115
116 if not os.path.isdir(config["model_dir"]):
117 tf.logging.info("Creating model directory %s", config["model_dir"])
118 os.makedirs(config["model_dir"])
119
120 model = load_model(config["model_dir"], model_file=args.model, model_name=args.model_type)
121 session_config = tf.ConfigProto(
122 intra_op_parallelism_threads=args.intra_op_parallelism_threads,
123 inter_op_parallelism_threads=args.inter_op_parallelism_threads)
124 runner = Runner(
125 model,
126 config,
127 seed=args.seed,
128 num_devices=args.num_gpus,
129 gpu_allow_growth=args.gpu_allow_growth,
130 session_config=session_config)
131
132 if args.run == "train_and_eval":
133 runner.train_and_evaluate()
134 elif args.run == "train":
135 runner.train()
136 elif args.run == "eval":
137 runner.evaluate(checkpoint_path=args.checkpoint_path)
138 elif args.run == "infer":
139 if not args.features_file:
140 parser.error("--features_file is required for inference.")
141 elif len(args.features_file) == 1:
142 args.features_file = args.features_file[0]
143 runner.infer(
144 args.features_file,
145 predictions_file=args.predictions_file,
146 checkpoint_path=args.checkpoint_path,
147 log_time=args.log_prediction_time)
148 elif args.run == "export":
149 runner.export(checkpoint_path=args.checkpoint_path)
150 elif args.run == "score":
151 if not args.features_file:
152 parser.error("--features_file is required for scoring.")
153 if not args.predictions_file:
154 parser.error("--predictions_file is required for scoring.")
155 runner.score(
156 args.features_file,
157 args.predictions_file,
158 checkpoint_path=args.checkpoint_path)
159
160
161 if __name__ == "__main__":
162 main()
163
[end of opennmt/bin/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opennmt/bin/main.py b/opennmt/bin/main.py
--- a/opennmt/bin/main.py
+++ b/opennmt/bin/main.py
@@ -27,6 +27,10 @@
for key, path in six.iteritems(paths):
paths[key] = _prefix_paths(prefix, path)
return paths
+ elif isinstance(paths, list):
+ for i, path in enumerate(paths):
+ paths[i] = _prefix_paths(prefix, path)
+ return paths
else:
path = paths
new_path = os.path.join(prefix, path)
|
{"golden_diff": "diff --git a/opennmt/bin/main.py b/opennmt/bin/main.py\n--- a/opennmt/bin/main.py\n+++ b/opennmt/bin/main.py\n@@ -27,6 +27,10 @@\n for key, path in six.iteritems(paths):\n paths[key] = _prefix_paths(prefix, path)\n return paths\n+ elif isinstance(paths, list):\n+ for i, path in enumerate(paths):\n+ paths[i] = _prefix_paths(prefix, path)\n+ return paths\n else:\n path = paths\n new_path = os.path.join(prefix, path)\n", "issue": "Crash loading parallel inputs with --data_dir\nI found the next issue if I follow the tutorial and try to do\r\n\r\ndata:\r\n train_features_file:\r\n - train_source_1.records\r\n - train_source_2.txt\r\n - train_source_3.txt\r\n\r\nin main.py at the method _prefix_paths\r\nnew_path = os.path.join(prefix, path) \r\nwill crash because paths is a list and join can't be done on a list.\r\n\r\nThe fix should be just check the instance type at paths and iterate\n", "before_files": [{"content": "\"\"\"Main script.\"\"\"\n\nimport argparse\nimport json\nimport os\nimport six\n\nimport tensorflow as tf\n\nfrom opennmt.models import catalog\nfrom opennmt.runner import Runner\nfrom opennmt.config import load_model, load_config\nfrom opennmt.utils.misc import classes_in_module\n\n\ndef _prefix_paths(prefix, paths):\n \"\"\"Recursively prefix paths.\n\n Args:\n prefix: The prefix to apply.\n data: A dict of relative paths.\n\n Returns:\n The updated dict.\n \"\"\"\n if isinstance(paths, dict):\n for key, path in six.iteritems(paths):\n paths[key] = _prefix_paths(prefix, path)\n return paths\n else:\n path = paths\n new_path = os.path.join(prefix, path)\n if os.path.isfile(new_path):\n return new_path\n else:\n return path\n\ndef main():\n parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument(\"run\",\n choices=[\"train_and_eval\", \"train\", \"eval\", \"infer\", \"export\", \"score\"],\n help=\"Run type.\")\n parser.add_argument(\"--config\", required=True, nargs=\"+\",\n help=\"List of configuration files.\")\n parser.add_argument(\"--model_type\", default=\"\", choices=list(classes_in_module(catalog)),\n help=\"Model type from the catalog.\")\n parser.add_argument(\"--model\", default=\"\",\n help=\"Custom model configuration file.\")\n parser.add_argument(\"--run_dir\", default=\"\",\n help=\"If set, model_dir will be created relative to this location.\")\n parser.add_argument(\"--data_dir\", default=\"\",\n help=\"If set, data files are expected to be relative to this location.\")\n parser.add_argument(\"--features_file\", default=[], nargs=\"+\",\n help=\"Run inference on this file.\")\n parser.add_argument(\"--predictions_file\", default=\"\",\n help=(\"File used to save predictions. 
If not set, predictions are printed \"\n \"on the standard output.\"))\n parser.add_argument(\"--log_prediction_time\", default=False, action=\"store_true\",\n help=\"Logs some prediction time metrics.\")\n parser.add_argument(\"--checkpoint_path\", default=None,\n help=(\"Checkpoint or directory to use for inference or export \"\n \"(when a directory is set, the latest checkpoint is used).\"))\n parser.add_argument(\"--num_gpus\", type=int, default=1,\n help=\"Number of GPUs to use for in-graph replication.\")\n parser.add_argument(\"--chief_host\", default=\"\",\n help=\"hostname:port of the chief worker (for distributed training).\")\n parser.add_argument(\"--worker_hosts\", default=\"\",\n help=(\"Comma-separated list of hostname:port of workers \"\n \"(for distributed training).\"))\n parser.add_argument(\"--ps_hosts\", default=\"\",\n help=(\"Comma-separated list of hostname:port of parameter servers \"\n \"(for distributed training).\"))\n parser.add_argument(\"--task_type\", default=\"chief\",\n choices=[\"chief\", \"worker\", \"ps\", \"evaluator\"],\n help=\"Type of the task to run (for distributed training).\")\n parser.add_argument(\"--task_index\", type=int, default=0,\n help=\"ID of the task (for distributed training).\")\n parser.add_argument(\"--log_level\", default=\"INFO\",\n choices=[\"DEBUG\", \"ERROR\", \"FATAL\", \"INFO\", \"WARN\"],\n help=\"Logs verbosity.\")\n parser.add_argument(\"--seed\", type=int, default=None,\n help=\"Random seed.\")\n parser.add_argument(\"--gpu_allow_growth\", default=False, action=\"store_true\",\n help=\"Allocate GPU memory dynamically.\")\n parser.add_argument(\"--intra_op_parallelism_threads\", type=int, default=0,\n help=(\"Number of intra op threads (0 means the system picks \"\n \"an appropriate number).\"))\n parser.add_argument(\"--inter_op_parallelism_threads\", type=int, default=0,\n help=(\"Number of inter op threads (0 means the system picks \"\n \"an appropriate number).\"))\n args = parser.parse_args()\n\n tf.logging.set_verbosity(getattr(tf.logging, args.log_level))\n\n # Setup cluster if defined.\n if args.chief_host:\n os.environ[\"TF_CONFIG\"] = json.dumps({\n \"cluster\": {\n \"chief\": [args.chief_host],\n \"worker\": args.worker_hosts.split(\",\"),\n \"ps\": args.ps_hosts.split(\",\")\n },\n \"task\": {\n \"type\": args.task_type,\n \"index\": args.task_index\n }\n })\n\n # Load and merge run configurations.\n config = load_config(args.config)\n if args.run_dir:\n config[\"model_dir\"] = os.path.join(args.run_dir, config[\"model_dir\"])\n if args.data_dir:\n config[\"data\"] = _prefix_paths(args.data_dir, config[\"data\"])\n\n if not os.path.isdir(config[\"model_dir\"]):\n tf.logging.info(\"Creating model directory %s\", config[\"model_dir\"])\n os.makedirs(config[\"model_dir\"])\n\n model = load_model(config[\"model_dir\"], model_file=args.model, model_name=args.model_type)\n session_config = tf.ConfigProto(\n intra_op_parallelism_threads=args.intra_op_parallelism_threads,\n inter_op_parallelism_threads=args.inter_op_parallelism_threads)\n runner = Runner(\n model,\n config,\n seed=args.seed,\n num_devices=args.num_gpus,\n gpu_allow_growth=args.gpu_allow_growth,\n session_config=session_config)\n\n if args.run == \"train_and_eval\":\n runner.train_and_evaluate()\n elif args.run == \"train\":\n runner.train()\n elif args.run == \"eval\":\n runner.evaluate(checkpoint_path=args.checkpoint_path)\n elif args.run == \"infer\":\n if not args.features_file:\n parser.error(\"--features_file is required for inference.\")\n elif 
len(args.features_file) == 1:\n args.features_file = args.features_file[0]\n runner.infer(\n args.features_file,\n predictions_file=args.predictions_file,\n checkpoint_path=args.checkpoint_path,\n log_time=args.log_prediction_time)\n elif args.run == \"export\":\n runner.export(checkpoint_path=args.checkpoint_path)\n elif args.run == \"score\":\n if not args.features_file:\n parser.error(\"--features_file is required for scoring.\")\n if not args.predictions_file:\n parser.error(\"--predictions_file is required for scoring.\")\n runner.score(\n args.features_file,\n args.predictions_file,\n checkpoint_path=args.checkpoint_path)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "opennmt/bin/main.py"}]}
| 2,399 | 130 |
gh_patches_debug_39227
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-3930
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use astropy instead of pyfits to read FITS images
pyfits is not currently installable on Python 3.5, and it looks like AstroPy is more actively maintained.
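A minimal sketch of what the import swap could look like (the error message wording is illustrative):
```python
try:
    from astropy.io import fits
except ImportError:
    raise ImportError(
        "Astropy could not be found. It is needed to read FITS files.\n"
        "Please refer to http://www.astropy.org for installation instructions.")
```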
</issue>
<code>
[start of skimage/io/_plugins/fits_plugin.py]
1 __all__ = ['imread', 'imread_collection']
2
3 import skimage.io as io
4
5 try:
6 from astropy.io import fits as pyfits
7 except ImportError:
8 try:
9 import pyfits
10 except ImportError:
11 raise ImportError(
12 "PyFITS could not be found. Please refer to\n"
13 "http://www.stsci.edu/resources/software_hardware/pyfits\n"
14 "for further instructions.")
15
16
17 def imread(fname, dtype=None):
18 """Load an image from a FITS file.
19
20 Parameters
21 ----------
22 fname : string
23 Image file name, e.g. ``test.fits``.
24 dtype : dtype, optional
25 For FITS, this argument is ignored because Stefan is planning on
26 removing the dtype argument from imread anyway.
27
28 Returns
29 -------
30 img_array : ndarray
31 Unlike plugins such as PIL, where different color bands/channels are
32 stored in the third dimension, FITS images are greyscale-only and can
33 be N-dimensional, so an array of the native FITS dimensionality is
34 returned, without color channels.
35
36 Currently if no image is found in the file, None will be returned
37
38 Notes
39 -----
40
41 Currently FITS ``imread()`` always returns the first image extension when
42 given a Multi-Extension FITS file; use ``imread_collection()`` (which does
43 lazy loading) to get all the extensions at once.
44
45 """
46
47 hdulist = pyfits.open(fname)
48
49 # Iterate over FITS image extensions, ignoring any other extension types
50 # such as binary tables, and get the first image data array:
51 img_array = None
52 for hdu in hdulist:
53 if isinstance(hdu, pyfits.ImageHDU) or \
54 isinstance(hdu, pyfits.PrimaryHDU):
55 if hdu.data is not None:
56 img_array = hdu.data
57 break
58 hdulist.close()
59
60 return img_array
61
62
63 def imread_collection(load_pattern, conserve_memory=True):
64 """Load a collection of images from one or more FITS files
65
66 Parameters
67 ----------
68 load_pattern : str or list
69 List of extensions to load. Filename globbing is currently
70 unsupported.
71 converve_memory : bool
72 If True, never keep more than one in memory at a specific
73 time. Otherwise, images will be cached once they are loaded.
74
75 Returns
76 -------
77
78 ic : ImageCollection
79 Collection of images.
80
81 """
82
83 intype = type(load_pattern)
84 if intype is not list and intype is not str:
85 raise TypeError("Input must be a filename or list of filenames")
86
87 # Ensure we have a list, otherwise we'll end up iterating over the string:
88 if intype is not list:
89 load_pattern = [load_pattern]
90
91 # Generate a list of filename/extension pairs by opening the list of
92 # files and finding the image extensions in each one:
93 ext_list = []
94 for filename in load_pattern:
95 hdulist = pyfits.open(filename)
96 for n, hdu in zip(range(len(hdulist)), hdulist):
97 if isinstance(hdu, pyfits.ImageHDU) or \
98 isinstance(hdu, pyfits.PrimaryHDU):
99 # Ignore (primary) header units with no data (use '.size'
100 # rather than '.data' to avoid actually loading the image):
101 try:
102 data_size = hdu.size()
103 except TypeError: # (size changed to int in PyFITS 3.1)
104 data_size = hdu.size
105 if data_size > 0:
106 ext_list.append((filename, n))
107 hdulist.close()
108
109 return io.ImageCollection(ext_list, load_func=FITSFactory,
110 conserve_memory=conserve_memory)
111
112
113 def FITSFactory(image_ext):
114 """Load an image extension from a FITS file and return a NumPy array
115
116 Parameters
117 ----------
118
119 image_ext : tuple
120 FITS extension to load, in the format ``(filename, ext_num)``.
121 The FITS ``(extname, extver)`` format is unsupported, since this
122 function is not called directly by the user and
123 ``imread_collection()`` does the work of figuring out which
124 extensions need loading.
125
126 """
127
128 # Expect a length-2 tuple with a filename as the first element:
129 if not isinstance(image_ext, tuple):
130 raise TypeError("Expected a tuple")
131
132 if len(image_ext) != 2:
133 raise ValueError("Expected a tuple of length 2")
134
135 filename = image_ext[0]
136 extnum = image_ext[1]
137
138 if type(filename) is not str or type(extnum) is not int:
139 raise ValueError("Expected a (filename, extension) tuple")
140
141 hdulist = pyfits.open(filename)
142
143 data = hdulist[extnum].data
144
145 hdulist.close()
146
147 if data is None:
148 raise RuntimeError(
149 "Extension %d of %s has no data" % (extnum, filename))
150
151 return data
152
[end of skimage/io/_plugins/fits_plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/skimage/io/_plugins/fits_plugin.py b/skimage/io/_plugins/fits_plugin.py
--- a/skimage/io/_plugins/fits_plugin.py
+++ b/skimage/io/_plugins/fits_plugin.py
@@ -3,15 +3,12 @@
import skimage.io as io
try:
- from astropy.io import fits as pyfits
+ from astropy.io import fits
except ImportError:
- try:
- import pyfits
- except ImportError:
- raise ImportError(
- "PyFITS could not be found. Please refer to\n"
- "http://www.stsci.edu/resources/software_hardware/pyfits\n"
- "for further instructions.")
+ raise ImportError(
+ "Astropy could not be found. It is needed to read FITS files.\n"
+ "Please refer to http://www.astropy.org for installation\n"
+ "instructions.")
def imread(fname, dtype=None):
@@ -44,14 +41,14 @@
"""
- hdulist = pyfits.open(fname)
+ hdulist = fits.open(fname)
# Iterate over FITS image extensions, ignoring any other extension types
# such as binary tables, and get the first image data array:
img_array = None
for hdu in hdulist:
- if isinstance(hdu, pyfits.ImageHDU) or \
- isinstance(hdu, pyfits.PrimaryHDU):
+ if isinstance(hdu, fits.ImageHDU) or \
+ isinstance(hdu, fits.PrimaryHDU):
if hdu.data is not None:
img_array = hdu.data
break
@@ -92,16 +89,16 @@
# files and finding the image extensions in each one:
ext_list = []
for filename in load_pattern:
- hdulist = pyfits.open(filename)
+ hdulist = fits.open(filename)
for n, hdu in zip(range(len(hdulist)), hdulist):
- if isinstance(hdu, pyfits.ImageHDU) or \
- isinstance(hdu, pyfits.PrimaryHDU):
+ if isinstance(hdu, fits.ImageHDU) or \
+ isinstance(hdu, fits.PrimaryHDU):
# Ignore (primary) header units with no data (use '.size'
# rather than '.data' to avoid actually loading the image):
try:
+ data_size = hdu.size # size is int in Astropy 3.1.2
+ except TypeError:
data_size = hdu.size()
- except TypeError: # (size changed to int in PyFITS 3.1)
- data_size = hdu.size
if data_size > 0:
ext_list.append((filename, n))
hdulist.close()
@@ -138,7 +135,7 @@
if type(filename) is not str or type(extnum) is not int:
raise ValueError("Expected a (filename, extension) tuple")
- hdulist = pyfits.open(filename)
+ hdulist = fits.open(filename)
data = hdulist[extnum].data
|
{"golden_diff": "diff --git a/skimage/io/_plugins/fits_plugin.py b/skimage/io/_plugins/fits_plugin.py\n--- a/skimage/io/_plugins/fits_plugin.py\n+++ b/skimage/io/_plugins/fits_plugin.py\n@@ -3,15 +3,12 @@\n import skimage.io as io\n \n try:\n- from astropy.io import fits as pyfits\n+ from astropy.io import fits\n except ImportError:\n- try:\n- import pyfits\n- except ImportError:\n- raise ImportError(\n- \"PyFITS could not be found. Please refer to\\n\"\n- \"http://www.stsci.edu/resources/software_hardware/pyfits\\n\"\n- \"for further instructions.\")\n+ raise ImportError(\n+ \"Astropy could not be found. It is needed to read FITS files.\\n\"\n+ \"Please refer to http://www.astropy.org for installation\\n\"\n+ \"instructions.\")\n \n \n def imread(fname, dtype=None):\n@@ -44,14 +41,14 @@\n \n \"\"\"\n \n- hdulist = pyfits.open(fname)\n+ hdulist = fits.open(fname)\n \n # Iterate over FITS image extensions, ignoring any other extension types\n # such as binary tables, and get the first image data array:\n img_array = None\n for hdu in hdulist:\n- if isinstance(hdu, pyfits.ImageHDU) or \\\n- isinstance(hdu, pyfits.PrimaryHDU):\n+ if isinstance(hdu, fits.ImageHDU) or \\\n+ isinstance(hdu, fits.PrimaryHDU):\n if hdu.data is not None:\n img_array = hdu.data\n break\n@@ -92,16 +89,16 @@\n # files and finding the image extensions in each one:\n ext_list = []\n for filename in load_pattern:\n- hdulist = pyfits.open(filename)\n+ hdulist = fits.open(filename)\n for n, hdu in zip(range(len(hdulist)), hdulist):\n- if isinstance(hdu, pyfits.ImageHDU) or \\\n- isinstance(hdu, pyfits.PrimaryHDU):\n+ if isinstance(hdu, fits.ImageHDU) or \\\n+ isinstance(hdu, fits.PrimaryHDU):\n # Ignore (primary) header units with no data (use '.size'\n # rather than '.data' to avoid actually loading the image):\n try:\n+ data_size = hdu.size # size is int in Astropy 3.1.2\n+ except TypeError:\n data_size = hdu.size()\n- except TypeError: # (size changed to int in PyFITS 3.1)\n- data_size = hdu.size\n if data_size > 0:\n ext_list.append((filename, n))\n hdulist.close()\n@@ -138,7 +135,7 @@\n if type(filename) is not str or type(extnum) is not int:\n raise ValueError(\"Expected a (filename, extension) tuple\")\n \n- hdulist = pyfits.open(filename)\n+ hdulist = fits.open(filename)\n \n data = hdulist[extnum].data\n", "issue": "Use astropy instead of pyfits to read FITS images\npyfits is not currently installable on 3.5, and it looks like AstroPy is more actively maintained.\n", "before_files": [{"content": "__all__ = ['imread', 'imread_collection']\n\nimport skimage.io as io\n\ntry:\n from astropy.io import fits as pyfits\nexcept ImportError:\n try:\n import pyfits\n except ImportError:\n raise ImportError(\n \"PyFITS could not be found. Please refer to\\n\"\n \"http://www.stsci.edu/resources/software_hardware/pyfits\\n\"\n \"for further instructions.\")\n\n\ndef imread(fname, dtype=None):\n \"\"\"Load an image from a FITS file.\n\n Parameters\n ----------\n fname : string\n Image file name, e.g. 
``test.fits``.\n dtype : dtype, optional\n For FITS, this argument is ignored because Stefan is planning on\n removing the dtype argument from imread anyway.\n\n Returns\n -------\n img_array : ndarray\n Unlike plugins such as PIL, where different color bands/channels are\n stored in the third dimension, FITS images are greyscale-only and can\n be N-dimensional, so an array of the native FITS dimensionality is\n returned, without color channels.\n\n Currently if no image is found in the file, None will be returned\n\n Notes\n -----\n\n Currently FITS ``imread()`` always returns the first image extension when\n given a Multi-Extension FITS file; use ``imread_collection()`` (which does\n lazy loading) to get all the extensions at once.\n\n \"\"\"\n\n hdulist = pyfits.open(fname)\n\n # Iterate over FITS image extensions, ignoring any other extension types\n # such as binary tables, and get the first image data array:\n img_array = None\n for hdu in hdulist:\n if isinstance(hdu, pyfits.ImageHDU) or \\\n isinstance(hdu, pyfits.PrimaryHDU):\n if hdu.data is not None:\n img_array = hdu.data\n break\n hdulist.close()\n\n return img_array\n\n\ndef imread_collection(load_pattern, conserve_memory=True):\n \"\"\"Load a collection of images from one or more FITS files\n\n Parameters\n ----------\n load_pattern : str or list\n List of extensions to load. Filename globbing is currently\n unsupported.\n converve_memory : bool\n If True, never keep more than one in memory at a specific\n time. Otherwise, images will be cached once they are loaded.\n\n Returns\n -------\n\n ic : ImageCollection\n Collection of images.\n\n \"\"\"\n\n intype = type(load_pattern)\n if intype is not list and intype is not str:\n raise TypeError(\"Input must be a filename or list of filenames\")\n\n # Ensure we have a list, otherwise we'll end up iterating over the string:\n if intype is not list:\n load_pattern = [load_pattern]\n\n # Generate a list of filename/extension pairs by opening the list of\n # files and finding the image extensions in each one:\n ext_list = []\n for filename in load_pattern:\n hdulist = pyfits.open(filename)\n for n, hdu in zip(range(len(hdulist)), hdulist):\n if isinstance(hdu, pyfits.ImageHDU) or \\\n isinstance(hdu, pyfits.PrimaryHDU):\n # Ignore (primary) header units with no data (use '.size'\n # rather than '.data' to avoid actually loading the image):\n try:\n data_size = hdu.size()\n except TypeError: # (size changed to int in PyFITS 3.1)\n data_size = hdu.size\n if data_size > 0:\n ext_list.append((filename, n))\n hdulist.close()\n\n return io.ImageCollection(ext_list, load_func=FITSFactory,\n conserve_memory=conserve_memory)\n\n\ndef FITSFactory(image_ext):\n \"\"\"Load an image extension from a FITS file and return a NumPy array\n\n Parameters\n ----------\n\n image_ext : tuple\n FITS extension to load, in the format ``(filename, ext_num)``.\n The FITS ``(extname, extver)`` format is unsupported, since this\n function is not called directly by the user and\n ``imread_collection()`` does the work of figuring out which\n extensions need loading.\n\n \"\"\"\n\n # Expect a length-2 tuple with a filename as the first element:\n if not isinstance(image_ext, tuple):\n raise TypeError(\"Expected a tuple\")\n\n if len(image_ext) != 2:\n raise ValueError(\"Expected a tuple of length 2\")\n\n filename = image_ext[0]\n extnum = image_ext[1]\n\n if type(filename) is not str or type(extnum) is not int:\n raise ValueError(\"Expected a (filename, extension) tuple\")\n\n hdulist = pyfits.open(filename)\n\n 
data = hdulist[extnum].data\n\n hdulist.close()\n\n if data is None:\n raise RuntimeError(\n \"Extension %d of %s has no data\" % (extnum, filename))\n\n return data\n", "path": "skimage/io/_plugins/fits_plugin.py"}]}
| 2,019 | 688 |
gh_patches_debug_22343
|
rasdani/github-patches
|
git_diff
|
huggingface__diffusers-7821
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Multi-controlnet formatting issue
### Describe the bug
Hi.
There is an inconsistency between `from_pretrained` and `save_pretrained` within the MultiControlNet class.
The `from_pretrained` function expects a directory structure like this: controlnet, controlnet_1, controlnet_2,
whereas `save_pretrained` produces one like this: controlnet, controlnet_1, controlnet_1_2.
When loading a saved model with 3 controlnets, the last controlnet will not be loaded (the same issue occurs with any number greater than 2).
### Reproduction
I don't think there is any need to reproduce the code, as the issue is pretty clear.
For compatibility, how about changing the `save_pretrained` function in Multi-ControlNet to look like the code below?
```
def save_pretrained(
self,
save_directory: Union[str, os.PathLike],
is_main_process: bool = True,
save_function: Callable = None,
safe_serialization: bool = True,
variant: Optional[str] = None,
):
"""
Save a model and its configuration file to a directory, so that it can be re-loaded using the
`[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.
Arguments:
save_directory (`str` or `os.PathLike`):
Directory to which to save. Will be created if it doesn't exist.
is_main_process (`bool`, *optional*, defaults to `True`):
Whether the process calling this is the main process or not. Useful when in distributed training like
TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on
the main process to avoid race conditions.
save_function (`Callable`):
The function to use to save the state dictionary. Useful on distributed training like TPUs when one
need to replace `torch.save` by another method. Can be configured with the environment variable
`DIFFUSERS_SAVE_MODE`.
safe_serialization (`bool`, *optional*, defaults to `True`):
Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
variant (`str`, *optional*):
If specified, weights are saved in the format pytorch_model.<variant>.bin.
"""
model_path_to_save = save_directory
for idx, controlnet in enumerate(self.nets):
suffix = "" if idx == 0 else f"_{idx}"
controlnet.save_pretrained(
model_path_to_save + suffix,
is_main_process=is_main_process,
save_function=save_function,
safe_serialization=safe_serialization,
variant=variant,
)
```
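For completeness, a minimal round-trip sketch — the model id and save path below are placeholders, not part of the report:
```python
from diffusers import ControlNetModel
from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel

nets = [ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny") for _ in range(3)]
multi = MultiControlNetModel(nets)
multi.save_pretrained("./multi_controlnet")   # today this writes controlnet, controlnet_1, controlnet_1_2
reloaded = MultiControlNetModel.from_pretrained("./multi_controlnet")
assert len(reloaded.nets) == 3                # fails today; should pass once save and load agree on the _<idx> suffix
```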
### Logs
_No response_
### System Info
Diffusers 0.27.2
### Who can help?
@sayakpaul
</issue>
<code>
[start of src/diffusers/pipelines/controlnet/multicontrolnet.py]
1 import os
2 from typing import Any, Callable, Dict, List, Optional, Tuple, Union
3
4 import torch
5 from torch import nn
6
7 from ...models.controlnet import ControlNetModel, ControlNetOutput
8 from ...models.modeling_utils import ModelMixin
9 from ...utils import logging
10
11
12 logger = logging.get_logger(__name__)
13
14
15 class MultiControlNetModel(ModelMixin):
16 r"""
17 Multiple `ControlNetModel` wrapper class for Multi-ControlNet
18
19 This module is a wrapper for multiple instances of the `ControlNetModel`. The `forward()` API is designed to be
20 compatible with `ControlNetModel`.
21
22 Args:
23 controlnets (`List[ControlNetModel]`):
24 Provides additional conditioning to the unet during the denoising process. You must set multiple
25 `ControlNetModel` as a list.
26 """
27
28 def __init__(self, controlnets: Union[List[ControlNetModel], Tuple[ControlNetModel]]):
29 super().__init__()
30 self.nets = nn.ModuleList(controlnets)
31
32 def forward(
33 self,
34 sample: torch.Tensor,
35 timestep: Union[torch.Tensor, float, int],
36 encoder_hidden_states: torch.Tensor,
37 controlnet_cond: List[torch.tensor],
38 conditioning_scale: List[float],
39 class_labels: Optional[torch.Tensor] = None,
40 timestep_cond: Optional[torch.Tensor] = None,
41 attention_mask: Optional[torch.Tensor] = None,
42 added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,
43 cross_attention_kwargs: Optional[Dict[str, Any]] = None,
44 guess_mode: bool = False,
45 return_dict: bool = True,
46 ) -> Union[ControlNetOutput, Tuple]:
47 for i, (image, scale, controlnet) in enumerate(zip(controlnet_cond, conditioning_scale, self.nets)):
48 down_samples, mid_sample = controlnet(
49 sample=sample,
50 timestep=timestep,
51 encoder_hidden_states=encoder_hidden_states,
52 controlnet_cond=image,
53 conditioning_scale=scale,
54 class_labels=class_labels,
55 timestep_cond=timestep_cond,
56 attention_mask=attention_mask,
57 added_cond_kwargs=added_cond_kwargs,
58 cross_attention_kwargs=cross_attention_kwargs,
59 guess_mode=guess_mode,
60 return_dict=return_dict,
61 )
62
63 # merge samples
64 if i == 0:
65 down_block_res_samples, mid_block_res_sample = down_samples, mid_sample
66 else:
67 down_block_res_samples = [
68 samples_prev + samples_curr
69 for samples_prev, samples_curr in zip(down_block_res_samples, down_samples)
70 ]
71 mid_block_res_sample += mid_sample
72
73 return down_block_res_samples, mid_block_res_sample
74
75 def save_pretrained(
76 self,
77 save_directory: Union[str, os.PathLike],
78 is_main_process: bool = True,
79 save_function: Callable = None,
80 safe_serialization: bool = True,
81 variant: Optional[str] = None,
82 ):
83 """
84 Save a model and its configuration file to a directory, so that it can be re-loaded using the
85 `[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.
86
87 Arguments:
88 save_directory (`str` or `os.PathLike`):
89 Directory to which to save. Will be created if it doesn't exist.
90 is_main_process (`bool`, *optional*, defaults to `True`):
91 Whether the process calling this is the main process or not. Useful when in distributed training like
92 TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on
93 the main process to avoid race conditions.
94 save_function (`Callable`):
95 The function to use to save the state dictionary. Useful on distributed training like TPUs when one
96 need to replace `torch.save` by another method. Can be configured with the environment variable
97 `DIFFUSERS_SAVE_MODE`.
98 safe_serialization (`bool`, *optional*, defaults to `True`):
99 Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
100 variant (`str`, *optional*):
101 If specified, weights are saved in the format pytorch_model.<variant>.bin.
102 """
103 idx = 0
104 model_path_to_save = save_directory
105 for controlnet in self.nets:
106 controlnet.save_pretrained(
107 model_path_to_save,
108 is_main_process=is_main_process,
109 save_function=save_function,
110 safe_serialization=safe_serialization,
111 variant=variant,
112 )
113
114 idx += 1
115 model_path_to_save = model_path_to_save + f"_{idx}"
116
117 @classmethod
118 def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):
119 r"""
120 Instantiate a pretrained MultiControlNet model from multiple pre-trained controlnet models.
121
122 The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train
123 the model, you should first set it back in training mode with `model.train()`.
124
125 The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come
126 pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning
127 task.
128
129 The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those
130 weights are discarded.
131
132 Parameters:
133 pretrained_model_path (`os.PathLike`):
134 A path to a *directory* containing model weights saved using
135 [`~diffusers.pipelines.controlnet.MultiControlNetModel.save_pretrained`], e.g.,
136 `./my_model_directory/controlnet`.
137 torch_dtype (`str` or `torch.dtype`, *optional*):
138 Override the default `torch.dtype` and load the model under this dtype. If `"auto"` is passed the dtype
139 will be automatically derived from the model's weights.
140 output_loading_info(`bool`, *optional*, defaults to `False`):
141 Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
142 device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):
143 A map that specifies where each submodule should go. It doesn't need to be refined to each
144 parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the
145 same device.
146
147 To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For
148 more information about each option see [designing a device
149 map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
150 max_memory (`Dict`, *optional*):
151 A dictionary device identifier to maximum memory. Will default to the maximum memory available for each
152 GPU and the available CPU RAM if unset.
153 low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):
154 Speed up model loading by not initializing the weights and only loading the pre-trained weights. This
155 also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the
156 model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch,
157 setting this argument to `True` will raise an error.
158 variant (`str`, *optional*):
159 If specified load weights from `variant` filename, *e.g.* pytorch_model.<variant>.bin. `variant` is
160 ignored when using `from_flax`.
161 use_safetensors (`bool`, *optional*, defaults to `None`):
162 If set to `None`, the `safetensors` weights will be downloaded if they're available **and** if the
163 `safetensors` library is installed. If set to `True`, the model will be forcibly loaded from
164 `safetensors` weights. If set to `False`, loading will *not* use `safetensors`.
165 """
166 idx = 0
167 controlnets = []
168
169 # load controlnet and append to list until no controlnet directory exists anymore
170 # first controlnet has to be saved under `./mydirectory/controlnet` to be compliant with `DiffusionPipeline.from_prertained`
171 # second, third, ... controlnets have to be saved under `./mydirectory/controlnet_1`, `./mydirectory/controlnet_2`, ...
172 model_path_to_load = pretrained_model_path
173 while os.path.isdir(model_path_to_load):
174 controlnet = ControlNetModel.from_pretrained(model_path_to_load, **kwargs)
175 controlnets.append(controlnet)
176
177 idx += 1
178 model_path_to_load = pretrained_model_path + f"_{idx}"
179
180 logger.info(f"{len(controlnets)} controlnets loaded from {pretrained_model_path}.")
181
182 if len(controlnets) == 0:
183 raise ValueError(
184 f"No ControlNets found under {os.path.dirname(pretrained_model_path)}. Expected at least {pretrained_model_path + '_0'}."
185 )
186
187 return cls(controlnets)
188
[end of src/diffusers/pipelines/controlnet/multicontrolnet.py]
</code>
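For reference, here is a minimal sketch (plain Python, not part of the repository; the base name `controlnet` is hypothetical) that reproduces the suffix logic of the `save_pretrained` and `from_pretrained` methods above, to show where the directory names diverge once a third ControlNet is involved:
```python
# Sketch only: mirrors the suffix handling in MultiControlNetModel above.

def saved_dirs(base, n):
    # save_pretrained: the suffix accumulates on the *previous* path
    dirs, path = [], base
    for idx in range(1, n + 1):
        dirs.append(path)
        path = path + f"_{idx}"
    return dirs

def loaded_dirs(base, n):
    # from_pretrained: the suffix is always appended to the original base
    return [base if idx == 0 else base + f"_{idx}" for idx in range(n)]

print(saved_dirs("controlnet", 3))   # ['controlnet', 'controlnet_1', 'controlnet_1_2']
print(loaded_dirs("controlnet", 3))  # ['controlnet', 'controlnet_1', 'controlnet_2']
```
With three or more ControlNets the directories written by `save_pretrained` and the directories probed by `from_pretrained` no longer match, which is the inconsistency described in the issue.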
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/diffusers/pipelines/controlnet/multicontrolnet.py b/src/diffusers/pipelines/controlnet/multicontrolnet.py
--- a/src/diffusers/pipelines/controlnet/multicontrolnet.py
+++ b/src/diffusers/pipelines/controlnet/multicontrolnet.py
@@ -100,20 +100,16 @@
variant (`str`, *optional*):
If specified, weights are saved in the format pytorch_model.<variant>.bin.
"""
- idx = 0
- model_path_to_save = save_directory
- for controlnet in self.nets:
+ for idx, controlnet in enumerate(self.nets):
+ suffix = "" if idx == 0 else f"_{idx}"
controlnet.save_pretrained(
- model_path_to_save,
+ save_directory + suffix,
is_main_process=is_main_process,
save_function=save_function,
safe_serialization=safe_serialization,
variant=variant,
)
- idx += 1
- model_path_to_save = model_path_to_save + f"_{idx}"
-
@classmethod
def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):
r"""
|
{"golden_diff": "diff --git a/src/diffusers/pipelines/controlnet/multicontrolnet.py b/src/diffusers/pipelines/controlnet/multicontrolnet.py\n--- a/src/diffusers/pipelines/controlnet/multicontrolnet.py\n+++ b/src/diffusers/pipelines/controlnet/multicontrolnet.py\n@@ -100,20 +100,16 @@\n variant (`str`, *optional*):\n If specified, weights are saved in the format pytorch_model.<variant>.bin.\n \"\"\"\n- idx = 0\n- model_path_to_save = save_directory\n- for controlnet in self.nets:\n+ for idx, controlnet in enumerate(self.nets):\n+ suffix = \"\" if idx == 0 else f\"_{idx}\"\n controlnet.save_pretrained(\n- model_path_to_save,\n+ save_directory + suffix,\n is_main_process=is_main_process,\n save_function=save_function,\n safe_serialization=safe_serialization,\n variant=variant,\n )\n \n- idx += 1\n- model_path_to_save = model_path_to_save + f\"_{idx}\"\n-\n @classmethod\n def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):\n r\"\"\"\n", "issue": "Multi-controlnet formatting issue\n### Describe the bug\r\n\r\nHi.\r\nThere is an inconsistency between `from_pretrained` and `save_pretrained` within the Multicontrolnet class.\r\nThe from_pretrained function returns a directory structure like this: controlnet, controlnet_1, controlnet_2, \r\nwhereas save_pretrained is like this: controlnet, controlnet_1, controlnet_1_2.\r\nWhen loading a saved model, if there are 3 controlnets, the last controlnet will not be loaded. (more than 2 always same issue)\r\n\r\n\r\n### Reproduction\r\n\r\nI don't think there is no need to reproduce the code as it's pretty clear issue.\r\nFor compatibility, how about changing the `save_pretrained` function in Multi-ControlNet to look like the code below? \r\n```\r\ndef save_pretrained(\r\n self,\r\n save_directory: Union[str, os.PathLike],\r\n is_main_process: bool = True,\r\n save_function: Callable = None,\r\n safe_serialization: bool = True,\r\n variant: Optional[str] = None,\r\n ):\r\n \"\"\"\r\n Save a model and its configuration file to a directory, so that it can be re-loaded using the\r\n `[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.\r\n\r\n Arguments:\r\n save_directory (`str` or `os.PathLike`):\r\n Directory to which to save. Will be created if it doesn't exist.\r\n is_main_process (`bool`, *optional*, defaults to `True`):\r\n Whether the process calling this is the main process or not. Useful when in distributed training like\r\n TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on\r\n the main process to avoid race conditions.\r\n save_function (`Callable`):\r\n The function to use to save the state dictionary. Useful on distributed training like TPUs when one\r\n need to replace `torch.save` by another method. 
Can be configured with the environment variable\r\n `DIFFUSERS_SAVE_MODE`.\r\n safe_serialization (`bool`, *optional*, defaults to `True`):\r\n Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).\r\n variant (`str`, *optional*):\r\n If specified, weights are saved in the format pytorch_model.<variant>.bin.\r\n \"\"\"\r\n model_path_to_save = save_directory\r\n for idx, controlnet in enumerate(self.nets):\r\n suffix = \"\" if idx == 0 else f\"_{idx}\"\r\n controlnet.save_pretrained(\r\n model_path_to_save + suffix,\r\n is_main_process=is_main_process,\r\n save_function=save_function,\r\n safe_serialization=safe_serialization,\r\n variant=variant,\r\n )\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nDiffusers 0.27.2\r\n\r\n### Who can help?\r\n\r\n@sayakpaul \n", "before_files": [{"content": "import os\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, Union\n\nimport torch\nfrom torch import nn\n\nfrom ...models.controlnet import ControlNetModel, ControlNetOutput\nfrom ...models.modeling_utils import ModelMixin\nfrom ...utils import logging\n\n\nlogger = logging.get_logger(__name__)\n\n\nclass MultiControlNetModel(ModelMixin):\n r\"\"\"\n Multiple `ControlNetModel` wrapper class for Multi-ControlNet\n\n This module is a wrapper for multiple instances of the `ControlNetModel`. The `forward()` API is designed to be\n compatible with `ControlNetModel`.\n\n Args:\n controlnets (`List[ControlNetModel]`):\n Provides additional conditioning to the unet during the denoising process. You must set multiple\n `ControlNetModel` as a list.\n \"\"\"\n\n def __init__(self, controlnets: Union[List[ControlNetModel], Tuple[ControlNetModel]]):\n super().__init__()\n self.nets = nn.ModuleList(controlnets)\n\n def forward(\n self,\n sample: torch.Tensor,\n timestep: Union[torch.Tensor, float, int],\n encoder_hidden_states: torch.Tensor,\n controlnet_cond: List[torch.tensor],\n conditioning_scale: List[float],\n class_labels: Optional[torch.Tensor] = None,\n timestep_cond: Optional[torch.Tensor] = None,\n attention_mask: Optional[torch.Tensor] = None,\n added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,\n cross_attention_kwargs: Optional[Dict[str, Any]] = None,\n guess_mode: bool = False,\n return_dict: bool = True,\n ) -> Union[ControlNetOutput, Tuple]:\n for i, (image, scale, controlnet) in enumerate(zip(controlnet_cond, conditioning_scale, self.nets)):\n down_samples, mid_sample = controlnet(\n sample=sample,\n timestep=timestep,\n encoder_hidden_states=encoder_hidden_states,\n controlnet_cond=image,\n conditioning_scale=scale,\n class_labels=class_labels,\n timestep_cond=timestep_cond,\n attention_mask=attention_mask,\n added_cond_kwargs=added_cond_kwargs,\n cross_attention_kwargs=cross_attention_kwargs,\n guess_mode=guess_mode,\n return_dict=return_dict,\n )\n\n # merge samples\n if i == 0:\n down_block_res_samples, mid_block_res_sample = down_samples, mid_sample\n else:\n down_block_res_samples = [\n samples_prev + samples_curr\n for samples_prev, samples_curr in zip(down_block_res_samples, down_samples)\n ]\n mid_block_res_sample += mid_sample\n\n return down_block_res_samples, mid_block_res_sample\n\n def save_pretrained(\n self,\n save_directory: Union[str, os.PathLike],\n is_main_process: bool = True,\n save_function: Callable = None,\n safe_serialization: bool = True,\n variant: Optional[str] = None,\n ):\n \"\"\"\n Save a model and its configuration file to a directory, so that it can be re-loaded using 
the\n `[`~pipelines.controlnet.MultiControlNetModel.from_pretrained`]` class method.\n\n Arguments:\n save_directory (`str` or `os.PathLike`):\n Directory to which to save. Will be created if it doesn't exist.\n is_main_process (`bool`, *optional*, defaults to `True`):\n Whether the process calling this is the main process or not. Useful when in distributed training like\n TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on\n the main process to avoid race conditions.\n save_function (`Callable`):\n The function to use to save the state dictionary. Useful on distributed training like TPUs when one\n need to replace `torch.save` by another method. Can be configured with the environment variable\n `DIFFUSERS_SAVE_MODE`.\n safe_serialization (`bool`, *optional*, defaults to `True`):\n Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).\n variant (`str`, *optional*):\n If specified, weights are saved in the format pytorch_model.<variant>.bin.\n \"\"\"\n idx = 0\n model_path_to_save = save_directory\n for controlnet in self.nets:\n controlnet.save_pretrained(\n model_path_to_save,\n is_main_process=is_main_process,\n save_function=save_function,\n safe_serialization=safe_serialization,\n variant=variant,\n )\n\n idx += 1\n model_path_to_save = model_path_to_save + f\"_{idx}\"\n\n @classmethod\n def from_pretrained(cls, pretrained_model_path: Optional[Union[str, os.PathLike]], **kwargs):\n r\"\"\"\n Instantiate a pretrained MultiControlNet model from multiple pre-trained controlnet models.\n\n The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train\n the model, you should first set it back in training mode with `model.train()`.\n\n The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come\n pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning\n task.\n\n The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those\n weights are discarded.\n\n Parameters:\n pretrained_model_path (`os.PathLike`):\n A path to a *directory* containing model weights saved using\n [`~diffusers.pipelines.controlnet.MultiControlNetModel.save_pretrained`], e.g.,\n `./my_model_directory/controlnet`.\n torch_dtype (`str` or `torch.dtype`, *optional*):\n Override the default `torch.dtype` and load the model under this dtype. If `\"auto\"` is passed the dtype\n will be automatically derived from the model's weights.\n output_loading_info(`bool`, *optional*, defaults to `False`):\n Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.\n device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):\n A map that specifies where each submodule should go. It doesn't need to be refined to each\n parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the\n same device.\n\n To have Accelerate compute the most optimized `device_map` automatically, set `device_map=\"auto\"`. For\n more information about each option see [designing a device\n map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).\n max_memory (`Dict`, *optional*):\n A dictionary device identifier to maximum memory. 
Will default to the maximum memory available for each\n GPU and the available CPU RAM if unset.\n low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):\n Speed up model loading by not initializing the weights and only loading the pre-trained weights. This\n also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the\n model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch,\n setting this argument to `True` will raise an error.\n variant (`str`, *optional*):\n If specified load weights from `variant` filename, *e.g.* pytorch_model.<variant>.bin. `variant` is\n ignored when using `from_flax`.\n use_safetensors (`bool`, *optional*, defaults to `None`):\n If set to `None`, the `safetensors` weights will be downloaded if they're available **and** if the\n `safetensors` library is installed. If set to `True`, the model will be forcibly loaded from\n `safetensors` weights. If set to `False`, loading will *not* use `safetensors`.\n \"\"\"\n idx = 0\n controlnets = []\n\n # load controlnet and append to list until no controlnet directory exists anymore\n # first controlnet has to be saved under `./mydirectory/controlnet` to be compliant with `DiffusionPipeline.from_prertained`\n # second, third, ... controlnets have to be saved under `./mydirectory/controlnet_1`, `./mydirectory/controlnet_2`, ...\n model_path_to_load = pretrained_model_path\n while os.path.isdir(model_path_to_load):\n controlnet = ControlNetModel.from_pretrained(model_path_to_load, **kwargs)\n controlnets.append(controlnet)\n\n idx += 1\n model_path_to_load = pretrained_model_path + f\"_{idx}\"\n\n logger.info(f\"{len(controlnets)} controlnets loaded from {pretrained_model_path}.\")\n\n if len(controlnets) == 0:\n raise ValueError(\n f\"No ControlNets found under {os.path.dirname(pretrained_model_path)}. Expected at least {pretrained_model_path + '_0'}.\"\n )\n\n return cls(controlnets)\n", "path": "src/diffusers/pipelines/controlnet/multicontrolnet.py"}]}
| 3,626 | 273 |
gh_patches_debug_34926
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-3770
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clarify the `num_workers` in `ThreadDataLoader`
**Is your feature request related to a problem? Please describe.**
When I was introducing GPU transforms and the associated `ThreadDataLoader` to users, I received feedback several times about the `num_workers` arg: it is confusing because users think it controls the threads in `ThreadDataLoader`, when it actually sets the number of multi-processing workers in the underlying PyTorch DataLoader.
Would be nice to clarify this arg and the use cases.
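A minimal sketch of the kind of call that triggers the confusion (the toy dataset and batch size are only illustrative):
```python
# Assumes the current ThreadDataLoader signature in monai/data/thread_buffer.py.
from monai.data import Dataset, ThreadDataLoader

ds = Dataset(data=[{"image": i} for i in range(8)])

# `num_workers=4` is forwarded to the underlying PyTorch DataLoader and controls
# its worker *processes*; the asynchronous buffering that ThreadDataLoader adds
# always runs in a single background thread, regardless of this value.
loader = ThreadDataLoader(ds, batch_size=2, buffer_size=1, num_workers=4)
```
Making it explicit in the docstring (and in the use cases, e.g. cached or GPU transforms) that this value has nothing to do with the buffering thread would avoid the misunderstanding.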
</issue>
<code>
[start of monai/data/thread_buffer.py]
1 # Copyright (c) MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12
13 from queue import Empty, Full, Queue
14 from threading import Thread
15
16 from monai.data import DataLoader, Dataset
17
18
19 class ThreadBuffer:
20 """
21 Iterates over values from self.src in a separate thread but yielding them in the current thread. This allows values
22 to be queued up asynchronously. The internal thread will continue running so long as the source has values or until
23 the stop() method is called.
24
25 One issue raised by using a thread in this way is that during the lifetime of the thread the source object is being
26 iterated over, so if the thread hasn't finished another attempt to iterate over it will raise an exception or yield
27 unexpected results. To ensure the thread releases the iteration and proper cleanup is done the stop() method must
28 be called which will join with the thread.
29
30 Args:
31 src: Source data iterable
32 buffer_size: Number of items to buffer from the source
33 timeout: Time to wait for an item from the buffer, or to wait while the buffer is full when adding items
34 """
35
36 def __init__(self, src, buffer_size: int = 1, timeout: float = 0.01):
37 self.src = src
38 self.buffer_size = buffer_size
39 self.timeout = timeout
40 self.buffer: Queue = Queue(self.buffer_size)
41 self.gen_thread = None
42 self.is_running = False
43
44 def enqueue_values(self):
45 for src_val in self.src:
46 while self.is_running:
47 try:
48 self.buffer.put(src_val, timeout=self.timeout)
49 except Full:
50 pass # try to add the item again
51 else:
52 break # successfully added the item, quit trying
53 else: # quit the thread cleanly when requested to stop
54 break
55
56 def stop(self):
57 self.is_running = False # signal the thread to exit
58
59 if self.gen_thread is not None:
60 self.gen_thread.join()
61
62 self.gen_thread = None
63
64 def __iter__(self):
65
66 self.is_running = True
67 self.gen_thread = Thread(target=self.enqueue_values, daemon=True)
68 self.gen_thread.start()
69
70 try:
71 while self.is_running and (self.gen_thread.is_alive() or not self.buffer.empty()):
72 try:
73 yield self.buffer.get(timeout=self.timeout)
74 except Empty:
75 pass # queue was empty this time, try again
76 finally:
77 self.stop() # ensure thread completion
78
79
80 class ThreadDataLoader(DataLoader):
81 """
82 Subclass of `DataLoader` using a `ThreadBuffer` object to implement `__iter__` method asynchronously. This will
83 iterate over data from the loader as expected however the data is generated on a separate thread. Use this class
84 where a `DataLoader` instance is required and not just an iterable object.
85
86 The default behaviour with `repeats` set to 1 is to yield each batch as it is generated, however with a higher
87 value the generated batch is yielded that many times while underlying dataset asynchronously generates the next.
88 Typically not all relevant information is learned from a batch in a single iteration so training multiple times
89 on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch
90 generation process more time to produce a result.
91
92 See:
93 * Fischetti et al. "Faster SGD training by minibatch persistency." ArXiv (2018) https://arxiv.org/abs/1806.07353
94 * Dami et al., "Faster Neural Network Training with Data Echoing" ArXiv (2020) https://arxiv.org/abs/1907.05550
95 * Ramezani et al. "GCN meets GPU: Decoupling "When to Sample" from "How to Sample"." NeurIPS (2020).
96 https://proceedings.neurips.cc/paper/2020/file/d714d2c5a796d5814c565d78dd16188d-Paper.pdf
97
98 Args:
99 dataset: input dataset.
100 buffer_size: number of items to buffer from the data source.
101 buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.
102 num_workers: number of the multi-processing workers in PyTorch DataLoader.
103 repeats: number of times to yield the same batch
104 """
105
106 def __init__(
107 self,
108 dataset: Dataset,
109 buffer_size: int = 1,
110 buffer_timeout: float = 0.01,
111 num_workers: int = 0,
112 repeats: int = 1,
113 **kwargs,
114 ):
115 super().__init__(dataset, num_workers, **kwargs)
116 self.buffer_size = buffer_size
117 self.buffer_timeout = buffer_timeout
118 self.repeats = repeats
119
120 def __iter__(self):
121 buffer = ThreadBuffer(src=super().__iter__(), buffer_size=self.buffer_size, timeout=self.buffer_timeout)
122
123 for batch in buffer:
124 for _ in range(self.repeats):
125 yield batch
126
[end of monai/data/thread_buffer.py]
</code>
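For context, a minimal sketch of driving `ThreadBuffer` on its own, following its docstring above (it is assumed to be exported from `monai.data`, like the `DataLoader` and `Dataset` imports at the top of the file):
```python
# Sketch only: iterate a ThreadBuffer and make the cleanup requirement explicit.
from monai.data import ThreadBuffer

buf = ThreadBuffer(src=range(10), buffer_size=2)
try:
    for item in buf:
        print(item)
finally:
    # __iter__ already calls stop() in its finally block, so this extra call is a
    # harmless no-op; it is shown because the docstring requires stop() to be
    # called before the source can be iterated over again.
    buf.stop()
```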
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/monai/data/thread_buffer.py b/monai/data/thread_buffer.py
--- a/monai/data/thread_buffer.py
+++ b/monai/data/thread_buffer.py
@@ -89,6 +89,12 @@
on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch
generation process more time to produce a result.
+ Another typical usage is to accelerate light-weight preprocessing (usually cached all the deterministic transforms
+ and no IO operations), because it leverages the separate thread to execute preprocessing to avoid unnecessary IPC
+ between multiple workers of DataLoader. And as CUDA may not work well with the multi-processing of DataLoader,
+ `ThreadDataLoader` can be useful for GPU transforms. For more details:
+ https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_model_training_guide.md.
+
See:
* Fischetti et al. "Faster SGD training by minibatch persistency." ArXiv (2018) https://arxiv.org/abs/1806.07353
* Dami et al., "Faster Neural Network Training with Data Echoing" ArXiv (2020) https://arxiv.org/abs/1907.05550
@@ -99,20 +105,15 @@
dataset: input dataset.
buffer_size: number of items to buffer from the data source.
buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.
- num_workers: number of the multi-processing workers in PyTorch DataLoader.
- repeats: number of times to yield the same batch
+ repeats: number of times to yield the same batch.
+ kwargs: other arguments for `DataLoader` except for `dataset`.
+
"""
def __init__(
- self,
- dataset: Dataset,
- buffer_size: int = 1,
- buffer_timeout: float = 0.01,
- num_workers: int = 0,
- repeats: int = 1,
- **kwargs,
+ self, dataset: Dataset, buffer_size: int = 1, buffer_timeout: float = 0.01, repeats: int = 1, **kwargs
):
- super().__init__(dataset, num_workers, **kwargs)
+ super().__init__(dataset, **kwargs)
self.buffer_size = buffer_size
self.buffer_timeout = buffer_timeout
self.repeats = repeats
|
{"golden_diff": "diff --git a/monai/data/thread_buffer.py b/monai/data/thread_buffer.py\n--- a/monai/data/thread_buffer.py\n+++ b/monai/data/thread_buffer.py\n@@ -89,6 +89,12 @@\n on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch\n generation process more time to produce a result.\n \n+ Another typical usage is to accelerate light-weight preprocessing (usually cached all the deterministic transforms\n+ and no IO operations), because it leverages the separate thread to execute preprocessing to avoid unnecessary IPC\n+ between multiple workers of DataLoader. And as CUDA may not work well with the multi-processing of DataLoader,\n+ `ThreadDataLoader` can be useful for GPU transforms. For more details:\n+ https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_model_training_guide.md.\n+\n See:\n * Fischetti et al. \"Faster SGD training by minibatch persistency.\" ArXiv (2018) https://arxiv.org/abs/1806.07353\n * Dami et al., \"Faster Neural Network Training with Data Echoing\" ArXiv (2020) https://arxiv.org/abs/1907.05550\n@@ -99,20 +105,15 @@\n dataset: input dataset.\n buffer_size: number of items to buffer from the data source.\n buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.\n- num_workers: number of the multi-processing workers in PyTorch DataLoader.\n- repeats: number of times to yield the same batch\n+ repeats: number of times to yield the same batch.\n+ kwargs: other arguments for `DataLoader` except for `dataset`.\n+\n \"\"\"\n \n def __init__(\n- self,\n- dataset: Dataset,\n- buffer_size: int = 1,\n- buffer_timeout: float = 0.01,\n- num_workers: int = 0,\n- repeats: int = 1,\n- **kwargs,\n+ self, dataset: Dataset, buffer_size: int = 1, buffer_timeout: float = 0.01, repeats: int = 1, **kwargs\n ):\n- super().__init__(dataset, num_workers, **kwargs)\n+ super().__init__(dataset, **kwargs)\n self.buffer_size = buffer_size\n self.buffer_timeout = buffer_timeout\n self.repeats = repeats\n", "issue": "Clarify the `num_workers` in `ThreadDataLoader`\n**Is your feature request related to a problem? Please describe.**\r\nWhen I was introducing GPU transforms and the associated `ThreadDataLoader` to users, got several times feedback about the `num_workers` arg, which is confusing that users think it means the multi-threads in `ThreadDataLoader`, but actually it's the multi-processing workers of PyTorch DataLoader.\r\nWould be nice to clarify this arg and the use cases.\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom queue import Empty, Full, Queue\nfrom threading import Thread\n\nfrom monai.data import DataLoader, Dataset\n\n\nclass ThreadBuffer:\n \"\"\"\n Iterates over values from self.src in a separate thread but yielding them in the current thread. This allows values\n to be queued up asynchronously. 
The internal thread will continue running so long as the source has values or until\n the stop() method is called.\n\n One issue raised by using a thread in this way is that during the lifetime of the thread the source object is being\n iterated over, so if the thread hasn't finished another attempt to iterate over it will raise an exception or yield\n unexpected results. To ensure the thread releases the iteration and proper cleanup is done the stop() method must\n be called which will join with the thread.\n\n Args:\n src: Source data iterable\n buffer_size: Number of items to buffer from the source\n timeout: Time to wait for an item from the buffer, or to wait while the buffer is full when adding items\n \"\"\"\n\n def __init__(self, src, buffer_size: int = 1, timeout: float = 0.01):\n self.src = src\n self.buffer_size = buffer_size\n self.timeout = timeout\n self.buffer: Queue = Queue(self.buffer_size)\n self.gen_thread = None\n self.is_running = False\n\n def enqueue_values(self):\n for src_val in self.src:\n while self.is_running:\n try:\n self.buffer.put(src_val, timeout=self.timeout)\n except Full:\n pass # try to add the item again\n else:\n break # successfully added the item, quit trying\n else: # quit the thread cleanly when requested to stop\n break\n\n def stop(self):\n self.is_running = False # signal the thread to exit\n\n if self.gen_thread is not None:\n self.gen_thread.join()\n\n self.gen_thread = None\n\n def __iter__(self):\n\n self.is_running = True\n self.gen_thread = Thread(target=self.enqueue_values, daemon=True)\n self.gen_thread.start()\n\n try:\n while self.is_running and (self.gen_thread.is_alive() or not self.buffer.empty()):\n try:\n yield self.buffer.get(timeout=self.timeout)\n except Empty:\n pass # queue was empty this time, try again\n finally:\n self.stop() # ensure thread completion\n\n\nclass ThreadDataLoader(DataLoader):\n \"\"\"\n Subclass of `DataLoader` using a `ThreadBuffer` object to implement `__iter__` method asynchronously. This will\n iterate over data from the loader as expected however the data is generated on a separate thread. Use this class\n where a `DataLoader` instance is required and not just an iterable object.\n\n The default behaviour with `repeats` set to 1 is to yield each batch as it is generated, however with a higher\n value the generated batch is yielded that many times while underlying dataset asynchronously generates the next.\n Typically not all relevant information is learned from a batch in a single iteration so training multiple times\n on the same batch will still produce good training with minimal short-term overfitting while allowing a slow batch\n generation process more time to produce a result.\n\n See:\n * Fischetti et al. \"Faster SGD training by minibatch persistency.\" ArXiv (2018) https://arxiv.org/abs/1806.07353\n * Dami et al., \"Faster Neural Network Training with Data Echoing\" ArXiv (2020) https://arxiv.org/abs/1907.05550\n * Ramezani et al. 
\"GCN meets GPU: Decoupling \"When to Sample\" from \"How to Sample\".\" NeurIPS (2020).\n https://proceedings.neurips.cc/paper/2020/file/d714d2c5a796d5814c565d78dd16188d-Paper.pdf\n\n Args:\n dataset: input dataset.\n buffer_size: number of items to buffer from the data source.\n buffer_timeout: time to wait for an item from the buffer, or to wait while the buffer is full when adding items.\n num_workers: number of the multi-processing workers in PyTorch DataLoader.\n repeats: number of times to yield the same batch\n \"\"\"\n\n def __init__(\n self,\n dataset: Dataset,\n buffer_size: int = 1,\n buffer_timeout: float = 0.01,\n num_workers: int = 0,\n repeats: int = 1,\n **kwargs,\n ):\n super().__init__(dataset, num_workers, **kwargs)\n self.buffer_size = buffer_size\n self.buffer_timeout = buffer_timeout\n self.repeats = repeats\n\n def __iter__(self):\n buffer = ThreadBuffer(src=super().__iter__(), buffer_size=self.buffer_size, timeout=self.buffer_timeout)\n\n for batch in buffer:\n for _ in range(self.repeats):\n yield batch\n", "path": "monai/data/thread_buffer.py"}]}
| 2,150 | 562 |