problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff
---|---|---|---|---|---|---|---|---|
gh_patches_debug_10569
|
rasdani/github-patches
|
git_diff
|
gratipay__gratipay.com-2047
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Username change fails silently
When you change your username, you're querying [`username.json.spt`](https://github.com/gittip/www.gittip.com/blob/0464c57465aed49a95a2c546f0d9987ad5b9b3fa/www/%25username/username.json.spt). If the desired username is invalid, we respond back with a user-friendly message (though the UI is ugly). Unfortunately, this behavior is currently broken because it returns HTML instead of the expected JSON.
[IRC](https://botbot.me/freenode/gittip/msg/9518377/), from working on #1849
Thanks to @thomasboyt for [reporting the problem](https://botbot.me/freenode/gittip/msg/9517625/) :heart:
The Aspen ticket for this is: gittip/aspen-python#279
Username change fails silently
When you change your username, you're querying [`username.json.spt`](https://github.com/gittip/www.gittip.com/blob/0464c57465aed49a95a2c546f0d9987ad5b9b3fa/www/%25username/username.json.spt). If the desired username is invalid, we respond back with a user-friendly message (though the UI is ugly). Unfortunately, this behavior is currently broken because it returns HTML instead of the expected JSON.
[IRC](https://botbot.me/freenode/gittip/msg/9518377/), from working on #1849
Thanks to @thomasboyt for [reporting the problem](https://botbot.me/freenode/gittip/msg/9517625/) :heart:
The Aspen ticket for this is: gittip/aspen-python#279
</issue>
<code>
[start of gittip/exceptions.py]
1 """
2 This module contains exceptions shared across application code.
3 """
4
5 from __future__ import print_function, unicode_literals
6
7
8
9 class UnknownPlatform(Exception): pass
10
11 class ProblemChangingUsername(Exception):
12 def __str__(self):
13 return self.msg.format(self.args[0])
14
15 class UsernameIsEmpty(ProblemChangingUsername):
16 msg = "You need to provide a username!"
17
18 class UsernameTooLong(ProblemChangingUsername):
19 msg = "The username '{}' is too long."
20
21 # Not passing the potentially unicode characters back because of:
22 # https://github.com/gittip/aspen-python/issues/177
23 class UsernameContainsInvalidCharacters(ProblemChangingUsername):
24 msg = "That username contains invalid characters."
25
26 class UsernameIsRestricted(ProblemChangingUsername):
27 msg = "The username '{}' is restricted."
28
29 class UsernameAlreadyTaken(ProblemChangingUsername):
30 msg = "The username '{}' is already taken."
31
32 class TooGreedy(Exception): pass
33 class NoSelfTipping(Exception): pass
34 class BadAmount(Exception): pass
35
[end of gittip/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gittip/exceptions.py b/gittip/exceptions.py
--- a/gittip/exceptions.py
+++ b/gittip/exceptions.py
@@ -18,10 +18,8 @@
class UsernameTooLong(ProblemChangingUsername):
msg = "The username '{}' is too long."
-# Not passing the potentially unicode characters back because of:
-# https://github.com/gittip/aspen-python/issues/177
class UsernameContainsInvalidCharacters(ProblemChangingUsername):
- msg = "That username contains invalid characters."
+ msg = "The username '{}' contains invalid characters."
class UsernameIsRestricted(ProblemChangingUsername):
msg = "The username '{}' is restricted."
|
{"golden_diff": "diff --git a/gittip/exceptions.py b/gittip/exceptions.py\n--- a/gittip/exceptions.py\n+++ b/gittip/exceptions.py\n@@ -18,10 +18,8 @@\n class UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n \n-# Not passing the potentially unicode characters back because of:\n-# https://github.com/gittip/aspen-python/issues/177\n class UsernameContainsInvalidCharacters(ProblemChangingUsername):\n- msg = \"That username contains invalid characters.\"\n+ msg = \"The username '{}' contains invalid characters.\"\n \n class UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n", "issue": "Username change fails silently\nWhen you change your username, you're querying [`username.json.spt`](https://github.com/gittip/www.gittip.com/blob/0464c57465aed49a95a2c546f0d9987ad5b9b3fa/www/%25username/username.json.spt). If the desired username is invalid, we respond back with a user-friendly message (though the UI is ugly). Unfortunately, this behavior is currently broken because it returns HTML instead of the expected JSON.\n\n[IRC](https://botbot.me/freenode/gittip/msg/9518377/), from working on #1849\n\nThanks to @thomasboyt for [reporting the problem](https://botbot.me/freenode/gittip/msg/9517625/) :heart:\n\nThe Aspen ticket for this is: gittip/aspen-python#279\n\nUsername change fails silently\nWhen you change your username, you're querying [`username.json.spt`](https://github.com/gittip/www.gittip.com/blob/0464c57465aed49a95a2c546f0d9987ad5b9b3fa/www/%25username/username.json.spt). If the desired username is invalid, we respond back with a user-friendly message (though the UI is ugly). Unfortunately, this behavior is currently broken because it returns HTML instead of the expected JSON.\n\n[IRC](https://botbot.me/freenode/gittip/msg/9518377/), from working on #1849\n\nThanks to @thomasboyt for [reporting the problem](https://botbot.me/freenode/gittip/msg/9517625/) :heart:\n\nThe Aspen ticket for this is: gittip/aspen-python#279\n\n", "before_files": [{"content": "\"\"\"\nThis module contains exceptions shared across application code.\n\"\"\"\n\nfrom __future__ import print_function, unicode_literals\n\n\n\nclass UnknownPlatform(Exception): pass\n\nclass ProblemChangingUsername(Exception):\n def __str__(self):\n return self.msg.format(self.args[0])\n\nclass UsernameIsEmpty(ProblemChangingUsername):\n msg = \"You need to provide a username!\"\n\nclass UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n\n# Not passing the potentially unicode characters back because of:\n# https://github.com/gittip/aspen-python/issues/177\nclass UsernameContainsInvalidCharacters(ProblemChangingUsername):\n msg = \"That username contains invalid characters.\"\n\nclass UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n\nclass UsernameAlreadyTaken(ProblemChangingUsername):\n msg = \"The username '{}' is already taken.\"\n\nclass TooGreedy(Exception): pass\nclass NoSelfTipping(Exception): pass\nclass BadAmount(Exception): pass\n", "path": "gittip/exceptions.py"}]}
| 1,231 | 153 |
gh_patches_debug_9668
|
rasdani/github-patches
|
git_diff
|
secdev__scapy-1928
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TTL setting in IGMP.igmpize method
#### Brief description
IGMP.igmpize method in Scapy 2.3.3 was setting IP.ttl value to 1. Currently ttl is not modified by this function.
I consider this a bug as in docstring one can read:
> 4. ttl = 1 (RFC 2236, section 2)
The new behaviour has been introduced in revision 8329256dcaefb0aa7e1c9ec95e15abdd7607362a
If you agree it is actually a bug, then I can provide fix :)
#### Environment
- Scapy version: 2.4.2
- Python 2.7
#### How to reproduce
**Scapy 2.4.2**
```
>>> p = Ether() / IP(ttl=64) / IGMP()
>>> p.ttl
64
>>> p[IGMP].igmpize()
True
>>> p.ttl
64
```
**Scapy 2.3.3**
```
>>> p = Ether() / IP(ttl=64) / IGMP()
>>> p.ttl
64
>>> p[IGMP].igmpize(p[IP], p[Ether])
True
>>> p.ttl
1
```
#### Actual result
```
>>> p[IGMP].igmpize()
True
>>> p.ttl
64
```
#### Expected result
```
>>> p[IGMP].igmpize()
True
>>> p.ttl
1
```
#### Related resources
Full docstring of igmpize:
```
"""Called to explicitly fixup the packet according to the IGMP RFC
The rules are:
General:
1. the Max Response time is meaningful only in Membership Queries and should be zero
IP:
1. Send General Group Query to 224.0.0.1 (all systems)
2. Send Leave Group to 224.0.0.2 (all routers)
3a.Otherwise send the packet to the group address
3b.Send reports/joins to the group address
4. ttl = 1 (RFC 2236, section 2)
5. send the packet with the router alert IP option (RFC 2236, section 2)
Ether:
1. Recalculate destination
Returns:
True The tuple ether/ip/self passed all check and represents
a proper IGMP packet.
False One of more validation checks failed and no fields
were adjusted.
The function will examine the IGMP message to assure proper format.
Corrections will be attempted if possible. The IP header is then properly
adjusted to ensure correct formatting and assignment. The Ethernet header
is then adjusted to the proper IGMP packet format.
"""
```
</issue>
<code>
[start of scapy/contrib/igmp.py]
1 #! /usr/bin/env python
2
3 # This file is part of Scapy
4 # Scapy is free software: you can redistribute it and/or modify
5 # it under the terms of the GNU General Public License as published by
6 # the Free Software Foundation, either version 2 of the License, or
7 # any later version.
8 #
9 # Scapy is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with Scapy. If not, see <http://www.gnu.org/licenses/>.
16
17 # flake8: noqa: E501
18
19 # scapy.contrib.description = Internet Group Management Protocol v1/v2 (IGMP/IGMPv2)
20 # scapy.contrib.status = loads
21
22 from __future__ import print_function
23 from scapy.compat import chb, orb
24 from scapy.error import warning
25 from scapy.fields import ByteEnumField, ByteField, IPField, XShortField
26 from scapy.layers.inet import IP, IPOption_Router_Alert
27 from scapy.layers.l2 import Ether, getmacbyip
28 from scapy.packet import bind_layers, Packet
29 from scapy.utils import atol, checksum
30
31
32 def isValidMCAddr(ip):
33 """convert dotted quad string to long and check the first octet"""
34 FirstOct = atol(ip) >> 24 & 0xFF
35 return (FirstOct >= 224) and (FirstOct <= 239)
36
37
38 class IGMP(Packet):
39 """IGMP Message Class for v1 and v2.
40
41 This class is derived from class Packet. You need call "igmpize()"
42 so the packet is transformed according the RFC when sent.
43 a=Ether(src="00:01:02:03:04:05")
44 b=IP(src="1.2.3.4")
45 c=IGMP(type=0x12, gaddr="224.2.3.4")
46 x = a/b/c
47 x[IGMP].igmpize()
48 sendp(a/b/c, iface="en0")
49
50 Parameters:
51 type IGMP type field, 0x11, 0x12, 0x16 or 0x17
52 mrcode Maximum Response time (zero for v1)
53 gaddr Multicast Group Address 224.x.x.x/4
54
55 See RFC2236, Section 2. Introduction for definitions of proper
56 IGMPv2 message format http://www.faqs.org/rfcs/rfc2236.html
57
58 """
59 name = "IGMP"
60
61 igmptypes = {0x11: "Group Membership Query",
62 0x12: "Version 1 - Membership Report",
63 0x16: "Version 2 - Membership Report",
64 0x17: "Leave Group"}
65
66 fields_desc = [ByteEnumField("type", 0x11, igmptypes),
67 ByteField("mrcode", 20),
68 XShortField("chksum", None),
69 IPField("gaddr", "0.0.0.0")]
70
71 def post_build(self, p, pay):
72 """Called implicitly before a packet is sent to compute and place IGMP checksum.
73
74 Parameters:
75 self The instantiation of an IGMP class
76 p The IGMP message in hex in network byte order
77 pay Additional payload for the IGMP message
78 """
79 p += pay
80 if self.chksum is None:
81 ck = checksum(p)
82 p = p[:2] + chb(ck >> 8) + chb(ck & 0xff) + p[4:]
83 return p
84
85 @classmethod
86 def dispatch_hook(cls, _pkt=None, *args, **kargs):
87 if _pkt and len(_pkt) >= 4:
88 from scapy.contrib.igmpv3 import IGMPv3
89 if orb(_pkt[0]) in [0x22, 0x30, 0x31, 0x32]:
90 return IGMPv3
91 if orb(_pkt[0]) == 0x11 and len(_pkt) >= 12:
92 return IGMPv3
93 return IGMP
94
95 def igmpize(self):
96 """Called to explicitly fixup the packet according to the IGMP RFC
97
98 The rules are:
99 General:
100 1. the Max Response time is meaningful only in Membership Queries and should be zero
101 IP:
102 1. Send General Group Query to 224.0.0.1 (all systems)
103 2. Send Leave Group to 224.0.0.2 (all routers)
104 3a.Otherwise send the packet to the group address
105 3b.Send reports/joins to the group address
106 4. ttl = 1 (RFC 2236, section 2)
107 5. send the packet with the router alert IP option (RFC 2236, section 2)
108 Ether:
109 1. Recalculate destination
110
111 Returns:
112 True The tuple ether/ip/self passed all check and represents
113 a proper IGMP packet.
114 False One of more validation checks failed and no fields
115 were adjusted.
116
117 The function will examine the IGMP message to assure proper format.
118 Corrections will be attempted if possible. The IP header is then properly
119 adjusted to ensure correct formatting and assignment. The Ethernet header
120 is then adjusted to the proper IGMP packet format.
121 """
122 gaddr = self.gaddr if hasattr(self, "gaddr") and self.gaddr else "0.0.0.0" # noqa: E501
123 underlayer = self.underlayer
124 if self.type not in [0x11, 0x30]: # General Rule 1 # noqa: E501
125 self.mrcode = 0
126 if isinstance(underlayer, IP):
127 if (self.type == 0x11):
128 if (gaddr == "0.0.0.0"):
129 underlayer.dst = "224.0.0.1" # IP rule 1 # noqa: E501
130 elif isValidMCAddr(gaddr):
131 underlayer.dst = gaddr # IP rule 3a # noqa: E501
132 else:
133 warning("Invalid IGMP Group Address detected !")
134 return False
135 elif ((self.type == 0x17) and isValidMCAddr(gaddr)):
136 underlayer.dst = "224.0.0.2" # IP rule 2 # noqa: E501
137 elif ((self.type == 0x12) or (self.type == 0x16)) and (isValidMCAddr(gaddr)): # noqa: E501
138 underlayer.dst = gaddr # IP rule 3b # noqa: E501
139 else:
140 warning("Invalid IGMP Type detected !")
141 return False
142 if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501
143 underlayer.options.append(IPOption_Router_Alert())
144 _root = self.firstlayer()
145 if _root.haslayer(Ether):
146 # Force recalculate Ether dst
147 _root[Ether].dst = getmacbyip(underlayer.dst) # Ether rule 1 # noqa: E501
148 from scapy.contrib.igmpv3 import IGMPv3
149 if isinstance(self, IGMPv3):
150 self.encode_maxrespcode()
151 return True
152
153 def mysummary(self):
154 """Display a summary of the IGMP object."""
155 if isinstance(self.underlayer, IP):
156 return self.underlayer.sprintf("IGMP: %IP.src% > %IP.dst% %IGMP.type% %IGMP.gaddr%") # noqa: E501
157 else:
158 return self.sprintf("IGMP %IGMP.type% %IGMP.gaddr%")
159
160
161 bind_layers(IP, IGMP, frag=0,
162 proto=2,
163 ttl=1)
164
[end of scapy/contrib/igmp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scapy/contrib/igmp.py b/scapy/contrib/igmp.py
--- a/scapy/contrib/igmp.py
+++ b/scapy/contrib/igmp.py
@@ -141,6 +141,7 @@
return False
if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501
underlayer.options.append(IPOption_Router_Alert())
+ underlayer.ttl = 1 # IP rule 4
_root = self.firstlayer()
if _root.haslayer(Ether):
# Force recalculate Ether dst
|
{"golden_diff": "diff --git a/scapy/contrib/igmp.py b/scapy/contrib/igmp.py\n--- a/scapy/contrib/igmp.py\n+++ b/scapy/contrib/igmp.py\n@@ -141,6 +141,7 @@\n return False\n if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501\n underlayer.options.append(IPOption_Router_Alert())\n+ underlayer.ttl = 1 # IP rule 4\n _root = self.firstlayer()\n if _root.haslayer(Ether):\n # Force recalculate Ether dst\n", "issue": "TTL setting in IGMP.igmpize method\n#### Brief description\r\n\r\nIGMP.igmpize method in Scapy 2.3.3 was setting IP.ttl value to 1. Currently ttl is not modified by this function. \r\n\r\nI consider this a bug as in docstring one can read: \r\n> 4. ttl = 1 (RFC 2236, section 2)\r\n\r\nThe new behaviour has been introduced in revision 8329256dcaefb0aa7e1c9ec95e15abdd7607362a\r\n\r\nIf you agree it is actually a bug, then I can provide fix :) \r\n\r\n#### Environment\r\n\r\n- Scapy version: 2.4.2\r\n- Python 2.7\r\n\r\n#### How to reproduce\r\n\r\n**Scapy 2.4.2**\r\n\r\n```\r\n>>> p = Ether() / IP(ttl=64) / IGMP()\r\n>>> p.ttl\r\n64\r\n>>> p[IGMP].igmpize()\r\nTrue\r\n>>> p.ttl\r\n64\r\n```\r\n\r\n**Scapy 2.3.3**\r\n\r\n```\r\n>>> p = Ether() / IP(ttl=64) / IGMP()\r\n>>> p.ttl\r\n64\r\n>>> p[IGMP].igmpize(p[IP], p[Ether])\r\nTrue\r\n>>> p.ttl\r\n1\r\n```\r\n\r\n\r\n#### Actual result\r\n\r\n```\r\n>>> p[IGMP].igmpize()\r\nTrue\r\n>>> p.ttl\r\n64\r\n```\r\n\r\n#### Expected result\r\n\r\n```\r\n>>> p[IGMP].igmpize()\r\nTrue\r\n>>> p.ttl\r\n1\r\n```\r\n\r\n#### Related resources\r\n\r\nFull docstring of igmpize:\r\n\r\n```\r\n \"\"\"Called to explicitly fixup the packet according to the IGMP RFC\r\n\r\n The rules are:\r\n General:\r\n 1. the Max Response time is meaningful only in Membership Queries and should be zero\r\n IP:\r\n 1. Send General Group Query to 224.0.0.1 (all systems)\r\n 2. Send Leave Group to 224.0.0.2 (all routers)\r\n 3a.Otherwise send the packet to the group address\r\n 3b.Send reports/joins to the group address\r\n 4. ttl = 1 (RFC 2236, section 2)\r\n 5. send the packet with the router alert IP option (RFC 2236, section 2)\r\n Ether:\r\n 1. Recalculate destination\r\n\r\n Returns:\r\n True The tuple ether/ip/self passed all check and represents\r\n a proper IGMP packet.\r\n False One of more validation checks failed and no fields\r\n were adjusted.\r\n\r\n The function will examine the IGMP message to assure proper format.\r\n Corrections will be attempted if possible. The IP header is then properly\r\n adjusted to ensure correct formatting and assignment. The Ethernet header\r\n is then adjusted to the proper IGMP packet format.\r\n \"\"\"\r\n```\n", "before_files": [{"content": "#! /usr/bin/env python\n\n# This file is part of Scapy\n# Scapy is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 2 of the License, or\n# any later version.\n#\n# Scapy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Scapy. 
If not, see <http://www.gnu.org/licenses/>.\n\n# flake8: noqa: E501\n\n# scapy.contrib.description = Internet Group Management Protocol v1/v2 (IGMP/IGMPv2)\n# scapy.contrib.status = loads\n\nfrom __future__ import print_function\nfrom scapy.compat import chb, orb\nfrom scapy.error import warning\nfrom scapy.fields import ByteEnumField, ByteField, IPField, XShortField\nfrom scapy.layers.inet import IP, IPOption_Router_Alert\nfrom scapy.layers.l2 import Ether, getmacbyip\nfrom scapy.packet import bind_layers, Packet\nfrom scapy.utils import atol, checksum\n\n\ndef isValidMCAddr(ip):\n \"\"\"convert dotted quad string to long and check the first octet\"\"\"\n FirstOct = atol(ip) >> 24 & 0xFF\n return (FirstOct >= 224) and (FirstOct <= 239)\n\n\nclass IGMP(Packet):\n \"\"\"IGMP Message Class for v1 and v2.\n\nThis class is derived from class Packet. You need call \"igmpize()\"\nso the packet is transformed according the RFC when sent.\na=Ether(src=\"00:01:02:03:04:05\")\nb=IP(src=\"1.2.3.4\")\nc=IGMP(type=0x12, gaddr=\"224.2.3.4\")\nx = a/b/c\nx[IGMP].igmpize()\nsendp(a/b/c, iface=\"en0\")\n\n Parameters:\n type IGMP type field, 0x11, 0x12, 0x16 or 0x17\n mrcode Maximum Response time (zero for v1)\n gaddr Multicast Group Address 224.x.x.x/4\n\nSee RFC2236, Section 2. Introduction for definitions of proper\nIGMPv2 message format http://www.faqs.org/rfcs/rfc2236.html\n\n \"\"\"\n name = \"IGMP\"\n\n igmptypes = {0x11: \"Group Membership Query\",\n 0x12: \"Version 1 - Membership Report\",\n 0x16: \"Version 2 - Membership Report\",\n 0x17: \"Leave Group\"}\n\n fields_desc = [ByteEnumField(\"type\", 0x11, igmptypes),\n ByteField(\"mrcode\", 20),\n XShortField(\"chksum\", None),\n IPField(\"gaddr\", \"0.0.0.0\")]\n\n def post_build(self, p, pay):\n \"\"\"Called implicitly before a packet is sent to compute and place IGMP checksum.\n\n Parameters:\n self The instantiation of an IGMP class\n p The IGMP message in hex in network byte order\n pay Additional payload for the IGMP message\n \"\"\"\n p += pay\n if self.chksum is None:\n ck = checksum(p)\n p = p[:2] + chb(ck >> 8) + chb(ck & 0xff) + p[4:]\n return p\n\n @classmethod\n def dispatch_hook(cls, _pkt=None, *args, **kargs):\n if _pkt and len(_pkt) >= 4:\n from scapy.contrib.igmpv3 import IGMPv3\n if orb(_pkt[0]) in [0x22, 0x30, 0x31, 0x32]:\n return IGMPv3\n if orb(_pkt[0]) == 0x11 and len(_pkt) >= 12:\n return IGMPv3\n return IGMP\n\n def igmpize(self):\n \"\"\"Called to explicitly fixup the packet according to the IGMP RFC\n\n The rules are:\n General:\n 1. the Max Response time is meaningful only in Membership Queries and should be zero\n IP:\n 1. Send General Group Query to 224.0.0.1 (all systems)\n 2. Send Leave Group to 224.0.0.2 (all routers)\n 3a.Otherwise send the packet to the group address\n 3b.Send reports/joins to the group address\n 4. ttl = 1 (RFC 2236, section 2)\n 5. send the packet with the router alert IP option (RFC 2236, section 2)\n Ether:\n 1. Recalculate destination\n\n Returns:\n True The tuple ether/ip/self passed all check and represents\n a proper IGMP packet.\n False One of more validation checks failed and no fields\n were adjusted.\n\n The function will examine the IGMP message to assure proper format.\n Corrections will be attempted if possible. The IP header is then properly\n adjusted to ensure correct formatting and assignment. 
The Ethernet header\n is then adjusted to the proper IGMP packet format.\n \"\"\"\n gaddr = self.gaddr if hasattr(self, \"gaddr\") and self.gaddr else \"0.0.0.0\" # noqa: E501\n underlayer = self.underlayer\n if self.type not in [0x11, 0x30]: # General Rule 1 # noqa: E501\n self.mrcode = 0\n if isinstance(underlayer, IP):\n if (self.type == 0x11):\n if (gaddr == \"0.0.0.0\"):\n underlayer.dst = \"224.0.0.1\" # IP rule 1 # noqa: E501\n elif isValidMCAddr(gaddr):\n underlayer.dst = gaddr # IP rule 3a # noqa: E501\n else:\n warning(\"Invalid IGMP Group Address detected !\")\n return False\n elif ((self.type == 0x17) and isValidMCAddr(gaddr)):\n underlayer.dst = \"224.0.0.2\" # IP rule 2 # noqa: E501\n elif ((self.type == 0x12) or (self.type == 0x16)) and (isValidMCAddr(gaddr)): # noqa: E501\n underlayer.dst = gaddr # IP rule 3b # noqa: E501\n else:\n warning(\"Invalid IGMP Type detected !\")\n return False\n if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501\n underlayer.options.append(IPOption_Router_Alert())\n _root = self.firstlayer()\n if _root.haslayer(Ether):\n # Force recalculate Ether dst\n _root[Ether].dst = getmacbyip(underlayer.dst) # Ether rule 1 # noqa: E501\n from scapy.contrib.igmpv3 import IGMPv3\n if isinstance(self, IGMPv3):\n self.encode_maxrespcode()\n return True\n\n def mysummary(self):\n \"\"\"Display a summary of the IGMP object.\"\"\"\n if isinstance(self.underlayer, IP):\n return self.underlayer.sprintf(\"IGMP: %IP.src% > %IP.dst% %IGMP.type% %IGMP.gaddr%\") # noqa: E501\n else:\n return self.sprintf(\"IGMP %IGMP.type% %IGMP.gaddr%\")\n\n\nbind_layers(IP, IGMP, frag=0,\n proto=2,\n ttl=1)\n", "path": "scapy/contrib/igmp.py"}]}
| 3,412 | 146 |
gh_patches_debug_23361
|
rasdani/github-patches
|
git_diff
|
NVIDIA__NVFlare-106
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing config_validator option in provisioning tool
Provisioning tool needs to have config_validator option so the generated fed_server.json can have that information.
</issue>
<code>
[start of nvflare/lighter/impl/static_file.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17
18 from nvflare.lighter.spec import Builder
19 from nvflare.lighter.utils import sh_replace
20
21
22 class StaticFileBuilder(Builder):
23 def __init__(self, enable_byoc=False, config_folder="", docker_image=""):
24 self.enable_byoc = enable_byoc
25 self.config_folder = config_folder
26 self.docker_image = docker_image
27
28 def _write(self, file_full_path, content, mode, exe=False):
29 mode = mode + "w"
30 with open(file_full_path, mode) as f:
31 f.write(content)
32 if exe:
33 os.chmod(file_full_path, 0o755)
34
35 def _build_server(self, server, ctx):
36 config = json.loads(self.template["fed_server"])
37 dest_dir = self.get_kit_dir(server, ctx)
38 server_0 = config["servers"][0]
39 server_0["name"] = self.study_name
40 admin_port = server.props.get("admin_port", 8003)
41 ctx["admin_port"] = admin_port
42 fed_learn_port = server.props.get("fed_learn_port", 8002)
43 ctx["fed_learn_port"] = fed_learn_port
44 ctx["server_name"] = server.name
45 server_0["service"]["target"] = f"{server.name}:{fed_learn_port}"
46 server_0["admin_host"] = server.name
47 server_0["admin_port"] = admin_port
48 config["enable_byoc"] = server.enable_byoc
49 self._write(os.path.join(dest_dir, "fed_server.json"), json.dumps(config), "t")
50 replacement_dict = {
51 "admin_port": admin_port,
52 "fed_learn_port": fed_learn_port,
53 "config_folder": self.config_folder,
54 "docker_image": self.docker_image,
55 }
56 if self.docker_image:
57 self._write(
58 os.path.join(dest_dir, "docker.sh"),
59 sh_replace(self.template["docker_svr_sh"], replacement_dict),
60 "t",
61 exe=True,
62 )
63 self._write(
64 os.path.join(dest_dir, "start.sh"),
65 self.template["start_svr_sh"],
66 "t",
67 exe=True,
68 )
69 self._write(
70 os.path.join(dest_dir, "sub_start.sh"),
71 sh_replace(self.template["sub_start_svr_sh"], replacement_dict),
72 "t",
73 exe=True,
74 )
75 self._write(
76 os.path.join(dest_dir, "log.config"),
77 self.template["log_config"],
78 "t",
79 )
80 self._write(
81 os.path.join(dest_dir, "readme.txt"),
82 self.template["readme_fs"],
83 "t",
84 )
85 self._write(
86 os.path.join(dest_dir, "stop_fl.sh"),
87 self.template["stop_fl_sh"],
88 "t",
89 exe=True,
90 )
91
92 def _build_client(self, client, ctx):
93 config = json.loads(self.template["fed_client"])
94 dest_dir = self.get_kit_dir(client, ctx)
95 fed_learn_port = ctx.get("fed_learn_port")
96 server_name = ctx.get("server_name")
97 config["servers"][0]["service"]["target"] = f"{server_name}:{fed_learn_port}"
98 config["servers"][0]["name"] = self.study_name
99 config["enable_byoc"] = client.enable_byoc
100 replacement_dict = {
101 "client_name": f"{client.subject}",
102 "config_folder": self.config_folder,
103 "docker_image": self.docker_image,
104 }
105
106 self._write(os.path.join(dest_dir, "fed_client.json"), json.dumps(config), "t")
107 if self.docker_image:
108 self._write(
109 os.path.join(dest_dir, "docker.sh"),
110 sh_replace(self.template["docker_cln_sh"], replacement_dict),
111 "t",
112 exe=True,
113 )
114 self._write(
115 os.path.join(dest_dir, "start.sh"),
116 self.template["start_cln_sh"],
117 "t",
118 exe=True,
119 )
120 self._write(
121 os.path.join(dest_dir, "sub_start.sh"),
122 sh_replace(self.template["sub_start_cln_sh"], replacement_dict),
123 "t",
124 exe=True,
125 )
126 self._write(
127 os.path.join(dest_dir, "log.config"),
128 self.template["log_config"],
129 "t",
130 )
131 self._write(
132 os.path.join(dest_dir, "readme.txt"),
133 self.template["readme_fc"],
134 "t",
135 )
136 self._write(
137 os.path.join(dest_dir, "stop_fl.sh"),
138 self.template["stop_fl_sh"],
139 "t",
140 exe=True,
141 )
142
143 def _build_admin(self, admin, ctx):
144 dest_dir = self.get_kit_dir(admin, ctx)
145 admin_port = ctx.get("admin_port")
146 server_name = ctx.get("server_name")
147
148 replacement_dict = {
149 "cn": f"{server_name}",
150 "admin_port": f"{admin_port}",
151 "docker_image": self.docker_image,
152 }
153 if self.docker_image:
154 self._write(
155 os.path.join(dest_dir, "docker.sh"),
156 sh_replace(self.template["docker_adm_sh"], replacement_dict),
157 "t",
158 exe=True,
159 )
160 self._write(
161 os.path.join(dest_dir, "fl_admin.sh"),
162 sh_replace(self.template["fl_admin_sh"], replacement_dict),
163 "t",
164 exe=True,
165 )
166 self._write(
167 os.path.join(dest_dir, "readme.txt"),
168 self.template["readme_am"],
169 "t",
170 )
171
172 def build(self, study, ctx):
173 self.template = ctx.get("template")
174 server = study.get_participants_by_type("server")
175 self.study_name = study.name
176 self._build_server(server, ctx)
177
178 for client in study.get_participants_by_type("client", first_only=False):
179 self._build_client(client, ctx)
180
181 for admin in study.get_participants_by_type("admin", first_only=False):
182 self._build_admin(admin, ctx)
183
[end of nvflare/lighter/impl/static_file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nvflare/lighter/impl/static_file.py b/nvflare/lighter/impl/static_file.py
--- a/nvflare/lighter/impl/static_file.py
+++ b/nvflare/lighter/impl/static_file.py
@@ -20,10 +20,11 @@
class StaticFileBuilder(Builder):
- def __init__(self, enable_byoc=False, config_folder="", docker_image=""):
+ def __init__(self, enable_byoc=False, config_folder="", app_validator="", docker_image=""):
self.enable_byoc = enable_byoc
self.config_folder = config_folder
self.docker_image = docker_image
+ self.app_validator = app_validator
def _write(self, file_full_path, content, mode, exe=False):
mode = mode + "w"
@@ -46,6 +47,8 @@
server_0["admin_host"] = server.name
server_0["admin_port"] = admin_port
config["enable_byoc"] = server.enable_byoc
+ if self.app_validator:
+ config["app_validator"] = {"path": self.app_validator}
self._write(os.path.join(dest_dir, "fed_server.json"), json.dumps(config), "t")
replacement_dict = {
"admin_port": admin_port,
|
{"golden_diff": "diff --git a/nvflare/lighter/impl/static_file.py b/nvflare/lighter/impl/static_file.py\n--- a/nvflare/lighter/impl/static_file.py\n+++ b/nvflare/lighter/impl/static_file.py\n@@ -20,10 +20,11 @@\n \n \n class StaticFileBuilder(Builder):\n- def __init__(self, enable_byoc=False, config_folder=\"\", docker_image=\"\"):\n+ def __init__(self, enable_byoc=False, config_folder=\"\", app_validator=\"\", docker_image=\"\"):\n self.enable_byoc = enable_byoc\n self.config_folder = config_folder\n self.docker_image = docker_image\n+ self.app_validator = app_validator\n \n def _write(self, file_full_path, content, mode, exe=False):\n mode = mode + \"w\"\n@@ -46,6 +47,8 @@\n server_0[\"admin_host\"] = server.name\n server_0[\"admin_port\"] = admin_port\n config[\"enable_byoc\"] = server.enable_byoc\n+ if self.app_validator:\n+ config[\"app_validator\"] = {\"path\": self.app_validator}\n self._write(os.path.join(dest_dir, \"fed_server.json\"), json.dumps(config), \"t\")\n replacement_dict = {\n \"admin_port\": admin_port,\n", "issue": "Missing config_validator option in provisioning tool\nProvisioning tool needs to have config_validator option so the generated fed_server.json can have that information.\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\nimport os\n\nfrom nvflare.lighter.spec import Builder\nfrom nvflare.lighter.utils import sh_replace\n\n\nclass StaticFileBuilder(Builder):\n def __init__(self, enable_byoc=False, config_folder=\"\", docker_image=\"\"):\n self.enable_byoc = enable_byoc\n self.config_folder = config_folder\n self.docker_image = docker_image\n\n def _write(self, file_full_path, content, mode, exe=False):\n mode = mode + \"w\"\n with open(file_full_path, mode) as f:\n f.write(content)\n if exe:\n os.chmod(file_full_path, 0o755)\n\n def _build_server(self, server, ctx):\n config = json.loads(self.template[\"fed_server\"])\n dest_dir = self.get_kit_dir(server, ctx)\n server_0 = config[\"servers\"][0]\n server_0[\"name\"] = self.study_name\n admin_port = server.props.get(\"admin_port\", 8003)\n ctx[\"admin_port\"] = admin_port\n fed_learn_port = server.props.get(\"fed_learn_port\", 8002)\n ctx[\"fed_learn_port\"] = fed_learn_port\n ctx[\"server_name\"] = server.name\n server_0[\"service\"][\"target\"] = f\"{server.name}:{fed_learn_port}\"\n server_0[\"admin_host\"] = server.name\n server_0[\"admin_port\"] = admin_port\n config[\"enable_byoc\"] = server.enable_byoc\n self._write(os.path.join(dest_dir, \"fed_server.json\"), json.dumps(config), \"t\")\n replacement_dict = {\n \"admin_port\": admin_port,\n \"fed_learn_port\": fed_learn_port,\n \"config_folder\": self.config_folder,\n \"docker_image\": self.docker_image,\n }\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_svr_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"start.sh\"),\n 
self.template[\"start_svr_sh\"],\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"sub_start.sh\"),\n sh_replace(self.template[\"sub_start_svr_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"log.config\"),\n self.template[\"log_config\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_fs\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"stop_fl.sh\"),\n self.template[\"stop_fl_sh\"],\n \"t\",\n exe=True,\n )\n\n def _build_client(self, client, ctx):\n config = json.loads(self.template[\"fed_client\"])\n dest_dir = self.get_kit_dir(client, ctx)\n fed_learn_port = ctx.get(\"fed_learn_port\")\n server_name = ctx.get(\"server_name\")\n config[\"servers\"][0][\"service\"][\"target\"] = f\"{server_name}:{fed_learn_port}\"\n config[\"servers\"][0][\"name\"] = self.study_name\n config[\"enable_byoc\"] = client.enable_byoc\n replacement_dict = {\n \"client_name\": f\"{client.subject}\",\n \"config_folder\": self.config_folder,\n \"docker_image\": self.docker_image,\n }\n\n self._write(os.path.join(dest_dir, \"fed_client.json\"), json.dumps(config), \"t\")\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_cln_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"start.sh\"),\n self.template[\"start_cln_sh\"],\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"sub_start.sh\"),\n sh_replace(self.template[\"sub_start_cln_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"log.config\"),\n self.template[\"log_config\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_fc\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"stop_fl.sh\"),\n self.template[\"stop_fl_sh\"],\n \"t\",\n exe=True,\n )\n\n def _build_admin(self, admin, ctx):\n dest_dir = self.get_kit_dir(admin, ctx)\n admin_port = ctx.get(\"admin_port\")\n server_name = ctx.get(\"server_name\")\n\n replacement_dict = {\n \"cn\": f\"{server_name}\",\n \"admin_port\": f\"{admin_port}\",\n \"docker_image\": self.docker_image,\n }\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_adm_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"fl_admin.sh\"),\n sh_replace(self.template[\"fl_admin_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_am\"],\n \"t\",\n )\n\n def build(self, study, ctx):\n self.template = ctx.get(\"template\")\n server = study.get_participants_by_type(\"server\")\n self.study_name = study.name\n self._build_server(server, ctx)\n\n for client in study.get_participants_by_type(\"client\", first_only=False):\n self._build_client(client, ctx)\n\n for admin in study.get_participants_by_type(\"admin\", first_only=False):\n self._build_admin(admin, ctx)\n", "path": "nvflare/lighter/impl/static_file.py"}]}
| 2,448 | 283 |
gh_patches_debug_60356
|
rasdani/github-patches
|
git_diff
|
blaze__blaze-1037
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cytoolz is required to import blaze, but it's not listed in requirements_strict.txt
In a fresh virtualenv, `pip install blaze && python -c "import blaze"` fails with:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ssanderson/.virtualenvs/blaze/local/lib/python2.7/site-packages/blaze/__init__.py", line 18, in <module>
from .utils import ignoring
File "/home/ssanderson/.virtualenvs/blaze/local/lib/python2.7/site-packages/blaze/utils.py", line 7, in <module>
from cytoolz import nth
ImportError: No module named cytoolz
```
Is there a reason cytoolz isn't in the strict requirements if it's necessary to even import the top-level module?
</issue>
<code>
[start of blaze/utils.py]
1 from __future__ import absolute_import, division, print_function
2
3 import os
4 import datetime
5 from functools import wraps
6
7 from cytoolz import nth
8 from itertools import islice
9 from collections import Iterator
10 from multiprocessing.pool import ThreadPool
11
12 # these are used throughout blaze, don't remove them
13 from odo.utils import tmpfile, filetext, filetexts, raises, keywords, ignoring
14
15 import psutil
16 import numpy as np
17
18 # Imports that replace older utils.
19 from .compatibility import map, zip
20
21 from .dispatch import dispatch
22
23 thread_pool = ThreadPool(psutil.NUM_CPUS)
24
25
26 def nth_list(n, seq):
27 """
28
29 >>> tuple(nth_list([0, 1, 4], 'Hello'))
30 ('H', 'e', 'o')
31 >>> tuple(nth_list([4, 1, 0], 'Hello'))
32 ('o', 'e', 'H')
33 >>> tuple(nth_list([0, 0, 0], 'Hello'))
34 ('H', 'H', 'H')
35 """
36 seq = iter(seq)
37
38 result = []
39 old = 0
40 item = next(seq)
41 for index in sorted(n):
42 for i in range(index - old):
43 item = next(seq)
44 result.append(item)
45 old = index
46
47 order = [x[1] for x in sorted(zip(n, range(len(n))))]
48 return (result[i] for i in order)
49
50
51 def get(ind, coll, lazy=False):
52 """
53
54 >>> get(0, 'Hello')
55 'H'
56
57 >>> get([1, 0], 'Hello')
58 ('e', 'H')
59
60 >>> get(slice(1, 4), 'Hello')
61 ('e', 'l', 'l')
62
63 >>> get(slice(1, 4), 'Hello', lazy=True)
64 <itertools.islice object at ...>
65 """
66 if isinstance(ind, list):
67 result = nth_list(ind, coll)
68 elif isinstance(ind, slice):
69 result = islice(coll, ind.start, ind.stop, ind.step)
70 else:
71 if isinstance(coll, Iterator):
72 result = nth(ind, coll)
73 else:
74 result = coll[ind]
75 if not lazy and isinstance(result, Iterator):
76 result = tuple(result)
77 return result
78
79
80 def ndget(ind, data):
81 """
82 Get from N-Dimensional getable
83
84 Can index with elements, lists, or slices. Mimic's numpy fancy indexing on
85 generic indexibles.
86
87 >>> data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
88 >>> ndget(0, data)
89 [[1, 2], [3, 4]]
90 >>> ndget((0, 1), data)
91 [3, 4]
92 >>> ndget((0, 0, 0), data)
93 1
94 >>> ndget((slice(0, 2), [0, 1], 0), data)
95 ((1, 3), (5, 7))
96 """
97 if isinstance(ind, tuple) and len(ind) == 1:
98 ind = ind[0]
99 if not isinstance(ind, tuple):
100 return get(ind, data)
101 result = get(ind[0], data)
102 if isinstance(ind[0], (list, slice)):
103 return type(result)(ndget(ind[1:], row) for row in result)
104 else:
105 return ndget(ind[1:], result)
106
107
108 def normalize_to_date(dt):
109 if isinstance(dt, datetime.datetime) and not dt.time():
110 return dt.date()
111 else:
112 return dt
113
114
115 def assert_allclose(lhs, rhs):
116 for tb in map(zip, lhs, rhs):
117 for left, right in tb:
118 if isinstance(left, (np.floating, float)):
119 # account for nans
120 assert np.all(np.isclose(left, right, equal_nan=True))
121 continue
122 if isinstance(left, datetime.datetime):
123 left = normalize_to_date(left)
124 if isinstance(right, datetime.datetime):
125 right = normalize_to_date(right)
126 assert left == right
127
128
129 def example(filename, datapath=os.path.join('examples', 'data')):
130 import blaze
131 return os.path.join(os.path.dirname(blaze.__file__), datapath, filename)
132
133
134 def available_memory():
135 return psutil.virtual_memory().available
136
137
138 def listpack(x):
139 """
140 >>> listpack(1)
141 [1]
142 >>> listpack((1, 2))
143 [1, 2]
144 >>> listpack([1, 2])
145 [1, 2]
146 """
147 if isinstance(x, tuple):
148 return list(x)
149 elif isinstance(x, list):
150 return x
151 else:
152 return [x]
153
154
155 @dispatch(datetime.datetime)
156 def json_dumps(dt):
157 s = dt.isoformat()
158 if not dt.tzname():
159 s = s + 'Z'
160 return s
161
[end of blaze/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/blaze/utils.py b/blaze/utils.py
--- a/blaze/utils.py
+++ b/blaze/utils.py
@@ -4,7 +4,11 @@
import datetime
from functools import wraps
-from cytoolz import nth
+try:
+ from cytoolz import nth
+except ImportError:
+ from toolz import nth
+
from itertools import islice
from collections import Iterator
from multiprocessing.pool import ThreadPool
|
{"golden_diff": "diff --git a/blaze/utils.py b/blaze/utils.py\n--- a/blaze/utils.py\n+++ b/blaze/utils.py\n@@ -4,7 +4,11 @@\n import datetime\n from functools import wraps\n \n-from cytoolz import nth\n+try:\n+ from cytoolz import nth\n+except ImportError:\n+ from toolz import nth\n+\n from itertools import islice\n from collections import Iterator\n from multiprocessing.pool import ThreadPool\n", "issue": "cytoolz is required to import blaze, but it's not listed in requirements_strict.txt\nIn a fresh virtualenv, `pip install blaze && python -c \"import blaze\"` fails with:\n\n```\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/home/ssanderson/.virtualenvs/blaze/local/lib/python2.7/site-packages/blaze/__init__.py\", line 18, in <module>\n from .utils import ignoring\n File \"/home/ssanderson/.virtualenvs/blaze/local/lib/python2.7/site-packages/blaze/utils.py\", line 7, in <module>\n from cytoolz import nth\nImportError: No module named cytoolz\n```\n\nIs there a reason cytoolz isn't in the strict requirements if it's necessary to even import the top-level module?\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport datetime\nfrom functools import wraps\n\nfrom cytoolz import nth\nfrom itertools import islice\nfrom collections import Iterator\nfrom multiprocessing.pool import ThreadPool\n\n# these are used throughout blaze, don't remove them\nfrom odo.utils import tmpfile, filetext, filetexts, raises, keywords, ignoring\n\nimport psutil\nimport numpy as np\n\n# Imports that replace older utils.\nfrom .compatibility import map, zip\n\nfrom .dispatch import dispatch\n\nthread_pool = ThreadPool(psutil.NUM_CPUS)\n\n\ndef nth_list(n, seq):\n \"\"\"\n\n >>> tuple(nth_list([0, 1, 4], 'Hello'))\n ('H', 'e', 'o')\n >>> tuple(nth_list([4, 1, 0], 'Hello'))\n ('o', 'e', 'H')\n >>> tuple(nth_list([0, 0, 0], 'Hello'))\n ('H', 'H', 'H')\n \"\"\"\n seq = iter(seq)\n\n result = []\n old = 0\n item = next(seq)\n for index in sorted(n):\n for i in range(index - old):\n item = next(seq)\n result.append(item)\n old = index\n\n order = [x[1] for x in sorted(zip(n, range(len(n))))]\n return (result[i] for i in order)\n\n\ndef get(ind, coll, lazy=False):\n \"\"\"\n\n >>> get(0, 'Hello')\n 'H'\n\n >>> get([1, 0], 'Hello')\n ('e', 'H')\n\n >>> get(slice(1, 4), 'Hello')\n ('e', 'l', 'l')\n\n >>> get(slice(1, 4), 'Hello', lazy=True)\n <itertools.islice object at ...>\n \"\"\"\n if isinstance(ind, list):\n result = nth_list(ind, coll)\n elif isinstance(ind, slice):\n result = islice(coll, ind.start, ind.stop, ind.step)\n else:\n if isinstance(coll, Iterator):\n result = nth(ind, coll)\n else:\n result = coll[ind]\n if not lazy and isinstance(result, Iterator):\n result = tuple(result)\n return result\n\n\ndef ndget(ind, data):\n \"\"\"\n Get from N-Dimensional getable\n\n Can index with elements, lists, or slices. 
Mimic's numpy fancy indexing on\n generic indexibles.\n\n >>> data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]\n >>> ndget(0, data)\n [[1, 2], [3, 4]]\n >>> ndget((0, 1), data)\n [3, 4]\n >>> ndget((0, 0, 0), data)\n 1\n >>> ndget((slice(0, 2), [0, 1], 0), data)\n ((1, 3), (5, 7))\n \"\"\"\n if isinstance(ind, tuple) and len(ind) == 1:\n ind = ind[0]\n if not isinstance(ind, tuple):\n return get(ind, data)\n result = get(ind[0], data)\n if isinstance(ind[0], (list, slice)):\n return type(result)(ndget(ind[1:], row) for row in result)\n else:\n return ndget(ind[1:], result)\n\n\ndef normalize_to_date(dt):\n if isinstance(dt, datetime.datetime) and not dt.time():\n return dt.date()\n else:\n return dt\n\n\ndef assert_allclose(lhs, rhs):\n for tb in map(zip, lhs, rhs):\n for left, right in tb:\n if isinstance(left, (np.floating, float)):\n # account for nans\n assert np.all(np.isclose(left, right, equal_nan=True))\n continue\n if isinstance(left, datetime.datetime):\n left = normalize_to_date(left)\n if isinstance(right, datetime.datetime):\n right = normalize_to_date(right)\n assert left == right\n\n\ndef example(filename, datapath=os.path.join('examples', 'data')):\n import blaze\n return os.path.join(os.path.dirname(blaze.__file__), datapath, filename)\n\n\ndef available_memory():\n return psutil.virtual_memory().available\n\n\ndef listpack(x):\n \"\"\"\n >>> listpack(1)\n [1]\n >>> listpack((1, 2))\n [1, 2]\n >>> listpack([1, 2])\n [1, 2]\n \"\"\"\n if isinstance(x, tuple):\n return list(x)\n elif isinstance(x, list):\n return x\n else:\n return [x]\n\n\n@dispatch(datetime.datetime)\ndef json_dumps(dt):\n s = dt.isoformat()\n if not dt.tzname():\n s = s + 'Z'\n return s\n", "path": "blaze/utils.py"}]}
| 2,174 | 96 |
gh_patches_debug_47848
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-404
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rate stars don't work
You should be able to click to give a star rating to a book on the book page, it doesn't do anything.
</issue>
<code>
[start of bookwyrm/activitypub/note.py]
1 ''' note serializer and children thereof '''
2 from dataclasses import dataclass, field
3 from typing import Dict, List
4
5 from .base_activity import ActivityObject, Link
6 from .image import Image
7
8 @dataclass(init=False)
9 class Tombstone(ActivityObject):
10 ''' the placeholder for a deleted status '''
11 published: str
12 deleted: str
13 type: str = 'Tombstone'
14
15
16 @dataclass(init=False)
17 class Note(ActivityObject):
18 ''' Note activity '''
19 published: str
20 attributedTo: str
21 content: str
22 to: List[str] = field(default_factory=lambda: [])
23 cc: List[str] = field(default_factory=lambda: [])
24 replies: Dict = field(default_factory=lambda: {})
25 inReplyTo: str = ''
26 summary: str = ''
27 tag: List[Link] = field(default_factory=lambda: [])
28 attachment: List[Image] = field(default_factory=lambda: [])
29 sensitive: bool = False
30 type: str = 'Note'
31
32
33 @dataclass(init=False)
34 class Article(Note):
35 ''' what's an article except a note with more fields '''
36 name: str
37 type: str = 'Article'
38
39
40 @dataclass(init=False)
41 class GeneratedNote(Note):
42 ''' just a re-typed note '''
43 type: str = 'GeneratedNote'
44
45
46 @dataclass(init=False)
47 class Comment(Note):
48 ''' like a note but with a book '''
49 inReplyToBook: str
50 type: str = 'Comment'
51
52
53 @dataclass(init=False)
54 class Review(Comment):
55 ''' a full book review '''
56 name: str
57 rating: int = None
58 type: str = 'Review'
59
60
61 @dataclass(init=False)
62 class Quotation(Comment):
63 ''' a quote and commentary on a book '''
64 quote: str
65 type: str = 'Quotation'
66
[end of bookwyrm/activitypub/note.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bookwyrm/activitypub/note.py b/bookwyrm/activitypub/note.py
--- a/bookwyrm/activitypub/note.py
+++ b/bookwyrm/activitypub/note.py
@@ -53,7 +53,7 @@
@dataclass(init=False)
class Review(Comment):
''' a full book review '''
- name: str
+ name: str = None
rating: int = None
type: str = 'Review'
|
{"golden_diff": "diff --git a/bookwyrm/activitypub/note.py b/bookwyrm/activitypub/note.py\n--- a/bookwyrm/activitypub/note.py\n+++ b/bookwyrm/activitypub/note.py\n@@ -53,7 +53,7 @@\n @dataclass(init=False)\n class Review(Comment):\n ''' a full book review '''\n- name: str\n+ name: str = None\n rating: int = None\n type: str = 'Review'\n", "issue": "Rate stars don't work\nYou should be able to click to give a star rating to a book on the book page, it doesn't do anything.\n", "before_files": [{"content": "''' note serializer and children thereof '''\nfrom dataclasses import dataclass, field\nfrom typing import Dict, List\n\nfrom .base_activity import ActivityObject, Link\nfrom .image import Image\n\n@dataclass(init=False)\nclass Tombstone(ActivityObject):\n ''' the placeholder for a deleted status '''\n published: str\n deleted: str\n type: str = 'Tombstone'\n\n\n@dataclass(init=False)\nclass Note(ActivityObject):\n ''' Note activity '''\n published: str\n attributedTo: str\n content: str\n to: List[str] = field(default_factory=lambda: [])\n cc: List[str] = field(default_factory=lambda: [])\n replies: Dict = field(default_factory=lambda: {})\n inReplyTo: str = ''\n summary: str = ''\n tag: List[Link] = field(default_factory=lambda: [])\n attachment: List[Image] = field(default_factory=lambda: [])\n sensitive: bool = False\n type: str = 'Note'\n\n\n@dataclass(init=False)\nclass Article(Note):\n ''' what's an article except a note with more fields '''\n name: str\n type: str = 'Article'\n\n\n@dataclass(init=False)\nclass GeneratedNote(Note):\n ''' just a re-typed note '''\n type: str = 'GeneratedNote'\n\n\n@dataclass(init=False)\nclass Comment(Note):\n ''' like a note but with a book '''\n inReplyToBook: str\n type: str = 'Comment'\n\n\n@dataclass(init=False)\nclass Review(Comment):\n ''' a full book review '''\n name: str\n rating: int = None\n type: str = 'Review'\n\n\n@dataclass(init=False)\nclass Quotation(Comment):\n ''' a quote and commentary on a book '''\n quote: str\n type: str = 'Quotation'\n", "path": "bookwyrm/activitypub/note.py"}]}
| 1,090 | 103 |
gh_patches_debug_18295
|
rasdani/github-patches
|
git_diff
|
avocado-framework__avocado-5562
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Handle "could not import module" errors gracefully
**Describe the bug**
Avocado does not handle "could not import module" errors very gracefully, with error messages that are quite cryptic.
**Steps to reproduce**
Write a valid `avocado-instrumented` test, but with an invalid import. Example:
```python
from avocado import Test
import foo
class PassTest(Test):
"""
Example test that passes.
:avocado: tags=fast
"""
def test(self):
"""
A test simply doesn't have to fail in order to pass
"""
```
And run it:
```
$ avocado run examples/tests/passtest.py
JOB ID : 3fee9803715e414a16c3dcf1ddb9ff2f6dc6c0bd
JOB LOG : /home/cleber/avocado/job-results/job-2022-08-11T10.24-3fee980/job.log
(1/1) examples/tests/passtest.py:PassTest.test: STARTED
(1/1) examples/tests/passtest.py:PassTest.test: ERROR: Test.__init__() got an unexpected keyword argument 'run.results_dir' (0.01 s)
RESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML : /home/cleber/avocado/job-results/job-2022-08-11T10.24-3fee980/results.html
JOB TIME : 1.47 s
```
**Expected behavior**
Instead of "unexpected argument..." a more clear error message such as: "failed to import the file containing the test" or something similar.
**Current behavior**
From original reporter @jnsnow:
```
(08/27) tests/protocol.py:Connect.testBadUNIX: ERROR:
Test.__init__() got an unexpected keyword argument 'run.results_dir'
(0.01 s)
```
**System information (please complete the following information):**
- OS: ```LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 36 (Thirty Six)
Release: 36
Codename: ThirtySix```
- Avocado version: 5a0c5b2348da450397287a0954e4c335c0d590a9
- Avocado installation method: git
</issue>
<code>
[start of avocado/core/utils/loader.py]
1 import importlib
2 import inspect
3 import os
4 import sys
5
6 from avocado.core import test
7 from avocado.utils import stacktrace
8
9
10 class TestError(test.Test):
11 """
12 Generic test error.
13 """
14
15 def __init__(self, *args, **kwargs):
16 exception = kwargs.pop("exception")
17 test.Test.__init__(self, *args, **kwargs)
18 self.exception = exception
19
20 def test(self):
21 self.error(self.exception)
22
23
24 def load_test(test_factory):
25 """
26 Load test from the test factory.
27
28 :param test_factory: a pair of test class and parameters.
29 :type test_factory: tuple
30 :return: an instance of :class:`avocado.core.test.Test`.
31 """
32 test_class, test_parameters = test_factory
33 if "modulePath" in test_parameters:
34 test_path = test_parameters.pop("modulePath")
35 else:
36 test_path = None
37 if isinstance(test_class, str):
38 module_name = os.path.basename(test_path).split(".")[0]
39 test_module_dir = os.path.abspath(os.path.dirname(test_path))
40 # Tests with local dir imports need this
41 try:
42 sys.path.insert(0, test_module_dir)
43 test_module = importlib.import_module(module_name)
44 except: # pylint: disable=W0702
45 # On load_module exception we fake the test class and pass
46 # the exc_info as parameter to be logged.
47 test_parameters["methodName"] = "test"
48 exception = stacktrace.prepare_exc_info(sys.exc_info())
49 test_parameters["exception"] = exception
50 return TestError(**test_parameters)
51 finally:
52 if test_module_dir in sys.path:
53 sys.path.remove(test_module_dir)
54 for _, obj in inspect.getmembers(test_module):
55 if (
56 inspect.isclass(obj)
57 and obj.__name__ == test_class
58 and inspect.getmodule(obj) == test_module
59 ):
60 if issubclass(obj, test.Test):
61 test_class = obj
62 break
63 if "run.results_dir" in test_parameters:
64 test_parameters["base_logdir"] = test_parameters.pop("run.results_dir")
65 test_instance = test_class(**test_parameters)
66
67 return test_instance
68
[end of avocado/core/utils/loader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/avocado/core/utils/loader.py b/avocado/core/utils/loader.py
--- a/avocado/core/utils/loader.py
+++ b/avocado/core/utils/loader.py
@@ -30,6 +30,8 @@
:return: an instance of :class:`avocado.core.test.Test`.
"""
test_class, test_parameters = test_factory
+ if "run.results_dir" in test_parameters:
+ test_parameters["base_logdir"] = test_parameters.pop("run.results_dir")
if "modulePath" in test_parameters:
test_path = test_parameters.pop("modulePath")
else:
@@ -60,8 +62,6 @@
if issubclass(obj, test.Test):
test_class = obj
break
- if "run.results_dir" in test_parameters:
- test_parameters["base_logdir"] = test_parameters.pop("run.results_dir")
test_instance = test_class(**test_parameters)
return test_instance
|
{"golden_diff": "diff --git a/avocado/core/utils/loader.py b/avocado/core/utils/loader.py\n--- a/avocado/core/utils/loader.py\n+++ b/avocado/core/utils/loader.py\n@@ -30,6 +30,8 @@\n :return: an instance of :class:`avocado.core.test.Test`.\n \"\"\"\n test_class, test_parameters = test_factory\n+ if \"run.results_dir\" in test_parameters:\n+ test_parameters[\"base_logdir\"] = test_parameters.pop(\"run.results_dir\")\n if \"modulePath\" in test_parameters:\n test_path = test_parameters.pop(\"modulePath\")\n else:\n@@ -60,8 +62,6 @@\n if issubclass(obj, test.Test):\n test_class = obj\n break\n- if \"run.results_dir\" in test_parameters:\n- test_parameters[\"base_logdir\"] = test_parameters.pop(\"run.results_dir\")\n test_instance = test_class(**test_parameters)\n \n return test_instance\n", "issue": "Handle \"could not import module\" errors gracefully\n**Describe the bug**\r\nAvocado does not handle \"could not import module\" errors very gracefully, with error messages that are quite cryptic.\r\n\r\n**Steps to reproduce**\r\nWrite a valid `avocado-instrumented` test, but with an invalid import. Example:\r\n\r\n```python\r\nfrom avocado import Test\r\n\r\nimport foo\r\n\r\n\r\nclass PassTest(Test):\r\n\r\n \"\"\"\r\n Example test that passes.\r\n\r\n :avocado: tags=fast\r\n \"\"\"\r\n\r\n def test(self):\r\n \"\"\"\r\n A test simply doesn't have to fail in order to pass\r\n \"\"\"\r\n```\r\n\r\nAnd run it:\r\n\r\n```\r\n$ avocado run examples/tests/passtest.py \r\nJOB ID : 3fee9803715e414a16c3dcf1ddb9ff2f6dc6c0bd\r\nJOB LOG : /home/cleber/avocado/job-results/job-2022-08-11T10.24-3fee980/job.log\r\n (1/1) examples/tests/passtest.py:PassTest.test: STARTED\r\n (1/1) examples/tests/passtest.py:PassTest.test: ERROR: Test.__init__() got an unexpected keyword argument 'run.results_dir' (0.01 s)\r\nRESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0\r\nJOB HTML : /home/cleber/avocado/job-results/job-2022-08-11T10.24-3fee980/results.html\r\nJOB TIME : 1.47 s\r\n```\r\n\r\n**Expected behavior**\r\nInstead of \"unexpected argument...\" a more clear error message such as: \"failed to import the file containing the test\" or something similar. 
\r\n\r\n**Current behavior**\r\n\r\nFrom original reporter @jnsnow:\r\n\r\n```\r\n(08/27) tests/protocol.py:Connect.testBadUNIX: ERROR:\r\n Test.__init__() got an unexpected keyword argument 'run.results_dir'\r\n (0.01 s)\r\n```\r\n\r\n**System information (please complete the following information):**\r\n - OS: ```LSB Version:\t:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch\r\nDistributor ID:\tFedora\r\nDescription:\tFedora release 36 (Thirty Six)\r\nRelease:\t36\r\nCodename:\tThirtySix```\r\n - Avocado version: 5a0c5b2348da450397287a0954e4c335c0d590a9\r\n - Avocado installation method: git\r\n\n", "before_files": [{"content": "import importlib\nimport inspect\nimport os\nimport sys\n\nfrom avocado.core import test\nfrom avocado.utils import stacktrace\n\n\nclass TestError(test.Test):\n \"\"\"\n Generic test error.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n exception = kwargs.pop(\"exception\")\n test.Test.__init__(self, *args, **kwargs)\n self.exception = exception\n\n def test(self):\n self.error(self.exception)\n\n\ndef load_test(test_factory):\n \"\"\"\n Load test from the test factory.\n\n :param test_factory: a pair of test class and parameters.\n :type test_factory: tuple\n :return: an instance of :class:`avocado.core.test.Test`.\n \"\"\"\n test_class, test_parameters = test_factory\n if \"modulePath\" in test_parameters:\n test_path = test_parameters.pop(\"modulePath\")\n else:\n test_path = None\n if isinstance(test_class, str):\n module_name = os.path.basename(test_path).split(\".\")[0]\n test_module_dir = os.path.abspath(os.path.dirname(test_path))\n # Tests with local dir imports need this\n try:\n sys.path.insert(0, test_module_dir)\n test_module = importlib.import_module(module_name)\n except: # pylint: disable=W0702\n # On load_module exception we fake the test class and pass\n # the exc_info as parameter to be logged.\n test_parameters[\"methodName\"] = \"test\"\n exception = stacktrace.prepare_exc_info(sys.exc_info())\n test_parameters[\"exception\"] = exception\n return TestError(**test_parameters)\n finally:\n if test_module_dir in sys.path:\n sys.path.remove(test_module_dir)\n for _, obj in inspect.getmembers(test_module):\n if (\n inspect.isclass(obj)\n and obj.__name__ == test_class\n and inspect.getmodule(obj) == test_module\n ):\n if issubclass(obj, test.Test):\n test_class = obj\n break\n if \"run.results_dir\" in test_parameters:\n test_parameters[\"base_logdir\"] = test_parameters.pop(\"run.results_dir\")\n test_instance = test_class(**test_parameters)\n\n return test_instance\n", "path": "avocado/core/utils/loader.py"}]}
| 1,797 | 210 |
gh_patches_debug_56084
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-5611
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of examples/inference/benchmark_ops/benchmark_rmsnorm.py]
1 import torch
2
3 from colossalai.kernel.kernel_loader import InferenceOpsLoader
4 from colossalai.kernel.triton import rms_layernorm
5
6 try:
7 import triton # noqa
8 except ImportError:
9 print("please install triton from https://github.com/openai/triton")
10
11 inference_ops = InferenceOpsLoader().load()
12
13 # Triton benchmark plot attributions
14 configs = [
15 triton.testing.Benchmark(
16 x_names=["SEQUENCE_TOTAL"],
17 x_vals=[i for i in range(128, 1025, 128)],
18 line_arg="provider",
19 line_vals=[
20 "vllm_rms_layernorm",
21 "triton_rms_layernorm",
22 "cuda_rms_layernorm",
23 "vllm_rms_layernorm_with_residual",
24 "triton_rms_layernorm_with_residual",
25 "cuda_rms_layernorm_with_residual",
26 ],
27 line_names=[
28 "vllm_rms_layernorm",
29 "triton_rms_layernorm",
30 "cuda_rms_layernorm",
31 "vllm_rms_layernorm_with_residual",
32 "triton_rms_layernorm_with_residual",
33 "cuda_rms_layernorm_with_residual",
34 ],
35 styles=[("red", "-"), ("blue", "-"), ("yellow", "-"), ("red", "--"), ("blue", "--"), ("yellow", "--")],
36 ylabel="ms",
37 plot_name=f"RMSNorm benchmarking results",
38 args={"HIDDEN_SIZE": 1024},
39 )
40 ]
41
42
43 @triton.testing.perf_report(configs)
44 def benchmark_rms_layernorm(
45 provider: str,
46 SEQUENCE_TOTAL: int,
47 HIDDEN_SIZE: int,
48 ):
49 try:
50 from vllm.model_executor.layers.layernorm import RMSNorm
51 except ImportError:
52 raise ImportError("Please install vllm from https://github.com/vllm-project/vllm")
53
54 warmup = 10
55 rep = 1000
56
57 dtype = torch.float16
58 eps = 1e-5
59 x_shape = (SEQUENCE_TOTAL, HIDDEN_SIZE)
60 w_shape = (x_shape[-1],)
61 residual = torch.rand(x_shape, dtype=dtype, device="cuda")
62 weight = torch.ones(w_shape, dtype=dtype, device="cuda")
63 vllm_norm = RMSNorm(hidden_size=HIDDEN_SIZE, eps=eps).to(dtype=dtype, device="cuda")
64 x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device="cuda")
65 if provider == "vllm_rms_layernorm":
66 fn = lambda: vllm_norm(x)
67 elif provider == "triton_rms_layernorm":
68 fn = lambda: rms_layernorm(x, weight, eps=eps)
69 elif provider == "cuda_rms_layernorm":
70 out = torch.empty_like(x)
71 fn = lambda: inference_ops.rms_layernorm(out, x, weight, eps)
72 elif provider == "vllm_rms_layernorm_with_residual":
73 fn = lambda: vllm_norm(x, residual=residual)
74 elif provider == "triton_rms_layernorm_with_residual":
75 fn = lambda: rms_layernorm(x, weight, eps=eps, residual=residual)
76 elif provider == "cuda_rms_layernorm_with_residual":
77 fn = lambda: inference_ops.fused_add_rms_layernorm(x, residual, weight, eps)
78 else:
79 raise ValueError("Undefined provider.")
80
81 ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
82
83 return ms
84
85
86 if __name__ == "__main__":
87 benchmark_rms_layernorm.run(save_path=".", print_data=True)
88
[end of examples/inference/benchmark_ops/benchmark_rmsnorm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/inference/benchmark_ops/benchmark_rmsnorm.py b/examples/inference/benchmark_ops/benchmark_rmsnorm.py
--- a/examples/inference/benchmark_ops/benchmark_rmsnorm.py
+++ b/examples/inference/benchmark_ops/benchmark_rmsnorm.py
@@ -35,7 +35,7 @@
styles=[("red", "-"), ("blue", "-"), ("yellow", "-"), ("red", "--"), ("blue", "--"), ("yellow", "--")],
ylabel="ms",
plot_name=f"RMSNorm benchmarking results",
- args={"HIDDEN_SIZE": 1024},
+ args={"HIDDEN_SIZE": 5120},
)
]
|
{"golden_diff": "diff --git a/examples/inference/benchmark_ops/benchmark_rmsnorm.py b/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n--- a/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n+++ b/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n@@ -35,7 +35,7 @@\n styles=[(\"red\", \"-\"), (\"blue\", \"-\"), (\"yellow\", \"-\"), (\"red\", \"--\"), (\"blue\", \"--\"), (\"yellow\", \"--\")],\n ylabel=\"ms\",\n plot_name=f\"RMSNorm benchmarking results\",\n- args={\"HIDDEN_SIZE\": 1024},\n+ args={\"HIDDEN_SIZE\": 5120},\n )\n ]\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import torch\n\nfrom colossalai.kernel.kernel_loader import InferenceOpsLoader\nfrom colossalai.kernel.triton import rms_layernorm\n\ntry:\n import triton # noqa\nexcept ImportError:\n print(\"please install triton from https://github.com/openai/triton\")\n\ninference_ops = InferenceOpsLoader().load()\n\n# Triton benchmark plot attributions\nconfigs = [\n triton.testing.Benchmark(\n x_names=[\"SEQUENCE_TOTAL\"],\n x_vals=[i for i in range(128, 1025, 128)],\n line_arg=\"provider\",\n line_vals=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n line_names=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n styles=[(\"red\", \"-\"), (\"blue\", \"-\"), (\"yellow\", \"-\"), (\"red\", \"--\"), (\"blue\", \"--\"), (\"yellow\", \"--\")],\n ylabel=\"ms\",\n plot_name=f\"RMSNorm benchmarking results\",\n args={\"HIDDEN_SIZE\": 1024},\n )\n]\n\n\[email protected]_report(configs)\ndef benchmark_rms_layernorm(\n provider: str,\n SEQUENCE_TOTAL: int,\n HIDDEN_SIZE: int,\n):\n try:\n from vllm.model_executor.layers.layernorm import RMSNorm\n except ImportError:\n raise ImportError(\"Please install vllm from https://github.com/vllm-project/vllm\")\n\n warmup = 10\n rep = 1000\n\n dtype = torch.float16\n eps = 1e-5\n x_shape = (SEQUENCE_TOTAL, HIDDEN_SIZE)\n w_shape = (x_shape[-1],)\n residual = torch.rand(x_shape, dtype=dtype, device=\"cuda\")\n weight = torch.ones(w_shape, dtype=dtype, device=\"cuda\")\n vllm_norm = RMSNorm(hidden_size=HIDDEN_SIZE, eps=eps).to(dtype=dtype, device=\"cuda\")\n x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device=\"cuda\")\n if provider == \"vllm_rms_layernorm\":\n fn = lambda: vllm_norm(x)\n elif provider == \"triton_rms_layernorm\":\n fn = lambda: rms_layernorm(x, weight, eps=eps)\n elif provider == \"cuda_rms_layernorm\":\n out = torch.empty_like(x)\n fn = lambda: inference_ops.rms_layernorm(out, x, weight, eps)\n elif provider == \"vllm_rms_layernorm_with_residual\":\n fn = lambda: vllm_norm(x, residual=residual)\n elif provider == \"triton_rms_layernorm_with_residual\":\n fn = lambda: rms_layernorm(x, weight, eps=eps, residual=residual)\n elif provider == \"cuda_rms_layernorm_with_residual\":\n fn = lambda: inference_ops.fused_add_rms_layernorm(x, residual, weight, eps)\n else:\n raise ValueError(\"Undefined provider.\")\n\n ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)\n\n return ms\n\n\nif __name__ == \"__main__\":\n benchmark_rms_layernorm.run(save_path=\".\", print_data=True)\n", "path": "examples/inference/benchmark_ops/benchmark_rmsnorm.py"}]}
| 1,597 | 154 |
gh_patches_debug_33315
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-666
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Idea: have a way to specify context via command line
Something like repeat arguments:
```
cookiecutter mytemplate -Cname=my-project -Cgithub-user=ionelmc
```
Or maybe the whole json?
```
cookiecutter mytemplate --context='{"name": "my-project", "github-user": "ionelmc"}'
```
Or variable arguments?
```
cookiecutter mytemplate --context-name=my-project --context-github-user=ionelmc
```
</issue>
<code>
[start of cookiecutter/cli.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.cli
6 -----------------
7
8 Main `cookiecutter` CLI.
9 """
10
11 import os
12 import sys
13 import logging
14 import json
15
16 import click
17
18 from cookiecutter import __version__
19 from cookiecutter.config import USER_CONFIG_PATH
20 from cookiecutter.main import cookiecutter
21 from cookiecutter.exceptions import (
22 OutputDirExistsException,
23 InvalidModeException,
24 FailedHookException,
25 UndefinedVariableInTemplate,
26 UnknownExtension,
27 RepositoryNotFound
28 )
29
30 logger = logging.getLogger(__name__)
31
32
33 def version_msg():
34 """Returns the Cookiecutter version, location and Python powering it."""
35 python_version = sys.version[:3]
36 location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
37 message = u'Cookiecutter %(version)s from {} (Python {})'
38 return message.format(location, python_version)
39
40
41 @click.command(context_settings=dict(help_option_names=[u'-h', u'--help']))
42 @click.version_option(__version__, u'-V', u'--version', message=version_msg())
43 @click.argument(u'template')
44 @click.option(
45 u'--no-input', is_flag=True,
46 help=u'Do not prompt for parameters and only use cookiecutter.json '
47 u'file content',
48 )
49 @click.option(
50 u'-c', u'--checkout',
51 help=u'branch, tag or commit to checkout after git clone',
52 )
53 @click.option(
54 '-v', '--verbose',
55 is_flag=True, help='Print debug information', default=False
56 )
57 @click.option(
58 u'--replay', is_flag=True,
59 help=u'Do not prompt for parameters and only use information entered '
60 u'previously',
61 )
62 @click.option(
63 u'-f', u'--overwrite-if-exists', is_flag=True,
64 help=u'Overwrite the contents of the output directory if it already exists'
65 )
66 @click.option(
67 u'-o', u'--output-dir', default='.', type=click.Path(),
68 help=u'Where to output the generated project dir into'
69 )
70 @click.option(
71 u'--config-file', type=click.Path(), default=USER_CONFIG_PATH,
72 help=u'User configuration file'
73 )
74 @click.option(
75 u'--default-config', is_flag=True,
76 help=u'Do not load a config file. Use the defaults instead'
77 )
78 def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,
79 output_dir, config_file, default_config):
80 """Create a project from a Cookiecutter project template (TEMPLATE)."""
81 if verbose:
82 logging.basicConfig(
83 format=u'%(levelname)s %(filename)s: %(message)s',
84 level=logging.DEBUG
85 )
86 else:
87 # Log info and above to console
88 logging.basicConfig(
89 format=u'%(levelname)s: %(message)s',
90 level=logging.INFO
91 )
92
93 try:
94 # If you _need_ to support a local template in a directory
95 # called 'help', use a qualified path to the directory.
96 if template == u'help':
97 click.echo(click.get_current_context().get_help())
98 sys.exit(0)
99
100 user_config = None if default_config else config_file
101
102 cookiecutter(
103 template, checkout, no_input,
104 replay=replay,
105 overwrite_if_exists=overwrite_if_exists,
106 output_dir=output_dir,
107 config_file=user_config
108 )
109 except (OutputDirExistsException,
110 InvalidModeException,
111 FailedHookException,
112 UnknownExtension,
113 RepositoryNotFound) as e:
114 click.echo(e)
115 sys.exit(1)
116 except UndefinedVariableInTemplate as undefined_err:
117 click.echo('{}'.format(undefined_err.message))
118 click.echo('Error message: {}'.format(undefined_err.error.message))
119
120 context_str = json.dumps(
121 undefined_err.context,
122 indent=4,
123 sort_keys=True
124 )
125 click.echo('Context: {}'.format(context_str))
126 sys.exit(1)
127
128
129 if __name__ == "__main__":
130 main()
131
[end of cookiecutter/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cookiecutter/cli.py b/cookiecutter/cli.py
--- a/cookiecutter/cli.py
+++ b/cookiecutter/cli.py
@@ -38,9 +38,23 @@
return message.format(location, python_version)
+def validate_extra_context(ctx, param, value):
+ for s in value:
+ if '=' not in s:
+ raise click.BadParameter(
+ 'EXTRA_CONTEXT should contain items of the form key=value; '
+ "'{}' doesn't match that form".format(s)
+ )
+
+ # Convert tuple -- e.g.: (u'program_name=foobar', u'startsecs=66')
+ # to dict -- e.g.: {'program_name': 'foobar', 'startsecs': '66'}
+ return dict(s.split('=', 1) for s in value) or None
+
+
@click.command(context_settings=dict(help_option_names=[u'-h', u'--help']))
@click.version_option(__version__, u'-V', u'--version', message=version_msg())
@click.argument(u'template')
[email protected](u'extra_context', nargs=-1, callback=validate_extra_context)
@click.option(
u'--no-input', is_flag=True,
help=u'Do not prompt for parameters and only use cookiecutter.json '
@@ -75,8 +89,8 @@
u'--default-config', is_flag=True,
help=u'Do not load a config file. Use the defaults instead'
)
-def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,
- output_dir, config_file, default_config):
+def main(template, extra_context, no_input, checkout, verbose, replay,
+ overwrite_if_exists, output_dir, config_file, default_config):
"""Create a project from a Cookiecutter project template (TEMPLATE)."""
if verbose:
logging.basicConfig(
@@ -101,6 +115,7 @@
cookiecutter(
template, checkout, no_input,
+ extra_context=extra_context,
replay=replay,
overwrite_if_exists=overwrite_if_exists,
output_dir=output_dir,
|
{"golden_diff": "diff --git a/cookiecutter/cli.py b/cookiecutter/cli.py\n--- a/cookiecutter/cli.py\n+++ b/cookiecutter/cli.py\n@@ -38,9 +38,23 @@\n return message.format(location, python_version)\n \n \n+def validate_extra_context(ctx, param, value):\n+ for s in value:\n+ if '=' not in s:\n+ raise click.BadParameter(\n+ 'EXTRA_CONTEXT should contain items of the form key=value; '\n+ \"'{}' doesn't match that form\".format(s)\n+ )\n+\n+ # Convert tuple -- e.g.: (u'program_name=foobar', u'startsecs=66')\n+ # to dict -- e.g.: {'program_name': 'foobar', 'startsecs': '66'}\n+ return dict(s.split('=', 1) for s in value) or None\n+\n+\n @click.command(context_settings=dict(help_option_names=[u'-h', u'--help']))\n @click.version_option(__version__, u'-V', u'--version', message=version_msg())\n @click.argument(u'template')\[email protected](u'extra_context', nargs=-1, callback=validate_extra_context)\n @click.option(\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n@@ -75,8 +89,8 @@\n u'--default-config', is_flag=True,\n help=u'Do not load a config file. Use the defaults instead'\n )\n-def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,\n- output_dir, config_file, default_config):\n+def main(template, extra_context, no_input, checkout, verbose, replay,\n+ overwrite_if_exists, output_dir, config_file, default_config):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n@@ -101,6 +115,7 @@\n \n cookiecutter(\n template, checkout, no_input,\n+ extra_context=extra_context,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir,\n", "issue": "Idea: have a way to specify context via command line\nSomething like repeat arguments:\n\n```\ncookiecutter mytemplate -Cname=my-project -Cgithub-user=ionelmc\n```\n\nOr maybe the whole json?\n\n```\ncookiecutter mytemplate --context='{\"name\": \"my-project\", \"github-user\": \"ionelmc\"}'\n```\n\nOr variable arguments?\n\n```\ncookiecutter mytemplate --context-name=my-project --context-github-user=ionelmc\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.cli\n-----------------\n\nMain `cookiecutter` CLI.\n\"\"\"\n\nimport os\nimport sys\nimport logging\nimport json\n\nimport click\n\nfrom cookiecutter import __version__\nfrom cookiecutter.config import USER_CONFIG_PATH\nfrom cookiecutter.main import cookiecutter\nfrom cookiecutter.exceptions import (\n OutputDirExistsException,\n InvalidModeException,\n FailedHookException,\n UndefinedVariableInTemplate,\n UnknownExtension,\n RepositoryNotFound\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef version_msg():\n \"\"\"Returns the Cookiecutter version, location and Python powering it.\"\"\"\n python_version = sys.version[:3]\n location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n message = u'Cookiecutter %(version)s from {} (Python {})'\n return message.format(location, python_version)\n\n\[email protected](context_settings=dict(help_option_names=[u'-h', u'--help']))\[email protected]_option(__version__, u'-V', u'--version', message=version_msg())\[email protected](u'template')\[email protected](\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n u'file content',\n)\[email protected](\n u'-c', u'--checkout',\n help=u'branch, tag or commit to checkout after git clone',\n)\[email protected](\n '-v', 
'--verbose',\n is_flag=True, help='Print debug information', default=False\n)\[email protected](\n u'--replay', is_flag=True,\n help=u'Do not prompt for parameters and only use information entered '\n u'previously',\n)\[email protected](\n u'-f', u'--overwrite-if-exists', is_flag=True,\n help=u'Overwrite the contents of the output directory if it already exists'\n)\[email protected](\n u'-o', u'--output-dir', default='.', type=click.Path(),\n help=u'Where to output the generated project dir into'\n)\[email protected](\n u'--config-file', type=click.Path(), default=USER_CONFIG_PATH,\n help=u'User configuration file'\n)\[email protected](\n u'--default-config', is_flag=True,\n help=u'Do not load a config file. Use the defaults instead'\n)\ndef main(template, no_input, checkout, verbose, replay, overwrite_if_exists,\n output_dir, config_file, default_config):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n format=u'%(levelname)s %(filename)s: %(message)s',\n level=logging.DEBUG\n )\n else:\n # Log info and above to console\n logging.basicConfig(\n format=u'%(levelname)s: %(message)s',\n level=logging.INFO\n )\n\n try:\n # If you _need_ to support a local template in a directory\n # called 'help', use a qualified path to the directory.\n if template == u'help':\n click.echo(click.get_current_context().get_help())\n sys.exit(0)\n\n user_config = None if default_config else config_file\n\n cookiecutter(\n template, checkout, no_input,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir,\n config_file=user_config\n )\n except (OutputDirExistsException,\n InvalidModeException,\n FailedHookException,\n UnknownExtension,\n RepositoryNotFound) as e:\n click.echo(e)\n sys.exit(1)\n except UndefinedVariableInTemplate as undefined_err:\n click.echo('{}'.format(undefined_err.message))\n click.echo('Error message: {}'.format(undefined_err.error.message))\n\n context_str = json.dumps(\n undefined_err.context,\n indent=4,\n sort_keys=True\n )\n click.echo('Context: {}'.format(context_str))\n sys.exit(1)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "cookiecutter/cli.py"}]}
| 1,791 | 475 |
gh_patches_debug_11410
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-4764
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
unable to plot file from repo subfolder
DVC version: `1.8.4`
```
#!/bin/bash
set -ex
rm -rf wspace
mkdir wspace
pushd wspace
mkdir repo
pushd repo
git init --quiet
dvc init --quiet
mkdir subfolder
pushd subfolder
echo '[{"val":1},{"val":3}]' >> plot.json
dvc plots show plot.json
```
fails with `unexpected error - 'plot.json'`
Need to fix for both Git repo and DVC repo.
Probably related to: #4665 #4559
Also, as @sudoandros mentioned under #4665 this error does not happen when dealing with `metrics`.
</issue>
<code>
[start of dvc/repo/plots/__init__.py]
1 import logging
2
3 from funcy import cached_property, first, project
4
5 from dvc.exceptions import DvcException, NoPlotsError
6 from dvc.repo.collect import collect
7 from dvc.schema import PLOT_PROPS
8 from dvc.tree.repo import RepoTree
9 from dvc.utils import relpath
10
11 logger = logging.getLogger(__name__)
12
13
14 class NotAPlotError(DvcException):
15 def __init__(self, out):
16 super().__init__(
17 f"'{out}' is not a known plot. Use `dvc plots modify` to turn it "
18 "into one."
19 )
20
21
22 class PropsNotFoundError(DvcException):
23 pass
24
25
26 class Plots:
27 def __init__(self, repo):
28 self.repo = repo
29
30 def collect(self, targets=None, revs=None):
31 """Collects all props and data for plots.
32
33 Returns a structure like:
34 {rev: {plots.csv: {
35 props: {x: ..., "header": ..., ...},
36 data: "...data as a string...",
37 }}}
38 Data parsing is postponed, since it's affected by props.
39 """
40 targets = [targets] if isinstance(targets, str) else targets or []
41 data = {}
42 for rev in self.repo.brancher(revs=revs):
43 # .brancher() adds unwanted workspace
44 if revs is not None and rev not in revs:
45 continue
46 rev = rev or "workspace"
47
48 tree = RepoTree(self.repo)
49 plots = _collect_plots(self.repo, targets, rev)
50 for path_info, props in plots.items():
51 datafile = relpath(path_info, self.repo.root_dir)
52 if rev not in data:
53 data[rev] = {}
54 data[rev].update({datafile: {"props": props}})
55
56 # Load data from git or dvc cache
57 try:
58 with tree.open(path_info) as fd:
59 data[rev][datafile]["data"] = fd.read()
60 except FileNotFoundError:
61 # This might happen simply because cache is absent
62 pass
63
64 return data
65
66 @staticmethod
67 def render(data, revs=None, props=None, templates=None):
68 """Renders plots"""
69 props = props or {}
70
71 # Merge data by plot file and apply overriding props
72 plots = _prepare_plots(data, revs, props)
73
74 return {
75 datafile: _render(datafile, desc["data"], desc["props"], templates)
76 for datafile, desc in plots.items()
77 }
78
79 def show(self, targets=None, revs=None, props=None, templates=None):
80 from .data import NoMetricInHistoryError
81
82 data = self.collect(targets, revs)
83
84 # If any mentioned plot doesn't have any data then that's an error
85 targets = [targets] if isinstance(targets, str) else targets or []
86 for target in targets:
87 if not any("data" in d[target] for d in data.values()):
88 raise NoMetricInHistoryError(target)
89
90 # No data at all is a special error with a special message
91 if not data:
92 raise NoPlotsError()
93
94 if templates is None:
95 templates = self.templates
96 return self.render(data, revs, props, templates)
97
98 def diff(self, *args, **kwargs):
99 from .diff import diff
100
101 return diff(self.repo, *args, **kwargs)
102
103 @staticmethod
104 def _unset(out, props):
105 missing = list(set(props) - set(out.plot.keys()))
106 if missing:
107 raise PropsNotFoundError(
108 f"display properties {missing} not found in plot '{out}'"
109 )
110
111 for prop in props:
112 out.plot.pop(prop)
113
114 def modify(self, path, props=None, unset=None):
115 from dvc.dvcfile import Dvcfile
116
117 props = props or {}
118 template = props.get("template")
119 if template:
120 self.templates.get_template(template)
121
122 (out,) = self.repo.find_outs_by_path(path)
123 if not out.plot and unset is not None:
124 raise NotAPlotError(out)
125
126 # This out will become a plot unless it is one already
127 if not isinstance(out.plot, dict):
128 out.plot = {}
129
130 if unset:
131 self._unset(out, unset)
132
133 out.plot.update(props)
134
135 # Empty dict will move it to non-plots
136 if not out.plot:
137 out.plot = True
138
139 out.verify_metric()
140
141 dvcfile = Dvcfile(self.repo, out.stage.path)
142 dvcfile.dump(out.stage, update_lock=False)
143
144 @cached_property
145 def templates(self):
146 from .template import PlotTemplates
147
148 return PlotTemplates(self.repo.dvc_dir)
149
150
151 def _is_plot(out):
152 return bool(out.plot)
153
154
155 def _collect_plots(repo, targets=None, rev=None):
156 plots, path_infos = collect(
157 repo, output_filter=_is_plot, targets=targets, rev=rev
158 )
159 result = {plot.path_info: _plot_props(plot) for plot in plots}
160 result.update({path_info: {} for path_info in path_infos})
161 return result
162
163
164 def _plot_props(out):
165 if not out.plot:
166 raise NotAPlotError(out)
167 if isinstance(out.plot, list):
168 raise DvcException("Multiple plots per data file not supported.")
169 if isinstance(out.plot, bool):
170 return {}
171
172 return project(out.plot, PLOT_PROPS)
173
174
175 def _prepare_plots(data, revs, props):
176 """Groups data by plot file.
177
178 Also resolves props conflicts between revs and applies global props.
179 """
180 # we go in order revs are supplied on props conflict first ones win.
181 revs = iter(data) if revs is None else revs
182
183 plots, props_revs = {}, {}
184 for rev in revs:
185 # Asked for revision without data
186 if rev not in data:
187 continue
188
189 for datafile, desc in data[rev].items():
190 # We silently skip on an absent data file,
191 # see also try/except/pass in .collect()
192 if "data" not in desc:
193 continue
194
195 # props from command line overwrite plot props from out definition
196 full_props = {**desc["props"], **props}
197
198 if datafile in plots:
199 saved = plots[datafile]
200 if saved["props"] != full_props:
201 logger.warning(
202 f"Inconsistent plot props for '{datafile}' in "
203 f"'{props_revs[datafile]}' and '{rev}'. "
204 f"Going to use ones from '{props_revs[datafile]}'"
205 )
206
207 saved["data"][rev] = desc["data"]
208 else:
209 plots[datafile] = {
210 "props": full_props,
211 "data": {rev: desc["data"]},
212 }
213 # Save rev we got props from
214 props_revs[datafile] = rev
215
216 return plots
217
218
219 def _render(datafile, datas, props, templates):
220 from .data import PlotData, plot_data
221
222 # Copy it to not modify a passed value
223 props = props.copy()
224
225 # Add x and y to fields if set
226 fields = props.get("fields")
227 if fields is not None:
228 fields = {*fields, props.get("x"), props.get("y")} - {None}
229
230 template = templates.load(props.get("template") or "default")
231
232 # If x is not set add index field
233 if not props.get("x") and template.has_anchor("x"):
234 props["append_index"] = True
235 props["x"] = PlotData.INDEX_FIELD
236
237 # Parse all data, preprocess it and collect as a list of dicts
238 data = []
239 for rev, datablob in datas.items():
240 rev_data = plot_data(datafile, rev, datablob).to_datapoints(
241 fields=fields,
242 path=props.get("path"),
243 header=props.get("header", True),
244 append_index=props.get("append_index", False),
245 )
246 data.extend(rev_data)
247
248 # If y is not set then use last field not used yet
249 if not props.get("y") and template.has_anchor("y"):
250 fields = list(first(data))
251 skip = (PlotData.REVISION_FIELD, props.get("x"))
252 props["y"] = first(f for f in reversed(fields) if f not in skip)
253
254 return template.render(data, props=props)
255
[end of dvc/repo/plots/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/repo/plots/__init__.py b/dvc/repo/plots/__init__.py
--- a/dvc/repo/plots/__init__.py
+++ b/dvc/repo/plots/__init__.py
@@ -84,7 +84,8 @@
# If any mentioned plot doesn't have any data then that's an error
targets = [targets] if isinstance(targets, str) else targets or []
for target in targets:
- if not any("data" in d[target] for d in data.values()):
+ rpath = relpath(target, self.repo.root_dir)
+ if not any("data" in d[rpath] for d in data.values()):
raise NoMetricInHistoryError(target)
# No data at all is a special error with a special message
|
{"golden_diff": "diff --git a/dvc/repo/plots/__init__.py b/dvc/repo/plots/__init__.py\n--- a/dvc/repo/plots/__init__.py\n+++ b/dvc/repo/plots/__init__.py\n@@ -84,7 +84,8 @@\n # If any mentioned plot doesn't have any data then that's an error\n targets = [targets] if isinstance(targets, str) else targets or []\n for target in targets:\n- if not any(\"data\" in d[target] for d in data.values()):\n+ rpath = relpath(target, self.repo.root_dir)\n+ if not any(\"data\" in d[rpath] for d in data.values()):\n raise NoMetricInHistoryError(target)\n \n # No data at all is a special error with a special message\n", "issue": "unable to plot file from repo subfolder\nDVC version: `1.8.4`\r\n\r\n```\r\n#!/bin/bash\r\n\r\nset -ex\r\n\r\nrm -rf wspace\r\nmkdir wspace\r\npushd wspace\r\n\r\nmkdir repo\r\npushd repo\r\n\r\ngit init --quiet\r\ndvc init --quiet\r\n\r\nmkdir subfolder\r\npushd subfolder\r\n\r\necho '[{\"val\":1},{\"val\":3}]' >> plot.json\r\n\r\ndvc plots show plot.json\r\n```\r\n\r\nfails with `unexpected error - 'plot.json'`\r\n\r\nNeed to fix for both Git repo and DVC repo.\r\n\r\nProbably related to: #4665 #4559\r\n\r\nAlso, as @sudoandros mentioned under #4665 this error does not happen when dealing with `metrics`.\n", "before_files": [{"content": "import logging\n\nfrom funcy import cached_property, first, project\n\nfrom dvc.exceptions import DvcException, NoPlotsError\nfrom dvc.repo.collect import collect\nfrom dvc.schema import PLOT_PROPS\nfrom dvc.tree.repo import RepoTree\nfrom dvc.utils import relpath\n\nlogger = logging.getLogger(__name__)\n\n\nclass NotAPlotError(DvcException):\n def __init__(self, out):\n super().__init__(\n f\"'{out}' is not a known plot. Use `dvc plots modify` to turn it \"\n \"into one.\"\n )\n\n\nclass PropsNotFoundError(DvcException):\n pass\n\n\nclass Plots:\n def __init__(self, repo):\n self.repo = repo\n\n def collect(self, targets=None, revs=None):\n \"\"\"Collects all props and data for plots.\n\n Returns a structure like:\n {rev: {plots.csv: {\n props: {x: ..., \"header\": ..., ...},\n data: \"...data as a string...\",\n }}}\n Data parsing is postponed, since it's affected by props.\n \"\"\"\n targets = [targets] if isinstance(targets, str) else targets or []\n data = {}\n for rev in self.repo.brancher(revs=revs):\n # .brancher() adds unwanted workspace\n if revs is not None and rev not in revs:\n continue\n rev = rev or \"workspace\"\n\n tree = RepoTree(self.repo)\n plots = _collect_plots(self.repo, targets, rev)\n for path_info, props in plots.items():\n datafile = relpath(path_info, self.repo.root_dir)\n if rev not in data:\n data[rev] = {}\n data[rev].update({datafile: {\"props\": props}})\n\n # Load data from git or dvc cache\n try:\n with tree.open(path_info) as fd:\n data[rev][datafile][\"data\"] = fd.read()\n except FileNotFoundError:\n # This might happen simply because cache is absent\n pass\n\n return data\n\n @staticmethod\n def render(data, revs=None, props=None, templates=None):\n \"\"\"Renders plots\"\"\"\n props = props or {}\n\n # Merge data by plot file and apply overriding props\n plots = _prepare_plots(data, revs, props)\n\n return {\n datafile: _render(datafile, desc[\"data\"], desc[\"props\"], templates)\n for datafile, desc in plots.items()\n }\n\n def show(self, targets=None, revs=None, props=None, templates=None):\n from .data import NoMetricInHistoryError\n\n data = self.collect(targets, revs)\n\n # If any mentioned plot doesn't have any data then that's an error\n targets = [targets] if isinstance(targets, str) else targets 
or []\n for target in targets:\n if not any(\"data\" in d[target] for d in data.values()):\n raise NoMetricInHistoryError(target)\n\n # No data at all is a special error with a special message\n if not data:\n raise NoPlotsError()\n\n if templates is None:\n templates = self.templates\n return self.render(data, revs, props, templates)\n\n def diff(self, *args, **kwargs):\n from .diff import diff\n\n return diff(self.repo, *args, **kwargs)\n\n @staticmethod\n def _unset(out, props):\n missing = list(set(props) - set(out.plot.keys()))\n if missing:\n raise PropsNotFoundError(\n f\"display properties {missing} not found in plot '{out}'\"\n )\n\n for prop in props:\n out.plot.pop(prop)\n\n def modify(self, path, props=None, unset=None):\n from dvc.dvcfile import Dvcfile\n\n props = props or {}\n template = props.get(\"template\")\n if template:\n self.templates.get_template(template)\n\n (out,) = self.repo.find_outs_by_path(path)\n if not out.plot and unset is not None:\n raise NotAPlotError(out)\n\n # This out will become a plot unless it is one already\n if not isinstance(out.plot, dict):\n out.plot = {}\n\n if unset:\n self._unset(out, unset)\n\n out.plot.update(props)\n\n # Empty dict will move it to non-plots\n if not out.plot:\n out.plot = True\n\n out.verify_metric()\n\n dvcfile = Dvcfile(self.repo, out.stage.path)\n dvcfile.dump(out.stage, update_lock=False)\n\n @cached_property\n def templates(self):\n from .template import PlotTemplates\n\n return PlotTemplates(self.repo.dvc_dir)\n\n\ndef _is_plot(out):\n return bool(out.plot)\n\n\ndef _collect_plots(repo, targets=None, rev=None):\n plots, path_infos = collect(\n repo, output_filter=_is_plot, targets=targets, rev=rev\n )\n result = {plot.path_info: _plot_props(plot) for plot in plots}\n result.update({path_info: {} for path_info in path_infos})\n return result\n\n\ndef _plot_props(out):\n if not out.plot:\n raise NotAPlotError(out)\n if isinstance(out.plot, list):\n raise DvcException(\"Multiple plots per data file not supported.\")\n if isinstance(out.plot, bool):\n return {}\n\n return project(out.plot, PLOT_PROPS)\n\n\ndef _prepare_plots(data, revs, props):\n \"\"\"Groups data by plot file.\n\n Also resolves props conflicts between revs and applies global props.\n \"\"\"\n # we go in order revs are supplied on props conflict first ones win.\n revs = iter(data) if revs is None else revs\n\n plots, props_revs = {}, {}\n for rev in revs:\n # Asked for revision without data\n if rev not in data:\n continue\n\n for datafile, desc in data[rev].items():\n # We silently skip on an absent data file,\n # see also try/except/pass in .collect()\n if \"data\" not in desc:\n continue\n\n # props from command line overwrite plot props from out definition\n full_props = {**desc[\"props\"], **props}\n\n if datafile in plots:\n saved = plots[datafile]\n if saved[\"props\"] != full_props:\n logger.warning(\n f\"Inconsistent plot props for '{datafile}' in \"\n f\"'{props_revs[datafile]}' and '{rev}'. 
\"\n f\"Going to use ones from '{props_revs[datafile]}'\"\n )\n\n saved[\"data\"][rev] = desc[\"data\"]\n else:\n plots[datafile] = {\n \"props\": full_props,\n \"data\": {rev: desc[\"data\"]},\n }\n # Save rev we got props from\n props_revs[datafile] = rev\n\n return plots\n\n\ndef _render(datafile, datas, props, templates):\n from .data import PlotData, plot_data\n\n # Copy it to not modify a passed value\n props = props.copy()\n\n # Add x and y to fields if set\n fields = props.get(\"fields\")\n if fields is not None:\n fields = {*fields, props.get(\"x\"), props.get(\"y\")} - {None}\n\n template = templates.load(props.get(\"template\") or \"default\")\n\n # If x is not set add index field\n if not props.get(\"x\") and template.has_anchor(\"x\"):\n props[\"append_index\"] = True\n props[\"x\"] = PlotData.INDEX_FIELD\n\n # Parse all data, preprocess it and collect as a list of dicts\n data = []\n for rev, datablob in datas.items():\n rev_data = plot_data(datafile, rev, datablob).to_datapoints(\n fields=fields,\n path=props.get(\"path\"),\n header=props.get(\"header\", True),\n append_index=props.get(\"append_index\", False),\n )\n data.extend(rev_data)\n\n # If y is not set then use last field not used yet\n if not props.get(\"y\") and template.has_anchor(\"y\"):\n fields = list(first(data))\n skip = (PlotData.REVISION_FIELD, props.get(\"x\"))\n props[\"y\"] = first(f for f in reversed(fields) if f not in skip)\n\n return template.render(data, props=props)\n", "path": "dvc/repo/plots/__init__.py"}]}
| 3,202 | 179 |
gh_patches_debug_24258
|
rasdani/github-patches
|
git_diff
|
PyGithub__PyGithub-638
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Team add_membership should take parameter for role
The github api allows the use of this call to specify the role of the new member, https://developer.github.com/v3/orgs/teams/#add-or-update-team-membership.
Team.add_membership should allow the same.
Team add_membership method needs a test
Team add_membership should take parameter for role
The github api allows the use of this call to specify the role of the new member, https://developer.github.com/v3/orgs/teams/#add-or-update-team-membership.
Team.add_membership should allow the same.
</issue>
<code>
[start of github/Team.py]
1 # -*- coding: utf-8 -*-
2
3 # ########################## Copyrights and license ############################
4 # #
5 # Copyright 2012 Vincent Jacques <[email protected]> #
6 # Copyright 2012 Zearin <[email protected]> #
7 # Copyright 2013 AKFish <[email protected]> #
8 # Copyright 2013 Vincent Jacques <[email protected]> #
9 # Copyright 2013 martinqt <[email protected]> #
10 # #
11 # This file is part of PyGithub. #
12 # http://pygithub.github.io/PyGithub/v1/index.html #
13 # #
14 # PyGithub is free software: you can redistribute it and/or modify it under #
15 # the terms of the GNU Lesser General Public License as published by the Free #
16 # Software Foundation, either version 3 of the License, or (at your option) #
17 # any later version. #
18 # #
19 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
20 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
21 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
22 # details. #
23 # #
24 # You should have received a copy of the GNU Lesser General Public License #
25 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
26 # #
27 # ##############################################################################
28
29 import github.GithubObject
30 import github.PaginatedList
31
32 import github.Repository
33 import github.NamedUser
34
35
36 class Team(github.GithubObject.CompletableGithubObject):
37 """
38 This class represents Teams. The reference can be found here http://developer.github.com/v3/orgs/teams/
39 """
40
41 def __repr__(self):
42 return self.get__repr__({"id": self._id.value, "name": self._name.value})
43
44 @property
45 def id(self):
46 """
47 :type: integer
48 """
49 self._completeIfNotSet(self._id)
50 return self._id.value
51
52 @property
53 def members_count(self):
54 """
55 :type: integer
56 """
57 self._completeIfNotSet(self._members_count)
58 return self._members_count.value
59
60 @property
61 def members_url(self):
62 """
63 :type: string
64 """
65 self._completeIfNotSet(self._members_url)
66 return self._members_url.value
67
68 @property
69 def name(self):
70 """
71 :type: string
72 """
73 self._completeIfNotSet(self._name)
74 return self._name.value
75
76 @property
77 def permission(self):
78 """
79 :type: string
80 """
81 self._completeIfNotSet(self._permission)
82 return self._permission.value
83
84 @property
85 def repos_count(self):
86 """
87 :type: integer
88 """
89 self._completeIfNotSet(self._repos_count)
90 return self._repos_count.value
91
92 @property
93 def repositories_url(self):
94 """
95 :type: string
96 """
97 self._completeIfNotSet(self._repositories_url)
98 return self._repositories_url.value
99
100 @property
101 def slug(self):
102 """
103 :type: string
104 """
105 self._completeIfNotSet(self._slug)
106 return self._slug.value
107
108 @property
109 def url(self):
110 """
111 :type: string
112 """
113 self._completeIfNotSet(self._url)
114 return self._url.value
115
116 def add_to_members(self, member):
117 """
118 :calls: `PUT /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_
119 :param member: :class:`github.NamedUser.NamedUser`
120 :rtype: None
121 """
122 assert isinstance(member, github.NamedUser.NamedUser), member
123 headers, data = self._requester.requestJsonAndCheck(
124 "PUT",
125 self.url + "/members/" + member._identity
126 )
127
128 def add_membership(self, member):
129 """
130 :calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_
131 :param member: :class:`github.Nameduser.NamedUser`
132 :rtype: None
133 """
134 assert isinstance(member, github.NamedUser.NamedUser), member
135 headers, data = self._requester.requestJsonAndCheck(
136 "PUT",
137 self.url + "/memberships/" + member._identity
138 )
139
140 def add_to_repos(self, repo):
141 """
142 :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_
143 :param repo: :class:`github.Repository.Repository`
144 :rtype: None
145 """
146 assert isinstance(repo, github.Repository.Repository), repo
147 headers, data = self._requester.requestJsonAndCheck(
148 "PUT",
149 self.url + "/repos/" + repo._identity
150 )
151
152 def set_repo_permission(self, repo, permission):
153 """
154 :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_
155 :param repo: :class:`github.Repository.Repository`
156 :param permission: string
157 :rtype: None
158 """
159 assert isinstance(repo, github.Repository.Repository), repo
160 put_parameters = {
161 "permission": permission,
162 }
163 headers, data = self._requester.requestJsonAndCheck(
164 "PUT",
165 self.url + "/repos/" + repo._identity,
166 input=put_parameters
167 )
168
169 def delete(self):
170 """
171 :calls: `DELETE /teams/:id <http://developer.github.com/v3/orgs/teams>`_
172 :rtype: None
173 """
174 headers, data = self._requester.requestJsonAndCheck(
175 "DELETE",
176 self.url
177 )
178
179 def edit(self, name, permission=github.GithubObject.NotSet):
180 """
181 :calls: `PATCH /teams/:id <http://developer.github.com/v3/orgs/teams>`_
182 :param name: string
183 :param permission: string
184 :rtype: None
185 """
186 assert isinstance(name, (str, unicode)), name
187 assert permission is github.GithubObject.NotSet or isinstance(permission, (str, unicode)), permission
188 post_parameters = {
189 "name": name,
190 }
191 if permission is not github.GithubObject.NotSet:
192 post_parameters["permission"] = permission
193 headers, data = self._requester.requestJsonAndCheck(
194 "PATCH",
195 self.url,
196 input=post_parameters
197 )
198 self._useAttributes(data)
199
200 def get_members(self):
201 """
202 :calls: `GET /teams/:id/members <http://developer.github.com/v3/orgs/teams>`_
203 :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser`
204 """
205 return github.PaginatedList.PaginatedList(
206 github.NamedUser.NamedUser,
207 self._requester,
208 self.url + "/members",
209 None
210 )
211
212 def get_repos(self):
213 """
214 :calls: `GET /teams/:id/repos <http://developer.github.com/v3/orgs/teams>`_
215 :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`
216 """
217 return github.PaginatedList.PaginatedList(
218 github.Repository.Repository,
219 self._requester,
220 self.url + "/repos",
221 None
222 )
223
224 def has_in_members(self, member):
225 """
226 :calls: `GET /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_
227 :param member: :class:`github.NamedUser.NamedUser`
228 :rtype: bool
229 """
230 assert isinstance(member, github.NamedUser.NamedUser), member
231 status, headers, data = self._requester.requestJson(
232 "GET",
233 self.url + "/members/" + member._identity
234 )
235 return status == 204
236
237 def has_in_repos(self, repo):
238 """
239 :calls: `GET /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_
240 :param repo: :class:`github.Repository.Repository`
241 :rtype: bool
242 """
243 assert isinstance(repo, github.Repository.Repository), repo
244 status, headers, data = self._requester.requestJson(
245 "GET",
246 self.url + "/repos/" + repo._identity
247 )
248 return status == 204
249
250 def remove_from_members(self, member):
251 """
252 :calls: `DELETE /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_
253 :param member: :class:`github.NamedUser.NamedUser`
254 :rtype: None
255 """
256 assert isinstance(member, github.NamedUser.NamedUser), member
257 headers, data = self._requester.requestJsonAndCheck(
258 "DELETE",
259 self.url + "/members/" + member._identity
260 )
261
262 def remove_from_repos(self, repo):
263 """
264 :calls: `DELETE /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_
265 :param repo: :class:`github.Repository.Repository`
266 :rtype: None
267 """
268 assert isinstance(repo, github.Repository.Repository), repo
269 headers, data = self._requester.requestJsonAndCheck(
270 "DELETE",
271 self.url + "/repos/" + repo._identity
272 )
273
274 @property
275 def _identity(self):
276 return self.id
277
278 def _initAttributes(self):
279 self._id = github.GithubObject.NotSet
280 self._members_count = github.GithubObject.NotSet
281 self._members_url = github.GithubObject.NotSet
282 self._name = github.GithubObject.NotSet
283 self._permission = github.GithubObject.NotSet
284 self._repos_count = github.GithubObject.NotSet
285 self._repositories_url = github.GithubObject.NotSet
286 self._slug = github.GithubObject.NotSet
287 self._url = github.GithubObject.NotSet
288
289 def _useAttributes(self, attributes):
290 if "id" in attributes: # pragma no branch
291 self._id = self._makeIntAttribute(attributes["id"])
292 if "members_count" in attributes: # pragma no branch
293 self._members_count = self._makeIntAttribute(attributes["members_count"])
294 if "members_url" in attributes: # pragma no branch
295 self._members_url = self._makeStringAttribute(attributes["members_url"])
296 if "name" in attributes: # pragma no branch
297 self._name = self._makeStringAttribute(attributes["name"])
298 if "permission" in attributes: # pragma no branch
299 self._permission = self._makeStringAttribute(attributes["permission"])
300 if "repos_count" in attributes: # pragma no branch
301 self._repos_count = self._makeIntAttribute(attributes["repos_count"])
302 if "repositories_url" in attributes: # pragma no branch
303 self._repositories_url = self._makeStringAttribute(attributes["repositories_url"])
304 if "slug" in attributes: # pragma no branch
305 self._slug = self._makeStringAttribute(attributes["slug"])
306 if "url" in attributes: # pragma no branch
307 self._url = self._makeStringAttribute(attributes["url"])
308
[end of github/Team.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/github/Team.py b/github/Team.py
--- a/github/Team.py
+++ b/github/Team.py
@@ -125,16 +125,29 @@
self.url + "/members/" + member._identity
)
- def add_membership(self, member):
+ def add_membership(self, member, role=github.GithubObject.NotSet):
"""
:calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_
:param member: :class:`github.Nameduser.NamedUser`
+ :param role: string
:rtype: None
"""
assert isinstance(member, github.NamedUser.NamedUser), member
+ assert role is github.GithubObject.NotSet or isinstance(
+ role, (str, unicode)), role
+ if role is not github.GithubObject.NotSet:
+ assert role in ['member', 'maintainer']
+ put_parameters = {
+ "role": role,
+ }
+ else:
+ put_parameters = {
+ "role": "member",
+ }
headers, data = self._requester.requestJsonAndCheck(
"PUT",
- self.url + "/memberships/" + member._identity
+ self.url + "/memberships/" + member._identity,
+ input=put_parameters
)
def add_to_repos(self, repo):
|
{"golden_diff": "diff --git a/github/Team.py b/github/Team.py\n--- a/github/Team.py\n+++ b/github/Team.py\n@@ -125,16 +125,29 @@\n self.url + \"/members/\" + member._identity\n )\n \n- def add_membership(self, member):\n+ def add_membership(self, member, role=github.GithubObject.NotSet):\n \"\"\"\n :calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.Nameduser.NamedUser`\n+ :param role: string\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n+ assert role is github.GithubObject.NotSet or isinstance(\n+ role, (str, unicode)), role\n+ if role is not github.GithubObject.NotSet:\n+ assert role in ['member', 'maintainer']\n+ put_parameters = {\n+ \"role\": role,\n+ }\n+ else:\n+ put_parameters = {\n+ \"role\": \"member\",\n+ }\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n- self.url + \"/memberships/\" + member._identity\n+ self.url + \"/memberships/\" + member._identity,\n+ input=put_parameters\n )\n \n def add_to_repos(self, repo):\n", "issue": "Team add_membership should take parameter for role\nThe github api allows the use of this call to specify the role of the new member, https://developer.github.com/v3/orgs/teams/#add-or-update-team-membership. \r\n\r\nTeam.add_membership should allow the same.\nTeam add_membership method needs a test\n\nTeam add_membership should take parameter for role\nThe github api allows the use of this call to specify the role of the new member, https://developer.github.com/v3/orgs/teams/#add-or-update-team-membership. \r\n\r\nTeam.add_membership should allow the same.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# ########################## Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 martinqt <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.github.io/PyGithub/v1/index.html #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n# ##############################################################################\n\nimport github.GithubObject\nimport github.PaginatedList\n\nimport github.Repository\nimport github.NamedUser\n\n\nclass Team(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents Teams. 
The reference can be found here http://developer.github.com/v3/orgs/teams/\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"id\": self._id.value, \"name\": self._name.value})\n\n @property\n def id(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._id)\n return self._id.value\n\n @property\n def members_count(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._members_count)\n return self._members_count.value\n\n @property\n def members_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._members_url)\n return self._members_url.value\n\n @property\n def name(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._name)\n return self._name.value\n\n @property\n def permission(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._permission)\n return self._permission.value\n\n @property\n def repos_count(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._repos_count)\n return self._repos_count.value\n\n @property\n def repositories_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._repositories_url)\n return self._repositories_url.value\n\n @property\n def slug(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._slug)\n return self._slug.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._url)\n return self._url.value\n\n def add_to_members(self, member):\n \"\"\"\n :calls: `PUT /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/members/\" + member._identity\n )\n\n def add_membership(self, member):\n \"\"\"\n :calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.Nameduser.NamedUser`\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/memberships/\" + member._identity\n )\n\n def add_to_repos(self, repo):\n \"\"\"\n :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/repos/\" + repo._identity\n )\n\n def set_repo_permission(self, repo, permission):\n \"\"\"\n :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :param permission: string\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n put_parameters = {\n \"permission\": permission,\n }\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/repos/\" + repo._identity,\n input=put_parameters\n )\n\n def delete(self):\n \"\"\"\n :calls: `DELETE /teams/:id <http://developer.github.com/v3/orgs/teams>`_\n :rtype: None\n \"\"\"\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url\n )\n\n def edit(self, name, permission=github.GithubObject.NotSet):\n \"\"\"\n :calls: `PATCH /teams/:id <http://developer.github.com/v3/orgs/teams>`_\n :param name: string\n :param permission: 
string\n :rtype: None\n \"\"\"\n assert isinstance(name, (str, unicode)), name\n assert permission is github.GithubObject.NotSet or isinstance(permission, (str, unicode)), permission\n post_parameters = {\n \"name\": name,\n }\n if permission is not github.GithubObject.NotSet:\n post_parameters[\"permission\"] = permission\n headers, data = self._requester.requestJsonAndCheck(\n \"PATCH\",\n self.url,\n input=post_parameters\n )\n self._useAttributes(data)\n\n def get_members(self):\n \"\"\"\n :calls: `GET /teams/:id/members <http://developer.github.com/v3/orgs/teams>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.NamedUser.NamedUser,\n self._requester,\n self.url + \"/members\",\n None\n )\n\n def get_repos(self):\n \"\"\"\n :calls: `GET /teams/:id/repos <http://developer.github.com/v3/orgs/teams>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.Repository.Repository,\n self._requester,\n self.url + \"/repos\",\n None\n )\n\n def has_in_members(self, member):\n \"\"\"\n :calls: `GET /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: bool\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n status, headers, data = self._requester.requestJson(\n \"GET\",\n self.url + \"/members/\" + member._identity\n )\n return status == 204\n\n def has_in_repos(self, repo):\n \"\"\"\n :calls: `GET /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: bool\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n status, headers, data = self._requester.requestJson(\n \"GET\",\n self.url + \"/repos/\" + repo._identity\n )\n return status == 204\n\n def remove_from_members(self, member):\n \"\"\"\n :calls: `DELETE /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url + \"/members/\" + member._identity\n )\n\n def remove_from_repos(self, repo):\n \"\"\"\n :calls: `DELETE /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url + \"/repos/\" + repo._identity\n )\n\n @property\n def _identity(self):\n return self.id\n\n def _initAttributes(self):\n self._id = github.GithubObject.NotSet\n self._members_count = github.GithubObject.NotSet\n self._members_url = github.GithubObject.NotSet\n self._name = github.GithubObject.NotSet\n self._permission = github.GithubObject.NotSet\n self._repos_count = github.GithubObject.NotSet\n self._repositories_url = github.GithubObject.NotSet\n self._slug = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"id\" in attributes: # pragma no branch\n self._id = self._makeIntAttribute(attributes[\"id\"])\n if \"members_count\" in attributes: # pragma no branch\n self._members_count = 
self._makeIntAttribute(attributes[\"members_count\"])\n if \"members_url\" in attributes: # pragma no branch\n self._members_url = self._makeStringAttribute(attributes[\"members_url\"])\n if \"name\" in attributes: # pragma no branch\n self._name = self._makeStringAttribute(attributes[\"name\"])\n if \"permission\" in attributes: # pragma no branch\n self._permission = self._makeStringAttribute(attributes[\"permission\"])\n if \"repos_count\" in attributes: # pragma no branch\n self._repos_count = self._makeIntAttribute(attributes[\"repos_count\"])\n if \"repositories_url\" in attributes: # pragma no branch\n self._repositories_url = self._makeStringAttribute(attributes[\"repositories_url\"])\n if \"slug\" in attributes: # pragma no branch\n self._slug = self._makeStringAttribute(attributes[\"slug\"])\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n", "path": "github/Team.py"}]}
| 3,976 | 313 |
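A quick usage sketch of the patched `Team.add_membership` above may help readers of this row. The token, organization name, team id and login below are placeholders, and the surrounding calls (`get_organization`, `get_team`, `get_user`) are assumed from PyGithub's public API rather than shown in this excerpt.

from github import Github

gh = Github("ACCESS_TOKEN")                # placeholder credentials
org = gh.get_organization("example-org")   # placeholder organization
team = org.get_team(123456)                # placeholder team id
user = gh.get_user("example-login")        # placeholder login

team.add_membership(user)                      # pre-patch behaviour: role is always "member"
team.add_membership(user, role="maintainer")   # post-patch: optional role, validated against 'member'/'maintainer'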
gh_patches_debug_366
|
rasdani/github-patches
|
git_diff
|
python-telegram-bot__python-telegram-bot-1485
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use UTC dates
https://github.com/python-telegram-bot/python-telegram-bot/blob/439790375ed8ed493c43e464aa8e2b60a77939db/telegram/utils/helpers.py#L78-L90
It should probably be using `tz=timezone.utc`. Python's `datetime` isn't the best, and `fromtimestamp` sets no `tz` information by default, so the timestamp is interpreted in local time, which is generally a bad idea.
</issue>
<code>
[start of telegram/utils/helpers.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2018
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains helper functions."""
20 from collections import defaultdict
21
22 try:
23 import ujson as json
24 except ImportError:
25 import json
26 from html import escape
27
28 import re
29 import signal
30 from datetime import datetime
31
32 # From https://stackoverflow.com/questions/2549939/get-signal-names-from-numbers-in-python
33 _signames = {v: k
34 for k, v in reversed(sorted(vars(signal).items()))
35 if k.startswith('SIG') and not k.startswith('SIG_')}
36
37
38 def get_signal_name(signum):
39 """Returns the signal name of the given signal number."""
40 return _signames[signum]
41
42
43 # Not using future.backports.datetime here as datetime value might be an input from the user,
44 # making every isinstace() call more delicate. So we just use our own compat layer.
45 if hasattr(datetime, 'timestamp'):
46 # Python 3.3+
47 def _timestamp(dt_obj):
48 return dt_obj.timestamp()
49 else:
50 # Python < 3.3 (incl 2.7)
51 from time import mktime
52
53 def _timestamp(dt_obj):
54 return mktime(dt_obj.timetuple())
55
56
57 def escape_markdown(text):
58 """Helper function to escape telegram markup symbols."""
59 escape_chars = '\*_`\['
60 return re.sub(r'([%s])' % escape_chars, r'\\\1', text)
61
62
63 def to_timestamp(dt_obj):
64 """
65 Args:
66 dt_obj (:class:`datetime.datetime`):
67
68 Returns:
69 int:
70
71 """
72 if not dt_obj:
73 return None
74
75 return int(_timestamp(dt_obj))
76
77
78 def from_timestamp(unixtime):
79 """
80 Args:
81 unixtime (int):
82
83 Returns:
84 datetime.datetime:
85
86 """
87 if not unixtime:
88 return None
89
90 return datetime.fromtimestamp(unixtime)
91
92
93 def mention_html(user_id, name):
94 """
95 Args:
96 user_id (:obj:`int`) The user's id which you want to mention.
97 name (:obj:`str`) The name the mention is showing.
98
99 Returns:
100 :obj:`str`: The inline mention for the user as html.
101 """
102 if isinstance(user_id, int):
103 return u'<a href="tg://user?id={}">{}</a>'.format(user_id, escape(name))
104
105
106 def mention_markdown(user_id, name):
107 """
108 Args:
109 user_id (:obj:`int`) The user's id which you want to mention.
110 name (:obj:`str`) The name the mention is showing.
111
112 Returns:
113 :obj:`str`: The inline mention for the user as markdown.
114 """
115 if isinstance(user_id, int):
116 return u'[{}](tg://user?id={})'.format(escape_markdown(name), user_id)
117
118
119 def effective_message_type(entity):
120 """
121 Extracts the type of message as a string identifier from a :class:`telegram.Message` or a
122 :class:`telegram.Update`.
123
124 Args:
125 entity (:obj:`Update` | :obj:`Message`) The ``update`` or ``message`` to extract from
126
127 Returns:
128 str: One of ``Message.MESSAGE_TYPES``
129
130 """
131
132 # Importing on file-level yields cyclic Import Errors
133 from telegram import Message
134 from telegram import Update
135
136 if isinstance(entity, Message):
137 message = entity
138 elif isinstance(entity, Update):
139 message = entity.effective_message
140 else:
141 raise TypeError("entity is not Message or Update (got: {})".format(type(entity)))
142
143 for i in Message.MESSAGE_TYPES:
144 if getattr(message, i, None):
145 return i
146
147 return None
148
149
150 def enocde_conversations_to_json(conversations):
151 """Helper method to encode a conversations dict (that uses tuples as keys) to a
152 JSON-serializable way. Use :attr:`_decode_conversations_from_json` to decode.
153
154 Args:
155 conversations (:obj:`dict`): The conversations dict to transofrm to JSON.
156
157 Returns:
158 :obj:`str`: The JSON-serialized conversations dict
159 """
160 tmp = {}
161 for handler, states in conversations.items():
162 tmp[handler] = {}
163 for key, state in states.items():
164 tmp[handler][json.dumps(key)] = state
165 return json.dumps(tmp)
166
167
168 def decode_conversations_from_json(json_string):
169 """Helper method to decode a conversations dict (that uses tuples as keys) from a
170 JSON-string created with :attr:`_encode_conversations_to_json`.
171
172 Args:
173 json_string (:obj:`str`): The conversations dict as JSON string.
174
175 Returns:
176 :obj:`dict`: The conversations dict after decoding
177 """
178 tmp = json.loads(json_string)
179 conversations = {}
180 for handler, states in tmp.items():
181 conversations[handler] = {}
182 for key, state in states.items():
183 conversations[handler][tuple(json.loads(key))] = state
184 return conversations
185
186
187 def decode_user_chat_data_from_json(data):
188 """Helper method to decode chat or user data (that uses ints as keys) from a
189 JSON-string.
190
191 Args:
192 data (:obj:`str`): The user/chat_data dict as JSON string.
193
194 Returns:
195 :obj:`dict`: The user/chat_data defaultdict after decoding
196 """
197
198 tmp = defaultdict(dict)
199 decoded_data = json.loads(data)
200 for user, data in decoded_data.items():
201 user = int(user)
202 tmp[user] = {}
203 for key, value in data.items():
204 try:
205 key = int(key)
206 except ValueError:
207 pass
208 tmp[user][key] = value
209 return tmp
210
[end of telegram/utils/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/telegram/utils/helpers.py b/telegram/utils/helpers.py
--- a/telegram/utils/helpers.py
+++ b/telegram/utils/helpers.py
@@ -87,7 +87,7 @@
if not unixtime:
return None
- return datetime.fromtimestamp(unixtime)
+ return datetime.utcfromtimestamp(unixtime)
def mention_html(user_id, name):
|
{"golden_diff": "diff --git a/telegram/utils/helpers.py b/telegram/utils/helpers.py\n--- a/telegram/utils/helpers.py\n+++ b/telegram/utils/helpers.py\n@@ -87,7 +87,7 @@\n if not unixtime:\n return None\n \n- return datetime.fromtimestamp(unixtime)\n+ return datetime.utcfromtimestamp(unixtime)\n \n \n def mention_html(user_id, name):\n", "issue": "Use UTC dates\nhttps://github.com/python-telegram-bot/python-telegram-bot/blob/439790375ed8ed493c43e464aa8e2b60a77939db/telegram/utils/helpers.py#L78-L90\r\n\r\nShould probably be using `tz=timezone.utc`. Python's `datetime` isn't the best, and `fromtimestamp` by default sets no `tz` information, which uses the local time, which in turn is generally a bad idea.\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains helper functions.\"\"\"\nfrom collections import defaultdict\n\ntry:\n import ujson as json\nexcept ImportError:\n import json\nfrom html import escape\n\nimport re\nimport signal\nfrom datetime import datetime\n\n# From https://stackoverflow.com/questions/2549939/get-signal-names-from-numbers-in-python\n_signames = {v: k\n for k, v in reversed(sorted(vars(signal).items()))\n if k.startswith('SIG') and not k.startswith('SIG_')}\n\n\ndef get_signal_name(signum):\n \"\"\"Returns the signal name of the given signal number.\"\"\"\n return _signames[signum]\n\n\n# Not using future.backports.datetime here as datetime value might be an input from the user,\n# making every isinstace() call more delicate. 
So we just use our own compat layer.\nif hasattr(datetime, 'timestamp'):\n # Python 3.3+\n def _timestamp(dt_obj):\n return dt_obj.timestamp()\nelse:\n # Python < 3.3 (incl 2.7)\n from time import mktime\n\n def _timestamp(dt_obj):\n return mktime(dt_obj.timetuple())\n\n\ndef escape_markdown(text):\n \"\"\"Helper function to escape telegram markup symbols.\"\"\"\n escape_chars = '\\*_`\\['\n return re.sub(r'([%s])' % escape_chars, r'\\\\\\1', text)\n\n\ndef to_timestamp(dt_obj):\n \"\"\"\n Args:\n dt_obj (:class:`datetime.datetime`):\n\n Returns:\n int:\n\n \"\"\"\n if not dt_obj:\n return None\n\n return int(_timestamp(dt_obj))\n\n\ndef from_timestamp(unixtime):\n \"\"\"\n Args:\n unixtime (int):\n\n Returns:\n datetime.datetime:\n\n \"\"\"\n if not unixtime:\n return None\n\n return datetime.fromtimestamp(unixtime)\n\n\ndef mention_html(user_id, name):\n \"\"\"\n Args:\n user_id (:obj:`int`) The user's id which you want to mention.\n name (:obj:`str`) The name the mention is showing.\n\n Returns:\n :obj:`str`: The inline mention for the user as html.\n \"\"\"\n if isinstance(user_id, int):\n return u'<a href=\"tg://user?id={}\">{}</a>'.format(user_id, escape(name))\n\n\ndef mention_markdown(user_id, name):\n \"\"\"\n Args:\n user_id (:obj:`int`) The user's id which you want to mention.\n name (:obj:`str`) The name the mention is showing.\n\n Returns:\n :obj:`str`: The inline mention for the user as markdown.\n \"\"\"\n if isinstance(user_id, int):\n return u'[{}](tg://user?id={})'.format(escape_markdown(name), user_id)\n\n\ndef effective_message_type(entity):\n \"\"\"\n Extracts the type of message as a string identifier from a :class:`telegram.Message` or a\n :class:`telegram.Update`.\n\n Args:\n entity (:obj:`Update` | :obj:`Message`) The ``update`` or ``message`` to extract from\n\n Returns:\n str: One of ``Message.MESSAGE_TYPES``\n\n \"\"\"\n\n # Importing on file-level yields cyclic Import Errors\n from telegram import Message\n from telegram import Update\n\n if isinstance(entity, Message):\n message = entity\n elif isinstance(entity, Update):\n message = entity.effective_message\n else:\n raise TypeError(\"entity is not Message or Update (got: {})\".format(type(entity)))\n\n for i in Message.MESSAGE_TYPES:\n if getattr(message, i, None):\n return i\n\n return None\n\n\ndef enocde_conversations_to_json(conversations):\n \"\"\"Helper method to encode a conversations dict (that uses tuples as keys) to a\n JSON-serializable way. 
Use :attr:`_decode_conversations_from_json` to decode.\n\n Args:\n conversations (:obj:`dict`): The conversations dict to transofrm to JSON.\n\n Returns:\n :obj:`str`: The JSON-serialized conversations dict\n \"\"\"\n tmp = {}\n for handler, states in conversations.items():\n tmp[handler] = {}\n for key, state in states.items():\n tmp[handler][json.dumps(key)] = state\n return json.dumps(tmp)\n\n\ndef decode_conversations_from_json(json_string):\n \"\"\"Helper method to decode a conversations dict (that uses tuples as keys) from a\n JSON-string created with :attr:`_encode_conversations_to_json`.\n\n Args:\n json_string (:obj:`str`): The conversations dict as JSON string.\n\n Returns:\n :obj:`dict`: The conversations dict after decoding\n \"\"\"\n tmp = json.loads(json_string)\n conversations = {}\n for handler, states in tmp.items():\n conversations[handler] = {}\n for key, state in states.items():\n conversations[handler][tuple(json.loads(key))] = state\n return conversations\n\n\ndef decode_user_chat_data_from_json(data):\n \"\"\"Helper method to decode chat or user data (that uses ints as keys) from a\n JSON-string.\n\n Args:\n data (:obj:`str`): The user/chat_data dict as JSON string.\n\n Returns:\n :obj:`dict`: The user/chat_data defaultdict after decoding\n \"\"\"\n\n tmp = defaultdict(dict)\n decoded_data = json.loads(data)\n for user, data in decoded_data.items():\n user = int(user)\n tmp[user] = {}\n for key, value in data.items():\n try:\n key = int(key)\n except ValueError:\n pass\n tmp[user][key] = value\n return tmp\n", "path": "telegram/utils/helpers.py"}]}
| 2,583 | 84 |
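To make the distinction in the row above concrete, here is a standalone comparison of the three conversions involved (Python 3 only, since `timezone` is used; this is not part of the library code):

from datetime import datetime, timezone

unixtime = 1500000000

datetime.fromtimestamp(unixtime)                    # naive, interpreted in the local time zone
datetime.utcfromtimestamp(unixtime)                 # naive but UTC-based, which is what the patch switches to
datetime.fromtimestamp(unixtime, tz=timezone.utc)   # timezone-aware UTC, the form the issue suggests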
gh_patches_debug_4402
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-1769
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JSON Cache for AssumeRoleProvider not truncating files
When we open a file for writing, if we're reusing the same file (same cache key) we don't truncate the file before writing. If the second JSON response is smaller it will result in extra data at the end of the JSON document.
This triggers a JSON parsing error, which the cache surfaces as a KeyError, so the cred provider thinks the file is not in the cache and retrieves a new set of temporary credentials.
</issue>
<code>
[start of awscli/customizations/assumerole.py]
1 import os
2 import json
3 import logging
4
5 from botocore.exceptions import ProfileNotFound
6
7 LOG = logging.getLogger(__name__)
8
9
10 def register_assume_role_provider(event_handlers):
11 event_handlers.register('session-initialized',
12 inject_assume_role_provider_cache,
13 unique_id='inject_assume_role_cred_provider_cache')
14
15
16 def inject_assume_role_provider_cache(session, **kwargs):
17 try:
18 cred_chain = session.get_component('credential_provider')
19 except ProfileNotFound:
20 # If a user has provided a profile that does not exist,
21 # trying to retrieve components/config on the session
22 # will raise ProfileNotFound. Sometimes this is invalid:
23 #
24 # "ec2 describe-instances --profile unknown"
25 #
26 # and sometimes this is perfectly valid:
27 #
28 # "configure set region us-west-2 --profile brand-new-profile"
29 #
30 # Because we can't know (and don't want to know) whether
31 # the customer is trying to do something valid, we just
32 # immediately return. If it's invalid something else
33 # up the stack will raise ProfileNotFound, otherwise
34 # the configure (and other) commands will work as expected.
35 LOG.debug("ProfileNotFound caught when trying to inject "
36 "assume-role cred provider cache. Not configuring "
37 "JSONFileCache for assume-role.")
38 return
39 provider = cred_chain.get_provider('assume-role')
40 provider.cache = JSONFileCache()
41
42
43 class JSONFileCache(object):
44 """JSON file cache.
45
46 This provides a dict like interface that stores JSON serializable
47 objects.
48
49 The objects are serialized to JSON and stored in a file. These
50 values can be retrieved at a later time.
51
52 """
53
54 CACHE_DIR = os.path.expanduser(os.path.join('~', '.aws', 'cli', 'cache'))
55
56 def __init__(self, working_dir=CACHE_DIR):
57 self._working_dir = working_dir
58
59 def __contains__(self, cache_key):
60 actual_key = self._convert_cache_key(cache_key)
61 return os.path.isfile(actual_key)
62
63 def __getitem__(self, cache_key):
64 """Retrieve value from a cache key."""
65 actual_key = self._convert_cache_key(cache_key)
66 try:
67 with open(actual_key) as f:
68 return json.load(f)
69 except (OSError, ValueError, IOError):
70 raise KeyError(cache_key)
71
72 def __setitem__(self, cache_key, value):
73 full_key = self._convert_cache_key(cache_key)
74 try:
75 file_content = json.dumps(value)
76 except (TypeError, ValueError):
77 raise ValueError("Value cannot be cached, must be "
78 "JSON serializable: %s" % value)
79 if not os.path.isdir(self._working_dir):
80 os.makedirs(self._working_dir)
81 with os.fdopen(os.open(full_key,
82 os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:
83 f.write(file_content)
84
85 def _convert_cache_key(self, cache_key):
86 full_path = os.path.join(self._working_dir, cache_key + '.json')
87 return full_path
88
[end of awscli/customizations/assumerole.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awscli/customizations/assumerole.py b/awscli/customizations/assumerole.py
--- a/awscli/customizations/assumerole.py
+++ b/awscli/customizations/assumerole.py
@@ -80,6 +80,7 @@
os.makedirs(self._working_dir)
with os.fdopen(os.open(full_key,
os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:
+ f.truncate()
f.write(file_content)
def _convert_cache_key(self, cache_key):
|
{"golden_diff": "diff --git a/awscli/customizations/assumerole.py b/awscli/customizations/assumerole.py\n--- a/awscli/customizations/assumerole.py\n+++ b/awscli/customizations/assumerole.py\n@@ -80,6 +80,7 @@\n os.makedirs(self._working_dir)\n with os.fdopen(os.open(full_key,\n os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:\n+ f.truncate()\n f.write(file_content)\n \n def _convert_cache_key(self, cache_key):\n", "issue": "JSON Cache for AssumeRoleProvider not truncating files\nWhen we open a file for writing, if we're reusing the same file (same cache key) we don't truncate the file before writing. If the second JSON response is smaller it will result in extra data at the end of the JSON document.\n\nThis will trigger a json parsing error, which raises a KeyError, which causes the cred provider to retrieve a new set of temporary credentials because it thinks the file is not in the cache.\n\n", "before_files": [{"content": "import os\nimport json\nimport logging\n\nfrom botocore.exceptions import ProfileNotFound\n\nLOG = logging.getLogger(__name__)\n\n\ndef register_assume_role_provider(event_handlers):\n event_handlers.register('session-initialized',\n inject_assume_role_provider_cache,\n unique_id='inject_assume_role_cred_provider_cache')\n\n\ndef inject_assume_role_provider_cache(session, **kwargs):\n try:\n cred_chain = session.get_component('credential_provider')\n except ProfileNotFound:\n # If a user has provided a profile that does not exist,\n # trying to retrieve components/config on the session\n # will raise ProfileNotFound. Sometimes this is invalid:\n #\n # \"ec2 describe-instances --profile unknown\"\n #\n # and sometimes this is perfectly valid:\n #\n # \"configure set region us-west-2 --profile brand-new-profile\"\n #\n # Because we can't know (and don't want to know) whether\n # the customer is trying to do something valid, we just\n # immediately return. If it's invalid something else\n # up the stack will raise ProfileNotFound, otherwise\n # the configure (and other) commands will work as expected.\n LOG.debug(\"ProfileNotFound caught when trying to inject \"\n \"assume-role cred provider cache. Not configuring \"\n \"JSONFileCache for assume-role.\")\n return\n provider = cred_chain.get_provider('assume-role')\n provider.cache = JSONFileCache()\n\n\nclass JSONFileCache(object):\n \"\"\"JSON file cache.\n\n This provides a dict like interface that stores JSON serializable\n objects.\n\n The objects are serialized to JSON and stored in a file. 
These\n values can be retrieved at a later time.\n\n \"\"\"\n\n CACHE_DIR = os.path.expanduser(os.path.join('~', '.aws', 'cli', 'cache'))\n\n def __init__(self, working_dir=CACHE_DIR):\n self._working_dir = working_dir\n\n def __contains__(self, cache_key):\n actual_key = self._convert_cache_key(cache_key)\n return os.path.isfile(actual_key)\n\n def __getitem__(self, cache_key):\n \"\"\"Retrieve value from a cache key.\"\"\"\n actual_key = self._convert_cache_key(cache_key)\n try:\n with open(actual_key) as f:\n return json.load(f)\n except (OSError, ValueError, IOError):\n raise KeyError(cache_key)\n\n def __setitem__(self, cache_key, value):\n full_key = self._convert_cache_key(cache_key)\n try:\n file_content = json.dumps(value)\n except (TypeError, ValueError):\n raise ValueError(\"Value cannot be cached, must be \"\n \"JSON serializable: %s\" % value)\n if not os.path.isdir(self._working_dir):\n os.makedirs(self._working_dir)\n with os.fdopen(os.open(full_key,\n os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:\n f.write(file_content)\n\n def _convert_cache_key(self, cache_key):\n full_path = os.path.join(self._working_dir, cache_key + '.json')\n return full_path\n", "path": "awscli/customizations/assumerole.py"}]}
| 1,481 | 124 |
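The failure mode described in the row above is easy to reproduce outside the CLI. The sketch below uses an invented cache path and payloads; only the write pattern mirrors `JSONFileCache.__setitem__`:

import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cachekey.json")

def put(value):
    with os.fdopen(os.open(path, os.O_WRONLY | os.O_CREAT, 0o600), "w") as f:
        # f.truncate()  # the one-line fix applied by the patch above
        f.write(json.dumps(value))

put({"Credentials": {"SessionToken": "x" * 64}})   # long first entry
put({"Credentials": {}})                           # shorter rewrite leaves stale trailing bytes

with open(path) as f:
    json.load(f)   # raises ValueError ("Extra data"), which the cache turns into a KeyError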
gh_patches_debug_11538
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-2279
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pyobject truncates code at comment
See https://github.com/sphinx-doc/sphinx/issues/2253
Example rendered docs:
http://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tour.html#handling-web-requests-and-responses
rst syntax:
https://github.com/Pylons/pyramid/blame/master/docs/quick_tour.rst#L119-L120
Source code:
https://github.com/Pylons/pyramid/blob/master/docs/quick_tour/requests/app.py#L7
When the bug is fixed and released, we will need to:
- revert the source code sample to use `#` style comments
- bump up the Sphinx version
</issue>
<code>
[start of setup.py]
1 ##############################################################################
2 #
3 # Copyright (c) 2008-2013 Agendaless Consulting and Contributors.
4 # All Rights Reserved.
5 #
6 # This software is subject to the provisions of the BSD-like license at
7 # http://www.repoze.org/LICENSE.txt. A copy of the license should accompany
8 # this distribution. THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL
9 # EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,
10 # THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND
11 # FITNESS FOR A PARTICULAR PURPOSE
12 #
13 ##############################################################################
14
15 import os
16 import sys
17
18 from setuptools import setup, find_packages
19
20 py_version = sys.version_info[:2]
21
22 PY3 = py_version[0] == 3
23
24 if PY3:
25 if py_version < (3, 2):
26 raise RuntimeError('On Python 3, Pyramid requires Python 3.2 or better')
27 else:
28 if py_version < (2, 6):
29 raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')
30
31 here = os.path.abspath(os.path.dirname(__file__))
32 try:
33 with open(os.path.join(here, 'README.rst')) as f:
34 README = f.read()
35 with open(os.path.join(here, 'CHANGES.txt')) as f:
36 CHANGES = f.read()
37 except IOError:
38 README = CHANGES = ''
39
40 install_requires=[
41 'setuptools',
42 'WebOb >= 1.3.1', # request.domain and CookieProfile
43 'repoze.lru >= 0.4', # py3 compat
44 'zope.interface >= 3.8.0', # has zope.interface.registry
45 'zope.deprecation >= 3.5.0', # py3 compat
46 'venusian >= 1.0a3', # ``ignore``
47 'translationstring >= 0.4', # py3 compat
48 'PasteDeploy >= 1.5.0', # py3 compat
49 ]
50
51 tests_require = [
52 'WebTest >= 1.3.1', # py3 compat
53 ]
54
55 if not PY3:
56 tests_require.append('zope.component>=3.11.0')
57
58 docs_extras = [
59 'Sphinx >= 1.3.4',
60 'docutils',
61 'repoze.sphinx.autointerface',
62 'pylons_sphinx_latesturl',
63 'pylons-sphinx-themes',
64 'sphinxcontrib-programoutput',
65 ]
66
67 testing_extras = tests_require + [
68 'nose',
69 'coverage',
70 'virtualenv', # for scaffolding tests
71 ]
72
73 setup(name='pyramid',
74 version='1.5.8',
75 description='The Pyramid Web Framework, a Pylons project',
76 long_description=README + '\n\n' + CHANGES,
77 classifiers=[
78 "Intended Audience :: Developers",
79 "Programming Language :: Python",
80 "Programming Language :: Python :: 2.6",
81 "Programming Language :: Python :: 2.7",
82 "Programming Language :: Python :: 3",
83 "Programming Language :: Python :: 3.2",
84 "Programming Language :: Python :: 3.3",
85 "Programming Language :: Python :: 3.4",
86 "Programming Language :: Python :: 3.5",
87 "Programming Language :: Python :: Implementation :: CPython",
88 "Programming Language :: Python :: Implementation :: PyPy",
89 "Framework :: Pyramid",
90 "Topic :: Internet :: WWW/HTTP",
91 "Topic :: Internet :: WWW/HTTP :: WSGI",
92 "License :: Repoze Public License",
93 ],
94 keywords='web wsgi pylons pyramid',
95 author="Chris McDonough, Agendaless Consulting",
96 author_email="[email protected]",
97 url="http://docs.pylonsproject.org/en/latest/docs/pyramid.html",
98 license="BSD-derived (http://www.repoze.org/LICENSE.txt)",
99 packages=find_packages(),
100 include_package_data=True,
101 zip_safe=False,
102 install_requires = install_requires,
103 extras_require = {
104 'testing':testing_extras,
105 'docs':docs_extras,
106 },
107 tests_require = tests_require,
108 test_suite="pyramid.tests",
109 entry_points = """\
110 [pyramid.scaffold]
111 starter=pyramid.scaffolds:StarterProjectTemplate
112 zodb=pyramid.scaffolds:ZODBProjectTemplate
113 alchemy=pyramid.scaffolds:AlchemyProjectTemplate
114 [console_scripts]
115 pcreate = pyramid.scripts.pcreate:main
116 pserve = pyramid.scripts.pserve:main
117 pshell = pyramid.scripts.pshell:main
118 proutes = pyramid.scripts.proutes:main
119 pviews = pyramid.scripts.pviews:main
120 ptweens = pyramid.scripts.ptweens:main
121 prequest = pyramid.scripts.prequest:main
122 pdistreport = pyramid.scripts.pdistreport:main
123 [paste.server_runner]
124 wsgiref = pyramid.scripts.pserve:wsgiref_server_runner
125 cherrypy = pyramid.scripts.pserve:cherrypy_server_runner
126 """
127 )
128
129
[end of setup.py]
[start of docs/quick_tour/requests/app.py]
1 from wsgiref.simple_server import make_server
2 from pyramid.config import Configurator
3 from pyramid.response import Response
4
5
6 def hello_world(request):
7 """ Some parameters from a request such as /?name=lisa """
8 url = request.url
9 name = request.params.get('name', 'No Name Provided')
10
11 body = 'URL %s with name: %s' % (url, name)
12 return Response(
13 content_type="text/plain",
14 body=body
15 )
16
17
18 if __name__ == '__main__':
19 config = Configurator()
20 config.add_route('hello', '/')
21 config.add_view(hello_world, route_name='hello')
22 app = config.make_wsgi_app()
23 server = make_server('0.0.0.0', 6543, app)
24 server.serve_forever()
25
[end of docs/quick_tour/requests/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/quick_tour/requests/app.py b/docs/quick_tour/requests/app.py
--- a/docs/quick_tour/requests/app.py
+++ b/docs/quick_tour/requests/app.py
@@ -4,7 +4,7 @@
def hello_world(request):
- """ Some parameters from a request such as /?name=lisa """
+ # Some parameters from a request such as /?name=lisa
url = request.url
name = request.params.get('name', 'No Name Provided')
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,7 +56,7 @@
tests_require.append('zope.component>=3.11.0')
docs_extras = [
- 'Sphinx >= 1.3.4',
+ 'Sphinx >= 1.3.5',
'docutils',
'repoze.sphinx.autointerface',
'pylons_sphinx_latesturl',
|
{"golden_diff": "diff --git a/docs/quick_tour/requests/app.py b/docs/quick_tour/requests/app.py\n--- a/docs/quick_tour/requests/app.py\n+++ b/docs/quick_tour/requests/app.py\n@@ -4,7 +4,7 @@\n \n \n def hello_world(request):\n- \"\"\" Some parameters from a request such as /?name=lisa \"\"\"\n+ # Some parameters from a request such as /?name=lisa\n url = request.url\n name = request.params.get('name', 'No Name Provided')\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -56,7 +56,7 @@\n tests_require.append('zope.component>=3.11.0')\n \n docs_extras = [\n- 'Sphinx >= 1.3.4',\n+ 'Sphinx >= 1.3.5',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n", "issue": "pyobject truncates code at comment\nSee https://github.com/sphinx-doc/sphinx/issues/2253\n\nExample rendered docs:\nhttp://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tour.html#handling-web-requests-and-responses\n\nrst syntax:\nhttps://github.com/Pylons/pyramid/blame/master/docs/quick_tour.rst#L119-L120\n\nSource code:\nhttps://github.com/Pylons/pyramid/blob/master/docs/quick_tour/requests/app.py#L7\n\nWhen the bug is fixed and released, we will need to:\n- revert the source code sample to use `#` style comments\n- bump up the Sphinx version\n\n", "before_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\npy_version = sys.version_info[:2]\n\nPY3 = py_version[0] == 3\n\nif PY3:\n if py_version < (3, 2):\n raise RuntimeError('On Python 3, Pyramid requires Python 3.2 or better')\nelse:\n if py_version < (2, 6):\n raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n with open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n with open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\nexcept IOError:\n README = CHANGES = ''\n\ninstall_requires=[\n 'setuptools',\n 'WebOb >= 1.3.1', # request.domain and CookieProfile\n 'repoze.lru >= 0.4', # py3 compat\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0a3', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n ]\n\nif not PY3:\n tests_require.append('zope.component>=3.11.0')\n\ndocs_extras = [\n 'Sphinx >= 1.3.4',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-programoutput',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n ]\n\nsetup(name='pyramid',\n version='1.5.8',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Intended Audience :: 
Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.2\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"http://docs.pylonsproject.org/en/latest/docs/pyramid.html\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires = install_requires,\n extras_require = {\n 'testing':testing_extras,\n 'docs':docs_extras,\n },\n tests_require = tests_require,\n test_suite=\"pyramid.tests\",\n entry_points = \"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n\n", "path": "setup.py"}, {"content": "from wsgiref.simple_server import make_server\nfrom pyramid.config import Configurator\nfrom pyramid.response import Response\n\n\ndef hello_world(request):\n \"\"\" Some parameters from a request such as /?name=lisa \"\"\"\n url = request.url\n name = request.params.get('name', 'No Name Provided')\n\n body = 'URL %s with name: %s' % (url, name)\n return Response(\n content_type=\"text/plain\",\n body=body\n )\n\n\nif __name__ == '__main__':\n config = Configurator()\n config.add_route('hello', '/')\n config.add_view(hello_world, route_name='hello')\n app = config.make_wsgi_app()\n server = make_server('0.0.0.0', 6543, app)\n server.serve_forever()\n", "path": "docs/quick_tour/requests/app.py"}]}
| 2,326 | 223 |
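For context on the row above: the docstring in `hello_world` was a temporary workaround, because with the affected Sphinx releases a `literalinclude` using `:pyobject:` cut the sample off at a `#` comment (per the linked Sphinx issue). The sketch below simply contrasts the two variants and is illustrative only, not an additional change:

def hello_world(request):
    """ Some parameters from a request such as /?name=lisa """  # workaround shipped while the bug was open
    ...

def hello_world(request):
    # Some parameters from a request such as /?name=lisa
    ...  # restored once the docs build on Sphinx >= 1.3.5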
gh_patches_debug_29757
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-1646
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ChallengeList page should use pagination
This takes a while to load with many challenges and the Automated Evaluation boolean no longer works.
</issue>
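One note before the view code reproduced below: the existing implementation builds two `Paginator` objects by hand and merges the pages. For comparison only, here is a minimal sketch of Django's built-in `ListView` pagination; it assumes a single combined queryset is acceptable, invents the class name, and is not the project's actual fix (which is not part of this excerpt).

from django.views.generic import ListView

from grandchallenge.challenges.models import Challenge


class PaginatedChallengeList(ListView):
    # Hypothetical name; ListView then exposes page_obj/paginator/is_paginated
    # to the template without the manual bookkeeping used in the view below.
    model = Challenge
    paginate_by = 40
    queryset = Challenge.objects.filter(hidden=False).order_by("-created")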
<code>
[start of app/grandchallenge/challenges/views.py]
1 from django.contrib import messages
2 from django.contrib.messages.views import SuccessMessageMixin
3 from django.core.paginator import EmptyPage, Paginator
4 from django.db.models import Q
5 from django.utils.html import format_html
6 from django.views.generic import (
7 CreateView,
8 DeleteView,
9 ListView,
10 TemplateView,
11 UpdateView,
12 )
13
14 from grandchallenge.challenges.filters import ChallengeFilter
15 from grandchallenge.challenges.forms import (
16 ChallengeCreateForm,
17 ChallengeUpdateForm,
18 ExternalChallengeUpdateForm,
19 )
20 from grandchallenge.challenges.models import (
21 Challenge,
22 ExternalChallenge,
23 )
24 from grandchallenge.core.permissions.mixins import (
25 UserIsChallengeAdminMixin,
26 UserIsNotAnonMixin,
27 UserIsStaffMixin,
28 )
29 from grandchallenge.core.templatetags.random_encode import random_encode
30 from grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin
31 from grandchallenge.subdomains.utils import reverse
32
33
34 class ChallengeCreate(UserIsNotAnonMixin, SuccessMessageMixin, CreateView):
35 model = Challenge
36 form_class = ChallengeCreateForm
37 success_message = "Challenge successfully created"
38
39 def form_valid(self, form):
40 form.instance.creator = self.request.user
41 return super().form_valid(form)
42
43
44 class ChallengeList(TemplateView):
45 paginate_by = 40
46 template_name = "challenges/challenge_list.html"
47
48 @property
49 def _current_page(self):
50 return int(self.request.GET.get("page", 1))
51
52 @property
53 def _filters_applied(self):
54 return any(k for k in self.request.GET if k.lower() != "page")
55
56 def _get_page(self):
57 self.int_filter = ChallengeFilter(
58 self.request.GET,
59 Challenge.objects.filter(hidden=False)
60 .prefetch_related("phase_set", "publications")
61 .order_by("-created"),
62 )
63 self.ext_filter = ChallengeFilter(
64 self.request.GET,
65 ExternalChallenge.objects.filter(hidden=False)
66 .prefetch_related("publications")
67 .order_by("-created"),
68 )
69
70 int_paginator = Paginator(self.int_filter.qs, self.paginate_by // 2)
71 ext_paginator = Paginator(self.ext_filter.qs, self.paginate_by // 2)
72
73 num_pages = max(int_paginator.num_pages, ext_paginator.num_pages)
74 num_results = int_paginator.count + ext_paginator.count
75
76 try:
77 int_page = int_paginator.page(self._current_page)
78 except EmptyPage:
79 int_page = []
80
81 try:
82 ext_page = ext_paginator.page(self._current_page)
83 except EmptyPage:
84 ext_page = []
85
86 return [*int_page, *ext_page], num_pages, num_results
87
88 def get_context_data(self, *, object_list=None, **kwargs):
89 context = super().get_context_data(**kwargs)
90
91 page_obj, num_pages, num_results = self._get_page()
92
93 context.update(
94 {
95 "int_filter": self.int_filter,
96 "filters_applied": self._filters_applied,
97 "page_obj": page_obj,
98 "num_pages": num_pages,
99 "num_results": num_results,
100 "current_page": self._current_page,
101 "next_page": self._current_page + 1,
102 "previous_page": self._current_page - 1,
103 "jumbotron_title": "Challenges",
104 "jumbotron_description": format_html(
105 (
106 "Here is an overview of all challenges that have been "
107 "organised within the area of medical image analysis "
108 "that we are aware of. Please <a href='{}'>contact "
109 "us</a> if you want to advertise your challenge or "
110 "know of any study that would fit in this overview."
111 ),
112 random_encode("mailto:[email protected]"),
113 ),
114 }
115 )
116
117 return context
118
119
120 class UsersChallengeList(UserIsNotAnonMixin, ListView):
121 model = Challenge
122 template_name = "challenges/challenge_users_list.html"
123
124 def get_queryset(self):
125 queryset = super().get_queryset()
126 if not self.request.user.is_superuser:
127 queryset = queryset.filter(
128 Q(participants_group__in=self.request.user.groups.all())
129 | Q(admins_group__in=self.request.user.groups.all())
130 )
131 return queryset
132
133
134 class ChallengeUpdate(
135 UserIsChallengeAdminMixin,
136 SuccessMessageMixin,
137 ChallengeSubdomainObjectMixin,
138 UpdateView,
139 ):
140 model = Challenge
141 slug_field = "short_name__iexact"
142 slug_url_kwarg = "challenge_short_name"
143 form_class = ChallengeUpdateForm
144 success_message = "Challenge successfully updated"
145 template_name_suffix = "_update"
146
147
148 class ExternalChallengeCreate(
149 UserIsStaffMixin, SuccessMessageMixin, CreateView
150 ):
151 model = ExternalChallenge
152 form_class = ExternalChallengeUpdateForm
153 success_message = (
154 "Your challenge has been successfully submitted. "
155 "An admin will review your challenge before it is published."
156 )
157
158 def form_valid(self, form):
159 form.instance.creator = self.request.user
160 return super().form_valid(form)
161
162 def get_success_url(self):
163 return reverse("challenges:list")
164
165
166 class ExternalChallengeUpdate(
167 UserIsStaffMixin, SuccessMessageMixin, UpdateView
168 ):
169 model = ExternalChallenge
170 slug_field = "short_name__iexact"
171 slug_url_kwarg = "short_name"
172 form_class = ExternalChallengeUpdateForm
173 template_name_suffix = "_update"
174 success_message = "Challenge updated"
175
176 def get_success_url(self):
177 return reverse("challenges:list")
178
179
180 class ExternalChallengeList(UserIsStaffMixin, ListView):
181 model = ExternalChallenge
182
183
184 class ExternalChallengeDelete(UserIsStaffMixin, DeleteView):
185 model = ExternalChallenge
186 slug_field = "short_name__iexact"
187 slug_url_kwarg = "short_name"
188 success_message = "External challenge was successfully deleted"
189
190 def get_success_url(self):
191 return reverse("challenges:external-list")
192
193 def delete(self, request, *args, **kwargs):
194 messages.success(self.request, self.success_message)
195 return super().delete(request, *args, **kwargs)
196
[end of app/grandchallenge/challenges/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/grandchallenge/challenges/views.py b/app/grandchallenge/challenges/views.py
--- a/app/grandchallenge/challenges/views.py
+++ b/app/grandchallenge/challenges/views.py
@@ -27,6 +27,7 @@
UserIsStaffMixin,
)
from grandchallenge.core.templatetags.random_encode import random_encode
+from grandchallenge.datatables.views import Column, PaginatedTableListView
from grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin
from grandchallenge.subdomains.utils import reverse
@@ -117,12 +118,33 @@
return context
-class UsersChallengeList(UserIsNotAnonMixin, ListView):
+class UsersChallengeList(UserIsNotAnonMixin, PaginatedTableListView):
model = Challenge
template_name = "challenges/challenge_users_list.html"
+ row_template = "challenges/challenge_users_row.html"
+ search_fields = [
+ "title",
+ "short_name",
+ "description",
+ ]
+ columns = [
+ Column(title="Name", sort_field="short_name"),
+ Column(title="Created", sort_field="created"),
+ Column(title="Admins", sort_field="created"),
+ Column(title="Description", sort_field="description"),
+ Column(title="Automated Evaluation", sort_field="use_evaluation"),
+ ]
+ default_sort_column = 1
def get_queryset(self):
- queryset = super().get_queryset()
+ queryset = (
+ super()
+ .get_queryset()
+ .prefetch_related(
+ "admins_group__user_set__user_profile",
+ "admins_group__user_set__verification",
+ )
+ )
if not self.request.user.is_superuser:
queryset = queryset.filter(
Q(participants_group__in=self.request.user.groups.all())
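
For readers unfamiliar with `PaginatedTableListView`: the important change is that the list is no longer rendered in one go. Rows are fetched page by page, and the admin users are prefetched to avoid per-row queries. Judging by the import path, the class appears to be grand-challenge's own DataTables-style view, so its internals are assumed here; the core pattern it builds on is just Django's `Paginator`. A minimal sketch follows (the helper name and the 25-per-page default are illustrative, not part of the patch):

```
# Sketch of server-side pagination with Django's Paginator.
from django.core.paginator import Paginator

def page_of_challenges(queryset, page_number, per_page=25):
    paginator = Paginator(queryset.order_by("-created"), per_page)
    page = paginator.get_page(page_number)  # clamps out-of-range page numbers
    return {
        "rows": list(page.object_list),
        "num_pages": paginator.num_pages,
        "total": paginator.count,
    }
```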
|
{"golden_diff": "diff --git a/app/grandchallenge/challenges/views.py b/app/grandchallenge/challenges/views.py\n--- a/app/grandchallenge/challenges/views.py\n+++ b/app/grandchallenge/challenges/views.py\n@@ -27,6 +27,7 @@\n UserIsStaffMixin,\n )\n from grandchallenge.core.templatetags.random_encode import random_encode\n+from grandchallenge.datatables.views import Column, PaginatedTableListView\n from grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin\n from grandchallenge.subdomains.utils import reverse\n \n@@ -117,12 +118,33 @@\n return context\n \n \n-class UsersChallengeList(UserIsNotAnonMixin, ListView):\n+class UsersChallengeList(UserIsNotAnonMixin, PaginatedTableListView):\n model = Challenge\n template_name = \"challenges/challenge_users_list.html\"\n+ row_template = \"challenges/challenge_users_row.html\"\n+ search_fields = [\n+ \"title\",\n+ \"short_name\",\n+ \"description\",\n+ ]\n+ columns = [\n+ Column(title=\"Name\", sort_field=\"short_name\"),\n+ Column(title=\"Created\", sort_field=\"created\"),\n+ Column(title=\"Admins\", sort_field=\"created\"),\n+ Column(title=\"Description\", sort_field=\"description\"),\n+ Column(title=\"Automated Evaluation\", sort_field=\"use_evaluation\"),\n+ ]\n+ default_sort_column = 1\n \n def get_queryset(self):\n- queryset = super().get_queryset()\n+ queryset = (\n+ super()\n+ .get_queryset()\n+ .prefetch_related(\n+ \"admins_group__user_set__user_profile\",\n+ \"admins_group__user_set__verification\",\n+ )\n+ )\n if not self.request.user.is_superuser:\n queryset = queryset.filter(\n Q(participants_group__in=self.request.user.groups.all())\n", "issue": "ChallengeList page should use pagination\nThis takes a while to load with many challenges and the Automated Evaluation boolean no longer works.\r\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.paginator import EmptyPage, Paginator\nfrom django.db.models import Q\nfrom django.utils.html import format_html\nfrom django.views.generic import (\n CreateView,\n DeleteView,\n ListView,\n TemplateView,\n UpdateView,\n)\n\nfrom grandchallenge.challenges.filters import ChallengeFilter\nfrom grandchallenge.challenges.forms import (\n ChallengeCreateForm,\n ChallengeUpdateForm,\n ExternalChallengeUpdateForm,\n)\nfrom grandchallenge.challenges.models import (\n Challenge,\n ExternalChallenge,\n)\nfrom grandchallenge.core.permissions.mixins import (\n UserIsChallengeAdminMixin,\n UserIsNotAnonMixin,\n UserIsStaffMixin,\n)\nfrom grandchallenge.core.templatetags.random_encode import random_encode\nfrom grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin\nfrom grandchallenge.subdomains.utils import reverse\n\n\nclass ChallengeCreate(UserIsNotAnonMixin, SuccessMessageMixin, CreateView):\n model = Challenge\n form_class = ChallengeCreateForm\n success_message = \"Challenge successfully created\"\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n return super().form_valid(form)\n\n\nclass ChallengeList(TemplateView):\n paginate_by = 40\n template_name = \"challenges/challenge_list.html\"\n\n @property\n def _current_page(self):\n return int(self.request.GET.get(\"page\", 1))\n\n @property\n def _filters_applied(self):\n return any(k for k in self.request.GET if k.lower() != \"page\")\n\n def _get_page(self):\n self.int_filter = ChallengeFilter(\n self.request.GET,\n Challenge.objects.filter(hidden=False)\n .prefetch_related(\"phase_set\", 
\"publications\")\n .order_by(\"-created\"),\n )\n self.ext_filter = ChallengeFilter(\n self.request.GET,\n ExternalChallenge.objects.filter(hidden=False)\n .prefetch_related(\"publications\")\n .order_by(\"-created\"),\n )\n\n int_paginator = Paginator(self.int_filter.qs, self.paginate_by // 2)\n ext_paginator = Paginator(self.ext_filter.qs, self.paginate_by // 2)\n\n num_pages = max(int_paginator.num_pages, ext_paginator.num_pages)\n num_results = int_paginator.count + ext_paginator.count\n\n try:\n int_page = int_paginator.page(self._current_page)\n except EmptyPage:\n int_page = []\n\n try:\n ext_page = ext_paginator.page(self._current_page)\n except EmptyPage:\n ext_page = []\n\n return [*int_page, *ext_page], num_pages, num_results\n\n def get_context_data(self, *, object_list=None, **kwargs):\n context = super().get_context_data(**kwargs)\n\n page_obj, num_pages, num_results = self._get_page()\n\n context.update(\n {\n \"int_filter\": self.int_filter,\n \"filters_applied\": self._filters_applied,\n \"page_obj\": page_obj,\n \"num_pages\": num_pages,\n \"num_results\": num_results,\n \"current_page\": self._current_page,\n \"next_page\": self._current_page + 1,\n \"previous_page\": self._current_page - 1,\n \"jumbotron_title\": \"Challenges\",\n \"jumbotron_description\": format_html(\n (\n \"Here is an overview of all challenges that have been \"\n \"organised within the area of medical image analysis \"\n \"that we are aware of. Please <a href='{}'>contact \"\n \"us</a> if you want to advertise your challenge or \"\n \"know of any study that would fit in this overview.\"\n ),\n random_encode(\"mailto:[email protected]\"),\n ),\n }\n )\n\n return context\n\n\nclass UsersChallengeList(UserIsNotAnonMixin, ListView):\n model = Challenge\n template_name = \"challenges/challenge_users_list.html\"\n\n def get_queryset(self):\n queryset = super().get_queryset()\n if not self.request.user.is_superuser:\n queryset = queryset.filter(\n Q(participants_group__in=self.request.user.groups.all())\n | Q(admins_group__in=self.request.user.groups.all())\n )\n return queryset\n\n\nclass ChallengeUpdate(\n UserIsChallengeAdminMixin,\n SuccessMessageMixin,\n ChallengeSubdomainObjectMixin,\n UpdateView,\n):\n model = Challenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"challenge_short_name\"\n form_class = ChallengeUpdateForm\n success_message = \"Challenge successfully updated\"\n template_name_suffix = \"_update\"\n\n\nclass ExternalChallengeCreate(\n UserIsStaffMixin, SuccessMessageMixin, CreateView\n):\n model = ExternalChallenge\n form_class = ExternalChallengeUpdateForm\n success_message = (\n \"Your challenge has been successfully submitted. 
\"\n \"An admin will review your challenge before it is published.\"\n )\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n return super().form_valid(form)\n\n def get_success_url(self):\n return reverse(\"challenges:list\")\n\n\nclass ExternalChallengeUpdate(\n UserIsStaffMixin, SuccessMessageMixin, UpdateView\n):\n model = ExternalChallenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"short_name\"\n form_class = ExternalChallengeUpdateForm\n template_name_suffix = \"_update\"\n success_message = \"Challenge updated\"\n\n def get_success_url(self):\n return reverse(\"challenges:list\")\n\n\nclass ExternalChallengeList(UserIsStaffMixin, ListView):\n model = ExternalChallenge\n\n\nclass ExternalChallengeDelete(UserIsStaffMixin, DeleteView):\n model = ExternalChallenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"short_name\"\n success_message = \"External challenge was successfully deleted\"\n\n def get_success_url(self):\n return reverse(\"challenges:external-list\")\n\n def delete(self, request, *args, **kwargs):\n messages.success(self.request, self.success_message)\n return super().delete(request, *args, **kwargs)\n", "path": "app/grandchallenge/challenges/views.py"}]}
| 2,368 | 399 |
gh_patches_debug_49851
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-15890
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OpenIDC SSO through apache stopped working after update to 3.7.6
### Deployment Type
Self-hosted
### NetBox Version
v3.7.6
### Python Version
3.9
### Steps to Reproduce
This is a longstanding NetBox instance. It runs under gunicorn, proxied through apache which is configured to use mod_auth_openid for authentication.
NetBox's configuration includes:
REMOTE_AUTH_ENABLED = True
REMOTE_AUTH_BACKEND = 'netbox.authentication.RemoteUserBackend'
REMOTE_AUTH_HEADER = 'HTTP_OIDC_CLAIM_PREFERRED_USERNAME'
REMOTE_AUTH_AUTO_CREATE_USER = True
This was working fine until the update to 3.7.6 following our usual procedure:
Pull and checkout v3.7.6.
Run upgrade.sh
Restart NetBox gunicorn service, netbox-rq and apache
Since the upgrade, NetBox has presented a login box instead of logging in as the REMOTE_AUTH_HEADER user. Using tcpdump, I can see the "OIDC_CLAIM_preferred_username" header is being sent to gunicorn. Other instances using the same OpenIDC configuration are working.
### Expected Behavior
REMOTE_AUTH login using OpenIDC credentials.
### Observed Behavior
The web frontend prompts for username and password.
</issue>
<code>
[start of contrib/gunicorn.py]
1 # The IP address (typically localhost) and port that the NetBox WSGI process should listen on
2 bind = '127.0.0.1:8001'
3
4 # Number of gunicorn workers to spawn. This should typically be 2n+1, where
5 # n is the number of CPU cores present.
6 workers = 5
7
8 # Number of threads per worker process
9 threads = 3
10
11 # Timeout (in seconds) for a request to complete
12 timeout = 120
13
14 # The maximum number of requests a worker can handle before being respawned
15 max_requests = 5000
16 max_requests_jitter = 500
17
[end of contrib/gunicorn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/contrib/gunicorn.py b/contrib/gunicorn.py
--- a/contrib/gunicorn.py
+++ b/contrib/gunicorn.py
@@ -14,3 +14,7 @@
# The maximum number of requests a worker can handle before being respawned
max_requests = 5000
max_requests_jitter = 500
+
+# Uncomment this line to accept HTTP headers containing underscores, e.g. for remote
+# authentication support. See https://docs.gunicorn.org/en/stable/settings.html#header-map
+# header-map = 'dangerous'
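
The regression lines up with newer gunicorn releases (22.0 and later) dropping request headers whose names contain underscores, so `OIDC_CLAIM_preferred_username` never reaches Django as `HTTP_OIDC_CLAIM_PREFERRED_USERNAME`. A minimal sketch of the opt-in for a deployment like the reporter's is below; note that in a Python config file the setting is spelled `header_map`, while `--header-map` is the command-line flag, and both the version cutoff and the spelling are assumptions that should be checked against the gunicorn docs for the installed release:

```
# contrib/gunicorn.py (sketch)
bind = '127.0.0.1:8001'
workers = 5
threads = 3
timeout = 120
max_requests = 5000
max_requests_jitter = 500

# Accept headers containing underscores so the OIDC claim header survives the
# proxy hop. 'dangerous' disables the protection entirely, so rely on it only
# when Apache is the sole source of these headers.
header_map = 'dangerous'
```

An alternative that avoids the setting altogether is to have Apache forward the claim under a dash-separated header name and point `REMOTE_AUTH_HEADER` at that instead.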
|
{"golden_diff": "diff --git a/contrib/gunicorn.py b/contrib/gunicorn.py\n--- a/contrib/gunicorn.py\n+++ b/contrib/gunicorn.py\n@@ -14,3 +14,7 @@\n # The maximum number of requests a worker can handle before being respawned\n max_requests = 5000\n max_requests_jitter = 500\n+\n+# Uncomment this line to accept HTTP headers containing underscores, e.g. for remote\n+# authentication support. See https://docs.gunicorn.org/en/stable/settings.html#header-map\n+# header-map = 'dangerous'\n", "issue": "OpenIDC SSO through apache stopped working after update to 3.7.6\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.6\n\n### Python Version\n\n3.9\n\n### Steps to Reproduce\n\nThis is a longstanding NetBox instance. It runs under gunicorn, proxied through apache which is configured to use mod_auth_openid for authentication. \r\n\r\nNetBox's configuration includes:\r\nREMOTE_AUTH_ENABLED = True\r\nREMOTE_AUTH_BACKEND = 'netbox.authentication.RemoteUserBackend'\r\nREMOTE_AUTH_HEADER = 'HTTP_OIDC_CLAIM_PREFERRED_USERNAME'\r\nREMOTE_AUTH_AUTO_CREATE_USER = True\r\n\r\nThis was working fine until the update to 3.7.6 following our usual procedure:\r\n\r\nPull and checkout v3.7.6.\r\n\r\nRun upgrade.sh\r\n\r\nRestart NetBox gunicorn service, netbox-rq and apache\r\n\r\nSince the upgrade, NetBox has presented a login box instead of logging in as the REMOTE_AUTH_HEADER user. Using tcpdump, I can see the \"OIDC_CLAIM_preferred_username\" header is being sent to gunicorn. Other instances using the same OpenIDC configuration are working.\r\n\n\n### Expected Behavior\n\nREMOTE_AUTH login using OpenIDC credentials.\n\n### Observed Behavior\n\nThe web frontend prompts for username and password.\n", "before_files": [{"content": "# The IP address (typically localhost) and port that the NetBox WSGI process should listen on\nbind = '127.0.0.1:8001'\n\n# Number of gunicorn workers to spawn. This should typically be 2n+1, where\n# n is the number of CPU cores present.\nworkers = 5\n\n# Number of threads per worker process\nthreads = 3\n\n# Timeout (in seconds) for a request to complete\ntimeout = 120\n\n# The maximum number of requests a worker can handle before being respawned\nmax_requests = 5000\nmax_requests_jitter = 500\n", "path": "contrib/gunicorn.py"}]}
| 970 | 124 |
gh_patches_debug_30377
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-2845
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Using 'Have-I-been-pwned' policy breaks flows in Authentik 2022.4.1
**Describe the bug**
Using a 'Have-I-been-pwned' policy on a password prompt within a flow breaks the flow.
**To Reproduce**
Steps to reproduce the behavior:
1. Use Authentik 2022.3.3
2. Use all the default settings/flows, so a clean install
3. Add a have-i-been-pwned policy to the default-password-change flow on the default-password-change-prompt stage.
4. This stage binding has the following settings:
- _Evaluate on plan: True_
- _Re-evaluate policies: False_
- _Invalid response action: RETRY returns the error message and a similar challenge to the executor._
- _Policy engine mode: ALL, all policies must match to include this stage access._
5. Go to the Flow Overview and Execute flow with current user, see that the have-i-been pwned policy works correctly.
6. Use Authentik 2022.4.1
7. Repeat steps 2 - 5 described above
8. See that you will receive an error message 'Password not set in context'.
**Expected behavior**
The password should be checked, and the flow should not crash with the error 'Password not set in context'.
**Version and Deployment (please complete the following information):**
- authentik version: 2022.4.1
- Deployment: tested both Docker & K8S
**Additional context**
I repeated these steps multiple times and kept getting the same issue, so I think it is safe to assume that this is a bug introduced in the update from version 2022.3.3 to version 2022.4.1.
</issue>
<code>
[start of authentik/policies/hibp/models.py]
1 """authentik HIBP Models"""
2 from hashlib import sha1
3
4 from django.db import models
5 from django.utils.translation import gettext as _
6 from rest_framework.serializers import BaseSerializer
7 from structlog.stdlib import get_logger
8
9 from authentik.lib.utils.http import get_http_session
10 from authentik.policies.models import Policy, PolicyResult
11 from authentik.policies.types import PolicyRequest
12
13 LOGGER = get_logger()
14
15
16 class HaveIBeenPwendPolicy(Policy):
17 """Check if password is on HaveIBeenPwned's list by uploading the first
18 5 characters of the SHA1 Hash."""
19
20 password_field = models.TextField(
21 default="password",
22 help_text=_("Field key to check, field keys defined in Prompt stages are available."),
23 )
24
25 allowed_count = models.IntegerField(default=0)
26
27 @property
28 def serializer(self) -> BaseSerializer:
29 from authentik.policies.hibp.api import HaveIBeenPwendPolicySerializer
30
31 return HaveIBeenPwendPolicySerializer
32
33 @property
34 def component(self) -> str:
35 return "ak-policy-hibp-form"
36
37 def passes(self, request: PolicyRequest) -> PolicyResult:
38 """Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5
39 characters of Password in request and checks if full hash is in response. Returns 0
40 if Password is not in result otherwise the count of how many times it was used."""
41 if self.password_field not in request.context:
42 LOGGER.warning(
43 "Password field not set in Policy Request",
44 field=self.password_field,
45 fields=request.context.keys(),
46 )
47 return PolicyResult(False, _("Password not set in context"))
48 password = str(request.context[self.password_field])
49
50 pw_hash = sha1(password.encode("utf-8")).hexdigest() # nosec
51 url = f"https://api.pwnedpasswords.com/range/{pw_hash[:5]}"
52 result = get_http_session().get(url).text
53 final_count = 0
54 for line in result.split("\r\n"):
55 full_hash, count = line.split(":")
56 if pw_hash[5:] == full_hash.lower():
57 final_count = int(count)
58 LOGGER.debug("got hibp result", count=final_count, hash=pw_hash[:5])
59 if final_count > self.allowed_count:
60 message = _("Password exists on %(count)d online lists." % {"count": final_count})
61 return PolicyResult(False, message)
62 return PolicyResult(True)
63
64 class Meta:
65
66 verbose_name = _("Have I Been Pwned Policy")
67 verbose_name_plural = _("Have I Been Pwned Policies")
68
[end of authentik/policies/hibp/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/policies/hibp/models.py b/authentik/policies/hibp/models.py
--- a/authentik/policies/hibp/models.py
+++ b/authentik/policies/hibp/models.py
@@ -9,6 +9,7 @@
from authentik.lib.utils.http import get_http_session
from authentik.policies.models import Policy, PolicyResult
from authentik.policies.types import PolicyRequest
+from authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT
LOGGER = get_logger()
@@ -38,14 +39,17 @@
"""Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5
characters of Password in request and checks if full hash is in response. Returns 0
if Password is not in result otherwise the count of how many times it was used."""
- if self.password_field not in request.context:
+ password = request.context.get(PLAN_CONTEXT_PROMPT, {}).get(
+ self.password_field, request.context.get(self.password_field)
+ )
+ if not password:
LOGGER.warning(
"Password field not set in Policy Request",
field=self.password_field,
fields=request.context.keys(),
)
return PolicyResult(False, _("Password not set in context"))
- password = str(request.context[self.password_field])
+ password = str(password)
pw_hash = sha1(password.encode("utf-8")).hexdigest() # nosec
url = f"https://api.pwnedpasswords.com/range/{pw_hash[:5]}"
|
{"golden_diff": "diff --git a/authentik/policies/hibp/models.py b/authentik/policies/hibp/models.py\n--- a/authentik/policies/hibp/models.py\n+++ b/authentik/policies/hibp/models.py\n@@ -9,6 +9,7 @@\n from authentik.lib.utils.http import get_http_session\n from authentik.policies.models import Policy, PolicyResult\n from authentik.policies.types import PolicyRequest\n+from authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT\n \n LOGGER = get_logger()\n \n@@ -38,14 +39,17 @@\n \"\"\"Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5\n characters of Password in request and checks if full hash is in response. Returns 0\n if Password is not in result otherwise the count of how many times it was used.\"\"\"\n- if self.password_field not in request.context:\n+ password = request.context.get(PLAN_CONTEXT_PROMPT, {}).get(\n+ self.password_field, request.context.get(self.password_field)\n+ )\n+ if not password:\n LOGGER.warning(\n \"Password field not set in Policy Request\",\n field=self.password_field,\n fields=request.context.keys(),\n )\n return PolicyResult(False, _(\"Password not set in context\"))\n- password = str(request.context[self.password_field])\n+ password = str(password)\n \n pw_hash = sha1(password.encode(\"utf-8\")).hexdigest() # nosec\n url = f\"https://api.pwnedpasswords.com/range/{pw_hash[:5]}\"\n", "issue": "Using 'Have-I-been-pwned' policy breaks flows in Authentik 2022.4.1\n**Describe the bug**\r\nUsing a 'Have-I-been-pwned' policy on a password prompt within a flow breaks the flow.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Use Authentik 2022.3.3\r\n2. Use all the default settings/flows, so a clean install\r\n3. Add a have-i-been-pwned policy to the default-password-change flow on the default-password-change-prompt stage.\r\n4. This stage binding has the following settings:\r\n- _Evaluate on plan: True_\r\n- _Re-evaluate policies: False_\r\n- _Invalid respones action: RETRY returns the error message and a similar challenge to the executor._\r\n- _Policy engine mode: ALL, all policies must match to include this stage access._\r\n5. Go to the Flow Overview and Execute flow with current user, see that the have-i-been pwned policy works correctly.\r\n6. Use Authentik 2022.4.1\r\n7. Repeat steps 2 - 5 described above\r\n8. See that you will receive an error message 'Password not set in context'.\r\n\r\n**Expected behavior**\r\nThe password should be checked, and the flow should not crash with the error 'Password not set in context'.\r\n\r\n**Version and Deployment (please complete the following information):**\r\n - authentik version: 2022.4.1\r\n - Deployment: tested both Docker & K8S\r\n\r\n**Additional context**\r\nI repeated these steps multiple times and I keep getting the same issue. 
Therefore I think it is safe to assume that this is a bug introduced in the update from version 2022.3.3 to version 2022.4.1\r\n\n", "before_files": [{"content": "\"\"\"authentik HIBP Models\"\"\"\nfrom hashlib import sha1\n\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom rest_framework.serializers import BaseSerializer\nfrom structlog.stdlib import get_logger\n\nfrom authentik.lib.utils.http import get_http_session\nfrom authentik.policies.models import Policy, PolicyResult\nfrom authentik.policies.types import PolicyRequest\n\nLOGGER = get_logger()\n\n\nclass HaveIBeenPwendPolicy(Policy):\n \"\"\"Check if password is on HaveIBeenPwned's list by uploading the first\n 5 characters of the SHA1 Hash.\"\"\"\n\n password_field = models.TextField(\n default=\"password\",\n help_text=_(\"Field key to check, field keys defined in Prompt stages are available.\"),\n )\n\n allowed_count = models.IntegerField(default=0)\n\n @property\n def serializer(self) -> BaseSerializer:\n from authentik.policies.hibp.api import HaveIBeenPwendPolicySerializer\n\n return HaveIBeenPwendPolicySerializer\n\n @property\n def component(self) -> str:\n return \"ak-policy-hibp-form\"\n\n def passes(self, request: PolicyRequest) -> PolicyResult:\n \"\"\"Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5\n characters of Password in request and checks if full hash is in response. Returns 0\n if Password is not in result otherwise the count of how many times it was used.\"\"\"\n if self.password_field not in request.context:\n LOGGER.warning(\n \"Password field not set in Policy Request\",\n field=self.password_field,\n fields=request.context.keys(),\n )\n return PolicyResult(False, _(\"Password not set in context\"))\n password = str(request.context[self.password_field])\n\n pw_hash = sha1(password.encode(\"utf-8\")).hexdigest() # nosec\n url = f\"https://api.pwnedpasswords.com/range/{pw_hash[:5]}\"\n result = get_http_session().get(url).text\n final_count = 0\n for line in result.split(\"\\r\\n\"):\n full_hash, count = line.split(\":\")\n if pw_hash[5:] == full_hash.lower():\n final_count = int(count)\n LOGGER.debug(\"got hibp result\", count=final_count, hash=pw_hash[:5])\n if final_count > self.allowed_count:\n message = _(\"Password exists on %(count)d online lists.\" % {\"count\": final_count})\n return PolicyResult(False, message)\n return PolicyResult(True)\n\n class Meta:\n\n verbose_name = _(\"Have I Been Pwned Policy\")\n verbose_name_plural = _(\"Have I Been Pwned Policies\")\n", "path": "authentik/policies/hibp/models.py"}]}
| 1,646 | 346 |
gh_patches_debug_19098
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-5373
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
</issue>
<code>
[start of pulpcore/tasking/_util.py]
1 import asyncio
2 import importlib
3 import logging
4 import os
5 import resource
6 import signal
7 import sys
8 import threading
9 import time
10 from gettext import gettext as _
11
12 from django.conf import settings
13 from django.db import connection, transaction
14 from django.db.models import Q
15 from django.utils import timezone
16 from django_guid import set_guid
17 from django_guid.utils import generate_guid
18 from pulpcore.app.models import Task, TaskSchedule
19 from pulpcore.app.role_util import get_users_with_perms
20 from pulpcore.app.util import (
21 set_current_user,
22 set_domain,
23 configure_analytics,
24 configure_cleanup,
25 )
26 from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP
27 from pulpcore.exceptions import AdvisoryLockError
28 from pulpcore.tasking.tasks import dispatch, execute_task
29
30 _logger = logging.getLogger(__name__)
31
32
33 class PGAdvisoryLock:
34 """
35 A context manager that will hold a postgres advisory lock non-blocking.
36
37 The locks can be chosen from a lock group to avoid collisions. They will never collide with the
38 locks used for tasks.
39 """
40
41 def __init__(self, lock, lock_group=0):
42 self.lock_group = lock_group
43 self.lock = lock
44
45 def __enter__(self):
46 with connection.cursor() as cursor:
47 cursor.execute("SELECT pg_try_advisory_lock(%s, %s)", [self.lock_group, self.lock])
48 acquired = cursor.fetchone()[0]
49 if not acquired:
50 raise AdvisoryLockError("Could not acquire lock.")
51 return self
52
53 def __exit__(self, exc_type, exc_value, traceback):
54 with connection.cursor() as cursor:
55 cursor.execute("SELECT pg_advisory_unlock(%s, %s)", [self.lock_group, self.lock])
56 released = cursor.fetchone()[0]
57 if not released:
58 raise RuntimeError("Lock not held.")
59
60
61 def startup_hook():
62 configure_analytics()
63 configure_cleanup()
64
65
66 def delete_incomplete_resources(task):
67 """
68 Delete all incomplete created-resources on a canceled task.
69
70 Args:
71 task (Task): A task.
72 """
73 if task.state != TASK_STATES.CANCELING:
74 raise RuntimeError(_("Task must be canceling."))
75 for model in (r.content_object for r in task.created_resources.all()):
76 try:
77 if model.complete:
78 continue
79 except AttributeError:
80 continue
81 try:
82 with transaction.atomic():
83 model.delete()
84 except Exception as error:
85 _logger.error(_("Delete created resource, failed: {}").format(str(error)))
86
87
88 def write_memory_usage(path):
89 _logger.info("Writing task memory data to {}".format(path))
90
91 with open(path, "w") as file:
92 file.write("# Seconds\tMemory in MB\n")
93 seconds = 0
94 while True:
95 current_mb_in_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
96 file.write(f"{seconds}\t{current_mb_in_use:.2f}\n")
97 file.flush()
98 time.sleep(5)
99 seconds += 5
100
101
102 def child_signal_handler(sig, frame):
103 _logger.debug("Signal %s recieved by %s.", sig, os.getpid())
104 # Reset signal handlers to default
105 # If you kill the process a second time it's not graceful anymore.
106 signal.signal(signal.SIGINT, signal.SIG_DFL)
107 signal.signal(signal.SIGTERM, signal.SIG_DFL)
108 signal.signal(signal.SIGHUP, signal.SIG_DFL)
109 signal.signal(signal.SIGUSR1, signal.SIG_DFL)
110
111 if sig == signal.SIGUSR1:
112 sys.exit()
113
114
115 def perform_task(task_pk, task_working_dir_rel_path):
116 """Setup the environment to handle a task and execute it.
117 This must be called as a subprocess, while the parent holds the advisory lock of the task."""
118 signal.signal(signal.SIGINT, child_signal_handler)
119 signal.signal(signal.SIGTERM, child_signal_handler)
120 signal.signal(signal.SIGHUP, child_signal_handler)
121 signal.signal(signal.SIGUSR1, child_signal_handler)
122 if settings.TASK_DIAGNOSTICS:
123 diagnostics_dir = VAR_TMP_PULP / str(task_pk)
124 diagnostics_dir.mkdir(parents=True, exist_ok=True)
125 mem_diagnostics_path = diagnostics_dir / "memory.datum"
126 # It would be better to have this recording happen in the parent process instead of here
127 # https://github.com/pulp/pulpcore/issues/2337
128 mem_diagnostics_thread = threading.Thread(
129 target=write_memory_usage, args=(mem_diagnostics_path,), daemon=True
130 )
131 mem_diagnostics_thread.start()
132 # All processes need to create their own postgres connection
133 connection.connection = None
134 task = Task.objects.select_related("pulp_domain").get(pk=task_pk)
135 user = get_users_with_perms(task, with_group_users=False).first()
136 # Isolate from the parent asyncio.
137 asyncio.set_event_loop(asyncio.new_event_loop())
138 # Set current contexts
139 set_guid(task.logging_cid)
140 set_current_user(user)
141 set_domain(task.pulp_domain)
142 os.chdir(task_working_dir_rel_path)
143
144 # set up profiling
145 if settings.TASK_DIAGNOSTICS and importlib.util.find_spec("pyinstrument") is not None:
146 from pyinstrument import Profiler
147
148 with Profiler() as profiler:
149 execute_task(task)
150
151 profile_file = diagnostics_dir / "pyinstrument.html"
152 _logger.info("Writing task profile data to {}".format(profile_file))
153 with open(profile_file, "w+") as f:
154 f.write(profiler.output_html())
155 else:
156 execute_task(task)
157
158
159 def dispatch_scheduled_tasks():
160 # Warning, dispatch_scheduled_tasks is not race condition free!
161 now = timezone.now()
162 # Dispatch all tasks old enough and not still running
163 for task_schedule in TaskSchedule.objects.filter(next_dispatch__lte=now).filter(
164 Q(last_task=None) | Q(last_task__state__in=TASK_FINAL_STATES)
165 ):
166 try:
167 if task_schedule.dispatch_interval is None:
168 # This was a timed one shot task schedule
169 task_schedule.next_dispatch = None
170 else:
171 # This is a recurring task schedule
172 while task_schedule.next_dispatch < now:
173 # Do not schedule in the past
174 task_schedule.next_dispatch += task_schedule.dispatch_interval
175 set_guid(generate_guid())
176 with transaction.atomic():
177 task_schedule.last_task = dispatch(
178 task_schedule.task_name,
179 )
180 task_schedule.save(update_fields=["next_dispatch", "last_task"])
181
182 _logger.info(
183 "Dispatched scheduled task {task_name} as task id {task_id}".format(
184 task_name=task_schedule.task_name, task_id=task_schedule.last_task.pk
185 )
186 )
187 except Exception as e:
188 _logger.warning(
189 "Dispatching scheduled task {task_name} failed. {error}".format(
190 task_name=task_schedule.task_name, error=str(e)
191 )
192 )
193
[end of pulpcore/tasking/_util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py
--- a/pulpcore/tasking/_util.py
+++ b/pulpcore/tasking/_util.py
@@ -15,7 +15,7 @@
from django.utils import timezone
from django_guid import set_guid
from django_guid.utils import generate_guid
-from pulpcore.app.models import Task, TaskSchedule
+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule
from pulpcore.app.role_util import get_users_with_perms
from pulpcore.app.util import (
set_current_user,
@@ -73,6 +73,8 @@
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
|
{"golden_diff": "diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py\n--- a/pulpcore/tasking/_util.py\n+++ b/pulpcore/tasking/_util.py\n@@ -15,7 +15,7 @@\n from django.utils import timezone\n from django_guid import set_guid\n from django_guid.utils import generate_guid\n-from pulpcore.app.models import Task, TaskSchedule\n+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule\n from pulpcore.app.role_util import get_users_with_perms\n from pulpcore.app.util import (\n set_current_user,\n@@ -73,6 +73,8 @@\n if task.state != TASK_STATES.CANCELING:\n raise RuntimeError(_(\"Task must be canceling.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n+ if isinstance(model, (Artifact, Content)):\n+ continue\n try:\n if model.complete:\n continue\n", "issue": "Task cleanup must not delete content nor artifacts\nDeleting content or artifacts outside of orphan cleanup is breaking the rules.\r\nAnd no, we cannot get away with that.\r\n\n", "before_files": [{"content": "import asyncio\nimport importlib\nimport logging\nimport os\nimport resource\nimport signal\nimport sys\nimport threading\nimport time\nfrom gettext import gettext as _\n\nfrom django.conf import settings\nfrom django.db import connection, transaction\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django_guid import set_guid\nfrom django_guid.utils import generate_guid\nfrom pulpcore.app.models import Task, TaskSchedule\nfrom pulpcore.app.role_util import get_users_with_perms\nfrom pulpcore.app.util import (\n set_current_user,\n set_domain,\n configure_analytics,\n configure_cleanup,\n)\nfrom pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP\nfrom pulpcore.exceptions import AdvisoryLockError\nfrom pulpcore.tasking.tasks import dispatch, execute_task\n\n_logger = logging.getLogger(__name__)\n\n\nclass PGAdvisoryLock:\n \"\"\"\n A context manager that will hold a postgres advisory lock non-blocking.\n\n The locks can be chosen from a lock group to avoid collisions. 
They will never collide with the\n locks used for tasks.\n \"\"\"\n\n def __init__(self, lock, lock_group=0):\n self.lock_group = lock_group\n self.lock = lock\n\n def __enter__(self):\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_try_advisory_lock(%s, %s)\", [self.lock_group, self.lock])\n acquired = cursor.fetchone()[0]\n if not acquired:\n raise AdvisoryLockError(\"Could not acquire lock.\")\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_advisory_unlock(%s, %s)\", [self.lock_group, self.lock])\n released = cursor.fetchone()[0]\n if not released:\n raise RuntimeError(\"Lock not held.\")\n\n\ndef startup_hook():\n configure_analytics()\n configure_cleanup()\n\n\ndef delete_incomplete_resources(task):\n \"\"\"\n Delete all incomplete created-resources on a canceled task.\n\n Args:\n task (Task): A task.\n \"\"\"\n if task.state != TASK_STATES.CANCELING:\n raise RuntimeError(_(\"Task must be canceling.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n try:\n if model.complete:\n continue\n except AttributeError:\n continue\n try:\n with transaction.atomic():\n model.delete()\n except Exception as error:\n _logger.error(_(\"Delete created resource, failed: {}\").format(str(error)))\n\n\ndef write_memory_usage(path):\n _logger.info(\"Writing task memory data to {}\".format(path))\n\n with open(path, \"w\") as file:\n file.write(\"# Seconds\\tMemory in MB\\n\")\n seconds = 0\n while True:\n current_mb_in_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024\n file.write(f\"{seconds}\\t{current_mb_in_use:.2f}\\n\")\n file.flush()\n time.sleep(5)\n seconds += 5\n\n\ndef child_signal_handler(sig, frame):\n _logger.debug(\"Signal %s recieved by %s.\", sig, os.getpid())\n # Reset signal handlers to default\n # If you kill the process a second time it's not graceful anymore.\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n signal.signal(signal.SIGTERM, signal.SIG_DFL)\n signal.signal(signal.SIGHUP, signal.SIG_DFL)\n signal.signal(signal.SIGUSR1, signal.SIG_DFL)\n\n if sig == signal.SIGUSR1:\n sys.exit()\n\n\ndef perform_task(task_pk, task_working_dir_rel_path):\n \"\"\"Setup the environment to handle a task and execute it.\n This must be called as a subprocess, while the parent holds the advisory lock of the task.\"\"\"\n signal.signal(signal.SIGINT, child_signal_handler)\n signal.signal(signal.SIGTERM, child_signal_handler)\n signal.signal(signal.SIGHUP, child_signal_handler)\n signal.signal(signal.SIGUSR1, child_signal_handler)\n if settings.TASK_DIAGNOSTICS:\n diagnostics_dir = VAR_TMP_PULP / str(task_pk)\n diagnostics_dir.mkdir(parents=True, exist_ok=True)\n mem_diagnostics_path = diagnostics_dir / \"memory.datum\"\n # It would be better to have this recording happen in the parent process instead of here\n # https://github.com/pulp/pulpcore/issues/2337\n mem_diagnostics_thread = threading.Thread(\n target=write_memory_usage, args=(mem_diagnostics_path,), daemon=True\n )\n mem_diagnostics_thread.start()\n # All processes need to create their own postgres connection\n connection.connection = None\n task = Task.objects.select_related(\"pulp_domain\").get(pk=task_pk)\n user = get_users_with_perms(task, with_group_users=False).first()\n # Isolate from the parent asyncio.\n asyncio.set_event_loop(asyncio.new_event_loop())\n # Set current contexts\n set_guid(task.logging_cid)\n set_current_user(user)\n set_domain(task.pulp_domain)\n 
os.chdir(task_working_dir_rel_path)\n\n # set up profiling\n if settings.TASK_DIAGNOSTICS and importlib.util.find_spec(\"pyinstrument\") is not None:\n from pyinstrument import Profiler\n\n with Profiler() as profiler:\n execute_task(task)\n\n profile_file = diagnostics_dir / \"pyinstrument.html\"\n _logger.info(\"Writing task profile data to {}\".format(profile_file))\n with open(profile_file, \"w+\") as f:\n f.write(profiler.output_html())\n else:\n execute_task(task)\n\n\ndef dispatch_scheduled_tasks():\n # Warning, dispatch_scheduled_tasks is not race condition free!\n now = timezone.now()\n # Dispatch all tasks old enough and not still running\n for task_schedule in TaskSchedule.objects.filter(next_dispatch__lte=now).filter(\n Q(last_task=None) | Q(last_task__state__in=TASK_FINAL_STATES)\n ):\n try:\n if task_schedule.dispatch_interval is None:\n # This was a timed one shot task schedule\n task_schedule.next_dispatch = None\n else:\n # This is a recurring task schedule\n while task_schedule.next_dispatch < now:\n # Do not schedule in the past\n task_schedule.next_dispatch += task_schedule.dispatch_interval\n set_guid(generate_guid())\n with transaction.atomic():\n task_schedule.last_task = dispatch(\n task_schedule.task_name,\n )\n task_schedule.save(update_fields=[\"next_dispatch\", \"last_task\"])\n\n _logger.info(\n \"Dispatched scheduled task {task_name} as task id {task_id}\".format(\n task_name=task_schedule.task_name, task_id=task_schedule.last_task.pk\n )\n )\n except Exception as e:\n _logger.warning(\n \"Dispatching scheduled task {task_name} failed. {error}\".format(\n task_name=task_schedule.task_name, error=str(e)\n )\n )\n", "path": "pulpcore/tasking/_util.py"}]}
| 2,530 | 204 |
gh_patches_debug_14722
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-2226
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
construct_yaml_map leading to TypeError: unhashable type: 'dict_node'
*cfn-lint version: 0.58.1*
*Unhandled `TypeError: unhashable type: 'dict_node'` thrown from construct_yaml_map.*
Please provide as much information as possible:
Utilising some slightly edited form of the example eks-nodegroup cfn, [see here for the failing cfn](https://github.com/SaltKhan/_cfn-breaking-cfn-lint/blob/main/eks-nodegroup.yaml) or [here, for the stacktrace](https://github.com/SaltKhan/_cfn-breaking-cfn-lint/blob/main/README.md). The problem occurs on line 259 `VPCZoneIdentifier: !Join [ ',', [ !ImportValue Fn::Sub: "${NetworkStackName}-SubnetsPrivate01", !ImportValue Fn::Sub: "${NetworkStackName}-SubnetsPrivate02" ] ]`. (I was trying to provide a list by string concatenation, as that's what's listed as the example input when subnets is a parameter, as in the example cfn, but I've changed that for the example I provide here.) I changed the line to an expanded `!Join` ~
```
VPCZoneIdentifier: !Join
- ','
- - Fn::ImportValue: !Sub "${NetworkStackName}-SubnetsPrivate01"
- Fn::ImportValue: !Sub "${NetworkStackName}-SubnetsPrivate02"
```
And it now correctly yields a normal error, rather than hitting the exception.
I found this similar previous issue ~ https://github.com/aws-cloudformation/cfn-lint/issues/348
</issue>
<code>
[start of src/cfnlint/decode/cfn_yaml.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import fileinput
6 import logging
7 import sys
8 import six
9 from yaml.composer import Composer
10 from yaml.reader import Reader
11 from yaml.scanner import Scanner
12 from yaml.resolver import Resolver
13 from yaml import ScalarNode
14 from yaml import SequenceNode
15 from yaml import MappingNode
16 from yaml.constructor import SafeConstructor
17 from yaml.constructor import ConstructorError
18 import cfnlint
19 from cfnlint.decode.node import str_node, dict_node, list_node, sub_node
20
21 try:
22 from yaml.cyaml import CParser as Parser # pylint: disable=ungrouped-imports
23 cyaml = True
24 except ImportError:
25 from yaml.parser import Parser # pylint: disable=ungrouped-imports
26 cyaml = False
27
28 UNCONVERTED_SUFFIXES = ['Ref', 'Condition']
29 FN_PREFIX = 'Fn::'
30
31 LOGGER = logging.getLogger(__name__)
32
33
34 class CfnParseError(ConstructorError):
35 """
36 Error thrown when the template contains Cfn Error
37 """
38
39 def __init__(self, filename, errors):
40
41 if isinstance(errors, cfnlint.rules.Match):
42 errors = [errors]
43
44 # Call the base class constructor with the parameters it needs
45 super(CfnParseError, self).__init__(errors[0].message)
46
47 # Now for your custom code...
48 self.filename = filename
49 self.matches = errors
50
51 def build_match(filename, message, line_number, column_number, key):
52 return cfnlint.rules.Match(
53 line_number + 1, column_number + 1, line_number + 1,
54 column_number + 1 + len(key), filename, cfnlint.rules.ParseError(), message=message)
55
56 class NodeConstructor(SafeConstructor):
57 """
58 Node Constructors for loading different types in Yaml
59 """
60
61 def __init__(self, filename):
62 # Call the base class constructor
63 super(NodeConstructor, self).__init__()
64
65 self.filename = filename
66
67 # To support lazy loading, the original constructors first yield
68 # an empty object, then fill them in when iterated. Due to
69 # laziness we omit this behaviour (and will only do "deep
70 # construction") by first exhausting iterators, then yielding
71 # copies.
72 def construct_yaml_map(self, node):
73
74 # Check for duplicate keys on the current level, this is not desirable
75 # because a dict does not support this. It overwrites it with the last
76 # occurance, which can give unexpected results
77 mapping = {}
78 self.flatten_mapping(node)
79 for key_node, value_node in node.value:
80 key = self.construct_object(key_node, False)
81 value = self.construct_object(value_node, False)
82
83 for key_dup in mapping:
84 if key_dup == key:
85 raise CfnParseError(
86 self.filename,
87 [
88 build_match(
89 filename=self.filename,
90 message='Duplicate resource found "{}" (line {})'.format(
91 key, key_dup.start_mark.line + 1),
92 line_number=key_dup.start_mark.line,
93 column_number=key_dup.start_mark.column,
94 key=key
95 ),
96 build_match(
97 filename=self.filename,
98 message='Duplicate resource found "{}" (line {})'.format(
99 key, key_node.start_mark.line + 1),
100 line_number=key_node.start_mark.line,
101 column_number=key_node.start_mark.column,
102 key=key
103 ),
104 ]
105 )
106 mapping[key] = value
107
108 obj, = SafeConstructor.construct_yaml_map(self, node)
109
110 if len(mapping) == 1:
111 if 'Fn::Sub' in mapping:
112 return sub_node(obj, node.start_mark, node.end_mark)
113
114 return dict_node(obj, node.start_mark, node.end_mark)
115
116 def construct_yaml_str(self, node):
117 obj = SafeConstructor.construct_yaml_str(self, node)
118 assert isinstance(obj, (six.string_types))
119 return str_node(obj, node.start_mark, node.end_mark)
120
121 def construct_yaml_seq(self, node):
122 obj, = SafeConstructor.construct_yaml_seq(self, node)
123 assert isinstance(obj, list)
124 return list_node(obj, node.start_mark, node.end_mark)
125
126 def construct_yaml_null_error(self, node):
127 """Throw a null error"""
128 raise CfnParseError(
129 self.filename,
130 [
131 build_match(
132 filename=self.filename,
133 message='Null value at line {0} column {1}'.format(
134 node.start_mark.line + 1, node.start_mark.column + 1),
135 line_number=node.start_mark.line,
136 column_number=node.start_mark.column,
137 key=' ',
138 )
139 ]
140 )
141
142
143 NodeConstructor.add_constructor(
144 u'tag:yaml.org,2002:map',
145 NodeConstructor.construct_yaml_map)
146
147 NodeConstructor.add_constructor(
148 u'tag:yaml.org,2002:str',
149 NodeConstructor.construct_yaml_str)
150
151 NodeConstructor.add_constructor(
152 u'tag:yaml.org,2002:seq',
153 NodeConstructor.construct_yaml_seq)
154
155 NodeConstructor.add_constructor(
156 u'tag:yaml.org,2002:null',
157 NodeConstructor.construct_yaml_null_error)
158
159
160 class MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):
161 """
162 Class for marked loading YAML
163 """
164 # pylint: disable=non-parent-init-called,super-init-not-called
165
166 def __init__(self, stream, filename):
167 Reader.__init__(self, stream)
168 Scanner.__init__(self)
169 if cyaml:
170 Parser.__init__(self, stream)
171 else:
172 Parser.__init__(self)
173 Composer.__init__(self)
174 SafeConstructor.__init__(self)
175 Resolver.__init__(self)
176 NodeConstructor.__init__(self, filename)
177
178
179 def multi_constructor(loader, tag_suffix, node):
180 """
181 Deal with !Ref style function format
182 """
183
184 if tag_suffix not in UNCONVERTED_SUFFIXES:
185 tag_suffix = '{}{}'.format(FN_PREFIX, tag_suffix)
186
187 constructor = None
188 if tag_suffix == 'Fn::GetAtt':
189 constructor = construct_getatt
190 elif isinstance(node, ScalarNode):
191 constructor = loader.construct_scalar
192 elif isinstance(node, SequenceNode):
193 constructor = loader.construct_sequence
194 elif isinstance(node, MappingNode):
195 constructor = loader.construct_mapping
196 else:
197 raise 'Bad tag: !{}'.format(tag_suffix)
198
199 if tag_suffix == 'Fn::Sub':
200 return sub_node({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)
201
202 return dict_node({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)
203
204
205 def construct_getatt(node):
206 """
207 Reconstruct !GetAtt into a list
208 """
209
210 if isinstance(node.value, (six.string_types)):
211 return list_node(node.value.split('.', 1), node.start_mark, node.end_mark)
212 if isinstance(node.value, list):
213 return list_node([s.value for s in node.value], node.start_mark, node.end_mark)
214
215 raise ValueError('Unexpected node type: {}'.format(type(node.value)))
216
217
218 def loads(yaml_string, fname=None):
219 """
220 Load the given YAML string
221 """
222 loader = MarkedLoader(yaml_string, fname)
223 loader.add_multi_constructor('!', multi_constructor)
224 template = loader.get_single_data()
225 # Convert an empty file to an empty dict
226 if template is None:
227 template = {}
228
229 return template
230
231
232 def load(filename):
233 """
234 Load the given YAML file
235 """
236
237 content = ''
238
239 if not sys.stdin.isatty():
240 for line in fileinput.input(files=filename):
241 content = content + line
242 else:
243 with open(filename) as fp:
244 content = fp.read()
245
246 return loads(content, filename)
247
[end of src/cfnlint/decode/cfn_yaml.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cfnlint/decode/cfn_yaml.py b/src/cfnlint/decode/cfn_yaml.py
--- a/src/cfnlint/decode/cfn_yaml.py
+++ b/src/cfnlint/decode/cfn_yaml.py
@@ -103,7 +103,21 @@
),
]
)
- mapping[key] = value
+ try:
+ mapping[key] = value
+ except:
+ raise CfnParseError(
+ self.filename,
+ [
+ build_match(
+ filename=self.filename,
+ message=f'Unhashable type "{key}" (line {key.start_mark.line + 1})',
+ line_number=key.start_mark.line,
+ column_number=key.start_mark.column,
+ key=key
+ ),
+ ]
+ )
obj, = SafeConstructor.construct_yaml_map(self, node)
|
{"golden_diff": "diff --git a/src/cfnlint/decode/cfn_yaml.py b/src/cfnlint/decode/cfn_yaml.py\n--- a/src/cfnlint/decode/cfn_yaml.py\n+++ b/src/cfnlint/decode/cfn_yaml.py\n@@ -103,7 +103,21 @@\n ),\n ]\n )\n- mapping[key] = value\n+ try:\n+ mapping[key] = value\n+ except:\n+ raise CfnParseError(\n+ self.filename,\n+ [\n+ build_match(\n+ filename=self.filename,\n+ message=f'Unhashable type \"{key}\" (line {key.start_mark.line + 1})',\n+ line_number=key.start_mark.line,\n+ column_number=key.start_mark.column,\n+ key=key\n+ ),\n+ ]\n+ )\n \n obj, = SafeConstructor.construct_yaml_map(self, node)\n", "issue": "construct_yaml_map leading to TypeError: unhashable type: 'dict_node'\n*cfn-lint version: 0.58.1*\r\n\r\n*Unhandled `TypeError: unhashable type: 'dict_node'` thrown from construct_yaml_map.*\r\n\r\nPlease provide as much information as possible:\r\nUtilising some slightly edited form of the example eks-nodegroup cfn, [see here for the failing cfn](https://github.com/SaltKhan/_cfn-breaking-cfn-lint/blob/main/eks-nodegroup.yaml) or [here, for the stacktrace](https://github.com/SaltKhan/_cfn-breaking-cfn-lint/blob/main/README.md). The problem occurrs on line 259 `VPCZoneIdentifier: !Join [ ',', [ !ImportValue Fn::Sub: \"${NetworkStackName}-SubnetsPrivate01\", !ImportValue Fn::Sub: \"${NetworkStackName}-SubnetsPrivate02\" ] ]`. (Was trying to provide a list by string concat as that's what's listed as the example input when subnets is a parameter as in the example cfn, but I've changed that for the example I provide here.) I changed the line to an expanded `!Join` ~\r\n```\r\n VPCZoneIdentifier: !Join\r\n - ','\r\n - - Fn::ImportValue: !Sub \"${NetworkStackName}-SubnetsPrivate01\"\r\n - Fn::ImportValue: !Sub \"${NetworkStackName}-SubnetsPrivate02\"\r\n```\r\nAnd it now correctly yields a normal error, rather than hitting the exception.\r\nI found this similar previous issue ~ https://github.com/aws-cloudformation/cfn-lint/issues/348\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport fileinput\nimport logging\nimport sys\nimport six\nfrom yaml.composer import Composer\nfrom yaml.reader import Reader\nfrom yaml.scanner import Scanner\nfrom yaml.resolver import Resolver\nfrom yaml import ScalarNode\nfrom yaml import SequenceNode\nfrom yaml import MappingNode\nfrom yaml.constructor import SafeConstructor\nfrom yaml.constructor import ConstructorError\nimport cfnlint\nfrom cfnlint.decode.node import str_node, dict_node, list_node, sub_node\n\ntry:\n from yaml.cyaml import CParser as Parser # pylint: disable=ungrouped-imports\n cyaml = True\nexcept ImportError:\n from yaml.parser import Parser # pylint: disable=ungrouped-imports\n cyaml = False\n\nUNCONVERTED_SUFFIXES = ['Ref', 'Condition']\nFN_PREFIX = 'Fn::'\n\nLOGGER = logging.getLogger(__name__)\n\n\nclass CfnParseError(ConstructorError):\n \"\"\"\n Error thrown when the template contains Cfn Error\n \"\"\"\n\n def __init__(self, filename, errors):\n\n if isinstance(errors, cfnlint.rules.Match):\n errors = [errors]\n\n # Call the base class constructor with the parameters it needs\n super(CfnParseError, self).__init__(errors[0].message)\n\n # Now for your custom code...\n self.filename = filename\n self.matches = errors\n\ndef build_match(filename, message, line_number, column_number, key):\n return cfnlint.rules.Match(\n line_number + 1, column_number + 1, line_number + 1,\n column_number + 1 + len(key), filename, cfnlint.rules.ParseError(), message=message)\n\nclass NodeConstructor(SafeConstructor):\n \"\"\"\n Node Constructors for loading different types in Yaml\n \"\"\"\n\n def __init__(self, filename):\n # Call the base class constructor\n super(NodeConstructor, self).__init__()\n\n self.filename = filename\n\n # To support lazy loading, the original constructors first yield\n # an empty object, then fill them in when iterated. Due to\n # laziness we omit this behaviour (and will only do \"deep\n # construction\") by first exhausting iterators, then yielding\n # copies.\n def construct_yaml_map(self, node):\n\n # Check for duplicate keys on the current level, this is not desirable\n # because a dict does not support this. 
It overwrites it with the last\n # occurance, which can give unexpected results\n mapping = {}\n self.flatten_mapping(node)\n for key_node, value_node in node.value:\n key = self.construct_object(key_node, False)\n value = self.construct_object(value_node, False)\n\n for key_dup in mapping:\n if key_dup == key:\n raise CfnParseError(\n self.filename,\n [\n build_match(\n filename=self.filename,\n message='Duplicate resource found \"{}\" (line {})'.format(\n key, key_dup.start_mark.line + 1),\n line_number=key_dup.start_mark.line,\n column_number=key_dup.start_mark.column,\n key=key\n ),\n build_match(\n filename=self.filename,\n message='Duplicate resource found \"{}\" (line {})'.format(\n key, key_node.start_mark.line + 1),\n line_number=key_node.start_mark.line,\n column_number=key_node.start_mark.column,\n key=key\n ),\n ]\n )\n mapping[key] = value\n\n obj, = SafeConstructor.construct_yaml_map(self, node)\n\n if len(mapping) == 1:\n if 'Fn::Sub' in mapping:\n return sub_node(obj, node.start_mark, node.end_mark)\n\n return dict_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_str(self, node):\n obj = SafeConstructor.construct_yaml_str(self, node)\n assert isinstance(obj, (six.string_types))\n return str_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_seq(self, node):\n obj, = SafeConstructor.construct_yaml_seq(self, node)\n assert isinstance(obj, list)\n return list_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_null_error(self, node):\n \"\"\"Throw a null error\"\"\"\n raise CfnParseError(\n self.filename,\n [\n build_match(\n filename=self.filename,\n message='Null value at line {0} column {1}'.format(\n node.start_mark.line + 1, node.start_mark.column + 1),\n line_number=node.start_mark.line,\n column_number=node.start_mark.column,\n key=' ',\n )\n ]\n )\n\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:map',\n NodeConstructor.construct_yaml_map)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:str',\n NodeConstructor.construct_yaml_str)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:seq',\n NodeConstructor.construct_yaml_seq)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:null',\n NodeConstructor.construct_yaml_null_error)\n\n\nclass MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):\n \"\"\"\n Class for marked loading YAML\n \"\"\"\n # pylint: disable=non-parent-init-called,super-init-not-called\n\n def __init__(self, stream, filename):\n Reader.__init__(self, stream)\n Scanner.__init__(self)\n if cyaml:\n Parser.__init__(self, stream)\n else:\n Parser.__init__(self)\n Composer.__init__(self)\n SafeConstructor.__init__(self)\n Resolver.__init__(self)\n NodeConstructor.__init__(self, filename)\n\n\ndef multi_constructor(loader, tag_suffix, node):\n \"\"\"\n Deal with !Ref style function format\n \"\"\"\n\n if tag_suffix not in UNCONVERTED_SUFFIXES:\n tag_suffix = '{}{}'.format(FN_PREFIX, tag_suffix)\n\n constructor = None\n if tag_suffix == 'Fn::GetAtt':\n constructor = construct_getatt\n elif isinstance(node, ScalarNode):\n constructor = loader.construct_scalar\n elif isinstance(node, SequenceNode):\n constructor = loader.construct_sequence\n elif isinstance(node, MappingNode):\n constructor = loader.construct_mapping\n else:\n raise 'Bad tag: !{}'.format(tag_suffix)\n\n if tag_suffix == 'Fn::Sub':\n return sub_node({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)\n\n return dict_node({tag_suffix: constructor(node)}, node.start_mark, 
node.end_mark)\n\n\ndef construct_getatt(node):\n \"\"\"\n Reconstruct !GetAtt into a list\n \"\"\"\n\n if isinstance(node.value, (six.string_types)):\n return list_node(node.value.split('.', 1), node.start_mark, node.end_mark)\n if isinstance(node.value, list):\n return list_node([s.value for s in node.value], node.start_mark, node.end_mark)\n\n raise ValueError('Unexpected node type: {}'.format(type(node.value)))\n\n\ndef loads(yaml_string, fname=None):\n \"\"\"\n Load the given YAML string\n \"\"\"\n loader = MarkedLoader(yaml_string, fname)\n loader.add_multi_constructor('!', multi_constructor)\n template = loader.get_single_data()\n # Convert an empty file to an empty dict\n if template is None:\n template = {}\n\n return template\n\n\ndef load(filename):\n \"\"\"\n Load the given YAML file\n \"\"\"\n\n content = ''\n\n if not sys.stdin.isatty():\n for line in fileinput.input(files=filename):\n content = content + line\n else:\n with open(filename) as fp:\n content = fp.read()\n\n return loads(content, filename)\n", "path": "src/cfnlint/decode/cfn_yaml.py"}]}
| 3,218 | 194 |
gh_patches_debug_49167
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-672
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support ElasticSearch 7.14
The python package `elasticsearch-py` introduced the `terms_enum` parameter from ElasticSearch 7.14. This is currently not being instrumented and breaking tests.
</issue>
<code>
[start of src/scout_apm/instruments/elasticsearch.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import logging
5 from collections import namedtuple
6
7 import wrapt
8
9 from scout_apm.compat import get_pos_args, unwrap_decorators
10 from scout_apm.core.tracked_request import TrackedRequest
11
12 try:
13 from elasticsearch import Elasticsearch, Transport
14 except ImportError: # pragma: no cover
15 Elasticsearch = None
16 Transport = None
17
18 logger = logging.getLogger(__name__)
19
20
21 def ensure_installed():
22 logger.debug("Instrumenting elasticsearch.")
23
24 if Elasticsearch is None:
25 logger.debug(
26 "Couldn't import elasticsearch.Elasticsearch - probably not installed."
27 )
28 else:
29 ensure_client_instrumented()
30 ensure_transport_instrumented()
31
32
33 ClientMethod = namedtuple("ClientMethod", ["name", "takes_index_argument"])
34
35 CLIENT_METHODS = [
36 ClientMethod("bulk", True),
37 ClientMethod("clear_scroll", False),
38 ClientMethod("close", False),
39 ClientMethod("close_point_in_time", False),
40 ClientMethod("count", True),
41 ClientMethod("create", True),
42 ClientMethod("delete", True),
43 ClientMethod("delete_by_query", True),
44 ClientMethod("delete_by_query_rethrottle", False),
45 ClientMethod("delete_script", False),
46 ClientMethod("exists", True),
47 ClientMethod("exists_source", True),
48 ClientMethod("explain", True),
49 ClientMethod("field_caps", True),
50 ClientMethod("get", True),
51 ClientMethod("get_script", False),
52 ClientMethod("get_script_context", False),
53 ClientMethod("get_script_languages", False),
54 ClientMethod("get_source", True),
55 ClientMethod("index", True),
56 ClientMethod("info", False),
57 ClientMethod("mget", True),
58 ClientMethod("msearch", True),
59 ClientMethod("msearch_template", True),
60 ClientMethod("mtermvectors", True),
61 ClientMethod("open_point_in_time", True),
62 ClientMethod("ping", False),
63 ClientMethod("put_script", False),
64 ClientMethod("rank_eval", True),
65 ClientMethod("reindex", False),
66 ClientMethod("reindex_rethrottle", False),
67 ClientMethod("render_search_template", False),
68 ClientMethod("scripts_painless_execute", False),
69 ClientMethod("scroll", False),
70 ClientMethod("search", True),
71 ClientMethod("search_shards", True),
72 ClientMethod("search_template", True),
73 ClientMethod("termvectors", True),
74 ClientMethod("update", True),
75 ClientMethod("update_by_query", True),
76 ClientMethod("update_by_query_rethrottle", False),
77 ]
78
79
80 have_patched_client = False
81
82
83 def ensure_client_instrumented():
84 global have_patched_client
85
86 if not have_patched_client:
87 for name, takes_index_argument in CLIENT_METHODS:
88 try:
89 method = getattr(Elasticsearch, name)
90 if takes_index_argument:
91 wrapped = wrap_client_index_method(method)
92 else:
93 wrapped = wrap_client_method(method)
94 setattr(Elasticsearch, name, wrapped)
95 except Exception as exc:
96 logger.warning(
97 "Failed to instrument elasticsearch.Elasticsearch.%s: %r",
98 name,
99 exc,
100 exc_info=exc,
101 )
102
103 have_patched_client = True
104
105
106 @wrapt.decorator
107 def wrap_client_index_method(wrapped, instance, args, kwargs):
108 # elasticsearch-py 7.5.1 changed the order of arguments for client methods,
109 # so to be safe we need to inspect the wrapped method's positional
110 # arguments to see if we should pull it from there
111 if "index" in kwargs:
112 index = kwargs["index"]
113 else:
114 unwrapped = unwrap_decorators(wrapped)
115 pos_args = get_pos_args(unwrapped)
116 try:
117 index_index = pos_args.index("index")
118 except ValueError: # pragma: no cover
119 # This guards against the method not accepting an 'index' argument
120 # but they all do - for now
121 index = ""
122 else:
123 try:
124 index = args[index_index - 1] # subtract 'self'
125 except IndexError:
126 index = ""
127
128 if isinstance(index, (list, tuple)):
129 index = ",".join(index)
130 if index == "":
131 index = "Unknown"
132 index = index.title()
133
134 camel_name = "".join(c.title() for c in wrapped.__name__.split("_"))
135 operation = "Elasticsearch/{}/{}".format(index, camel_name)
136 tracked_request = TrackedRequest.instance()
137 with tracked_request.span(operation=operation, ignore_children=True):
138 return wrapped(*args, **kwargs)
139
140
141 @wrapt.decorator
142 def wrap_client_method(wrapped, instance, args, kwargs):
143 camel_name = "".join(c.title() for c in wrapped.__name__.split("_"))
144 operation = "Elasticsearch/{}".format(camel_name)
145 tracked_request = TrackedRequest.instance()
146 with tracked_request.span(operation=operation, ignore_children=True):
147 return wrapped(*args, **kwargs)
148
149
150 have_patched_transport = False
151
152
153 def ensure_transport_instrumented():
154 global have_patched_transport
155
156 if not have_patched_transport:
157 try:
158 Transport.perform_request = wrapped_perform_request(
159 Transport.perform_request
160 )
161 except Exception as exc:
162 logger.warning(
163 "Failed to instrument elasticsearch.Transport.perform_request: %r",
164 exc,
165 exc_info=exc,
166 )
167
168 have_patched_transport = True
169
170
171 def _sanitize_name(name):
172 try:
173 op = name.split("/")[-1]
174 op = op[1:] # chop leading '_' from op
175 known_names = (
176 "bench",
177 "bulk",
178 "count",
179 "exists",
180 "explain",
181 "field_stats",
182 "health",
183 "mget",
184 "mlt",
185 "mpercolate",
186 "msearch",
187 "mtermvectors",
188 "percolate",
189 "query",
190 "scroll",
191 "search_shards",
192 "source",
193 "suggest",
194 "template",
195 "termvectors",
196 "update",
197 "search",
198 )
199 if op in known_names:
200 return op.title()
201 return "Unknown"
202 except Exception:
203 return "Unknown"
204
205
206 @wrapt.decorator
207 def wrapped_perform_request(wrapped, instance, args, kwargs):
208 try:
209 op = _sanitize_name(args[1])
210 except IndexError:
211 op = "Unknown"
212
213 tracked_request = TrackedRequest.instance()
214 with tracked_request.span(
215 operation="Elasticsearch/{}".format(op),
216 ignore_children=True,
217 ):
218 return wrapped(*args, **kwargs)
219
[end of src/scout_apm/instruments/elasticsearch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/scout_apm/instruments/elasticsearch.py b/src/scout_apm/instruments/elasticsearch.py
--- a/src/scout_apm/instruments/elasticsearch.py
+++ b/src/scout_apm/instruments/elasticsearch.py
@@ -71,6 +71,7 @@
ClientMethod("search_shards", True),
ClientMethod("search_template", True),
ClientMethod("termvectors", True),
+ ClientMethod("terms_enum", True),
ClientMethod("update", True),
ClientMethod("update_by_query", True),
ClientMethod("update_by_query_rethrottle", False),
|
{"golden_diff": "diff --git a/src/scout_apm/instruments/elasticsearch.py b/src/scout_apm/instruments/elasticsearch.py\n--- a/src/scout_apm/instruments/elasticsearch.py\n+++ b/src/scout_apm/instruments/elasticsearch.py\n@@ -71,6 +71,7 @@\n ClientMethod(\"search_shards\", True),\n ClientMethod(\"search_template\", True),\n ClientMethod(\"termvectors\", True),\n+ ClientMethod(\"terms_enum\", True),\n ClientMethod(\"update\", True),\n ClientMethod(\"update_by_query\", True),\n ClientMethod(\"update_by_query_rethrottle\", False),\n", "issue": "Support ElasticSearch 7.14\nThe python package `elasticsearch-py` introduced the `terms_enum` parameter from ElasticSearch 7.14. This is currently not being instrumented and breaking tests.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nfrom collections import namedtuple\n\nimport wrapt\n\nfrom scout_apm.compat import get_pos_args, unwrap_decorators\nfrom scout_apm.core.tracked_request import TrackedRequest\n\ntry:\n from elasticsearch import Elasticsearch, Transport\nexcept ImportError: # pragma: no cover\n Elasticsearch = None\n Transport = None\n\nlogger = logging.getLogger(__name__)\n\n\ndef ensure_installed():\n logger.debug(\"Instrumenting elasticsearch.\")\n\n if Elasticsearch is None:\n logger.debug(\n \"Couldn't import elasticsearch.Elasticsearch - probably not installed.\"\n )\n else:\n ensure_client_instrumented()\n ensure_transport_instrumented()\n\n\nClientMethod = namedtuple(\"ClientMethod\", [\"name\", \"takes_index_argument\"])\n\nCLIENT_METHODS = [\n ClientMethod(\"bulk\", True),\n ClientMethod(\"clear_scroll\", False),\n ClientMethod(\"close\", False),\n ClientMethod(\"close_point_in_time\", False),\n ClientMethod(\"count\", True),\n ClientMethod(\"create\", True),\n ClientMethod(\"delete\", True),\n ClientMethod(\"delete_by_query\", True),\n ClientMethod(\"delete_by_query_rethrottle\", False),\n ClientMethod(\"delete_script\", False),\n ClientMethod(\"exists\", True),\n ClientMethod(\"exists_source\", True),\n ClientMethod(\"explain\", True),\n ClientMethod(\"field_caps\", True),\n ClientMethod(\"get\", True),\n ClientMethod(\"get_script\", False),\n ClientMethod(\"get_script_context\", False),\n ClientMethod(\"get_script_languages\", False),\n ClientMethod(\"get_source\", True),\n ClientMethod(\"index\", True),\n ClientMethod(\"info\", False),\n ClientMethod(\"mget\", True),\n ClientMethod(\"msearch\", True),\n ClientMethod(\"msearch_template\", True),\n ClientMethod(\"mtermvectors\", True),\n ClientMethod(\"open_point_in_time\", True),\n ClientMethod(\"ping\", False),\n ClientMethod(\"put_script\", False),\n ClientMethod(\"rank_eval\", True),\n ClientMethod(\"reindex\", False),\n ClientMethod(\"reindex_rethrottle\", False),\n ClientMethod(\"render_search_template\", False),\n ClientMethod(\"scripts_painless_execute\", False),\n ClientMethod(\"scroll\", False),\n ClientMethod(\"search\", True),\n ClientMethod(\"search_shards\", True),\n ClientMethod(\"search_template\", True),\n ClientMethod(\"termvectors\", True),\n ClientMethod(\"update\", True),\n ClientMethod(\"update_by_query\", True),\n ClientMethod(\"update_by_query_rethrottle\", False),\n]\n\n\nhave_patched_client = False\n\n\ndef ensure_client_instrumented():\n global have_patched_client\n\n if not have_patched_client:\n for name, takes_index_argument in CLIENT_METHODS:\n try:\n method = getattr(Elasticsearch, name)\n if takes_index_argument:\n wrapped = 
wrap_client_index_method(method)\n else:\n wrapped = wrap_client_method(method)\n setattr(Elasticsearch, name, wrapped)\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Elasticsearch.%s: %r\",\n name,\n exc,\n exc_info=exc,\n )\n\n have_patched_client = True\n\n\[email protected]\ndef wrap_client_index_method(wrapped, instance, args, kwargs):\n # elasticsearch-py 7.5.1 changed the order of arguments for client methods,\n # so to be safe we need to inspect the wrapped method's positional\n # arguments to see if we should pull it from there\n if \"index\" in kwargs:\n index = kwargs[\"index\"]\n else:\n unwrapped = unwrap_decorators(wrapped)\n pos_args = get_pos_args(unwrapped)\n try:\n index_index = pos_args.index(\"index\")\n except ValueError: # pragma: no cover\n # This guards against the method not accepting an 'index' argument\n # but they all do - for now\n index = \"\"\n else:\n try:\n index = args[index_index - 1] # subtract 'self'\n except IndexError:\n index = \"\"\n\n if isinstance(index, (list, tuple)):\n index = \",\".join(index)\n if index == \"\":\n index = \"Unknown\"\n index = index.title()\n\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}/{}\".format(index, camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\[email protected]\ndef wrap_client_method(wrapped, instance, args, kwargs):\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}\".format(camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\nhave_patched_transport = False\n\n\ndef ensure_transport_instrumented():\n global have_patched_transport\n\n if not have_patched_transport:\n try:\n Transport.perform_request = wrapped_perform_request(\n Transport.perform_request\n )\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Transport.perform_request: %r\",\n exc,\n exc_info=exc,\n )\n\n have_patched_transport = True\n\n\ndef _sanitize_name(name):\n try:\n op = name.split(\"/\")[-1]\n op = op[1:] # chop leading '_' from op\n known_names = (\n \"bench\",\n \"bulk\",\n \"count\",\n \"exists\",\n \"explain\",\n \"field_stats\",\n \"health\",\n \"mget\",\n \"mlt\",\n \"mpercolate\",\n \"msearch\",\n \"mtermvectors\",\n \"percolate\",\n \"query\",\n \"scroll\",\n \"search_shards\",\n \"source\",\n \"suggest\",\n \"template\",\n \"termvectors\",\n \"update\",\n \"search\",\n )\n if op in known_names:\n return op.title()\n return \"Unknown\"\n except Exception:\n return \"Unknown\"\n\n\[email protected]\ndef wrapped_perform_request(wrapped, instance, args, kwargs):\n try:\n op = _sanitize_name(args[1])\n except IndexError:\n op = \"Unknown\"\n\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(\n operation=\"Elasticsearch/{}\".format(op),\n ignore_children=True,\n ):\n return wrapped(*args, **kwargs)\n", "path": "src/scout_apm/instruments/elasticsearch.py"}]}
| 2,592 | 134 |
gh_patches_debug_5128
|
rasdani/github-patches
|
git_diff
|
geopandas__geopandas-202
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove mapquest from geocoder
The mapquest API is now only available to enterprise customers and has been removed from geopy.
See [geopy docs](https://geopy.readthedocs.org/en/1.10.0/#id4)
</issue>
<code>
[start of geopandas/tools/geocoding.py]
1 from collections import defaultdict
2 import time
3
4 from fiona.crs import from_epsg
5 import numpy as np
6 import pandas as pd
7 from shapely.geometry import Point
8 from six import iteritems
9
10 import geopandas as gpd
11
12
13 def _throttle_time(provider):
14 """ Amount of time to wait between requests to a geocoding API.
15
16 Currently implemented for Nominatim, as their terms of service
17 require a maximum of 1 request per second.
18 https://wiki.openstreetmap.org/wiki/Nominatim_usage_policy
19 """
20 if provider == 'nominatim':
21 return 1
22 else:
23 return 0
24
25
26 def geocode(strings, provider='googlev3', **kwargs):
27 """
28 Geocode a set of strings and get a GeoDataFrame of the resulting points.
29
30 Parameters
31 ----------
32 strings : list or Series of addresses to geocode
33 provider : geopy geocoder to use, default 'googlev3'
34 Some providers require additional arguments such as access keys
35 See each geocoder's specific parameters in geopy.geocoders
36 * googlev3, default
37 * bing
38 * google
39 * yahoo
40 * mapquest
41 * openmapquest
42
43 Ensure proper use of the results by consulting the Terms of Service for
44 your provider.
45
46 Geocoding requires geopy. Install it using 'pip install geopy'. See also
47 https://github.com/geopy/geopy
48
49 Example
50 -------
51 >>> df = geocode(['boston, ma', '1600 pennsylvania ave. washington, dc'])
52
53 address \
54 0 Boston, MA, USA
55 1 1600 Pennsylvania Avenue Northwest, President'...
56
57 geometry
58 0 POINT (-71.0597732 42.3584308)
59 1 POINT (-77.0365305 38.8977332)
60
61 """
62 return _query(strings, True, provider, **kwargs)
63
64
65 def reverse_geocode(points, provider='googlev3', **kwargs):
66 """
67 Reverse geocode a set of points and get a GeoDataFrame of the resulting
68 addresses.
69
70 The points
71
72 Parameters
73 ----------
74 points : list or Series of Shapely Point objects.
75 x coordinate is longitude
76 y coordinate is latitude
77 provider : geopy geocoder to use, default 'googlev3'
78 These are the same options as the geocode() function
79 Some providers require additional arguments such as access keys
80 See each geocoder's specific parameters in geopy.geocoders
81 * googlev3, default
82 * bing
83 * google
84 * yahoo
85 * mapquest
86 * openmapquest
87
88 Ensure proper use of the results by consulting the Terms of Service for
89 your provider.
90
91 Reverse geocoding requires geopy. Install it using 'pip install geopy'.
92 See also https://github.com/geopy/geopy
93
94 Example
95 -------
96 >>> df = reverse_geocode([Point(-71.0594869, 42.3584697),
97 Point(-77.0365305, 38.8977332)])
98
99 address \
100 0 29 Court Square, Boston, MA 02108, USA
101 1 1600 Pennsylvania Avenue Northwest, President'...
102
103 geometry
104 0 POINT (-71.0594869 42.3584697)
105 1 POINT (-77.0365305 38.8977332)
106
107 """
108 return _query(points, False, provider, **kwargs)
109
110
111 def _query(data, forward, provider, **kwargs):
112 import geopy
113 from geopy.geocoders.base import GeocoderQueryError
114
115 if not isinstance(data, pd.Series):
116 data = pd.Series(data)
117
118 # workaround changed name in 0.96
119 try:
120 Yahoo = geopy.geocoders.YahooPlaceFinder
121 except AttributeError:
122 Yahoo = geopy.geocoders.Yahoo
123
124 coders = {'googlev3': geopy.geocoders.GoogleV3,
125 'bing': geopy.geocoders.Bing,
126 'yahoo': Yahoo,
127 'mapquest': geopy.geocoders.MapQuest,
128 'openmapquest': geopy.geocoders.OpenMapQuest,
129 'nominatim': geopy.geocoders.Nominatim}
130
131 if provider not in coders:
132 raise ValueError('Unknown geocoding provider: {0}'.format(provider))
133
134 coder = coders[provider](**kwargs)
135 results = {}
136 for i, s in iteritems(data):
137 try:
138 if forward:
139 results[i] = coder.geocode(s)
140 else:
141 results[i] = coder.reverse((s.y, s.x), exactly_one=True)
142 except (GeocoderQueryError, ValueError):
143 results[i] = (None, None)
144 time.sleep(_throttle_time(provider))
145
146 df = _prepare_geocode_result(results)
147 return df
148
149
150 def _prepare_geocode_result(results):
151 """
152 Helper function for the geocode function
153
154 Takes a dict where keys are index entries, values are tuples containing:
155 (address, (lat, lon))
156
157 """
158 # Prepare the data for the DataFrame as a dict of lists
159 d = defaultdict(list)
160 index = []
161
162 for i, s in iteritems(results):
163 address, loc = s
164
165 # loc is lat, lon and we want lon, lat
166 if loc is None:
167 p = Point()
168 else:
169 p = Point(loc[1], loc[0])
170
171 if address is None:
172 address = np.nan
173
174 d['geometry'].append(p)
175 d['address'].append(address)
176 index.append(i)
177
178 df = gpd.GeoDataFrame(d, index=index)
179 df.crs = from_epsg(4326)
180
181 return df
182
[end of geopandas/tools/geocoding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/geopandas/tools/geocoding.py b/geopandas/tools/geocoding.py
--- a/geopandas/tools/geocoding.py
+++ b/geopandas/tools/geocoding.py
@@ -124,7 +124,6 @@
coders = {'googlev3': geopy.geocoders.GoogleV3,
'bing': geopy.geocoders.Bing,
'yahoo': Yahoo,
- 'mapquest': geopy.geocoders.MapQuest,
'openmapquest': geopy.geocoders.OpenMapQuest,
'nominatim': geopy.geocoders.Nominatim}
|
{"golden_diff": "diff --git a/geopandas/tools/geocoding.py b/geopandas/tools/geocoding.py\n--- a/geopandas/tools/geocoding.py\n+++ b/geopandas/tools/geocoding.py\n@@ -124,7 +124,6 @@\n coders = {'googlev3': geopy.geocoders.GoogleV3,\n 'bing': geopy.geocoders.Bing,\n 'yahoo': Yahoo,\n- 'mapquest': geopy.geocoders.MapQuest,\n 'openmapquest': geopy.geocoders.OpenMapQuest,\n 'nominatim': geopy.geocoders.Nominatim}\n", "issue": "Remove mapquest from geocoder\nThe mapquest API is now only available to enterprise customers and has been removed from geopy.\n\nSee [geopy docs](https://geopy.readthedocs.org/en/1.10.0/#id4)\n\n", "before_files": [{"content": "from collections import defaultdict\nimport time\n\nfrom fiona.crs import from_epsg\nimport numpy as np\nimport pandas as pd\nfrom shapely.geometry import Point\nfrom six import iteritems\n\nimport geopandas as gpd\n\n\ndef _throttle_time(provider):\n \"\"\" Amount of time to wait between requests to a geocoding API.\n\n Currently implemented for Nominatim, as their terms of service\n require a maximum of 1 request per second.\n https://wiki.openstreetmap.org/wiki/Nominatim_usage_policy\n \"\"\"\n if provider == 'nominatim':\n return 1\n else:\n return 0\n\n\ndef geocode(strings, provider='googlev3', **kwargs):\n \"\"\"\n Geocode a set of strings and get a GeoDataFrame of the resulting points.\n\n Parameters\n ----------\n strings : list or Series of addresses to geocode\n provider : geopy geocoder to use, default 'googlev3'\n Some providers require additional arguments such as access keys\n See each geocoder's specific parameters in geopy.geocoders\n * googlev3, default\n * bing\n * google\n * yahoo\n * mapquest\n * openmapquest\n\n Ensure proper use of the results by consulting the Terms of Service for\n your provider.\n\n Geocoding requires geopy. Install it using 'pip install geopy'. See also\n https://github.com/geopy/geopy\n\n Example\n -------\n >>> df = geocode(['boston, ma', '1600 pennsylvania ave. washington, dc'])\n\n address \\\n 0 Boston, MA, USA\n 1 1600 Pennsylvania Avenue Northwest, President'...\n\n geometry\n 0 POINT (-71.0597732 42.3584308)\n 1 POINT (-77.0365305 38.8977332)\n\n \"\"\"\n return _query(strings, True, provider, **kwargs)\n\n\ndef reverse_geocode(points, provider='googlev3', **kwargs):\n \"\"\"\n Reverse geocode a set of points and get a GeoDataFrame of the resulting\n addresses.\n\n The points\n\n Parameters\n ----------\n points : list or Series of Shapely Point objects.\n x coordinate is longitude\n y coordinate is latitude\n provider : geopy geocoder to use, default 'googlev3'\n These are the same options as the geocode() function\n Some providers require additional arguments such as access keys\n See each geocoder's specific parameters in geopy.geocoders\n * googlev3, default\n * bing\n * google\n * yahoo\n * mapquest\n * openmapquest\n\n Ensure proper use of the results by consulting the Terms of Service for\n your provider.\n\n Reverse geocoding requires geopy. 
Install it using 'pip install geopy'.\n See also https://github.com/geopy/geopy\n\n Example\n -------\n >>> df = reverse_geocode([Point(-71.0594869, 42.3584697),\n Point(-77.0365305, 38.8977332)])\n\n address \\\n 0 29 Court Square, Boston, MA 02108, USA\n 1 1600 Pennsylvania Avenue Northwest, President'...\n\n geometry\n 0 POINT (-71.0594869 42.3584697)\n 1 POINT (-77.0365305 38.8977332)\n\n \"\"\"\n return _query(points, False, provider, **kwargs)\n\n\ndef _query(data, forward, provider, **kwargs):\n import geopy\n from geopy.geocoders.base import GeocoderQueryError\n\n if not isinstance(data, pd.Series):\n data = pd.Series(data)\n\n # workaround changed name in 0.96\n try:\n Yahoo = geopy.geocoders.YahooPlaceFinder\n except AttributeError:\n Yahoo = geopy.geocoders.Yahoo\n\n coders = {'googlev3': geopy.geocoders.GoogleV3,\n 'bing': geopy.geocoders.Bing,\n 'yahoo': Yahoo,\n 'mapquest': geopy.geocoders.MapQuest,\n 'openmapquest': geopy.geocoders.OpenMapQuest,\n 'nominatim': geopy.geocoders.Nominatim}\n\n if provider not in coders:\n raise ValueError('Unknown geocoding provider: {0}'.format(provider))\n\n coder = coders[provider](**kwargs)\n results = {}\n for i, s in iteritems(data):\n try:\n if forward:\n results[i] = coder.geocode(s)\n else:\n results[i] = coder.reverse((s.y, s.x), exactly_one=True)\n except (GeocoderQueryError, ValueError):\n results[i] = (None, None)\n time.sleep(_throttle_time(provider))\n\n df = _prepare_geocode_result(results)\n return df\n\n\ndef _prepare_geocode_result(results):\n \"\"\"\n Helper function for the geocode function\n\n Takes a dict where keys are index entries, values are tuples containing:\n (address, (lat, lon))\n\n \"\"\"\n # Prepare the data for the DataFrame as a dict of lists\n d = defaultdict(list)\n index = []\n\n for i, s in iteritems(results):\n address, loc = s\n\n # loc is lat, lon and we want lon, lat\n if loc is None:\n p = Point()\n else:\n p = Point(loc[1], loc[0])\n\n if address is None:\n address = np.nan\n\n d['geometry'].append(p)\n d['address'].append(address)\n index.append(i)\n\n df = gpd.GeoDataFrame(d, index=index)\n df.crs = from_epsg(4326)\n\n return df\n", "path": "geopandas/tools/geocoding.py"}]}
| 2,398 | 140 |
gh_patches_debug_11057
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-1407
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Code in variational_strategy.py
# 🐛 Bug
Dear Gpytorch Developers
Here is a possible issue with variational_strategy.py
I am wondering whether it is a special consideration or something missing. According to the formula in Line 121, it seems an L is missing in front of self.prior_distribution.mean in Line 115. I understand variable inducing_values has already included L.inv() according to whitening of the inducing posterior, but this is not the case for the prior. It is fortunate that self.prior_distribution.mean is always 0, so no errors.
Thank you for the great package
J.
## To reproduce
** Code snippet to reproduce **
```python
# Your code goes here
# Please make sure it does not require any external dependencies (other than PyTorch!)
# (We much prefer small snippets rather than links to existing libraries!)
```
** Stack trace/error message **
```
// Paste the bad output here!
```
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
## System information
**Please complete the following information:**
- <!-- GPyTorch Version (run `print(gpytorch.__version__)` -->
- <!-- PyTorch Version (run `print(torch.__version__)` -->
- <!-- Computer OS -->
## Additional context
Add any other context about the problem here.
</issue>
<code>
[start of gpytorch/variational/variational_strategy.py]
1 #!/usr/bin/env python3
2
3 import warnings
4
5 import torch
6
7 from .. import settings
8 from ..distributions import MultivariateNormal
9 from ..lazy import DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, SumLazyTensor, TriangularLazyTensor, delazify
10 from ..settings import trace_mode
11 from ..utils.cholesky import psd_safe_cholesky
12 from ..utils.errors import CachingError
13 from ..utils.memoize import cached, clear_cache_hook, pop_from_cache_ignore_args
14 from ..utils.warnings import OldVersionWarning
15 from ._variational_strategy import _VariationalStrategy
16
17
18 def _ensure_updated_strategy_flag_set(
19 state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
20 ):
21 device = state_dict[list(state_dict.keys())[0]].device
22 if prefix + "updated_strategy" not in state_dict:
23 state_dict[prefix + "updated_strategy"] = torch.tensor(False, device=device)
24 warnings.warn(
25 "You have loaded a variational GP model (using `VariationalStrategy`) from a previous version of "
26 "GPyTorch. We have updated the parameters of your model to work with the new version of "
27 "`VariationalStrategy` that uses whitened parameters.\nYour model will work as expected, but we "
28 "recommend that you re-save your model.",
29 OldVersionWarning,
30 )
31
32
33 class VariationalStrategy(_VariationalStrategy):
34 r"""
35 The standard variational strategy, as defined by `Hensman et al. (2015)`_.
36 This strategy takes a set of :math:`m \ll n` inducing points :math:`\mathbf Z`
37 and applies an approximate distribution :math:`q( \mathbf u)` over their function values.
38 (Here, we use the common notation :math:`\mathbf u = f(\mathbf Z)`.
39 The approximate function distribution for any abitrary input :math:`\mathbf X` is given by:
40
41 .. math::
42
43 q( f(\mathbf X) ) = \int p( f(\mathbf X) \mid \mathbf u) q(\mathbf u) \: d\mathbf u
44
45 This variational strategy uses "whitening" to accelerate the optimization of the variational
46 parameters. See `Matthews (2017)`_ for more info.
47
48 :param ~gpytorch.models.ApproximateGP model: Model this strategy is applied to.
49 Typically passed in when the VariationalStrategy is created in the
50 __init__ method of the user defined model.
51 :param torch.Tensor inducing_points: Tensor containing a set of inducing
52 points to use for variational inference.
53 :param ~gpytorch.variational.VariationalDistribution variational_distribution: A
54 VariationalDistribution object that represents the form of the variational distribution :math:`q(\mathbf u)`
55 :param learn_inducing_locations: (Default True): Whether or not
56 the inducing point locations :math:`\mathbf Z` should be learned (i.e. are they
57 parameters of the model).
58 :type learn_inducing_locations: `bool`, optional
59
60 .. _Hensman et al. (2015):
61 http://proceedings.mlr.press/v38/hensman15.pdf
62 .. _Matthews (2017):
63 https://www.repository.cam.ac.uk/handle/1810/278022
64 """
65
66 def __init__(self, model, inducing_points, variational_distribution, learn_inducing_locations=True):
67 super().__init__(model, inducing_points, variational_distribution, learn_inducing_locations)
68 self.register_buffer("updated_strategy", torch.tensor(True))
69 self._register_load_state_dict_pre_hook(_ensure_updated_strategy_flag_set)
70
71 @cached(name="cholesky_factor", ignore_args=True)
72 def _cholesky_factor(self, induc_induc_covar):
73 L = psd_safe_cholesky(delazify(induc_induc_covar).double(), jitter=settings.cholesky_jitter.value())
74 return TriangularLazyTensor(L)
75
76 @property
77 @cached(name="prior_distribution_memo")
78 def prior_distribution(self):
79 zeros = torch.zeros(
80 self._variational_distribution.shape(),
81 dtype=self._variational_distribution.dtype,
82 device=self._variational_distribution.device,
83 )
84 ones = torch.ones_like(zeros)
85 res = MultivariateNormal(zeros, DiagLazyTensor(ones))
86 return res
87
88 def forward(self, x, inducing_points, inducing_values, variational_inducing_covar=None, **kwargs):
89 # Compute full prior distribution
90 full_inputs = torch.cat([inducing_points, x], dim=-2)
91 full_output = self.model.forward(full_inputs, **kwargs)
92 full_covar = full_output.lazy_covariance_matrix
93
94 # Covariance terms
95 num_induc = inducing_points.size(-2)
96 test_mean = full_output.mean[..., num_induc:]
97 induc_induc_covar = full_covar[..., :num_induc, :num_induc].add_jitter()
98 induc_data_covar = full_covar[..., :num_induc, num_induc:].evaluate()
99 data_data_covar = full_covar[..., num_induc:, num_induc:]
100
101 # Compute interpolation terms
102 # K_ZZ^{-1/2} K_ZX
103 # K_ZZ^{-1/2} \mu_Z
104 L = self._cholesky_factor(induc_induc_covar)
105 if L.shape != induc_induc_covar.shape:
106 # Aggressive caching can cause nasty shape incompatibilies when evaluating with different batch shapes
107 # TODO: Use a hook fo this
108 try:
109 pop_from_cache_ignore_args(self, "cholesky_factor")
110 except CachingError:
111 pass
112 L = self._cholesky_factor(induc_induc_covar)
113 interp_term = L.inv_matmul(induc_data_covar.double()).to(full_inputs.dtype)
114
115 # Compute the mean of q(f)
116 # k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \mu_Z) + \mu_X
117 predictive_mean = (
118 torch.matmul(
119 interp_term.transpose(-1, -2), (inducing_values - self.prior_distribution.mean).unsqueeze(-1)
120 ).squeeze(-1)
121 + test_mean
122 )
123
124 # Compute the covariance of q(f)
125 # K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX
126 middle_term = self.prior_distribution.lazy_covariance_matrix.mul(-1)
127 if variational_inducing_covar is not None:
128 middle_term = SumLazyTensor(variational_inducing_covar, middle_term)
129
130 if trace_mode.on():
131 predictive_covar = (
132 data_data_covar.add_jitter(1e-4).evaluate()
133 + interp_term.transpose(-1, -2) @ middle_term.evaluate() @ interp_term
134 )
135 else:
136 predictive_covar = SumLazyTensor(
137 data_data_covar.add_jitter(1e-4),
138 MatmulLazyTensor(interp_term.transpose(-1, -2), middle_term @ interp_term),
139 )
140
141 # Return the distribution
142 return MultivariateNormal(predictive_mean, predictive_covar)
143
144 def __call__(self, x, prior=False, **kwargs):
145 if not self.updated_strategy.item() and not prior:
146 with torch.no_grad():
147 # Get unwhitened p(u)
148 prior_function_dist = self(self.inducing_points, prior=True)
149 prior_mean = prior_function_dist.loc
150 L = self._cholesky_factor(prior_function_dist.lazy_covariance_matrix.add_jitter())
151
152 # Temporarily turn off noise that's added to the mean
153 orig_mean_init_std = self._variational_distribution.mean_init_std
154 self._variational_distribution.mean_init_std = 0.0
155
156 # Change the variational parameters to be whitened
157 variational_dist = self.variational_distribution
158 mean_diff = (variational_dist.loc - prior_mean).unsqueeze(-1).double()
159 whitened_mean = L.inv_matmul(mean_diff).squeeze(-1).to(variational_dist.loc.dtype)
160 covar_root = variational_dist.lazy_covariance_matrix.root_decomposition().root.evaluate().double()
161 whitened_covar = RootLazyTensor(L.inv_matmul(covar_root).to(variational_dist.loc.dtype))
162 whitened_variational_distribution = variational_dist.__class__(whitened_mean, whitened_covar)
163 self._variational_distribution.initialize_variational_distribution(whitened_variational_distribution)
164
165 # Reset the random noise parameter of the model
166 self._variational_distribution.mean_init_std = orig_mean_init_std
167
168 # Reset the cache
169 clear_cache_hook(self)
170
171 # Mark that we have updated the variational strategy
172 self.updated_strategy.fill_(True)
173
174 return super().__call__(x, prior=prior, **kwargs)
175
[end of gpytorch/variational/variational_strategy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gpytorch/variational/variational_strategy.py b/gpytorch/variational/variational_strategy.py
--- a/gpytorch/variational/variational_strategy.py
+++ b/gpytorch/variational/variational_strategy.py
@@ -114,12 +114,7 @@
# Compute the mean of q(f)
# k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \mu_Z) + \mu_X
- predictive_mean = (
- torch.matmul(
- interp_term.transpose(-1, -2), (inducing_values - self.prior_distribution.mean).unsqueeze(-1)
- ).squeeze(-1)
- + test_mean
- )
+ predictive_mean = (interp_term.transpose(-1, -2) @ inducing_values.unsqueeze(-1)).squeeze(-1) + test_mean
# Compute the covariance of q(f)
# K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX
|
{"golden_diff": "diff --git a/gpytorch/variational/variational_strategy.py b/gpytorch/variational/variational_strategy.py\n--- a/gpytorch/variational/variational_strategy.py\n+++ b/gpytorch/variational/variational_strategy.py\n@@ -114,12 +114,7 @@\n \n # Compute the mean of q(f)\n # k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \\mu_Z) + \\mu_X\n- predictive_mean = (\n- torch.matmul(\n- interp_term.transpose(-1, -2), (inducing_values - self.prior_distribution.mean).unsqueeze(-1)\n- ).squeeze(-1)\n- + test_mean\n- )\n+ predictive_mean = (interp_term.transpose(-1, -2) @ inducing_values.unsqueeze(-1)).squeeze(-1) + test_mean\n \n # Compute the covariance of q(f)\n # K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX\n", "issue": "Code in variational_strategy.py\n# \ud83d\udc1b Bug\r\n\r\nDear Gpytorch Developers\r\n\r\nHere is a possible issue with variational_strategy.py\r\n\r\nI am wondering whether it is a special consideration or something missing. According to the formula in Line 121, it seems an L is missing in front of self.prior_distribution.mean in Line 115. I understand variable inducing_values has already included L.inv() according to whitening of the inducing posterior, but this is not the case for the prior. It is fortunate that self.prior_distribution.mean is always 0, so no errors.\r\n\r\nThank you for the great package\r\n\r\nJ.\r\n\r\n## To reproduce\r\n\r\n** Code snippet to reproduce **\r\n```python\r\n# Your code goes here\r\n# Please make sure it does not require any external dependencies (other than PyTorch!)\r\n# (We much prefer small snippets rather than links to existing libraries!)\r\n```\r\n\r\n** Stack trace/error message **\r\n```\r\n// Paste the bad output here!\r\n```\r\n\r\n## Expected Behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## System information\r\n\r\n**Please complete the following information:**\r\n- <!-- GPyTorch Version (run `print(gpytorch.__version__)` -->\r\n- <!-- PyTorch Version (run `print(torch.__version__)` -->\r\n- <!-- Computer OS -->\r\n\r\n## Additional context\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport warnings\n\nimport torch\n\nfrom .. import settings\nfrom ..distributions import MultivariateNormal\nfrom ..lazy import DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, SumLazyTensor, TriangularLazyTensor, delazify\nfrom ..settings import trace_mode\nfrom ..utils.cholesky import psd_safe_cholesky\nfrom ..utils.errors import CachingError\nfrom ..utils.memoize import cached, clear_cache_hook, pop_from_cache_ignore_args\nfrom ..utils.warnings import OldVersionWarning\nfrom ._variational_strategy import _VariationalStrategy\n\n\ndef _ensure_updated_strategy_flag_set(\n state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs\n):\n device = state_dict[list(state_dict.keys())[0]].device\n if prefix + \"updated_strategy\" not in state_dict:\n state_dict[prefix + \"updated_strategy\"] = torch.tensor(False, device=device)\n warnings.warn(\n \"You have loaded a variational GP model (using `VariationalStrategy`) from a previous version of \"\n \"GPyTorch. 
We have updated the parameters of your model to work with the new version of \"\n \"`VariationalStrategy` that uses whitened parameters.\\nYour model will work as expected, but we \"\n \"recommend that you re-save your model.\",\n OldVersionWarning,\n )\n\n\nclass VariationalStrategy(_VariationalStrategy):\n r\"\"\"\n The standard variational strategy, as defined by `Hensman et al. (2015)`_.\n This strategy takes a set of :math:`m \\ll n` inducing points :math:`\\mathbf Z`\n and applies an approximate distribution :math:`q( \\mathbf u)` over their function values.\n (Here, we use the common notation :math:`\\mathbf u = f(\\mathbf Z)`.\n The approximate function distribution for any abitrary input :math:`\\mathbf X` is given by:\n\n .. math::\n\n q( f(\\mathbf X) ) = \\int p( f(\\mathbf X) \\mid \\mathbf u) q(\\mathbf u) \\: d\\mathbf u\n\n This variational strategy uses \"whitening\" to accelerate the optimization of the variational\n parameters. See `Matthews (2017)`_ for more info.\n\n :param ~gpytorch.models.ApproximateGP model: Model this strategy is applied to.\n Typically passed in when the VariationalStrategy is created in the\n __init__ method of the user defined model.\n :param torch.Tensor inducing_points: Tensor containing a set of inducing\n points to use for variational inference.\n :param ~gpytorch.variational.VariationalDistribution variational_distribution: A\n VariationalDistribution object that represents the form of the variational distribution :math:`q(\\mathbf u)`\n :param learn_inducing_locations: (Default True): Whether or not\n the inducing point locations :math:`\\mathbf Z` should be learned (i.e. are they\n parameters of the model).\n :type learn_inducing_locations: `bool`, optional\n\n .. _Hensman et al. (2015):\n http://proceedings.mlr.press/v38/hensman15.pdf\n .. 
_Matthews (2017):\n https://www.repository.cam.ac.uk/handle/1810/278022\n \"\"\"\n\n def __init__(self, model, inducing_points, variational_distribution, learn_inducing_locations=True):\n super().__init__(model, inducing_points, variational_distribution, learn_inducing_locations)\n self.register_buffer(\"updated_strategy\", torch.tensor(True))\n self._register_load_state_dict_pre_hook(_ensure_updated_strategy_flag_set)\n\n @cached(name=\"cholesky_factor\", ignore_args=True)\n def _cholesky_factor(self, induc_induc_covar):\n L = psd_safe_cholesky(delazify(induc_induc_covar).double(), jitter=settings.cholesky_jitter.value())\n return TriangularLazyTensor(L)\n\n @property\n @cached(name=\"prior_distribution_memo\")\n def prior_distribution(self):\n zeros = torch.zeros(\n self._variational_distribution.shape(),\n dtype=self._variational_distribution.dtype,\n device=self._variational_distribution.device,\n )\n ones = torch.ones_like(zeros)\n res = MultivariateNormal(zeros, DiagLazyTensor(ones))\n return res\n\n def forward(self, x, inducing_points, inducing_values, variational_inducing_covar=None, **kwargs):\n # Compute full prior distribution\n full_inputs = torch.cat([inducing_points, x], dim=-2)\n full_output = self.model.forward(full_inputs, **kwargs)\n full_covar = full_output.lazy_covariance_matrix\n\n # Covariance terms\n num_induc = inducing_points.size(-2)\n test_mean = full_output.mean[..., num_induc:]\n induc_induc_covar = full_covar[..., :num_induc, :num_induc].add_jitter()\n induc_data_covar = full_covar[..., :num_induc, num_induc:].evaluate()\n data_data_covar = full_covar[..., num_induc:, num_induc:]\n\n # Compute interpolation terms\n # K_ZZ^{-1/2} K_ZX\n # K_ZZ^{-1/2} \\mu_Z\n L = self._cholesky_factor(induc_induc_covar)\n if L.shape != induc_induc_covar.shape:\n # Aggressive caching can cause nasty shape incompatibilies when evaluating with different batch shapes\n # TODO: Use a hook fo this\n try:\n pop_from_cache_ignore_args(self, \"cholesky_factor\")\n except CachingError:\n pass\n L = self._cholesky_factor(induc_induc_covar)\n interp_term = L.inv_matmul(induc_data_covar.double()).to(full_inputs.dtype)\n\n # Compute the mean of q(f)\n # k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \\mu_Z) + \\mu_X\n predictive_mean = (\n torch.matmul(\n interp_term.transpose(-1, -2), (inducing_values - self.prior_distribution.mean).unsqueeze(-1)\n ).squeeze(-1)\n + test_mean\n )\n\n # Compute the covariance of q(f)\n # K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX\n middle_term = self.prior_distribution.lazy_covariance_matrix.mul(-1)\n if variational_inducing_covar is not None:\n middle_term = SumLazyTensor(variational_inducing_covar, middle_term)\n\n if trace_mode.on():\n predictive_covar = (\n data_data_covar.add_jitter(1e-4).evaluate()\n + interp_term.transpose(-1, -2) @ middle_term.evaluate() @ interp_term\n )\n else:\n predictive_covar = SumLazyTensor(\n data_data_covar.add_jitter(1e-4),\n MatmulLazyTensor(interp_term.transpose(-1, -2), middle_term @ interp_term),\n )\n\n # Return the distribution\n return MultivariateNormal(predictive_mean, predictive_covar)\n\n def __call__(self, x, prior=False, **kwargs):\n if not self.updated_strategy.item() and not prior:\n with torch.no_grad():\n # Get unwhitened p(u)\n prior_function_dist = self(self.inducing_points, prior=True)\n prior_mean = prior_function_dist.loc\n L = self._cholesky_factor(prior_function_dist.lazy_covariance_matrix.add_jitter())\n\n # Temporarily turn off noise that's added to the mean\n orig_mean_init_std = 
self._variational_distribution.mean_init_std\n self._variational_distribution.mean_init_std = 0.0\n\n # Change the variational parameters to be whitened\n variational_dist = self.variational_distribution\n mean_diff = (variational_dist.loc - prior_mean).unsqueeze(-1).double()\n whitened_mean = L.inv_matmul(mean_diff).squeeze(-1).to(variational_dist.loc.dtype)\n covar_root = variational_dist.lazy_covariance_matrix.root_decomposition().root.evaluate().double()\n whitened_covar = RootLazyTensor(L.inv_matmul(covar_root).to(variational_dist.loc.dtype))\n whitened_variational_distribution = variational_dist.__class__(whitened_mean, whitened_covar)\n self._variational_distribution.initialize_variational_distribution(whitened_variational_distribution)\n\n # Reset the random noise parameter of the model\n self._variational_distribution.mean_init_std = orig_mean_init_std\n\n # Reset the cache\n clear_cache_hook(self)\n\n # Mark that we have updated the variational strategy\n self.updated_strategy.fill_(True)\n\n return super().__call__(x, prior=prior, **kwargs)\n", "path": "gpytorch/variational/variational_strategy.py"}]}
| 3,237 | 240 |
gh_patches_debug_25309
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-2479
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Monitoring visualization shows negative CPU usage with retries
**Describe the bug**
Currently, the logged CPU time in the monitoring DB is cumulative. This works for most cases, except when a task fails and is retried. As a result, multiple figures in the visualization will show a sawtooth pattern.
**To Reproduce**
Steps to reproduce the behavior, e.g.:
1. Set up a Parsl master with Python 3.6 on a cluster
2. Run all the tests and open the visualization
3. See error
**Expected behavior**
Each retried task should be on a new row.
**Possible solution**
Add a new column (e.g., `retries`) to the table as a new primary key.
Fix the visualization based on that column.
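A rough sketch of the grouping this implies, using pandas with invented numbers; the `try_id` column name is an assumption borrowed from the task tables, and the data exists only to show the sawtooth:

```python
# Illustrative only: column names mirror the monitoring tables, data is made up.
import pandas as pd

resource = pd.DataFrame({
    "task_id": [1, 1, 1, 1],
    "try_id": [0, 0, 1, 1],                            # rows 3-4 are the retried attempt
    "psutil_process_time_user": [5.0, 9.0, 1.0, 4.0],  # cumulative, resets on retry
})

# Grouping by task alone mixes the tries and yields a negative delta (the sawtooth):
print(resource.groupby("task_id")["psutil_process_time_user"].diff().dropna().tolist())
# -> [4.0, -8.0, 3.0]

# A composite (task_id, try_id) key keeps every retry on its own row:
resource["key"] = resource["task_id"].astype(str) + "-" + resource["try_id"].astype(str)
print(resource.groupby("key")["psutil_process_time_user"].diff().dropna().tolist())
# -> [4.0, 3.0]
```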
</issue>
<code>
[start of parsl/monitoring/visualization/plots/default/workflow_resource_plots.py]
1 import math
2 import numpy as np
3 import pandas as pd
4 import plotly.graph_objs as go
5 from plotly.offline import plot
6
7
8 def resource_distribution_plot(df_resources, df_task, type='psutil_process_time_user', label='CPU Time Distribution', option='avg', columns=20,):
9 # E.g., psutil_process_time_user or psutil_process_memory_percent
10
11 min_range = min(df_resources[type].astype('float'))
12 max_range = max(df_resources[type].astype('float'))
13 time_step = (max_range - min_range) / columns
14
15 if min_range == max_range:
16 x_axis = [min_range]
17 else:
18 x_axis = []
19 for i in np.arange(min_range, max_range + time_step, time_step):
20 x_axis.append(i)
21
22 apps_dict = dict()
23 for i in range(len(df_task)):
24 row = df_task.iloc[i]
25 apps_dict[row['task_id']] = []
26
27 def y_axis_setup():
28 items = [0] * len(x_axis)
29
30 for app, tasks in apps_dict.items():
31 if option == 'avg':
32 task = df_resources[df_resources['task_id'] ==
33 app][type].astype('float').mean()
34 elif option == 'max':
35 task = df_resources[df_resources['task_id'] == app][type].astype('float').max()
36
37 for i in range(len(x_axis) - 1):
38 a = task >= x_axis[i]
39 b = task < x_axis[i + 1]
40 if a and b:
41 items[i] += 1
42 break
43 if task >= x_axis[-1]:
44 items[-1] += 1
45 return items
46
47 if "memory" not in type:
48 xaxis = dict(autorange=True,
49 title='CPU user time (seconds)')
50 else:
51 xaxis = dict(autorange=True,
52 title='Memory usage (bytes)')
53 fig = go.Figure(
54 data=[go.Bar(x=x_axis,
55 y=y_axis_setup(),
56 name='tasks')],
57 layout=go.Layout(xaxis=xaxis,
58 yaxis=dict(title='Tasks'),
59 title=label + '(' + option + ')'))
60
61 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
62
63
64 def resource_time_series(tasks, type='psutil_process_time_user', label='CPU user time'):
65 tasks['epoch_time'] = (pd.to_datetime(
66 tasks['timestamp']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
67 step = int(tasks['resource_monitoring_interval'][0])
68 start = tasks['epoch_time'].min()
69 end = tasks['epoch_time'].max()
70 tasks['relative_time'] = tasks['epoch_time'] - start
71 if end != start:
72 bins = pd.cut(tasks['relative_time'],
73 range(0, end - start + 1, step),
74 include_lowest=True)
75 df = tasks.groupby(bins, as_index=False)[type].mean()
76 df['time'] = step * df.index
77 fig = go.Figure(
78 data=[go.Scatter(x=df['time'],
79 y=df[type],
80 )],
81 layout=go.Layout(xaxis=dict(autorange=True,
82 title='Time (seconds)'),
83 yaxis=dict(title=label),
84 title=label))
85 else:
86 fig = go.Figure(
87 data=[go.Scatter(x=[0],
88 y=[tasks[type].mean()],
89 )],
90 layout=go.Layout(xaxis=dict(autorange=True,
91 title='Time (seconds)'),
92 yaxis=dict(title=label),
93 title=label))
94 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
95
96
97 def worker_efficiency(task, node):
98 try:
99 node['epoch_time'] = (pd.to_datetime(
100 node['timestamp']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
101 task['epoch_time_start'] = (pd.to_datetime(
102 task['task_try_time_launched']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
103 task['epoch_time_running'] = (pd.to_datetime(
104 task['task_try_time_running']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
105 task['epoch_time_returned'] = (pd.to_datetime(
106 task['task_try_time_returned']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
107 start = int(min(task['epoch_time_start'].min(), node['epoch_time'].min()))
108 end = int(task['epoch_time_returned'].max())
109
110 worker_plot = [0] * (end - start + 1)
111 total_workers = node['worker_count'].sum()
112
113 for i, row in task.iterrows():
114 if math.isnan(row['epoch_time_running']):
115 # skip tasks with no running start time
116 continue
117 if math.isnan(row['epoch_time_returned']):
118 # there is no end time for this, so we should assume the "end" time
119 etr = end
120 else:
121 etr = int(row['epoch_time_returned'])
122 for j in range(int(row['epoch_time_running']), etr + 1):
123 worker_plot[j - start] += 1
124 fig = go.Figure(
125 data=[go.Scatter(x=list(range(0, end - start + 1)),
126 y=worker_plot,
127 name='Total busy workers',
128 ),
129 go.Scatter(x=list(range(0, end - start + 1)),
130 y=[total_workers] * (end - start + 1),
131 name='Total online workers',
132 )
133 ],
134 layout=go.Layout(xaxis=dict(autorange=True,
135 title='Time (seconds)'),
136 yaxis=dict(title='Number of workers'),
137 title="Worker efficiency"))
138 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
139 except Exception as e:
140 print(e)
141 return "The worker efficiency plot cannot be generated due to missing data."
142
143
144 def resource_efficiency(resource, node, label='CPU'):
145 try:
146 resource['epoch_time'] = (pd.to_datetime(
147 resource['timestamp']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
148 node['epoch_time'] = (pd.to_datetime(
149 node['timestamp']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
150 resource = resource.sort_values(by='epoch_time')
151 start = min(resource['epoch_time'].min(), node['epoch_time'].min())
152 end = resource['epoch_time'].max()
153 resource['relative_time'] = resource['epoch_time'] - start
154 node['relative_time'] = node['epoch_time'] - start
155
156 task_plot = [0] * (end - start + 1)
157 if label == 'CPU':
158 total = node['cpu_count'].sum()
159 elif label == 'mem':
160 total = node['total_memory'].sum() / 1024 / 1024 / 1024
161
162 resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']
163 for task_id in resource['task_id'].unique():
164 tmp = resource[resource['task_id'] == task_id]
165 tmp['last_timestamp'] = tmp['relative_time'].shift(1)
166 if label == 'CPU':
167 tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)
168 for index, row in tmp.iterrows():
169 if np.isnan(row['last_timestamp']):
170 continue
171 for i in range(int(row['last_timestamp']), int(row['relative_time'])):
172 if label == 'CPU':
173 diff = (row['total_cpu_time'] - row['last_cputime']) / (row['relative_time'] - row['last_timestamp'])
174 elif label == 'mem':
175 diff = row['psutil_process_memory_resident'] / 1024 / 1024 / 1024
176 task_plot[i] += diff
177
178 if label == 'CPU':
179 name1 = 'Used CPU cores'
180 name2 = 'Total CPU cores'
181 yaxis = 'Number of CPU cores'
182 title = 'CPU usage'
183 elif label == 'mem':
184 name1 = 'Used memory'
185 name2 = 'Total memory'
186 yaxis = 'Memory (GB)'
187 title = 'Memory usage'
188
189 fig = go.Figure(
190 data=[go.Scatter(x=list(range(0, end - start + 1)),
191 y=task_plot,
192 name=name1,
193 ),
194 go.Scatter(x=list(range(0, end - start + 1)),
195 y=[total] * (end - start + 1),
196 name=name2,
197 )
198 ],
199 layout=go.Layout(xaxis=dict(autorange=True,
200 title='Time (seconds)'),
201 yaxis=dict(title=yaxis),
202 title=title))
203 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
204 except Exception as e:
205 print(e)
206 return "The resource efficiency plot cannot be generated because of exception {}.".format(e)
207
[end of parsl/monitoring/visualization/plots/default/workflow_resource_plots.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
--- a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
+++ b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
@@ -152,6 +152,7 @@
end = resource['epoch_time'].max()
resource['relative_time'] = resource['epoch_time'] - start
node['relative_time'] = node['epoch_time'] - start
+ resource['key'] = resource['task_id'].astype(str) + "-" + resource['try_id'].astype(str)
task_plot = [0] * (end - start + 1)
if label == 'CPU':
@@ -160,8 +161,8 @@
total = node['total_memory'].sum() / 1024 / 1024 / 1024
resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']
- for task_id in resource['task_id'].unique():
- tmp = resource[resource['task_id'] == task_id]
+ for key in resource['key'].unique():
+ tmp = resource[resource['key'] == key]
tmp['last_timestamp'] = tmp['relative_time'].shift(1)
if label == 'CPU':
tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)
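For context on the `shift(1)` idiom in the surrounding code, this toy example (invented numbers) shows what the "previous sample" columns end up holding:

```python
import pandas as pd

tmp = pd.DataFrame({"relative_time": [0, 5, 10], "total_cpu_time": [0.0, 2.5, 4.0]})
tmp["last_timestamp"] = tmp["relative_time"].shift(1)  # NaN on the first row, hence the isnan() guard
tmp["last_cputime"] = tmp["total_cpu_time"].shift(1)
print(tmp)
```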
|
{"golden_diff": "diff --git a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py\n--- a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py\n+++ b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py\n@@ -152,6 +152,7 @@\n end = resource['epoch_time'].max()\n resource['relative_time'] = resource['epoch_time'] - start\n node['relative_time'] = node['epoch_time'] - start\n+ resource['key'] = resource['task_id'].astype(str) + \"-\" + resource['try_id'].astype(str)\n \n task_plot = [0] * (end - start + 1)\n if label == 'CPU':\n@@ -160,8 +161,8 @@\n total = node['total_memory'].sum() / 1024 / 1024 / 1024\n \n resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']\n- for task_id in resource['task_id'].unique():\n- tmp = resource[resource['task_id'] == task_id]\n+ for key in resource['key'].unique():\n+ tmp = resource[resource['key'] == key]\n tmp['last_timestamp'] = tmp['relative_time'].shift(1)\n if label == 'CPU':\n tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)\n", "issue": "Monitoring visualization show negative CPU usage with retries\n**Describe the bug**\r\nCurrently, the logged CPU time in monitoring db is cumulative. This works for most of the cases, except when a task is failed and retried. As a result, multiple figures in the visualization will show a sawtooth.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior, for e.g:\r\n1. Setup Parsl master with Python 3.6 on cluster\r\n2. Run all the tests and open the visualization\r\n3. See error\r\n\r\n**Expected behavior**\r\nEach retried task should be on a new row.\r\n\r\n**Possible solution**\r\nAdd a new column (e.g., `retries`) to the table as a new primary key.\r\nFix visualization based on that\r\n\n", "before_files": [{"content": "import math\nimport numpy as np\nimport pandas as pd\nimport plotly.graph_objs as go\nfrom plotly.offline import plot\n\n\ndef resource_distribution_plot(df_resources, df_task, type='psutil_process_time_user', label='CPU Time Distribution', option='avg', columns=20,):\n # E.g., psutil_process_time_user or psutil_process_memory_percent\n\n min_range = min(df_resources[type].astype('float'))\n max_range = max(df_resources[type].astype('float'))\n time_step = (max_range - min_range) / columns\n\n if min_range == max_range:\n x_axis = [min_range]\n else:\n x_axis = []\n for i in np.arange(min_range, max_range + time_step, time_step):\n x_axis.append(i)\n\n apps_dict = dict()\n for i in range(len(df_task)):\n row = df_task.iloc[i]\n apps_dict[row['task_id']] = []\n\n def y_axis_setup():\n items = [0] * len(x_axis)\n\n for app, tasks in apps_dict.items():\n if option == 'avg':\n task = df_resources[df_resources['task_id'] ==\n app][type].astype('float').mean()\n elif option == 'max':\n task = df_resources[df_resources['task_id'] == app][type].astype('float').max()\n\n for i in range(len(x_axis) - 1):\n a = task >= x_axis[i]\n b = task < x_axis[i + 1]\n if a and b:\n items[i] += 1\n break\n if task >= x_axis[-1]:\n items[-1] += 1\n return items\n\n if \"memory\" not in type:\n xaxis = dict(autorange=True,\n title='CPU user time (seconds)')\n else:\n xaxis = dict(autorange=True,\n title='Memory usage (bytes)')\n fig = go.Figure(\n data=[go.Bar(x=x_axis,\n y=y_axis_setup(),\n name='tasks')],\n layout=go.Layout(xaxis=xaxis,\n yaxis=dict(title='Tasks'),\n title=label + '(' + option + ')'))\n\n return plot(fig, 
show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef resource_time_series(tasks, type='psutil_process_time_user', label='CPU user time'):\n tasks['epoch_time'] = (pd.to_datetime(\n tasks['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n step = int(tasks['resource_monitoring_interval'][0])\n start = tasks['epoch_time'].min()\n end = tasks['epoch_time'].max()\n tasks['relative_time'] = tasks['epoch_time'] - start\n if end != start:\n bins = pd.cut(tasks['relative_time'],\n range(0, end - start + 1, step),\n include_lowest=True)\n df = tasks.groupby(bins, as_index=False)[type].mean()\n df['time'] = step * df.index\n fig = go.Figure(\n data=[go.Scatter(x=df['time'],\n y=df[type],\n )],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=label),\n title=label))\n else:\n fig = go.Figure(\n data=[go.Scatter(x=[0],\n y=[tasks[type].mean()],\n )],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=label),\n title=label))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef worker_efficiency(task, node):\n try:\n node['epoch_time'] = (pd.to_datetime(\n node['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_start'] = (pd.to_datetime(\n task['task_try_time_launched']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_running'] = (pd.to_datetime(\n task['task_try_time_running']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_returned'] = (pd.to_datetime(\n task['task_try_time_returned']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n start = int(min(task['epoch_time_start'].min(), node['epoch_time'].min()))\n end = int(task['epoch_time_returned'].max())\n\n worker_plot = [0] * (end - start + 1)\n total_workers = node['worker_count'].sum()\n\n for i, row in task.iterrows():\n if math.isnan(row['epoch_time_running']):\n # skip tasks with no running start time\n continue\n if math.isnan(row['epoch_time_returned']):\n # there is no end time for this, so we should assume the \"end\" time\n etr = end\n else:\n etr = int(row['epoch_time_returned'])\n for j in range(int(row['epoch_time_running']), etr + 1):\n worker_plot[j - start] += 1\n fig = go.Figure(\n data=[go.Scatter(x=list(range(0, end - start + 1)),\n y=worker_plot,\n name='Total busy workers',\n ),\n go.Scatter(x=list(range(0, end - start + 1)),\n y=[total_workers] * (end - start + 1),\n name='Total online workers',\n )\n ],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title='Number of workers'),\n title=\"Worker efficiency\"))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n except Exception as e:\n print(e)\n return \"The worker efficiency plot cannot be generated due to missing data.\"\n\n\ndef resource_efficiency(resource, node, label='CPU'):\n try:\n resource['epoch_time'] = (pd.to_datetime(\n resource['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n node['epoch_time'] = (pd.to_datetime(\n node['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n resource = resource.sort_values(by='epoch_time')\n start = min(resource['epoch_time'].min(), node['epoch_time'].min())\n end = resource['epoch_time'].max()\n resource['relative_time'] = resource['epoch_time'] - start\n node['relative_time'] = node['epoch_time'] - start\n\n task_plot = [0] * (end - start + 1)\n if label == 'CPU':\n total = 
node['cpu_count'].sum()\n elif label == 'mem':\n total = node['total_memory'].sum() / 1024 / 1024 / 1024\n\n resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']\n for task_id in resource['task_id'].unique():\n tmp = resource[resource['task_id'] == task_id]\n tmp['last_timestamp'] = tmp['relative_time'].shift(1)\n if label == 'CPU':\n tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)\n for index, row in tmp.iterrows():\n if np.isnan(row['last_timestamp']):\n continue\n for i in range(int(row['last_timestamp']), int(row['relative_time'])):\n if label == 'CPU':\n diff = (row['total_cpu_time'] - row['last_cputime']) / (row['relative_time'] - row['last_timestamp'])\n elif label == 'mem':\n diff = row['psutil_process_memory_resident'] / 1024 / 1024 / 1024\n task_plot[i] += diff\n\n if label == 'CPU':\n name1 = 'Used CPU cores'\n name2 = 'Total CPU cores'\n yaxis = 'Number of CPU cores'\n title = 'CPU usage'\n elif label == 'mem':\n name1 = 'Used memory'\n name2 = 'Total memory'\n yaxis = 'Memory (GB)'\n title = 'Memory usage'\n\n fig = go.Figure(\n data=[go.Scatter(x=list(range(0, end - start + 1)),\n y=task_plot,\n name=name1,\n ),\n go.Scatter(x=list(range(0, end - start + 1)),\n y=[total] * (end - start + 1),\n name=name2,\n )\n ],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=yaxis),\n title=title))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n except Exception as e:\n print(e)\n return \"The resource efficiency plot cannot be generated because of exception {}.\".format(e)\n", "path": "parsl/monitoring/visualization/plots/default/workflow_resource_plots.py"}]}
| 3,259 | 342 |
gh_patches_debug_30742
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-265
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
More filters for table list API
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
The table list API should allow filtering. For example, we might want to get the list of all tables in a schema to see if the table the user is trying to create already exists in that schema.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
The table list endpoint should support filtering by:
- schema
- before/after: created, last updated
- whether the import was verified
**Additional context**
<!-- Add any other context or screenshots about the feature request here.-->
We should use `django-filter` since it integrates with DRF and makes setting up filters easy.
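A sketch of what such a filter set might look like, reusing the `CharInFilter` helper and the `django_property_filter` classes already present in the codebase. Field names come from the `Table` model above; the exact filter classes and lookups are an assumption, not a final design:

```python
# Sketch only: filters for the Table list endpoint.
from django_property_filter import (
    PropertyFilterSet,
    PropertyBooleanFilter,
    PropertyDateTimeFromToRangeFilter,
)

from mathesar.filters import CharInFilter
from mathesar.models import Table


class TableFilter(PropertyFilterSet):
    name = CharInFilter(field_name='name', lookup_expr='in')
    schema = CharInFilter(field_name='schema__name', lookup_expr='in')
    created = PropertyDateTimeFromToRangeFilter(field_name='created_at')
    updated = PropertyDateTimeFromToRangeFilter(field_name='updated_at')
    import_verified = PropertyBooleanFilter(field_name='import_verified')

    class Meta:
        model = Table
        fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']
```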
</issue>
<code>
[start of mathesar/models.py]
1 from django.contrib.auth.models import User
2 from django.core.cache import cache
3 from django.db import models
4 from django.utils.functional import cached_property
5
6 from mathesar.database.base import create_mathesar_engine
7 from mathesar.utils import models as model_utils
8 from db import tables, records, schemas
9
10 NAME_CACHE_INTERVAL = 60 * 5
11
12
13 class BaseModel(models.Model):
14 created_at = models.DateTimeField(auto_now_add=True)
15 updated_at = models.DateTimeField(auto_now=True)
16
17 class Meta:
18 abstract = True
19
20
21 class DatabaseObject(BaseModel):
22 oid = models.IntegerField()
23
24 class Meta:
25 abstract = True
26
27 def __str__(self):
28 return f"{self.__class__.__name__}: {self.oid}"
29
30
31 class Schema(DatabaseObject):
32 database = models.CharField(max_length=128)
33
34 @cached_property
35 def _sa_engine(self):
36 # We're caching this since the engine is used frequently.
37 return create_mathesar_engine(self.database)
38
39 @cached_property
40 def name(self):
41 cache_key = f"{self.database}_schema_name_{self.oid}"
42 try:
43 schema_name = cache.get(cache_key)
44 if schema_name is None:
45 schema_name = schemas.get_schema_name_from_oid(
46 self.oid, self._sa_engine
47 )
48 cache.set(cache_key, schema_name, NAME_CACHE_INTERVAL)
49 return schema_name
50 # We catch this error, since it lets us decouple the cadence of
51 # overall DB reflection from the cadence of cache expiration for
52 # schema names. Also, it makes it obvious when the DB layer has
53 # been altered, as opposed to other reasons for a 404 when
54 # requesting a schema.
55 except TypeError:
56 return 'MISSING'
57
58
59 class Table(DatabaseObject):
60 schema = models.ForeignKey('Schema', on_delete=models.CASCADE,
61 related_name='tables')
62 import_verified = models.BooleanField(blank=True, null=True)
63
64 @cached_property
65 def _sa_table(self):
66 try:
67 table = tables.reflect_table_from_oid(
68 self.oid, self.schema._sa_engine,
69 )
70 # We catch this error, since it lets us decouple the cadence of
71 # overall DB reflection from the cadence of cache expiration for
72 # table names. Also, it makes it obvious when the DB layer has
73 # been altered, as opposed to other reasons for a 404 when
74 # requesting a table.
75 except TypeError:
76 table = tables.create_empty_table("MISSING")
77 return table
78
79 @cached_property
80 def name(self):
81 return self._sa_table.name
82
83 @property
84 def sa_columns(self):
85 return self._sa_table.columns
86
87 @property
88 def sa_column_names(self):
89 return self.sa_columns.keys()
90
91 @property
92 def sa_num_records(self):
93 return tables.get_count(self._sa_table, self.schema._sa_engine)
94
95 @property
96 def sa_all_records(self):
97 return records.get_records(self._sa_table, self.schema._sa_engine)
98
99 def get_record(self, id_value):
100 return records.get_record(self._sa_table, self.schema._sa_engine, id_value)
101
102 def get_records(self, limit=None, offset=None):
103 return records.get_records(self._sa_table, self.schema._sa_engine, limit, offset)
104
105 def create_record_or_records(self, record_data):
106 return records.create_record_or_records(self._sa_table, self.schema._sa_engine, record_data)
107
108 def update_record(self, id_value, record_data):
109 return records.update_record(self._sa_table, self.schema._sa_engine, id_value, record_data)
110
111 def delete_record(self, id_value):
112 return records.delete_record(self._sa_table, self.schema._sa_engine, id_value)
113
114
115 class DataFile(BaseModel):
116 file = models.FileField(
117 upload_to=model_utils.user_directory_path,
118 )
119 user = models.ForeignKey(User, blank=True, null=True, on_delete=models.CASCADE)
120 table_imported_to = models.ForeignKey(Table, related_name="data_files", blank=True,
121 null=True, on_delete=models.SET_NULL)
122 delimiter = models.CharField(max_length=1, default=',', blank=True)
123 escapechar = models.CharField(max_length=1, blank=True)
124 quotechar = models.CharField(max_length=1, default='"', blank=True)
125
[end of mathesar/models.py]
[start of mathesar/filters.py]
1 from django_property_filter import (
2 PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,
3 )
4
5 from mathesar.models import Schema, Table
6
7
8 class CharInFilter(PropertyBaseInFilter, PropertyCharFilter):
9 pass
10
11
12 class SchemaFilter(PropertyFilterSet):
13 database = CharInFilter(field_name='database', lookup_expr='in')
14 name = CharInFilter(field_name='name', lookup_expr='in')
15
16 class Meta:
17 model = Schema
18 fields = ['database', 'name']
19
20
21 class TableFilter(PropertyFilterSet):
22 name = CharInFilter(field_name='name', lookup_expr='in')
23
24 class Meta:
25 model = Table
26 fields = ['name']
27
[end of mathesar/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mathesar/filters.py b/mathesar/filters.py
--- a/mathesar/filters.py
+++ b/mathesar/filters.py
@@ -1,5 +1,6 @@
from django_property_filter import (
PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,
+ PropertyDateTimeFromToRangeFilter, PropertyBooleanFilter
)
from mathesar.models import Schema, Table
@@ -20,7 +21,13 @@
class TableFilter(PropertyFilterSet):
name = CharInFilter(field_name='name', lookup_expr='in')
+ schema = CharInFilter(field_name='schema__name', lookup_expr='in')
+ created = PropertyDateTimeFromToRangeFilter(field_name='created_at')
+ updated = PropertyDateTimeFromToRangeFilter(field_name='updated_at')
+ import_verified = PropertyBooleanFilter(field_name='import_verified')
+ not_imported = PropertyBooleanFilter(lookup_expr="isnull",
+ field_name='import_verified')
class Meta:
model = Table
- fields = ['name']
+ fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']
diff --git a/mathesar/models.py b/mathesar/models.py
--- a/mathesar/models.py
+++ b/mathesar/models.py
@@ -28,13 +28,21 @@
return f"{self.__class__.__name__}: {self.oid}"
+# TODO: Replace with a proper form of caching
+# See: https://github.com/centerofci/mathesar/issues/280
+_engine = None
+
+
class Schema(DatabaseObject):
database = models.CharField(max_length=128)
- @cached_property
+ @property
def _sa_engine(self):
+ global _engine
# We're caching this since the engine is used frequently.
- return create_mathesar_engine(self.database)
+ if _engine is None:
+ _engine = create_mathesar_engine(self.database)
+ return _engine
@cached_property
def name(self):
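The `models.py` hunk trades the per-instance `cached_property` for a module-level singleton so that every `Schema` shares one engine. A minimal standalone illustration of that caching pattern (names are illustrative):

```python
# Minimal illustration of the module-level cache introduced above.
_engine = None

def get_engine(create=lambda: object()):
    """Create the engine once and hand the same instance to every caller."""
    global _engine
    if _engine is None:
        _engine = create()
    return _engine

assert get_engine() is get_engine()  # one shared engine per process
```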
|
{"golden_diff": "diff --git a/mathesar/filters.py b/mathesar/filters.py\n--- a/mathesar/filters.py\n+++ b/mathesar/filters.py\n@@ -1,5 +1,6 @@\n from django_property_filter import (\n PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,\n+ PropertyDateTimeFromToRangeFilter, PropertyBooleanFilter\n )\n \n from mathesar.models import Schema, Table\n@@ -20,7 +21,13 @@\n \n class TableFilter(PropertyFilterSet):\n name = CharInFilter(field_name='name', lookup_expr='in')\n+ schema = CharInFilter(field_name='schema__name', lookup_expr='in')\n+ created = PropertyDateTimeFromToRangeFilter(field_name='created_at')\n+ updated = PropertyDateTimeFromToRangeFilter(field_name='updated_at')\n+ import_verified = PropertyBooleanFilter(field_name='import_verified')\n+ not_imported = PropertyBooleanFilter(lookup_expr=\"isnull\",\n+ field_name='import_verified')\n \n class Meta:\n model = Table\n- fields = ['name']\n+ fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']\ndiff --git a/mathesar/models.py b/mathesar/models.py\n--- a/mathesar/models.py\n+++ b/mathesar/models.py\n@@ -28,13 +28,21 @@\n return f\"{self.__class__.__name__}: {self.oid}\"\n \n \n+# TODO: Replace with a proper form of caching\n+# See: https://github.com/centerofci/mathesar/issues/280\n+_engine = None\n+\n+\n class Schema(DatabaseObject):\n database = models.CharField(max_length=128)\n \n- @cached_property\n+ @property\n def _sa_engine(self):\n+ global _engine\n # We're caching this since the engine is used frequently.\n- return create_mathesar_engine(self.database)\n+ if _engine is None:\n+ _engine = create_mathesar_engine(self.database)\n+ return _engine\n \n @cached_property\n def name(self):\n", "issue": "More filters for table list API\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nThe table list API should allow filtering. For example, we might want to get the list of all tables in a schema to see if the table the user is trying to create already exists in that schema.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. 
-->\r\nThe table list endpoint should support filtering by:\r\n- schema\r\n- before/after: created, last updated\r\n- whether the import was verified\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\nWe should use `django-filter` since it integrates with DRF and makes setting up filters easy.\n", "before_files": [{"content": "from django.contrib.auth.models import User\nfrom django.core.cache import cache\nfrom django.db import models\nfrom django.utils.functional import cached_property\n\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.utils import models as model_utils\nfrom db import tables, records, schemas\n\nNAME_CACHE_INTERVAL = 60 * 5\n\n\nclass BaseModel(models.Model):\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n\n class Meta:\n abstract = True\n\n\nclass DatabaseObject(BaseModel):\n oid = models.IntegerField()\n\n class Meta:\n abstract = True\n\n def __str__(self):\n return f\"{self.__class__.__name__}: {self.oid}\"\n\n\nclass Schema(DatabaseObject):\n database = models.CharField(max_length=128)\n\n @cached_property\n def _sa_engine(self):\n # We're caching this since the engine is used frequently.\n return create_mathesar_engine(self.database)\n\n @cached_property\n def name(self):\n cache_key = f\"{self.database}_schema_name_{self.oid}\"\n try:\n schema_name = cache.get(cache_key)\n if schema_name is None:\n schema_name = schemas.get_schema_name_from_oid(\n self.oid, self._sa_engine\n )\n cache.set(cache_key, schema_name, NAME_CACHE_INTERVAL)\n return schema_name\n # We catch this error, since it lets us decouple the cadence of\n # overall DB reflection from the cadence of cache expiration for\n # schema names. Also, it makes it obvious when the DB layer has\n # been altered, as opposed to other reasons for a 404 when\n # requesting a schema.\n except TypeError:\n return 'MISSING'\n\n\nclass Table(DatabaseObject):\n schema = models.ForeignKey('Schema', on_delete=models.CASCADE,\n related_name='tables')\n import_verified = models.BooleanField(blank=True, null=True)\n\n @cached_property\n def _sa_table(self):\n try:\n table = tables.reflect_table_from_oid(\n self.oid, self.schema._sa_engine,\n )\n # We catch this error, since it lets us decouple the cadence of\n # overall DB reflection from the cadence of cache expiration for\n # table names. 
Also, it makes it obvious when the DB layer has\n # been altered, as opposed to other reasons for a 404 when\n # requesting a table.\n except TypeError:\n table = tables.create_empty_table(\"MISSING\")\n return table\n\n @cached_property\n def name(self):\n return self._sa_table.name\n\n @property\n def sa_columns(self):\n return self._sa_table.columns\n\n @property\n def sa_column_names(self):\n return self.sa_columns.keys()\n\n @property\n def sa_num_records(self):\n return tables.get_count(self._sa_table, self.schema._sa_engine)\n\n @property\n def sa_all_records(self):\n return records.get_records(self._sa_table, self.schema._sa_engine)\n\n def get_record(self, id_value):\n return records.get_record(self._sa_table, self.schema._sa_engine, id_value)\n\n def get_records(self, limit=None, offset=None):\n return records.get_records(self._sa_table, self.schema._sa_engine, limit, offset)\n\n def create_record_or_records(self, record_data):\n return records.create_record_or_records(self._sa_table, self.schema._sa_engine, record_data)\n\n def update_record(self, id_value, record_data):\n return records.update_record(self._sa_table, self.schema._sa_engine, id_value, record_data)\n\n def delete_record(self, id_value):\n return records.delete_record(self._sa_table, self.schema._sa_engine, id_value)\n\n\nclass DataFile(BaseModel):\n file = models.FileField(\n upload_to=model_utils.user_directory_path,\n )\n user = models.ForeignKey(User, blank=True, null=True, on_delete=models.CASCADE)\n table_imported_to = models.ForeignKey(Table, related_name=\"data_files\", blank=True,\n null=True, on_delete=models.SET_NULL)\n delimiter = models.CharField(max_length=1, default=',', blank=True)\n escapechar = models.CharField(max_length=1, blank=True)\n quotechar = models.CharField(max_length=1, default='\"', blank=True)\n", "path": "mathesar/models.py"}, {"content": "from django_property_filter import (\n PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,\n)\n\nfrom mathesar.models import Schema, Table\n\n\nclass CharInFilter(PropertyBaseInFilter, PropertyCharFilter):\n pass\n\n\nclass SchemaFilter(PropertyFilterSet):\n database = CharInFilter(field_name='database', lookup_expr='in')\n name = CharInFilter(field_name='name', lookup_expr='in')\n\n class Meta:\n model = Schema\n fields = ['database', 'name']\n\n\nclass TableFilter(PropertyFilterSet):\n name = CharInFilter(field_name='name', lookup_expr='in')\n\n class Meta:\n model = Table\n fields = ['name']\n", "path": "mathesar/filters.py"}]}
| 2,114 | 448 |
gh_patches_debug_19797
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-4220
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ddtrace.opentracer` incorrectly raises `SpanContextCorruptedException` on `extract` of missing span context
The documentation for `SpanContextCorruptedException` [says](https://opentracing-python.readthedocs.io/en/1.3.0/api.html#opentracing.SpanContextCorruptedException):
> SpanContextCorruptedException should be used when the underlying span context state is seemingly present but not well-formed.
`ddtrace.opentracer`'s `extract` throws an error whenever it fails to recover a span context, whether it was malformed or simply missing. This completely breaks the normal pattern of "I received an HTTP request, so I'll throw the headers at `extract` and pass the result to `child_of` for my new span, expecting to get `None` and therefore make a new root span if I was called without tracing info".
### Which version of dd-trace-py are you using?
Python 3.7
ddtrace 0.46.0
### How can we reproduce your problem?
```py
In [1]: from opentracing import Format
In [2]: from ddtrace.opentracer import Tracer
In [3]: tracer = Tracer()
In [4]: tracer.extract(Format.HTTP_HEADERS, {})
---------------------------------------------------------------------------
SpanContextCorruptedException Traceback (most recent call last)
<ipython-input-4-f497fe0c23a2> in <module>
----> 1 tracer.extract(Format.HTTP_HEADERS, {})
~/projects/granular/analysis/analysis-api/.venv/lib/python3.7/site-packages/ddtrace/opentracer/tracer.py in extract(self, format, carrier)
326 # we have to manually activate the returned context from a distributed
327 # trace
--> 328 ot_span_ctx = propagator.extract(carrier)
329 dd_span_ctx = ot_span_ctx._dd_context
330 self._dd_tracer.context_provider.activate(dd_span_ctx)
~/projects/granular/analysis/analysis-api/.venv/lib/python3.7/site-packages/ddtrace/opentracer/propagation/http.py in extract(self, carrier)
70 # if this occurs.
71 if not ddspan_ctx.trace_id:
---> 72 raise SpanContextCorruptedException("failed to extract span context")
73
74 baggage = {}
SpanContextCorruptedException: failed to extract span context
```
### What is the result that you expected?
I expect to get a clean `None` with no error if no DataDog span context material was present. See Jaeger:
```py
In [1]: from opentracing import Format
In [2]: import jaeger_client
In [3]: tracer = jaeger_client.Config({"service_name": "foo"}).initialize_tracer()
In [4]: tracer.extract(Format.HTTP_HEADERS, {})
In [5]: print(tracer.extract(Format.HTTP_HEADERS, {}))
None
```
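The "pass the result to `child_of`" pattern described above looks roughly like this on the server side. This is a sketch only: the helper name is made up and `headers` stands for the incoming request headers:

```python
from opentracing import Format, SpanContextCorruptedException

def start_request_span(tracer, headers):
    """Start a server-side span, falling back to a new root span when no context arrives."""
    try:
        parent = tracer.extract(Format.HTTP_HEADERS, headers)
    except SpanContextCorruptedException:
        parent = None  # treat a malformed context the same as a missing one
    # child_of=None simply starts a new root span.
    return tracer.start_span("http.request", child_of=parent)
```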
</issue>
<code>
[start of ddtrace/opentracer/propagation/http.py]
1 from typing import Dict
2
3 from opentracing import InvalidCarrierException
4 from opentracing import SpanContextCorruptedException
5
6 from ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator
7
8 from ...internal.logger import get_logger
9 from ..span_context import SpanContext
10 from .propagator import Propagator
11
12
13 log = get_logger(__name__)
14
15 HTTP_BAGGAGE_PREFIX = "ot-baggage-"
16 HTTP_BAGGAGE_PREFIX_LEN = len(HTTP_BAGGAGE_PREFIX)
17
18
19 class HTTPPropagator(Propagator):
20 """OpenTracing compatible HTTP_HEADER and TEXT_MAP format propagator.
21
22 `HTTPPropagator` provides compatibility by using existing OpenTracing
23 compatible methods from the ddtracer along with new logic supporting the
24 outstanding OpenTracing-defined functionality.
25 """
26
27 @staticmethod
28 def inject(span_context, carrier):
29 # type: (SpanContext, Dict[str, str]) -> None
30 """Inject a span context into a carrier.
31
32 *span_context* is injected into the carrier by first using an
33 :class:`ddtrace.propagation.http.HTTPPropagator` to inject the ddtracer
34 specific fields.
35
36 Then the baggage is injected into *carrier*.
37
38 :param span_context: span context to inject.
39
40 :param carrier: carrier to inject into.
41 """
42 if not isinstance(carrier, dict):
43 raise InvalidCarrierException("propagator expects carrier to be a dict")
44
45 DDHTTPPropagator.inject(span_context._dd_context, carrier)
46
47 # Add the baggage
48 if span_context.baggage is not None:
49 for key in span_context.baggage:
50 carrier[HTTP_BAGGAGE_PREFIX + key] = span_context.baggage[key]
51
52 @staticmethod
53 def extract(carrier):
54 # type: (Dict[str, str]) -> SpanContext
55 """Extract a span context from a carrier.
56
57 :class:`ddtrace.propagation.http.HTTPPropagator` is used to extract
58 ddtracer supported fields into a `ddtrace.Context` context which is
59 combined with new logic to extract the baggage which is returned in an
60 OpenTracing compatible span context.
61
62 :param carrier: carrier to extract from.
63
64 :return: extracted span context.
65 """
66 if not isinstance(carrier, dict):
67 raise InvalidCarrierException("propagator expects carrier to be a dict")
68
69 ddspan_ctx = DDHTTPPropagator.extract(carrier)
70
71 # if the dd propagator fails then it will return a new empty span
72 # context (with trace_id=None), we however want to raise an exception
73 # if this occurs.
74 if not ddspan_ctx.trace_id:
75 raise SpanContextCorruptedException("failed to extract span context")
76
77 baggage = {}
78 for key in carrier:
79 if key.startswith(HTTP_BAGGAGE_PREFIX):
80 baggage[key[HTTP_BAGGAGE_PREFIX_LEN:]] = carrier[key]
81
82 return SpanContext(ddcontext=ddspan_ctx, baggage=baggage)
83
[end of ddtrace/opentracer/propagation/http.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/opentracer/propagation/http.py b/ddtrace/opentracer/propagation/http.py
--- a/ddtrace/opentracer/propagation/http.py
+++ b/ddtrace/opentracer/propagation/http.py
@@ -1,7 +1,6 @@
from typing import Dict
from opentracing import InvalidCarrierException
-from opentracing import SpanContextCorruptedException
from ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator
@@ -67,13 +66,6 @@
raise InvalidCarrierException("propagator expects carrier to be a dict")
ddspan_ctx = DDHTTPPropagator.extract(carrier)
-
- # if the dd propagator fails then it will return a new empty span
- # context (with trace_id=None), we however want to raise an exception
- # if this occurs.
- if not ddspan_ctx.trace_id:
- raise SpanContextCorruptedException("failed to extract span context")
-
baggage = {}
for key in carrier:
if key.startswith(HTTP_BAGGAGE_PREFIX):
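After this change, extracting from an empty carrier should no longer raise. A sketch of the resulting behaviour, assuming a default `Tracer()` can be constructed in the environment:

```python
from opentracing import Format
from ddtrace.opentracer import Tracer

tracer = Tracer()
ctx = tracer.extract(Format.HTTP_HEADERS, {})  # previously raised SpanContextCorruptedException
print(ctx)  # now an OpenTracing span context wrapping an empty ddtrace context
```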
|
{"golden_diff": "diff --git a/ddtrace/opentracer/propagation/http.py b/ddtrace/opentracer/propagation/http.py\n--- a/ddtrace/opentracer/propagation/http.py\n+++ b/ddtrace/opentracer/propagation/http.py\n@@ -1,7 +1,6 @@\n from typing import Dict\n \n from opentracing import InvalidCarrierException\n-from opentracing import SpanContextCorruptedException\n \n from ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator\n \n@@ -67,13 +66,6 @@\n raise InvalidCarrierException(\"propagator expects carrier to be a dict\")\n \n ddspan_ctx = DDHTTPPropagator.extract(carrier)\n-\n- # if the dd propagator fails then it will return a new empty span\n- # context (with trace_id=None), we however want to raise an exception\n- # if this occurs.\n- if not ddspan_ctx.trace_id:\n- raise SpanContextCorruptedException(\"failed to extract span context\")\n-\n baggage = {}\n for key in carrier:\n if key.startswith(HTTP_BAGGAGE_PREFIX):\n", "issue": "`ddtrace.opentracer` incorrectly raises `SpanContextCorruptedException` on `extract` of missing span context\nThe documentation for `SpanContextCorruptedException` [says](https://opentracing-python.readthedocs.io/en/1.3.0/api.html#opentracing.SpanContextCorruptedException):\r\n\r\n> SpanContextCorruptedException should be used when the underlying span context state is seemingly present but not well-formed.\r\n\r\n`ddtrace.opentracer`'s `extract` is throwing an error whenever it fails to recover a span, whether or not it was malformed or simply missing. This completely breaks the normal pattern of \"I received an HTTP request, so I'll throw the headers at `extract` and pass the result to `child_of` for my new span, expecting to get `None` and therefore make a new root span if I was called without tracing info\".\r\n\r\n### Which version of dd-trace-py are you using?\r\n\r\nPython 3.7\r\nddtrace 0.46.0\r\n\r\n### How can we reproduce your problem?\r\n\r\n```py\r\nIn [1]: from opentracing import Format\r\n\r\nIn [2]: from ddtrace.opentracer import Tracer\r\n\r\nIn [3]: tracer = Tracer()\r\n\r\nIn [4]: tracer.extract(Format.HTTP_HEADERS, {})\r\n---------------------------------------------------------------------------\r\nSpanContextCorruptedException Traceback (most recent call last)\r\n<ipython-input-4-f497fe0c23a2> in <module>\r\n----> 1 tracer.extract(Format.HTTP_HEADERS, {})\r\n\r\n~/projects/granular/analysis/analysis-api/.venv/lib/python3.7/site-packages/ddtrace/opentracer/tracer.py in extract(self, format, carrier)\r\n 326 # we have to manually activate the returned context from a distributed\r\n 327 # trace\r\n--> 328 ot_span_ctx = propagator.extract(carrier)\r\n 329 dd_span_ctx = ot_span_ctx._dd_context\r\n 330 self._dd_tracer.context_provider.activate(dd_span_ctx)\r\n\r\n~/projects/granular/analysis/analysis-api/.venv/lib/python3.7/site-packages/ddtrace/opentracer/propagation/http.py in extract(self, carrier)\r\n 70 # if this occurs.\r\n 71 if not ddspan_ctx.trace_id:\r\n---> 72 raise SpanContextCorruptedException(\"failed to extract span context\")\r\n 73 \r\n 74 baggage = {}\r\n\r\nSpanContextCorruptedException: failed to extract span context\r\n```\r\n\r\n### What is the result that you expected?\r\n\r\nI expect to get a clean `None` with no error if no DataDog span context material was present. 
See Jaeger:\r\n\r\n```py\r\nIn [1]: from opentracing import Format\r\n\r\nIn [2]: import jaeger_client\r\n\r\nIn [3]: tracer = jaeger_client.Config({\"service_name\": \"foo\"}).initialize_tracer()\r\n\r\nIn [4]: tracer.extract(Format.HTTP_HEADERS, {})\r\n\r\nIn [5]: print(tracer.extract(Format.HTTP_HEADERS, {}))\r\nNone\r\n```\r\n\n", "before_files": [{"content": "from typing import Dict\n\nfrom opentracing import InvalidCarrierException\nfrom opentracing import SpanContextCorruptedException\n\nfrom ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator\n\nfrom ...internal.logger import get_logger\nfrom ..span_context import SpanContext\nfrom .propagator import Propagator\n\n\nlog = get_logger(__name__)\n\nHTTP_BAGGAGE_PREFIX = \"ot-baggage-\"\nHTTP_BAGGAGE_PREFIX_LEN = len(HTTP_BAGGAGE_PREFIX)\n\n\nclass HTTPPropagator(Propagator):\n \"\"\"OpenTracing compatible HTTP_HEADER and TEXT_MAP format propagator.\n\n `HTTPPropagator` provides compatibility by using existing OpenTracing\n compatible methods from the ddtracer along with new logic supporting the\n outstanding OpenTracing-defined functionality.\n \"\"\"\n\n @staticmethod\n def inject(span_context, carrier):\n # type: (SpanContext, Dict[str, str]) -> None\n \"\"\"Inject a span context into a carrier.\n\n *span_context* is injected into the carrier by first using an\n :class:`ddtrace.propagation.http.HTTPPropagator` to inject the ddtracer\n specific fields.\n\n Then the baggage is injected into *carrier*.\n\n :param span_context: span context to inject.\n\n :param carrier: carrier to inject into.\n \"\"\"\n if not isinstance(carrier, dict):\n raise InvalidCarrierException(\"propagator expects carrier to be a dict\")\n\n DDHTTPPropagator.inject(span_context._dd_context, carrier)\n\n # Add the baggage\n if span_context.baggage is not None:\n for key in span_context.baggage:\n carrier[HTTP_BAGGAGE_PREFIX + key] = span_context.baggage[key]\n\n @staticmethod\n def extract(carrier):\n # type: (Dict[str, str]) -> SpanContext\n \"\"\"Extract a span context from a carrier.\n\n :class:`ddtrace.propagation.http.HTTPPropagator` is used to extract\n ddtracer supported fields into a `ddtrace.Context` context which is\n combined with new logic to extract the baggage which is returned in an\n OpenTracing compatible span context.\n\n :param carrier: carrier to extract from.\n\n :return: extracted span context.\n \"\"\"\n if not isinstance(carrier, dict):\n raise InvalidCarrierException(\"propagator expects carrier to be a dict\")\n\n ddspan_ctx = DDHTTPPropagator.extract(carrier)\n\n # if the dd propagator fails then it will return a new empty span\n # context (with trace_id=None), we however want to raise an exception\n # if this occurs.\n if not ddspan_ctx.trace_id:\n raise SpanContextCorruptedException(\"failed to extract span context\")\n\n baggage = {}\n for key in carrier:\n if key.startswith(HTTP_BAGGAGE_PREFIX):\n baggage[key[HTTP_BAGGAGE_PREFIX_LEN:]] = carrier[key]\n\n return SpanContext(ddcontext=ddspan_ctx, baggage=baggage)\n", "path": "ddtrace/opentracer/propagation/http.py"}]}
| 2,023 | 240 |
gh_patches_debug_11726
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-5842
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scrape_document with trailing slash fails
```
▶ docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS/ --depth=2
/usr/local/lib/python2.7/site-packages/django/utils/encoding.py:6: ImportWarning: Not importing directory '/app/locale': missing __init__.py
import locale
/usr/local/lib/python2.7/site-packages/django/template/backends/jinja2.py:6: ImportWarning: Not importing directory '/app/jinja2': missing __init__.py
import jinja2
/app/kuma/core/i18n.py:18: ImportWarning: Not importing directory '/app/kuma/core/jinja2': missing __init__.py
from jinja2 import nodes
/usr/local/lib/python2.7/site-packages/decorator_include.py:20: RemovedInDjango20Warning: Importing from django.core.urlresolvers is deprecated in favor of django.urls.
from django.core.urlresolvers import RegexURLPattern as URLPattern, RegexURLResolver as URLResolver
/usr/local/lib/python2.7/site-packages/soapbox/templatetags/soapbox.py:9: RemovedInDjango20Warning: assignment_tag() is deprecated. Use simple_tag() instead
@register.assignment_tag(takes_context=True)
INFO: Scrape progress, cycle 1: 1 Initializing, 1 Gathering Requirements
INFO: Scrape progress, cycle 2: 3 Initializing, 1 Gathering Requirements, 1 Done
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 356, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/app/kuma/scrape/management/commands/scrape_document.py", line 47, in handle
scraper.scrape()
File "/app/kuma/scrape/scraper.py", line 182, in scrape
dependencies = source.gather(self.requester, self.storage)
File "/app/kuma/scrape/sources/base.py", line 192, in gather
has_prereqs, data = self.load_prereqs(requester, storage)
File "/app/kuma/scrape/sources/document_children.py", line 31, in load_prereqs
data = response.json()
File "/usr/local/lib/python2.7/site-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/local/lib/python2.7/site-packages/simplejson/__init__.py", line 516, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python2.7/site-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/local/lib/python2.7/site-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.scanner.JSONDecodeError: Expecting value: line 4 column 1 (char 3)
```
Note that if I retry without the trailing slash things work. I.e. `docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS --depth=2`
Somewhere in there it assumes that the response is JSON when in fact the response status code is not OKish.
scrape_document with trailing slash fails
```
▶ docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS/ --depth=2
/usr/local/lib/python2.7/site-packages/django/utils/encoding.py:6: ImportWarning: Not importing directory '/app/locale': missing __init__.py
import locale
/usr/local/lib/python2.7/site-packages/django/template/backends/jinja2.py:6: ImportWarning: Not importing directory '/app/jinja2': missing __init__.py
import jinja2
/app/kuma/core/i18n.py:18: ImportWarning: Not importing directory '/app/kuma/core/jinja2': missing __init__.py
from jinja2 import nodes
/usr/local/lib/python2.7/site-packages/decorator_include.py:20: RemovedInDjango20Warning: Importing from django.core.urlresolvers is deprecated in favor of django.urls.
from django.core.urlresolvers import RegexURLPattern as URLPattern, RegexURLResolver as URLResolver
/usr/local/lib/python2.7/site-packages/soapbox/templatetags/soapbox.py:9: RemovedInDjango20Warning: assignment_tag() is deprecated. Use simple_tag() instead
@register.assignment_tag(takes_context=True)
INFO: Scrape progress, cycle 1: 1 Initializing, 1 Gathering Requirements
INFO: Scrape progress, cycle 2: 3 Initializing, 1 Gathering Requirements, 1 Done
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 356, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/app/kuma/scrape/management/commands/scrape_document.py", line 47, in handle
scraper.scrape()
File "/app/kuma/scrape/scraper.py", line 182, in scrape
dependencies = source.gather(self.requester, self.storage)
File "/app/kuma/scrape/sources/base.py", line 192, in gather
has_prereqs, data = self.load_prereqs(requester, storage)
File "/app/kuma/scrape/sources/document_children.py", line 31, in load_prereqs
data = response.json()
File "/usr/local/lib/python2.7/site-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/local/lib/python2.7/site-packages/simplejson/__init__.py", line 516, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python2.7/site-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/local/lib/python2.7/site-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.scanner.JSONDecodeError: Expecting value: line 4 column 1 (char 3)
```
Note that if I retry without the trailing slash things work. I.e. `docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS --depth=2`
Somewhere in there it assumes that the response is JSON when in fact the response status code is not OKish.
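A standalone sketch of the trailing-slash normalization that would make both invocations equivalent. It is illustrative only and uses Python 3's `urllib.parse` rather than the project's `six`-based import:

```python
from urllib.parse import urlparse

def normalize_path(url_or_path):
    """Strip a trailing slash so both spellings point at the same document."""
    if url_or_path.startswith("http"):
        return urlparse(url_or_path).path.rstrip("/")
    return url_or_path.rstrip("/")

assert normalize_path("https://developer.mozilla.org/en-US/docs/Web/CSS/") == "/en-US/docs/Web/CSS"
assert normalize_path("/en-US/docs/Web/CSS/") == "/en-US/docs/Web/CSS"
```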
</issue>
<code>
[start of kuma/scrape/management/commands/__init__.py]
1 """Common methods for scraping management commands."""
2 import logging
3 from argparse import ArgumentTypeError
4
5 from django.core.management.base import BaseCommand
6 from django.utils.six.moves.urllib.parse import urlparse
7
8 from kuma.scrape.scraper import Scraper
9
10
11 class ScrapeCommand(BaseCommand):
12 """Common base class for scraping management commands."""
13 def make_scraper(self, **options):
14 """Create a Scraper instance for management commands."""
15 return Scraper(**options)
16
17 def parse_url_or_path(self, url_or_path):
18 if url_or_path.startswith('http'):
19 bits = urlparse(url_or_path)
20 host = bits.netloc
21 path = bits.path
22 ssl = (bits.scheme == 'https')
23 else:
24 host = 'wiki.developer.mozilla.org'
25 ssl = True
26 path = url_or_path
27 return host, ssl, path
28
29 def setup_logging(self, verbosity):
30 """Update logger for desired verbosity."""
31
32 if verbosity == 0:
33 level = logging.WARNING
34 elif verbosity == 1: # default
35 level = logging.INFO
36 elif verbosity >= 2:
37 level = logging.DEBUG
38
39 formatter = logging.Formatter('%(levelname)s: %(message)s')
40 console = logging.StreamHandler(self.stderr)
41 console.setLevel(level)
42 console.setFormatter(formatter)
43 logger = logging.getLogger('kuma.scraper')
44 logger.setLevel(level)
45 logger.addHandler(console)
46 logger.propagate = False
47
48 def int_all_type(self, value):
49 """A command argument that can take an integer or 'all'."""
50 if value.strip().lower() == 'all':
51 return 'all'
52 try:
53 as_int = int(value)
54 except ValueError:
55 msg = "%r should be 'all' or an integer" % value
56 raise ArgumentTypeError(msg)
57 return as_int
58
[end of kuma/scrape/management/commands/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kuma/scrape/management/commands/__init__.py b/kuma/scrape/management/commands/__init__.py
--- a/kuma/scrape/management/commands/__init__.py
+++ b/kuma/scrape/management/commands/__init__.py
@@ -18,12 +18,12 @@
if url_or_path.startswith('http'):
bits = urlparse(url_or_path)
host = bits.netloc
- path = bits.path
+ path = bits.path.rstrip('/')
ssl = (bits.scheme == 'https')
else:
host = 'wiki.developer.mozilla.org'
ssl = True
- path = url_or_path
+ path = url_or_path.rstrip('/')
return host, ssl, path
def setup_logging(self, verbosity):
|
{"golden_diff": "diff --git a/kuma/scrape/management/commands/__init__.py b/kuma/scrape/management/commands/__init__.py\n--- a/kuma/scrape/management/commands/__init__.py\n+++ b/kuma/scrape/management/commands/__init__.py\n@@ -18,12 +18,12 @@\n if url_or_path.startswith('http'):\n bits = urlparse(url_or_path)\n host = bits.netloc\n- path = bits.path\n+ path = bits.path.rstrip('/')\n ssl = (bits.scheme == 'https')\n else:\n host = 'wiki.developer.mozilla.org'\n ssl = True\n- path = url_or_path\n+ path = url_or_path.rstrip('/')\n return host, ssl, path\n \n def setup_logging(self, verbosity):\n", "issue": "scrape_document with trailing slash fails\n```\r\n\u25b6 docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS/ --depth=2\r\n/usr/local/lib/python2.7/site-packages/django/utils/encoding.py:6: ImportWarning: Not importing directory '/app/locale': missing __init__.py\r\n import locale\r\n/usr/local/lib/python2.7/site-packages/django/template/backends/jinja2.py:6: ImportWarning: Not importing directory '/app/jinja2': missing __init__.py\r\n import jinja2\r\n/app/kuma/core/i18n.py:18: ImportWarning: Not importing directory '/app/kuma/core/jinja2': missing __init__.py\r\n from jinja2 import nodes\r\n/usr/local/lib/python2.7/site-packages/decorator_include.py:20: RemovedInDjango20Warning: Importing from django.core.urlresolvers is deprecated in favor of django.urls.\r\n from django.core.urlresolvers import RegexURLPattern as URLPattern, RegexURLResolver as URLResolver\r\n/usr/local/lib/python2.7/site-packages/soapbox/templatetags/soapbox.py:9: RemovedInDjango20Warning: assignment_tag() is deprecated. Use simple_tag() instead\r\n @register.assignment_tag(takes_context=True)\r\nINFO: Scrape progress, cycle 1: 1 Initializing, 1 Gathering Requirements\r\nINFO: Scrape progress, cycle 2: 3 Initializing, 1 Gathering Requirements, 1 Done\r\nTraceback (most recent call last):\r\n File \"./manage.py\", line 10, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 364, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 356, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/management/base.py\", line 283, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/management/base.py\", line 330, in execute\r\n output = self.handle(*args, **options)\r\n File \"/app/kuma/scrape/management/commands/scrape_document.py\", line 47, in handle\r\n scraper.scrape()\r\n File \"/app/kuma/scrape/scraper.py\", line 182, in scrape\r\n dependencies = source.gather(self.requester, self.storage)\r\n File \"/app/kuma/scrape/sources/base.py\", line 192, in gather\r\n has_prereqs, data = self.load_prereqs(requester, storage)\r\n File \"/app/kuma/scrape/sources/document_children.py\", line 31, in load_prereqs\r\n data = response.json()\r\n File \"/usr/local/lib/python2.7/site-packages/requests/models.py\", line 897, in json\r\n return complexjson.loads(self.text, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/simplejson/__init__.py\", line 516, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/local/lib/python2.7/site-packages/simplejson/decoder.py\", line 370, in decode\r\n obj, end = self.raw_decode(s)\r\n File 
\"/usr/local/lib/python2.7/site-packages/simplejson/decoder.py\", line 400, in raw_decode\r\n return self.scan_once(s, idx=_w(s, idx).end())\r\nsimplejson.scanner.JSONDecodeError: Expecting value: line 4 column 1 (char 3)\r\n```\r\n\r\nNote that if I retry without the trailing slash things work. I.e. `docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS --depth=2`\r\n\r\nSomewhere in there it assumes that the response is JSON when in fact the response status code is not OKish. \nscrape_document with trailing slash fails\n```\r\n\u25b6 docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS/ --depth=2\r\n/usr/local/lib/python2.7/site-packages/django/utils/encoding.py:6: ImportWarning: Not importing directory '/app/locale': missing __init__.py\r\n import locale\r\n/usr/local/lib/python2.7/site-packages/django/template/backends/jinja2.py:6: ImportWarning: Not importing directory '/app/jinja2': missing __init__.py\r\n import jinja2\r\n/app/kuma/core/i18n.py:18: ImportWarning: Not importing directory '/app/kuma/core/jinja2': missing __init__.py\r\n from jinja2 import nodes\r\n/usr/local/lib/python2.7/site-packages/decorator_include.py:20: RemovedInDjango20Warning: Importing from django.core.urlresolvers is deprecated in favor of django.urls.\r\n from django.core.urlresolvers import RegexURLPattern as URLPattern, RegexURLResolver as URLResolver\r\n/usr/local/lib/python2.7/site-packages/soapbox/templatetags/soapbox.py:9: RemovedInDjango20Warning: assignment_tag() is deprecated. Use simple_tag() instead\r\n @register.assignment_tag(takes_context=True)\r\nINFO: Scrape progress, cycle 1: 1 Initializing, 1 Gathering Requirements\r\nINFO: Scrape progress, cycle 2: 3 Initializing, 1 Gathering Requirements, 1 Done\r\nTraceback (most recent call last):\r\n File \"./manage.py\", line 10, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 364, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 356, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/management/base.py\", line 283, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/management/base.py\", line 330, in execute\r\n output = self.handle(*args, **options)\r\n File \"/app/kuma/scrape/management/commands/scrape_document.py\", line 47, in handle\r\n scraper.scrape()\r\n File \"/app/kuma/scrape/scraper.py\", line 182, in scrape\r\n dependencies = source.gather(self.requester, self.storage)\r\n File \"/app/kuma/scrape/sources/base.py\", line 192, in gather\r\n has_prereqs, data = self.load_prereqs(requester, storage)\r\n File \"/app/kuma/scrape/sources/document_children.py\", line 31, in load_prereqs\r\n data = response.json()\r\n File \"/usr/local/lib/python2.7/site-packages/requests/models.py\", line 897, in json\r\n return complexjson.loads(self.text, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/simplejson/__init__.py\", line 516, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/local/lib/python2.7/site-packages/simplejson/decoder.py\", line 370, in decode\r\n obj, end = self.raw_decode(s)\r\n File \"/usr/local/lib/python2.7/site-packages/simplejson/decoder.py\", line 400, in raw_decode\r\n return 
self.scan_once(s, idx=_w(s, idx).end())\r\nsimplejson.scanner.JSONDecodeError: Expecting value: line 4 column 1 (char 3)\r\n```\r\n\r\nNote that if I retry without the trailing slash things work. I.e. `docker-compose exec web ./manage.py scrape_document https://developer.mozilla.org/en-US/docs/Web/CSS --depth=2`\r\n\r\nSomewhere in there it assumes that the response is JSON when in fact the response status code is not OKish. \n", "before_files": [{"content": "\"\"\"Common methods for scraping management commands.\"\"\"\nimport logging\nfrom argparse import ArgumentTypeError\n\nfrom django.core.management.base import BaseCommand\nfrom django.utils.six.moves.urllib.parse import urlparse\n\nfrom kuma.scrape.scraper import Scraper\n\n\nclass ScrapeCommand(BaseCommand):\n \"\"\"Common base class for scraping management commands.\"\"\"\n def make_scraper(self, **options):\n \"\"\"Create a Scraper instance for management commands.\"\"\"\n return Scraper(**options)\n\n def parse_url_or_path(self, url_or_path):\n if url_or_path.startswith('http'):\n bits = urlparse(url_or_path)\n host = bits.netloc\n path = bits.path\n ssl = (bits.scheme == 'https')\n else:\n host = 'wiki.developer.mozilla.org'\n ssl = True\n path = url_or_path\n return host, ssl, path\n\n def setup_logging(self, verbosity):\n \"\"\"Update logger for desired verbosity.\"\"\"\n\n if verbosity == 0:\n level = logging.WARNING\n elif verbosity == 1: # default\n level = logging.INFO\n elif verbosity >= 2:\n level = logging.DEBUG\n\n formatter = logging.Formatter('%(levelname)s: %(message)s')\n console = logging.StreamHandler(self.stderr)\n console.setLevel(level)\n console.setFormatter(formatter)\n logger = logging.getLogger('kuma.scraper')\n logger.setLevel(level)\n logger.addHandler(console)\n logger.propagate = False\n\n def int_all_type(self, value):\n \"\"\"A command argument that can take an integer or 'all'.\"\"\"\n if value.strip().lower() == 'all':\n return 'all'\n try:\n as_int = int(value)\n except ValueError:\n msg = \"%r should be 'all' or an integer\" % value\n raise ArgumentTypeError(msg)\n return as_int\n", "path": "kuma/scrape/management/commands/__init__.py"}]}
| 2,837 | 174 |
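For reference, the accepted patch in this record only strips the trailing slash in `parse_url_or_path`. The effect is easy to reproduce with the standard library alone (plain `urllib.parse` here instead of the `django.utils.six` shim used in the file); `normalized_path` is a name made up for the sketch:

```python
from urllib.parse import urlparse


def normalized_path(url_or_path):
    # Sketch of the patched behaviour: with the trailing slash stripped,
    # ".../docs/Web/CSS/" and ".../docs/Web/CSS" resolve to the same document.
    if url_or_path.startswith("http"):
        path = urlparse(url_or_path).path
    else:
        path = url_or_path
    return path.rstrip("/")


assert (normalized_path("https://developer.mozilla.org/en-US/docs/Web/CSS/")
        == normalized_path("/en-US/docs/Web/CSS"))
```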
gh_patches_debug_29416
|
rasdani/github-patches
|
git_diff
|
paperless-ngx__paperless-ngx-1557
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] storage_path is missing in manifest.json
### Description
There is no "model": "documents.storage_path" entry in manifest.json. I found it only in db.sqlite.
### Steps to reproduce
open export/manifest.json after `docker compose exec webserver document_exporter ../export -f`
### Webserver logs
_No response_
### Paperless-ngx version
1.8.0
### Host OS
Raspberry Pi 4/arm64
### Installation method
Docker - official image
### Browser
_No response_
### Configuration changes
_No response_
### Other
_No response_
</issue>
<code>
[start of src/documents/management/commands/document_exporter.py]
1 import hashlib
2 import json
3 import os
4 import shutil
5 import time
6
7 import tqdm
8 from django.conf import settings
9 from django.contrib.auth.models import Group
10 from django.contrib.auth.models import User
11 from django.core import serializers
12 from django.core.management.base import BaseCommand
13 from django.core.management.base import CommandError
14 from django.db import transaction
15 from documents.models import Comment
16 from documents.models import Correspondent
17 from documents.models import Document
18 from documents.models import DocumentType
19 from documents.models import SavedView
20 from documents.models import SavedViewFilterRule
21 from documents.models import Tag
22 from documents.models import UiSettings
23 from documents.settings import EXPORTER_ARCHIVE_NAME
24 from documents.settings import EXPORTER_FILE_NAME
25 from documents.settings import EXPORTER_THUMBNAIL_NAME
26 from filelock import FileLock
27 from paperless import version
28 from paperless.db import GnuPG
29 from paperless_mail.models import MailAccount
30 from paperless_mail.models import MailRule
31
32 from ...file_handling import delete_empty_directories
33 from ...file_handling import generate_filename
34
35
36 class Command(BaseCommand):
37
38 help = """
39 Decrypt and rename all files in our collection into a given target
40 directory. And include a manifest file containing document data for
41 easy import.
42 """.replace(
43 " ",
44 "",
45 )
46
47 def add_arguments(self, parser):
48 parser.add_argument("target")
49
50 parser.add_argument(
51 "-c",
52 "--compare-checksums",
53 default=False,
54 action="store_true",
55 help="Compare file checksums when determining whether to export "
56 "a file or not. If not specified, file size and time "
57 "modified is used instead.",
58 )
59
60 parser.add_argument(
61 "-f",
62 "--use-filename-format",
63 default=False,
64 action="store_true",
65 help="Use PAPERLESS_FILENAME_FORMAT for storing files in the "
66 "export directory, if configured.",
67 )
68
69 parser.add_argument(
70 "-d",
71 "--delete",
72 default=False,
73 action="store_true",
74 help="After exporting, delete files in the export directory that "
75 "do not belong to the current export, such as files from "
76 "deleted documents.",
77 )
78 parser.add_argument(
79 "--no-progress-bar",
80 default=False,
81 action="store_true",
82 help="If set, the progress bar will not be shown",
83 )
84
85 def __init__(self, *args, **kwargs):
86 BaseCommand.__init__(self, *args, **kwargs)
87 self.target = None
88 self.files_in_export_dir = []
89 self.exported_files = []
90 self.compare_checksums = False
91 self.use_filename_format = False
92 self.delete = False
93
94 def handle(self, *args, **options):
95
96 self.target = options["target"]
97 self.compare_checksums = options["compare_checksums"]
98 self.use_filename_format = options["use_filename_format"]
99 self.delete = options["delete"]
100
101 if not os.path.exists(self.target):
102 raise CommandError("That path doesn't exist")
103
104 if not os.access(self.target, os.W_OK):
105 raise CommandError("That path doesn't appear to be writable")
106
107 with FileLock(settings.MEDIA_LOCK):
108 self.dump(options["no_progress_bar"])
109
110 def dump(self, progress_bar_disable=False):
111 # 1. Take a snapshot of what files exist in the current export folder
112 for root, dirs, files in os.walk(self.target):
113 self.files_in_export_dir.extend(
114 map(lambda f: os.path.abspath(os.path.join(root, f)), files),
115 )
116
117 # 2. Create manifest, containing all correspondents, types, tags,
118 # documents and ui_settings
119 with transaction.atomic():
120 manifest = json.loads(
121 serializers.serialize("json", Correspondent.objects.all()),
122 )
123
124 manifest += json.loads(serializers.serialize("json", Tag.objects.all()))
125
126 manifest += json.loads(
127 serializers.serialize("json", DocumentType.objects.all()),
128 )
129
130 manifest += json.loads(
131 serializers.serialize("json", Comment.objects.all()),
132 )
133
134 documents = Document.objects.order_by("id")
135 document_map = {d.pk: d for d in documents}
136 document_manifest = json.loads(serializers.serialize("json", documents))
137 manifest += document_manifest
138
139 manifest += json.loads(
140 serializers.serialize("json", MailAccount.objects.all()),
141 )
142
143 manifest += json.loads(
144 serializers.serialize("json", MailRule.objects.all()),
145 )
146
147 manifest += json.loads(
148 serializers.serialize("json", SavedView.objects.all()),
149 )
150
151 manifest += json.loads(
152 serializers.serialize("json", SavedViewFilterRule.objects.all()),
153 )
154
155 manifest += json.loads(serializers.serialize("json", Group.objects.all()))
156
157 manifest += json.loads(serializers.serialize("json", User.objects.all()))
158
159 manifest += json.loads(
160 serializers.serialize("json", UiSettings.objects.all()),
161 )
162
163 # 3. Export files from each document
164 for index, document_dict in tqdm.tqdm(
165 enumerate(document_manifest),
166 total=len(document_manifest),
167 disable=progress_bar_disable,
168 ):
169 # 3.1. store files unencrypted
170 document_dict["fields"]["storage_type"] = Document.STORAGE_TYPE_UNENCRYPTED
171
172 document = document_map[document_dict["pk"]]
173
174 # 3.2. generate a unique filename
175 filename_counter = 0
176 while True:
177 if self.use_filename_format:
178 base_name = generate_filename(
179 document,
180 counter=filename_counter,
181 append_gpg=False,
182 )
183 else:
184 base_name = document.get_public_filename(counter=filename_counter)
185
186 if base_name not in self.exported_files:
187 self.exported_files.append(base_name)
188 break
189 else:
190 filename_counter += 1
191
192 # 3.3. write filenames into manifest
193 original_name = base_name
194 original_target = os.path.join(self.target, original_name)
195 document_dict[EXPORTER_FILE_NAME] = original_name
196
197 thumbnail_name = base_name + "-thumbnail.webp"
198 thumbnail_target = os.path.join(self.target, thumbnail_name)
199 document_dict[EXPORTER_THUMBNAIL_NAME] = thumbnail_name
200
201 if document.has_archive_version:
202 archive_name = base_name + "-archive.pdf"
203 archive_target = os.path.join(self.target, archive_name)
204 document_dict[EXPORTER_ARCHIVE_NAME] = archive_name
205 else:
206 archive_target = None
207
208 # 3.4. write files to target folder
209 t = int(time.mktime(document.created.timetuple()))
210 if document.storage_type == Document.STORAGE_TYPE_GPG:
211
212 os.makedirs(os.path.dirname(original_target), exist_ok=True)
213 with open(original_target, "wb") as f:
214 with document.source_file as out_file:
215 f.write(GnuPG.decrypted(out_file))
216 os.utime(original_target, times=(t, t))
217
218 os.makedirs(os.path.dirname(thumbnail_target), exist_ok=True)
219 with open(thumbnail_target, "wb") as f:
220 with document.thumbnail_file as out_file:
221 f.write(GnuPG.decrypted(out_file))
222 os.utime(thumbnail_target, times=(t, t))
223
224 if archive_target:
225 os.makedirs(os.path.dirname(archive_target), exist_ok=True)
226 with open(archive_target, "wb") as f:
227 with document.archive_path as out_file:
228 f.write(GnuPG.decrypted(out_file))
229 os.utime(archive_target, times=(t, t))
230 else:
231 self.check_and_copy(
232 document.source_path,
233 document.checksum,
234 original_target,
235 )
236
237 self.check_and_copy(document.thumbnail_path, None, thumbnail_target)
238
239 if archive_target:
240 self.check_and_copy(
241 document.archive_path,
242 document.archive_checksum,
243 archive_target,
244 )
245
246 # 4.1 write manifest to target folder
247 manifest_path = os.path.abspath(os.path.join(self.target, "manifest.json"))
248
249 with open(manifest_path, "w") as f:
250 json.dump(manifest, f, indent=2)
251
252 # 4.2 write version information to target folder
253 version_path = os.path.abspath(os.path.join(self.target, "version.json"))
254
255 with open(version_path, "w") as f:
256 json.dump({"version": version.__full_version_str__}, f, indent=2)
257
258 if self.delete:
259 # 5. Remove files which we did not explicitly export in this run
260
261 if manifest_path in self.files_in_export_dir:
262 self.files_in_export_dir.remove(manifest_path)
263
264 for f in self.files_in_export_dir:
265 os.remove(f)
266
267 delete_empty_directories(
268 os.path.abspath(os.path.dirname(f)),
269 os.path.abspath(self.target),
270 )
271
272 def check_and_copy(self, source, source_checksum, target):
273 if os.path.abspath(target) in self.files_in_export_dir:
274 self.files_in_export_dir.remove(os.path.abspath(target))
275
276 perform_copy = False
277
278 if os.path.exists(target):
279 source_stat = os.stat(source)
280 target_stat = os.stat(target)
281 if self.compare_checksums and source_checksum:
282 with open(target, "rb") as f:
283 target_checksum = hashlib.md5(f.read()).hexdigest()
284 perform_copy = target_checksum != source_checksum
285 elif source_stat.st_mtime != target_stat.st_mtime:
286 perform_copy = True
287 elif source_stat.st_size != target_stat.st_size:
288 perform_copy = True
289 else:
290 # Copy if it does not exist
291 perform_copy = True
292
293 if perform_copy:
294 os.makedirs(os.path.dirname(target), exist_ok=True)
295 shutil.copy2(source, target)
296
[end of src/documents/management/commands/document_exporter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/documents/management/commands/document_exporter.py b/src/documents/management/commands/document_exporter.py
--- a/src/documents/management/commands/document_exporter.py
+++ b/src/documents/management/commands/document_exporter.py
@@ -18,6 +18,7 @@
from documents.models import DocumentType
from documents.models import SavedView
from documents.models import SavedViewFilterRule
+from documents.models import StoragePath
from documents.models import Tag
from documents.models import UiSettings
from documents.settings import EXPORTER_ARCHIVE_NAME
@@ -114,8 +115,8 @@
map(lambda f: os.path.abspath(os.path.join(root, f)), files),
)
- # 2. Create manifest, containing all correspondents, types, tags,
- # documents and ui_settings
+ # 2. Create manifest, containing all correspondents, types, tags, storage paths
+ # comments, documents and ui_settings
with transaction.atomic():
manifest = json.loads(
serializers.serialize("json", Correspondent.objects.all()),
@@ -127,6 +128,10 @@
serializers.serialize("json", DocumentType.objects.all()),
)
+ manifest += json.loads(
+ serializers.serialize("json", StoragePath.objects.all()),
+ )
+
manifest += json.loads(
serializers.serialize("json", Comment.objects.all()),
)
|
{"golden_diff": "diff --git a/src/documents/management/commands/document_exporter.py b/src/documents/management/commands/document_exporter.py\n--- a/src/documents/management/commands/document_exporter.py\n+++ b/src/documents/management/commands/document_exporter.py\n@@ -18,6 +18,7 @@\n from documents.models import DocumentType\n from documents.models import SavedView\n from documents.models import SavedViewFilterRule\n+from documents.models import StoragePath\n from documents.models import Tag\n from documents.models import UiSettings\n from documents.settings import EXPORTER_ARCHIVE_NAME\n@@ -114,8 +115,8 @@\n map(lambda f: os.path.abspath(os.path.join(root, f)), files),\n )\n \n- # 2. Create manifest, containing all correspondents, types, tags,\n- # documents and ui_settings\n+ # 2. Create manifest, containing all correspondents, types, tags, storage paths\n+ # comments, documents and ui_settings\n with transaction.atomic():\n manifest = json.loads(\n serializers.serialize(\"json\", Correspondent.objects.all()),\n@@ -127,6 +128,10 @@\n serializers.serialize(\"json\", DocumentType.objects.all()),\n )\n \n+ manifest += json.loads(\n+ serializers.serialize(\"json\", StoragePath.objects.all()),\n+ )\n+\n manifest += json.loads(\n serializers.serialize(\"json\", Comment.objects.all()),\n )\n", "issue": "[BUG] storage_path is missing in manifest.json\n### Description\n\nthere is no \"model\": \"documents.storage_path\" in manifest.json. I found it only in db.sqlite\n\n### Steps to reproduce\n\nopen export/manifest.json after `docker compose exec webserver document_exporter ../export -f`\n\n### Webserver logs\n\n_No response_\n\n### Paperless-ngx version\n\n1.8.0\n\n### Host OS\n\nRaspberry Pi 4/arm64\n\n### Installation method\n\nDocker - official image\n\n### Browser\n\n_No response_\n\n### Configuration changes\n\n_No response_\n\n### Other\n\n_No response_\n", "before_files": [{"content": "import hashlib\nimport json\nimport os\nimport shutil\nimport time\n\nimport tqdm\nfrom django.conf import settings\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.models import User\nfrom django.core import serializers\nfrom django.core.management.base import BaseCommand\nfrom django.core.management.base import CommandError\nfrom django.db import transaction\nfrom documents.models import Comment\nfrom documents.models import Correspondent\nfrom documents.models import Document\nfrom documents.models import DocumentType\nfrom documents.models import SavedView\nfrom documents.models import SavedViewFilterRule\nfrom documents.models import Tag\nfrom documents.models import UiSettings\nfrom documents.settings import EXPORTER_ARCHIVE_NAME\nfrom documents.settings import EXPORTER_FILE_NAME\nfrom documents.settings import EXPORTER_THUMBNAIL_NAME\nfrom filelock import FileLock\nfrom paperless import version\nfrom paperless.db import GnuPG\nfrom paperless_mail.models import MailAccount\nfrom paperless_mail.models import MailRule\n\nfrom ...file_handling import delete_empty_directories\nfrom ...file_handling import generate_filename\n\n\nclass Command(BaseCommand):\n\n help = \"\"\"\n Decrypt and rename all files in our collection into a given target\n directory. 
And include a manifest file containing document data for\n easy import.\n \"\"\".replace(\n \" \",\n \"\",\n )\n\n def add_arguments(self, parser):\n parser.add_argument(\"target\")\n\n parser.add_argument(\n \"-c\",\n \"--compare-checksums\",\n default=False,\n action=\"store_true\",\n help=\"Compare file checksums when determining whether to export \"\n \"a file or not. If not specified, file size and time \"\n \"modified is used instead.\",\n )\n\n parser.add_argument(\n \"-f\",\n \"--use-filename-format\",\n default=False,\n action=\"store_true\",\n help=\"Use PAPERLESS_FILENAME_FORMAT for storing files in the \"\n \"export directory, if configured.\",\n )\n\n parser.add_argument(\n \"-d\",\n \"--delete\",\n default=False,\n action=\"store_true\",\n help=\"After exporting, delete files in the export directory that \"\n \"do not belong to the current export, such as files from \"\n \"deleted documents.\",\n )\n parser.add_argument(\n \"--no-progress-bar\",\n default=False,\n action=\"store_true\",\n help=\"If set, the progress bar will not be shown\",\n )\n\n def __init__(self, *args, **kwargs):\n BaseCommand.__init__(self, *args, **kwargs)\n self.target = None\n self.files_in_export_dir = []\n self.exported_files = []\n self.compare_checksums = False\n self.use_filename_format = False\n self.delete = False\n\n def handle(self, *args, **options):\n\n self.target = options[\"target\"]\n self.compare_checksums = options[\"compare_checksums\"]\n self.use_filename_format = options[\"use_filename_format\"]\n self.delete = options[\"delete\"]\n\n if not os.path.exists(self.target):\n raise CommandError(\"That path doesn't exist\")\n\n if not os.access(self.target, os.W_OK):\n raise CommandError(\"That path doesn't appear to be writable\")\n\n with FileLock(settings.MEDIA_LOCK):\n self.dump(options[\"no_progress_bar\"])\n\n def dump(self, progress_bar_disable=False):\n # 1. Take a snapshot of what files exist in the current export folder\n for root, dirs, files in os.walk(self.target):\n self.files_in_export_dir.extend(\n map(lambda f: os.path.abspath(os.path.join(root, f)), files),\n )\n\n # 2. Create manifest, containing all correspondents, types, tags,\n # documents and ui_settings\n with transaction.atomic():\n manifest = json.loads(\n serializers.serialize(\"json\", Correspondent.objects.all()),\n )\n\n manifest += json.loads(serializers.serialize(\"json\", Tag.objects.all()))\n\n manifest += json.loads(\n serializers.serialize(\"json\", DocumentType.objects.all()),\n )\n\n manifest += json.loads(\n serializers.serialize(\"json\", Comment.objects.all()),\n )\n\n documents = Document.objects.order_by(\"id\")\n document_map = {d.pk: d for d in documents}\n document_manifest = json.loads(serializers.serialize(\"json\", documents))\n manifest += document_manifest\n\n manifest += json.loads(\n serializers.serialize(\"json\", MailAccount.objects.all()),\n )\n\n manifest += json.loads(\n serializers.serialize(\"json\", MailRule.objects.all()),\n )\n\n manifest += json.loads(\n serializers.serialize(\"json\", SavedView.objects.all()),\n )\n\n manifest += json.loads(\n serializers.serialize(\"json\", SavedViewFilterRule.objects.all()),\n )\n\n manifest += json.loads(serializers.serialize(\"json\", Group.objects.all()))\n\n manifest += json.loads(serializers.serialize(\"json\", User.objects.all()))\n\n manifest += json.loads(\n serializers.serialize(\"json\", UiSettings.objects.all()),\n )\n\n # 3. 
Export files from each document\n for index, document_dict in tqdm.tqdm(\n enumerate(document_manifest),\n total=len(document_manifest),\n disable=progress_bar_disable,\n ):\n # 3.1. store files unencrypted\n document_dict[\"fields\"][\"storage_type\"] = Document.STORAGE_TYPE_UNENCRYPTED\n\n document = document_map[document_dict[\"pk\"]]\n\n # 3.2. generate a unique filename\n filename_counter = 0\n while True:\n if self.use_filename_format:\n base_name = generate_filename(\n document,\n counter=filename_counter,\n append_gpg=False,\n )\n else:\n base_name = document.get_public_filename(counter=filename_counter)\n\n if base_name not in self.exported_files:\n self.exported_files.append(base_name)\n break\n else:\n filename_counter += 1\n\n # 3.3. write filenames into manifest\n original_name = base_name\n original_target = os.path.join(self.target, original_name)\n document_dict[EXPORTER_FILE_NAME] = original_name\n\n thumbnail_name = base_name + \"-thumbnail.webp\"\n thumbnail_target = os.path.join(self.target, thumbnail_name)\n document_dict[EXPORTER_THUMBNAIL_NAME] = thumbnail_name\n\n if document.has_archive_version:\n archive_name = base_name + \"-archive.pdf\"\n archive_target = os.path.join(self.target, archive_name)\n document_dict[EXPORTER_ARCHIVE_NAME] = archive_name\n else:\n archive_target = None\n\n # 3.4. write files to target folder\n t = int(time.mktime(document.created.timetuple()))\n if document.storage_type == Document.STORAGE_TYPE_GPG:\n\n os.makedirs(os.path.dirname(original_target), exist_ok=True)\n with open(original_target, \"wb\") as f:\n with document.source_file as out_file:\n f.write(GnuPG.decrypted(out_file))\n os.utime(original_target, times=(t, t))\n\n os.makedirs(os.path.dirname(thumbnail_target), exist_ok=True)\n with open(thumbnail_target, \"wb\") as f:\n with document.thumbnail_file as out_file:\n f.write(GnuPG.decrypted(out_file))\n os.utime(thumbnail_target, times=(t, t))\n\n if archive_target:\n os.makedirs(os.path.dirname(archive_target), exist_ok=True)\n with open(archive_target, \"wb\") as f:\n with document.archive_path as out_file:\n f.write(GnuPG.decrypted(out_file))\n os.utime(archive_target, times=(t, t))\n else:\n self.check_and_copy(\n document.source_path,\n document.checksum,\n original_target,\n )\n\n self.check_and_copy(document.thumbnail_path, None, thumbnail_target)\n\n if archive_target:\n self.check_and_copy(\n document.archive_path,\n document.archive_checksum,\n archive_target,\n )\n\n # 4.1 write manifest to target folder\n manifest_path = os.path.abspath(os.path.join(self.target, \"manifest.json\"))\n\n with open(manifest_path, \"w\") as f:\n json.dump(manifest, f, indent=2)\n\n # 4.2 write version information to target folder\n version_path = os.path.abspath(os.path.join(self.target, \"version.json\"))\n\n with open(version_path, \"w\") as f:\n json.dump({\"version\": version.__full_version_str__}, f, indent=2)\n\n if self.delete:\n # 5. 
Remove files which we did not explicitly export in this run\n\n if manifest_path in self.files_in_export_dir:\n self.files_in_export_dir.remove(manifest_path)\n\n for f in self.files_in_export_dir:\n os.remove(f)\n\n delete_empty_directories(\n os.path.abspath(os.path.dirname(f)),\n os.path.abspath(self.target),\n )\n\n def check_and_copy(self, source, source_checksum, target):\n if os.path.abspath(target) in self.files_in_export_dir:\n self.files_in_export_dir.remove(os.path.abspath(target))\n\n perform_copy = False\n\n if os.path.exists(target):\n source_stat = os.stat(source)\n target_stat = os.stat(target)\n if self.compare_checksums and source_checksum:\n with open(target, \"rb\") as f:\n target_checksum = hashlib.md5(f.read()).hexdigest()\n perform_copy = target_checksum != source_checksum\n elif source_stat.st_mtime != target_stat.st_mtime:\n perform_copy = True\n elif source_stat.st_size != target_stat.st_size:\n perform_copy = True\n else:\n # Copy if it does not exist\n perform_copy = True\n\n if perform_copy:\n os.makedirs(os.path.dirname(target), exist_ok=True)\n shutil.copy2(source, target)\n", "path": "src/documents/management/commands/document_exporter.py"}]}
| 3,520 | 300 |
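The fix in this record follows the pattern the exporter already uses for every other model: serialize the queryset with Django's `serializers.serialize("json", ...)` and splice the decoded records into the manifest list. A stripped-down sketch of that one step — it only runs inside a configured paperless-ngx/Django environment, and the helper name is invented for illustration:

```python
import json

from django.core import serializers
from documents.models import StoragePath  # the model missing from the export


def storage_path_records():
    # Mirrors the other manifest sections: one JSON-decoded dict per object,
    # carrying the model label plus the primary key and fields.
    return json.loads(serializers.serialize("json", StoragePath.objects.all()))
```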
gh_patches_debug_15487
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-3511
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Building MkDocs' documentation
When I began working on #3493 I needed to be able to run the dev server with MkDocs' own documentation, which was more difficult than it should have been.
First, let me say that I always work from a venv, and created a new one to start my work. In the past, one could simply do `pip install -r requirements/docs.txt` and then `mkdocs serve` and it worked. But I had to jump through a lot of hoops and learn new tools to get things working this time.
To be clear, I am not suggesting that the old tools should be brought back. Nor am I suggesting that my preferred tools be used. However, I could find no documentation about how to proceed. Eventually, I did find documentation that hatch is used for tests, and looking at `pyproject.toml` I could see a config for a hatch `docs` env. But never having used that tool before, it took me multiple tries (and searches) to work out how to even use that env. But, I'm getting ahead of myself...
After realizing that there were no requirements.txt files, I next looked for optional dependencies defined in `pyproject.toml`. I was hoping to maybe do `pip install .[docs]` (`.` rather than `markdown` because I was working from the working tree of the git repo). When I determined that that wasn't an option, I began looking into the hatch options. Finally in some random question in some forum I found an explanation of how to run the `shell` subcommand with an alternate env: `hatch -e docs shell`.
And then I could finally run `mkdocs serve`. Except that I got an error about missing `po` files, which is weird because I am using English, which should work without any translations being defined. Finally, after generating `po` and `mo` files, I could run the dev server and begin my work.
All of this led me to believe that the current maintainers are not ever running the dev server with MkDocs documentation. And I also could not find any automations for deploying the documentation, so I couldn't even use that as a reference.
Again, I am not being critical of the tool choices. I see that a switch was made from tox to hatch. That's fine if that is the tool that the maintainers want to use. But for an occasional contributor, I would prefer to not need to learn these tools. I would prefer to be able to use standard Python tools that work with any project. Or if I do need to use your tool of choice, then I would expect the specific commands I would need to use to all be documented clearly.
I could submit a PR which updated the documentation, but I'm not sure what the recommended best practices are here. I am simply bringing this to the attention of the maintainers with the hopes that more consideration will be given to this in the future.
</issue>
<code>
[start of docs/hooks.py]
1 import re
2 from pathlib import Path
3
4 from mkdocs.config.defaults import MkDocsConfig
5 from mkdocs.structure.nav import Page
6
7
8 def _get_language_of_translation_file(path: Path) -> str:
9 with path.open(encoding='utf-8') as f:
10 translation_line = f.readline()
11 m = re.search('^# (.+) translations ', translation_line)
12 assert m
13 return m[1]
14
15
16 def on_page_markdown(markdown: str, page: Page, config: MkDocsConfig, **kwargs):
17 if page.file.src_uri == 'user-guide/choosing-your-theme.md':
18 here = Path(config.config_file_path).parent
19
20 def replacement(m: re.Match) -> str:
21 lines = []
22 for d in sorted(here.glob(m[2])):
23 lang = _get_language_of_translation_file(Path(d, 'LC_MESSAGES', 'messages.po'))
24 lines.append(f'{m[1]}`{d.name}`: {lang}')
25 return '\n'.join(lines)
26
27 return re.sub(
28 r'^( *\* )\(see the list of existing directories `(.+)`\)$',
29 replacement,
30 markdown,
31 flags=re.MULTILINE,
32 )
33
[end of docs/hooks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/hooks.py b/docs/hooks.py
--- a/docs/hooks.py
+++ b/docs/hooks.py
@@ -1,8 +1,12 @@
+from __future__ import annotations
+
import re
from pathlib import Path
+from typing import TYPE_CHECKING
-from mkdocs.config.defaults import MkDocsConfig
-from mkdocs.structure.nav import Page
+if TYPE_CHECKING:
+ from mkdocs.config.defaults import MkDocsConfig
+ from mkdocs.structure.nav import Page
def _get_language_of_translation_file(path: Path) -> str:
@@ -13,7 +17,7 @@
return m[1]
-def on_page_markdown(markdown: str, page: Page, config: MkDocsConfig, **kwargs):
+def on_page_markdown(markdown: str, page: Page, config: MkDocsConfig, **kwargs) -> str | None:
if page.file.src_uri == 'user-guide/choosing-your-theme.md':
here = Path(config.config_file_path).parent
|
{"golden_diff": "diff --git a/docs/hooks.py b/docs/hooks.py\n--- a/docs/hooks.py\n+++ b/docs/hooks.py\n@@ -1,8 +1,12 @@\n+from __future__ import annotations\n+\n import re\n from pathlib import Path\n+from typing import TYPE_CHECKING\n \n-from mkdocs.config.defaults import MkDocsConfig\n-from mkdocs.structure.nav import Page\n+if TYPE_CHECKING:\n+ from mkdocs.config.defaults import MkDocsConfig\n+ from mkdocs.structure.nav import Page\n \n \n def _get_language_of_translation_file(path: Path) -> str:\n@@ -13,7 +17,7 @@\n return m[1]\n \n \n-def on_page_markdown(markdown: str, page: Page, config: MkDocsConfig, **kwargs):\n+def on_page_markdown(markdown: str, page: Page, config: MkDocsConfig, **kwargs) -> str | None:\n if page.file.src_uri == 'user-guide/choosing-your-theme.md':\n here = Path(config.config_file_path).parent\n", "issue": "Building MkDocs' documentation\nWhen I began working on #3493 I needed to be able to run the dev server with MkDocs' own documentation, which was more difficult than it should have been.\r\n\r\nFirst, let me say that I always work from a venv, and created a new one to start my work. In the past, one could simply do `pip install -r requirements/docs.txt` and then `mkdocs serve` and it worked. But I had to jump through a lot of hoops and learn new tools to get things working this time.\r\n\r\nTo be clear, I am not suggesting that the old tools should be brought back. Nor am I suggesting that my preferred tools be used. However, I could find no documentation about how to proceed. Eventually, I did find documentation that hatch is used for tests, and looking at `pyproject.toml` I could see a config for a hatch `docs` env. But not having ever used that tool before, it took be multiple tries (and searches) to work out how to even use that env. But, I'm getting ahead of myself...\r\n\r\nAfter realizing that there were no requirements.txt files, I next looked for optional dependencies defined in `pyproject.toml`. I was hoping to maybe do `pip install .[docs]` (`.` rather than `markdown` because I was working from the working tree of the git repo). When I determined that that wasn't an option, I began looking into the hatch options. Finally in some random question in some forum I found an explanation of how to run the `shell` subcommand with an alternate env: `hatch -e docs shell`.\r\n\r\nAnd then I could finally run `mkdocs serve`. Except that I got an error about missing `po` files, which is weird because I am using English, which should work without any translations being defined. Finally, after generating `po` and `mo` files, I could run the dev server and begin my work.\r\n\r\nAll of this led me to believe that the current maintainers are not ever running the dev server with MkDocs documentation. And I also could not find any automations for deploying the documentation, so I couldn't even use that as a reference.\r\n\r\nAgain, I am not being critical of the tool choices. I see that a switch was made from tox to hatch. That's fine if that is the tool that the maintainers what to use. But for an occasional contributor, I would prefer to not need to learn these tools. I would prefer to be able to use standard Python tools that work with any project. Or if I do need to use your tool of choice, then I would expect the specific commands I would need to use to all be documented clearly.\r\n\r\nI could submit a PR which updated the documentation, but I'm not sure what the recommended best practices are here. 
I am simply bringing this to the attention of the maintainers with the hopes that more consideration will be given to this in the future.\n", "before_files": [{"content": "import re\nfrom pathlib import Path\n\nfrom mkdocs.config.defaults import MkDocsConfig\nfrom mkdocs.structure.nav import Page\n\n\ndef _get_language_of_translation_file(path: Path) -> str:\n with path.open(encoding='utf-8') as f:\n translation_line = f.readline()\n m = re.search('^# (.+) translations ', translation_line)\n assert m\n return m[1]\n\n\ndef on_page_markdown(markdown: str, page: Page, config: MkDocsConfig, **kwargs):\n if page.file.src_uri == 'user-guide/choosing-your-theme.md':\n here = Path(config.config_file_path).parent\n\n def replacement(m: re.Match) -> str:\n lines = []\n for d in sorted(here.glob(m[2])):\n lang = _get_language_of_translation_file(Path(d, 'LC_MESSAGES', 'messages.po'))\n lines.append(f'{m[1]}`{d.name}`: {lang}')\n return '\\n'.join(lines)\n\n return re.sub(\n r'^( *\\* )\\(see the list of existing directories `(.+)`\\)$',\n replacement,\n markdown,\n flags=re.MULTILINE,\n )\n", "path": "docs/hooks.py"}]}
| 1,453 | 220 |
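The accepted patch in this record guards the MkDocs imports with `typing.TYPE_CHECKING`, so `docs/hooks.py` only needs them for annotations. The general shape of that pattern is sketched below; `heavy_dependency` and `Config` are placeholder names, not MkDocs API:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated by static type checkers only; at runtime the import is skipped,
    # so this module stays importable even without the dependency installed.
    from heavy_dependency import Config  # placeholder import


def describe(config: Config) -> str | None:
    return getattr(config, "site_name", None)
```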
gh_patches_debug_27740
|
rasdani/github-patches
|
git_diff
|
facebookresearch__nevergrad-188
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation is out of date for 0.2.0
Trying to run examples like:
https://github.com/facebookresearch/nevergrad/blob/master/docs/machinelearning.md
gives errors.
</issue>
<code>
[start of nevergrad/instrumentation/variables.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
2 # This source code is licensed under the MIT license found in the
3 # LICENSE file in the root directory of this source tree.
4
5 from typing import List, Optional, TypeVar, Union, Sequence, Any
6 import numpy as np
7 from . import discretization
8 from ..common.typetools import ArrayLike
9 from . import transforms
10 from . import utils
11
12
13 X = TypeVar("X")
14
15
16 class SoftmaxCategorical(utils.Variable[X]):
17 """Discrete set of n values transformed to a n-dim continuous variable.
18 Each of the dimension encodes a weight for a value, and the softmax of weights
19 provide probabilities for each possible value. A random value is sampled from
20 this distribution.
21
22 Parameter
23 ---------
24 possibilities: list
25 a list of possible values for the variable
26
27 Note
28 ----
29 Since the chosen value is drawn randomly, the use of this variable makes deterministic
30 functions become stochastic, hence "adding noise"
31 """
32
33 def __init__(self, possibilities: List[X], deterministic: bool = False) -> None:
34 self.deterministic = deterministic
35 self.possibilities = list(possibilities)
36
37 @property
38 def dimension(self) -> int:
39 return len(self.possibilities)
40
41 def data_to_argument(self, data: ArrayLike, deterministic: bool = False) -> X:
42 assert len(data) == len(self.possibilities)
43 deterministic = deterministic | self.deterministic
44 index = int(discretization.softmax_discretization(data, len(self.possibilities), deterministic=deterministic)[0])
45 return self.possibilities[index]
46
47 def argument_to_data(self, arg: X) -> ArrayLike:
48 assert arg in self.possibilities, f'{arg} not in allowed values: {self.possibilities}'
49 return discretization.inverse_softmax_discretization(self.possibilities.index(arg), len(self.possibilities))
50
51 def get_summary(self, data: ArrayLike) -> str:
52 output = self.data_to_argument(data, deterministic=True)
53 probas = discretization.softmax_probas(np.array(data, copy=False))
54 proba_str = ", ".join([f'"{s}": {round(100 * p)}%' for s, p in zip(self.possibilities, probas)])
55 return f"Value {output}, from data: {data} yielding probas: {proba_str}"
56
57 def _short_repr(self) -> str:
58 return "SC({}|{})".format(",".join([str(x) for x in self.possibilities]), int(self.deterministic))
59
60
61 class OrderedDiscrete(SoftmaxCategorical[X]):
62 """Discrete list of n values transformed to a 1-dim discontinuous variable.
63 A gaussian input yields a uniform distribution on the list of variables.
64
65 Parameter
66 ---------
67 possibilities: list
68 a list of possible values for the variable
69
70 Note
71 ----
72 The variables are assumed to be ordered.
73 """
74
75 @property
76 def dimension(self) -> int:
77 return 1
78
79 def data_to_argument(self, data: ArrayLike, deterministic: bool = False) -> X: # pylint: disable=arguments-differ, unused-argument
80 assert len(data) == 1
81 index = discretization.threshold_discretization(data, arity=len(self.possibilities))[0]
82 return self.possibilities[index]
83
84 def argument_to_data(self, arg: X) -> ArrayLike:
85 assert arg in self.possibilities, f'{arg} not in allowed values: {self.possibilities}'
86 index = self.possibilities.index(arg)
87 return discretization.inverse_threshold_discretization([index], len(self.possibilities))
88
89 def _short_repr(self) -> str:
90 return "OD({})".format(",".join([str(x) for x in self.possibilities]))
91
92
93 Y = Union[float, np.ndarray]
94
95
96 class Gaussian(utils.Variable[Y]):
97 """Gaussian variable with a mean and a standard deviation, and
98 possibly a shape (when using directly in Python)
99 The output will simply be mean + std * data
100 """
101
102 def __init__(self, mean: float, std: float, shape: Optional[Sequence[int]] = None) -> None:
103 self.mean = mean
104 self.std = std
105 self.shape = shape
106
107 @property
108 def dimension(self) -> int:
109 return 1 if self.shape is None else int(np.prod(self.shape))
110
111 def data_to_argument(self, data: ArrayLike, deterministic: bool = True) -> Y:
112 assert len(data) == self.dimension
113 x = data[0] if self.shape is None else np.reshape(data, self.shape)
114 return self.std * x + self.mean
115
116 def argument_to_data(self, arg: Y) -> ArrayLike:
117 return [(arg - self.mean) / self.std]
118
119 def _short_repr(self) -> str:
120 return f"G({self.mean},{self.std})"
121
122
123 class _Constant(utils.Variable[X]):
124 """Fake variable so that constant variables can fit into the
125 pipeline.
126 """
127
128 def __init__(self, value: X) -> None:
129 self.value = value
130
131 @classmethod
132 def convert_non_instrument(cls, x: Union[X, utils.Variable[X]]) -> utils.Variable[X]:
133 return x if isinstance(x, utils.Variable) else cls(x)
134
135 @property
136 def dimension(self) -> int:
137 return 0
138
139 def data_to_argument(self, data: ArrayLike, deterministic: bool = False) -> X: # pylint: disable=unused-argument
140 return self.value
141
142 def argument_to_data(self, arg: X) -> ArrayLike:
143 assert arg == self.value, f'{arg} != {self.value}'
144 return []
145
146 def get_summary(self, data: ArrayLike) -> str:
147 raise RuntimeError("Constant summary should not be called")
148
149 def _short_repr(self) -> str:
150 return f"{self.value}"
151
152
153 class Array(utils.Variable[Y]):
154 """Array variable of a given shape, on which several transforms can be applied.
155
156 Parameters
157 ----------
158 *dims: int
159 dimensions of the array (elements of shape)
160
161 Note
162 ----
163 Interesting methods (which can be chained):
164 - asfloat(): converts the array into a float (only for arrays with 1 element)
165 - with_transform(transform): apply a transform to the array
166 - affined(a, b): applies a*x+b
167 - bounded(min_val, max_val, transform="tanh"): applies a transform ("tanh" or "arctan")
168 so that output values are in range [min_val, max_val]
169 - exponentiated(base, coeff): applies base**(coeff * x)
170 """
171
172 def __init__(self, *dims: int) -> None:
173 self.transforms: List[Any] = []
174 self.shape = tuple(dims)
175 self._asfloat = False
176
177 @property
178 def dimension(self) -> int:
179 return int(np.prod(self.shape))
180
181 def data_to_argument(self, data: ArrayLike, deterministic: bool = False) -> Y: # pylint: disable=unused-argument
182 assert len(data) == self.dimension
183 array = np.array(data, copy=False)
184 for transf in self.transforms:
185 array = transf.forward(array)
186 if self._asfloat:
187 return float(array[0])
188 return array.reshape(self.shape)
189
190 def argument_to_data(self, arg: Y) -> np.ndarray:
191 if self._asfloat:
192 output = np.array([arg])
193 else:
194 output = np.array(arg, copy=False).ravel()
195 for transf in reversed(self.transforms):
196 output = transf.backward(output)
197 return output
198
199 def _short_repr(self) -> str:
200 dims = ",".join(str(d) for d in self.shape)
201 transf = "" if not self.transforms else (",[" + ",".join(f"{t:short}" for t in self.transforms) + "]")
202 fl = "" if not self._asfloat else "f"
203 return f"A({dims}{transf}){fl}"
204
205 def asfloat(self) -> 'Array':
206 if self.dimension != 1:
207 raise RuntimeError("Only Arrays with 1 element can be cast to float")
208 self._asfloat = True
209 return self
210
211 def with_transform(self, transform: transforms.Transform) -> 'Array':
212 self.transforms.append(transform)
213 return self
214
215 def exponentiated(self, base: float, coeff: float) -> 'Array':
216 """Exponentiation transform base ** (coeff * x)
 217         This can for instance be used to get logarithmically distributed values 10**(-[1, 2, 3]).
218
219 Parameters
220 ----------
221 base: float
222 coeff: float
223 """
224 return self.with_transform(transforms.Exponentiate(base=base, coeff=coeff))
225
226 def affined(self, a: float, b: float = 0.) -> 'Array':
227 """Affine transform a * x + b
228
229 Parameters
230 ----------
231 a: float
232 b: float
233 """
234 return self.with_transform(transforms.Affine(a=a, b=b))
235
236 def bounded(self, min_val: float, max_val: float, transform: str = "tanh") -> 'Array':
237 """Bounds all real values into [min_val, max_val] using a tanh transform.
238 Beware, tanh goes very fast to its limits.
239
240 Parameters
241 ----------
242 min_val: float
243 max_val: float
244 transform: str
245 either "tanh" or "arctan" (note that "tanh" reaches the boundaries really quickly,
246 while "arctan" is much softer)
247 """
248 if transform not in ["tanh", "arctan"]:
249 raise ValueError("Only 'tanh' and 'arctan' are allowed as transform")
250 Transf = transforms.ArctanBound if transform == "arctan" else transforms.TanhBound
251 return self.with_transform(Transf(min_val=min_val, max_val=max_val))
252
[end of nevergrad/instrumentation/variables.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nevergrad/instrumentation/variables.py b/nevergrad/instrumentation/variables.py
--- a/nevergrad/instrumentation/variables.py
+++ b/nevergrad/instrumentation/variables.py
@@ -58,7 +58,7 @@
return "SC({}|{})".format(",".join([str(x) for x in self.possibilities]), int(self.deterministic))
-class OrderedDiscrete(SoftmaxCategorical[X]):
+class OrderedDiscrete(utils.Variable[X]):
"""Discrete list of n values transformed to a 1-dim discontinuous variable.
A gaussian input yields a uniform distribution on the list of variables.
@@ -72,6 +72,9 @@
The variables are assumed to be ordered.
"""
+ def __init__(self, possibilities: List[X]) -> None:
+ self.possibilities = list(possibilities)
+
@property
def dimension(self) -> int:
return 1
@@ -108,7 +111,7 @@
def dimension(self) -> int:
return 1 if self.shape is None else int(np.prod(self.shape))
- def data_to_argument(self, data: ArrayLike, deterministic: bool = True) -> Y:
+ def data_to_argument(self, data: ArrayLike, deterministic: bool = True) -> Y: # pylint: disable=unused-argument
assert len(data) == self.dimension
x = data[0] if self.shape is None else np.reshape(data, self.shape)
return self.std * x + self.mean
|
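For context on the class the patch above touches, here is a round-trip through `OrderedDiscrete` written against the `variables.py` listing in this record (0.2.x-era instrumentation API, so treat it as a sketch rather than guidance for current nevergrad; the example values are made up):

```python
from nevergrad.instrumentation.variables import OrderedDiscrete

od = OrderedDiscrete(["low", "medium", "high"])

data = od.argument_to_data("medium")  # 1-dim continuous encoding of the choice
value = od.data_to_argument(data)     # threshold discretization back to a value
print(data, value)                    # value is expected to be "medium"
```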
{"golden_diff": "diff --git a/nevergrad/instrumentation/variables.py b/nevergrad/instrumentation/variables.py\n--- a/nevergrad/instrumentation/variables.py\n+++ b/nevergrad/instrumentation/variables.py\n@@ -58,7 +58,7 @@\n return \"SC({}|{})\".format(\",\".join([str(x) for x in self.possibilities]), int(self.deterministic))\n \n \n-class OrderedDiscrete(SoftmaxCategorical[X]):\n+class OrderedDiscrete(utils.Variable[X]):\n \"\"\"Discrete list of n values transformed to a 1-dim discontinuous variable.\n A gaussian input yields a uniform distribution on the list of variables.\n \n@@ -72,6 +72,9 @@\n The variables are assumed to be ordered.\n \"\"\"\n \n+ def __init__(self, possibilities: List[X]) -> None:\n+ self.possibilities = list(possibilities)\n+\n @property\n def dimension(self) -> int:\n return 1\n@@ -108,7 +111,7 @@\n def dimension(self) -> int:\n return 1 if self.shape is None else int(np.prod(self.shape))\n \n- def data_to_argument(self, data: ArrayLike, deterministic: bool = True) -> Y:\n+ def data_to_argument(self, data: ArrayLike, deterministic: bool = True) -> Y: # pylint: disable=unused-argument\n assert len(data) == self.dimension\n x = data[0] if self.shape is None else np.reshape(data, self.shape)\n return self.std * x + self.mean\n", "issue": "Documentation is out of date for 0.2.0\nTrying to run examples like:\r\n\r\nhttps://github.com/facebookresearch/nevergrad/blob/master/docs/machinelearning.md\r\n\r\ngives errors.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom typing import List, Optional, TypeVar, Union, Sequence, Any\nimport numpy as np\nfrom . import discretization\nfrom ..common.typetools import ArrayLike\nfrom . import transforms\nfrom . import utils\n\n\nX = TypeVar(\"X\")\n\n\nclass SoftmaxCategorical(utils.Variable[X]):\n \"\"\"Discrete set of n values transformed to a n-dim continuous variable.\n Each of the dimension encodes a weight for a value, and the softmax of weights\n provide probabilities for each possible value. 
A random value is sampled from\n this distribution.\n\n Parameter\n ---------\n possibilities: list\n a list of possible values for the variable\n\n Note\n ----\n Since the chosen value is drawn randomly, the use of this variable makes deterministic\n functions become stochastic, hence \"adding noise\"\n \"\"\"\n\n def __init__(self, possibilities: List[X], deterministic: bool = False) -> None:\n self.deterministic = deterministic\n self.possibilities = list(possibilities)\n\n @property\n def dimension(self) -> int:\n return len(self.possibilities)\n\n def data_to_argument(self, data: ArrayLike, deterministic: bool = False) -> X:\n assert len(data) == len(self.possibilities)\n deterministic = deterministic | self.deterministic\n index = int(discretization.softmax_discretization(data, len(self.possibilities), deterministic=deterministic)[0])\n return self.possibilities[index]\n\n def argument_to_data(self, arg: X) -> ArrayLike:\n assert arg in self.possibilities, f'{arg} not in allowed values: {self.possibilities}'\n return discretization.inverse_softmax_discretization(self.possibilities.index(arg), len(self.possibilities))\n\n def get_summary(self, data: ArrayLike) -> str:\n output = self.data_to_argument(data, deterministic=True)\n probas = discretization.softmax_probas(np.array(data, copy=False))\n proba_str = \", \".join([f'\"{s}\": {round(100 * p)}%' for s, p in zip(self.possibilities, probas)])\n return f\"Value {output}, from data: {data} yielding probas: {proba_str}\"\n\n def _short_repr(self) -> str:\n return \"SC({}|{})\".format(\",\".join([str(x) for x in self.possibilities]), int(self.deterministic))\n\n\nclass OrderedDiscrete(SoftmaxCategorical[X]):\n \"\"\"Discrete list of n values transformed to a 1-dim discontinuous variable.\n A gaussian input yields a uniform distribution on the list of variables.\n\n Parameter\n ---------\n possibilities: list\n a list of possible values for the variable\n\n Note\n ----\n The variables are assumed to be ordered.\n \"\"\"\n\n @property\n def dimension(self) -> int:\n return 1\n\n def data_to_argument(self, data: ArrayLike, deterministic: bool = False) -> X: # pylint: disable=arguments-differ, unused-argument\n assert len(data) == 1\n index = discretization.threshold_discretization(data, arity=len(self.possibilities))[0]\n return self.possibilities[index]\n\n def argument_to_data(self, arg: X) -> ArrayLike:\n assert arg in self.possibilities, f'{arg} not in allowed values: {self.possibilities}'\n index = self.possibilities.index(arg)\n return discretization.inverse_threshold_discretization([index], len(self.possibilities))\n\n def _short_repr(self) -> str:\n return \"OD({})\".format(\",\".join([str(x) for x in self.possibilities]))\n\n\nY = Union[float, np.ndarray]\n\n\nclass Gaussian(utils.Variable[Y]):\n \"\"\"Gaussian variable with a mean and a standard deviation, and\n possibly a shape (when using directly in Python)\n The output will simply be mean + std * data\n \"\"\"\n\n def __init__(self, mean: float, std: float, shape: Optional[Sequence[int]] = None) -> None:\n self.mean = mean\n self.std = std\n self.shape = shape\n\n @property\n def dimension(self) -> int:\n return 1 if self.shape is None else int(np.prod(self.shape))\n\n def data_to_argument(self, data: ArrayLike, deterministic: bool = True) -> Y:\n assert len(data) == self.dimension\n x = data[0] if self.shape is None else np.reshape(data, self.shape)\n return self.std * x + self.mean\n\n def argument_to_data(self, arg: Y) -> ArrayLike:\n return [(arg - self.mean) / 
self.std]\n\n def _short_repr(self) -> str:\n return f\"G({self.mean},{self.std})\"\n\n\nclass _Constant(utils.Variable[X]):\n \"\"\"Fake variable so that constant variables can fit into the\n pipeline.\n \"\"\"\n\n def __init__(self, value: X) -> None:\n self.value = value\n\n @classmethod\n def convert_non_instrument(cls, x: Union[X, utils.Variable[X]]) -> utils.Variable[X]:\n return x if isinstance(x, utils.Variable) else cls(x)\n\n @property\n def dimension(self) -> int:\n return 0\n\n def data_to_argument(self, data: ArrayLike, deterministic: bool = False) -> X: # pylint: disable=unused-argument\n return self.value\n\n def argument_to_data(self, arg: X) -> ArrayLike:\n assert arg == self.value, f'{arg} != {self.value}'\n return []\n\n def get_summary(self, data: ArrayLike) -> str:\n raise RuntimeError(\"Constant summary should not be called\")\n\n def _short_repr(self) -> str:\n return f\"{self.value}\"\n\n\nclass Array(utils.Variable[Y]):\n \"\"\"Array variable of a given shape, on which several transforms can be applied.\n\n Parameters\n ----------\n *dims: int\n dimensions of the array (elements of shape)\n\n Note\n ----\n Interesting methods (which can be chained):\n - asfloat(): converts the array into a float (only for arrays with 1 element)\n - with_transform(transform): apply a transform to the array\n - affined(a, b): applies a*x+b\n - bounded(min_val, max_val, transform=\"tanh\"): applies a transform (\"tanh\" or \"arctan\")\n so that output values are in range [min_val, max_val]\n - exponentiated(base, coeff): applies base**(coeff * x)\n \"\"\"\n\n def __init__(self, *dims: int) -> None:\n self.transforms: List[Any] = []\n self.shape = tuple(dims)\n self._asfloat = False\n\n @property\n def dimension(self) -> int:\n return int(np.prod(self.shape))\n\n def data_to_argument(self, data: ArrayLike, deterministic: bool = False) -> Y: # pylint: disable=unused-argument\n assert len(data) == self.dimension\n array = np.array(data, copy=False)\n for transf in self.transforms:\n array = transf.forward(array)\n if self._asfloat:\n return float(array[0])\n return array.reshape(self.shape)\n\n def argument_to_data(self, arg: Y) -> np.ndarray:\n if self._asfloat:\n output = np.array([arg])\n else:\n output = np.array(arg, copy=False).ravel()\n for transf in reversed(self.transforms):\n output = transf.backward(output)\n return output\n\n def _short_repr(self) -> str:\n dims = \",\".join(str(d) for d in self.shape)\n transf = \"\" if not self.transforms else (\",[\" + \",\".join(f\"{t:short}\" for t in self.transforms) + \"]\")\n fl = \"\" if not self._asfloat else \"f\"\n return f\"A({dims}{transf}){fl}\"\n\n def asfloat(self) -> 'Array':\n if self.dimension != 1:\n raise RuntimeError(\"Only Arrays with 1 element can be cast to float\")\n self._asfloat = True\n return self\n\n def with_transform(self, transform: transforms.Transform) -> 'Array':\n self.transforms.append(transform)\n return self\n\n def exponentiated(self, base: float, coeff: float) -> 'Array':\n \"\"\"Exponentiation transform base ** (coeff * x)\n This can for instance be used for to get a logarithmicly distruted values 10**(-[1, 2, 3]).\n\n Parameters\n ----------\n base: float\n coeff: float\n \"\"\"\n return self.with_transform(transforms.Exponentiate(base=base, coeff=coeff))\n\n def affined(self, a: float, b: float = 0.) 
-> 'Array':\n \"\"\"Affine transform a * x + b\n\n Parameters\n ----------\n a: float\n b: float\n \"\"\"\n return self.with_transform(transforms.Affine(a=a, b=b))\n\n def bounded(self, min_val: float, max_val: float, transform: str = \"tanh\") -> 'Array':\n \"\"\"Bounds all real values into [min_val, max_val] using a tanh transform.\n Beware, tanh goes very fast to its limits.\n\n Parameters\n ----------\n min_val: float\n max_val: float\n transform: str\n either \"tanh\" or \"arctan\" (note that \"tanh\" reaches the boundaries really quickly,\n while \"arctan\" is much softer)\n \"\"\"\n if transform not in [\"tanh\", \"arctan\"]:\n raise ValueError(\"Only 'tanh' and 'arctan' are allowed as transform\")\n Transf = transforms.ArctanBound if transform == \"arctan\" else transforms.TanhBound\n return self.with_transform(Transf(min_val=min_val, max_val=max_val))\n", "path": "nevergrad/instrumentation/variables.py"}]}
| 3,450 | 347 |
gh_patches_debug_33209
|
rasdani/github-patches
|
git_diff
|
freqtrade__freqtrade-4697
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docker container as non root user
The current docker container uses the root user by default. This causes permission problems on the host side: for example, I can't edit files in bind-mounted folders without sudo.
</issue>
<code>
[start of freqtrade/commands/build_config_commands.py]
1 import logging
2 import secrets
3 from pathlib import Path
4 from typing import Any, Dict, List
5
6 from questionary import Separator, prompt
7
8 from freqtrade.constants import UNLIMITED_STAKE_AMOUNT
9 from freqtrade.exceptions import OperationalException
10 from freqtrade.exchange import MAP_EXCHANGE_CHILDCLASS, available_exchanges
11 from freqtrade.misc import render_template
12
13
14 logger = logging.getLogger(__name__)
15
16
17 def validate_is_int(val):
18 try:
19 _ = int(val)
20 return True
21 except Exception:
22 return False
23
24
25 def validate_is_float(val):
26 try:
27 _ = float(val)
28 return True
29 except Exception:
30 return False
31
32
33 def ask_user_overwrite(config_path: Path) -> bool:
34 questions = [
35 {
36 "type": "confirm",
37 "name": "overwrite",
38 "message": f"File {config_path} already exists. Overwrite?",
39 "default": False,
40 },
41 ]
42 answers = prompt(questions)
43 return answers['overwrite']
44
45
46 def ask_user_config() -> Dict[str, Any]:
47 """
48 Ask user a few questions to build the configuration.
49 Interactive questions built using https://github.com/tmbo/questionary
50 :returns: Dict with keys to put into template
51 """
52 questions: List[Dict[str, Any]] = [
53 {
54 "type": "confirm",
55 "name": "dry_run",
56 "message": "Do you want to enable Dry-run (simulated trades)?",
57 "default": True,
58 },
59 {
60 "type": "text",
61 "name": "stake_currency",
62 "message": "Please insert your stake currency:",
63 "default": 'BTC',
64 },
65 {
66 "type": "text",
67 "name": "stake_amount",
68 "message": "Please insert your stake amount:",
69 "default": "0.01",
70 "validate": lambda val: val == UNLIMITED_STAKE_AMOUNT or validate_is_float(val),
71 },
72 {
73 "type": "text",
74 "name": "max_open_trades",
75 "message": f"Please insert max_open_trades (Integer or '{UNLIMITED_STAKE_AMOUNT}'):",
76 "default": "3",
77 "validate": lambda val: val == UNLIMITED_STAKE_AMOUNT or validate_is_int(val)
78 },
79 {
80 "type": "text",
81 "name": "timeframe",
82 "message": "Please insert your desired timeframe (e.g. 5m):",
83 "default": "5m",
84 },
85 {
86 "type": "text",
87 "name": "fiat_display_currency",
88 "message": "Please insert your display Currency (for reporting):",
89 "default": 'USD',
90 },
91 {
92 "type": "select",
93 "name": "exchange_name",
94 "message": "Select exchange",
95 "choices": [
96 "binance",
97 "binanceus",
98 "bittrex",
99 "kraken",
100 "ftx",
101 Separator(),
102 "other",
103 ],
104 },
105 {
106 "type": "autocomplete",
107 "name": "exchange_name",
108 "message": "Type your exchange name (Must be supported by ccxt)",
109 "choices": available_exchanges(),
110 "when": lambda x: x["exchange_name"] == 'other'
111 },
112 {
113 "type": "password",
114 "name": "exchange_key",
115 "message": "Insert Exchange Key",
116 "when": lambda x: not x['dry_run']
117 },
118 {
119 "type": "password",
120 "name": "exchange_secret",
121 "message": "Insert Exchange Secret",
122 "when": lambda x: not x['dry_run']
123 },
124 {
125 "type": "confirm",
126 "name": "telegram",
127 "message": "Do you want to enable Telegram?",
128 "default": False,
129 },
130 {
131 "type": "password",
132 "name": "telegram_token",
133 "message": "Insert Telegram token",
134 "when": lambda x: x['telegram']
135 },
136 {
137 "type": "text",
138 "name": "telegram_chat_id",
139 "message": "Insert Telegram chat id",
140 "when": lambda x: x['telegram']
141 },
142 {
143 "type": "confirm",
144 "name": "api_server",
145 "message": "Do you want to enable the Rest API (includes FreqUI)?",
146 "default": False,
147 },
148 {
149 "type": "text",
150 "name": "api_server_listen_addr",
151 "message": "Insert Api server Listen Address (best left untouched default!)",
152 "default": "127.0.0.1",
153 "when": lambda x: x['api_server']
154 },
155 {
156 "type": "text",
157 "name": "api_server_username",
158 "message": "Insert api-server username",
159 "default": "freqtrader",
160 "when": lambda x: x['api_server']
161 },
162 {
163 "type": "text",
164 "name": "api_server_password",
165 "message": "Insert api-server password",
166 "when": lambda x: x['api_server']
167 },
168 ]
169 answers = prompt(questions)
170
171 if not answers:
172 # Interrupted questionary sessions return an empty dict.
173 raise OperationalException("User interrupted interactive questions.")
174
175 # Force JWT token to be a random string
176 answers['api_server_jwt_key'] = secrets.token_hex()
177
178 return answers
179
180
181 def deploy_new_config(config_path: Path, selections: Dict[str, Any]) -> None:
182 """
183 Applies selections to the template and writes the result to config_path
184 :param config_path: Path object for new config file. Should not exist yet
185 :param selecions: Dict containing selections taken by the user.
186 """
187 from jinja2.exceptions import TemplateNotFound
188 try:
189 exchange_template = MAP_EXCHANGE_CHILDCLASS.get(
190 selections['exchange_name'], selections['exchange_name'])
191
192 selections['exchange'] = render_template(
193 templatefile=f"subtemplates/exchange_{exchange_template}.j2",
194 arguments=selections
195 )
196 except TemplateNotFound:
197 selections['exchange'] = render_template(
198 templatefile="subtemplates/exchange_generic.j2",
199 arguments=selections
200 )
201
202 config_text = render_template(templatefile='base_config.json.j2',
203 arguments=selections)
204
205 logger.info(f"Writing config to `{config_path}`.")
206 logger.info(
207 "Please make sure to check the configuration contents and adjust settings to your needs.")
208
209 config_path.write_text(config_text)
210
211
212 def start_new_config(args: Dict[str, Any]) -> None:
213 """
214 Create a new strategy from a template
215 Asking the user questions to fill out the templateaccordingly.
216 """
217
218 config_path = Path(args['config'][0])
219 if config_path.exists():
220 overwrite = ask_user_overwrite(config_path)
221 if overwrite:
222 config_path.unlink()
223 else:
224 raise OperationalException(
225 f"Configuration file `{config_path}` already exists. "
226 "Please delete it or use a different configuration file name.")
227 selections = ask_user_config()
228 deploy_new_config(config_path, selections)
229
[end of freqtrade/commands/build_config_commands.py]
[start of freqtrade/configuration/directory_operations.py]
1 import logging
2 import shutil
3 from pathlib import Path
4 from typing import Any, Dict, Optional
5
6 from freqtrade.constants import USER_DATA_FILES
7 from freqtrade.exceptions import OperationalException
8
9
10 logger = logging.getLogger(__name__)
11
12
13 def create_datadir(config: Dict[str, Any], datadir: Optional[str] = None) -> Path:
14
15 folder = Path(datadir) if datadir else Path(f"{config['user_data_dir']}/data")
16 if not datadir:
17 # set datadir
18 exchange_name = config.get('exchange', {}).get('name').lower()
19 folder = folder.joinpath(exchange_name)
20
21 if not folder.is_dir():
22 folder.mkdir(parents=True)
23 logger.info(f'Created data directory: {datadir}')
24 return folder
25
26
27 def create_userdata_dir(directory: str, create_dir: bool = False) -> Path:
28 """
29 Create userdata directory structure.
30 if create_dir is True, then the parent-directory will be created if it does not exist.
31 Sub-directories will always be created if the parent directory exists.
32 Raises OperationalException if given a non-existing directory.
33 :param directory: Directory to check
34 :param create_dir: Create directory if it does not exist.
35 :return: Path object containing the directory
36 """
37 sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "logs",
38 "notebooks", "plot", "strategies", ]
39 folder = Path(directory)
40 if not folder.is_dir():
41 if create_dir:
42 folder.mkdir(parents=True)
43 logger.info(f'Created user-data directory: {folder}')
44 else:
45 raise OperationalException(
46 f"Directory `{folder}` does not exist. "
47 "Please use `freqtrade create-userdir` to create a user directory")
48
49 # Create required subdirectories
50 for f in sub_dirs:
51 subfolder = folder / f
52 if not subfolder.is_dir():
53 subfolder.mkdir(parents=False)
54 return folder
55
56
57 def copy_sample_files(directory: Path, overwrite: bool = False) -> None:
58 """
59 Copy files from templates to User data directory.
60 :param directory: Directory to copy data to
61 :param overwrite: Overwrite existing sample files
62 """
63 if not directory.is_dir():
64 raise OperationalException(f"Directory `{directory}` does not exist.")
65 sourcedir = Path(__file__).parents[1] / "templates"
66 for source, target in USER_DATA_FILES.items():
67 targetdir = directory / target
68 if not targetdir.is_dir():
69 raise OperationalException(f"Directory `{targetdir}` does not exist.")
70 targetfile = targetdir / source
71 if targetfile.exists():
72 if not overwrite:
73 logger.warning(f"File `{targetfile}` exists already, not deploying sample file.")
74 continue
75 logger.warning(f"File `{targetfile}` exists already, overwriting.")
76 shutil.copy(str(sourcedir / source), str(targetfile))
77
[end of freqtrade/configuration/directory_operations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/freqtrade/commands/build_config_commands.py b/freqtrade/commands/build_config_commands.py
--- a/freqtrade/commands/build_config_commands.py
+++ b/freqtrade/commands/build_config_commands.py
@@ -5,6 +5,7 @@
from questionary import Separator, prompt
+from freqtrade.configuration.directory_operations import chown_user_directory
from freqtrade.constants import UNLIMITED_STAKE_AMOUNT
from freqtrade.exceptions import OperationalException
from freqtrade.exchange import MAP_EXCHANGE_CHILDCLASS, available_exchanges
@@ -216,6 +217,7 @@
"""
config_path = Path(args['config'][0])
+ chown_user_directory(config_path.parent)
if config_path.exists():
overwrite = ask_user_overwrite(config_path)
if overwrite:
diff --git a/freqtrade/configuration/directory_operations.py b/freqtrade/configuration/directory_operations.py
--- a/freqtrade/configuration/directory_operations.py
+++ b/freqtrade/configuration/directory_operations.py
@@ -24,6 +24,21 @@
return folder
+def chown_user_directory(directory: Path) -> None:
+ """
+ Use Sudo to change permissions of the home-directory if necessary
+ Only applies when running in docker!
+ """
+ import os
+ if os.environ.get('FT_APP_ENV') == 'docker':
+ try:
+ import subprocess
+ subprocess.check_output(
+ ['sudo', 'chown', '-R', 'ftuser:', str(directory.resolve())])
+ except Exception:
+ logger.warning(f"Could not chown {directory}")
+
+
def create_userdata_dir(directory: str, create_dir: bool = False) -> Path:
"""
Create userdata directory structure.
@@ -37,6 +52,7 @@
sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "logs",
"notebooks", "plot", "strategies", ]
folder = Path(directory)
+ chown_user_directory(folder)
if not folder.is_dir():
if create_dir:
folder.mkdir(parents=True)
|
{"golden_diff": "diff --git a/freqtrade/commands/build_config_commands.py b/freqtrade/commands/build_config_commands.py\n--- a/freqtrade/commands/build_config_commands.py\n+++ b/freqtrade/commands/build_config_commands.py\n@@ -5,6 +5,7 @@\n \n from questionary import Separator, prompt\n \n+from freqtrade.configuration.directory_operations import chown_user_directory\n from freqtrade.constants import UNLIMITED_STAKE_AMOUNT\n from freqtrade.exceptions import OperationalException\n from freqtrade.exchange import MAP_EXCHANGE_CHILDCLASS, available_exchanges\n@@ -216,6 +217,7 @@\n \"\"\"\n \n config_path = Path(args['config'][0])\n+ chown_user_directory(config_path.parent)\n if config_path.exists():\n overwrite = ask_user_overwrite(config_path)\n if overwrite:\ndiff --git a/freqtrade/configuration/directory_operations.py b/freqtrade/configuration/directory_operations.py\n--- a/freqtrade/configuration/directory_operations.py\n+++ b/freqtrade/configuration/directory_operations.py\n@@ -24,6 +24,21 @@\n return folder\n \n \n+def chown_user_directory(directory: Path) -> None:\n+ \"\"\"\n+ Use Sudo to change permissions of the home-directory if necessary\n+ Only applies when running in docker!\n+ \"\"\"\n+ import os\n+ if os.environ.get('FT_APP_ENV') == 'docker':\n+ try:\n+ import subprocess\n+ subprocess.check_output(\n+ ['sudo', 'chown', '-R', 'ftuser:', str(directory.resolve())])\n+ except Exception:\n+ logger.warning(f\"Could not chown {directory}\")\n+\n+\n def create_userdata_dir(directory: str, create_dir: bool = False) -> Path:\n \"\"\"\n Create userdata directory structure.\n@@ -37,6 +52,7 @@\n sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"logs\",\n \"notebooks\", \"plot\", \"strategies\", ]\n folder = Path(directory)\n+ chown_user_directory(folder)\n if not folder.is_dir():\n if create_dir:\n folder.mkdir(parents=True)\n", "issue": "Docker container as non root user\nThe current docker container uses root user by default. This caused permission problems from the host side. For example, I can't edit files from binded folders without sudo. \n", "before_files": [{"content": "import logging\nimport secrets\nfrom pathlib import Path\nfrom typing import Any, Dict, List\n\nfrom questionary import Separator, prompt\n\nfrom freqtrade.constants import UNLIMITED_STAKE_AMOUNT\nfrom freqtrade.exceptions import OperationalException\nfrom freqtrade.exchange import MAP_EXCHANGE_CHILDCLASS, available_exchanges\nfrom freqtrade.misc import render_template\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef validate_is_int(val):\n try:\n _ = int(val)\n return True\n except Exception:\n return False\n\n\ndef validate_is_float(val):\n try:\n _ = float(val)\n return True\n except Exception:\n return False\n\n\ndef ask_user_overwrite(config_path: Path) -> bool:\n questions = [\n {\n \"type\": \"confirm\",\n \"name\": \"overwrite\",\n \"message\": f\"File {config_path} already exists. 
Overwrite?\",\n \"default\": False,\n },\n ]\n answers = prompt(questions)\n return answers['overwrite']\n\n\ndef ask_user_config() -> Dict[str, Any]:\n \"\"\"\n Ask user a few questions to build the configuration.\n Interactive questions built using https://github.com/tmbo/questionary\n :returns: Dict with keys to put into template\n \"\"\"\n questions: List[Dict[str, Any]] = [\n {\n \"type\": \"confirm\",\n \"name\": \"dry_run\",\n \"message\": \"Do you want to enable Dry-run (simulated trades)?\",\n \"default\": True,\n },\n {\n \"type\": \"text\",\n \"name\": \"stake_currency\",\n \"message\": \"Please insert your stake currency:\",\n \"default\": 'BTC',\n },\n {\n \"type\": \"text\",\n \"name\": \"stake_amount\",\n \"message\": \"Please insert your stake amount:\",\n \"default\": \"0.01\",\n \"validate\": lambda val: val == UNLIMITED_STAKE_AMOUNT or validate_is_float(val),\n },\n {\n \"type\": \"text\",\n \"name\": \"max_open_trades\",\n \"message\": f\"Please insert max_open_trades (Integer or '{UNLIMITED_STAKE_AMOUNT}'):\",\n \"default\": \"3\",\n \"validate\": lambda val: val == UNLIMITED_STAKE_AMOUNT or validate_is_int(val)\n },\n {\n \"type\": \"text\",\n \"name\": \"timeframe\",\n \"message\": \"Please insert your desired timeframe (e.g. 5m):\",\n \"default\": \"5m\",\n },\n {\n \"type\": \"text\",\n \"name\": \"fiat_display_currency\",\n \"message\": \"Please insert your display Currency (for reporting):\",\n \"default\": 'USD',\n },\n {\n \"type\": \"select\",\n \"name\": \"exchange_name\",\n \"message\": \"Select exchange\",\n \"choices\": [\n \"binance\",\n \"binanceus\",\n \"bittrex\",\n \"kraken\",\n \"ftx\",\n Separator(),\n \"other\",\n ],\n },\n {\n \"type\": \"autocomplete\",\n \"name\": \"exchange_name\",\n \"message\": \"Type your exchange name (Must be supported by ccxt)\",\n \"choices\": available_exchanges(),\n \"when\": lambda x: x[\"exchange_name\"] == 'other'\n },\n {\n \"type\": \"password\",\n \"name\": \"exchange_key\",\n \"message\": \"Insert Exchange Key\",\n \"when\": lambda x: not x['dry_run']\n },\n {\n \"type\": \"password\",\n \"name\": \"exchange_secret\",\n \"message\": \"Insert Exchange Secret\",\n \"when\": lambda x: not x['dry_run']\n },\n {\n \"type\": \"confirm\",\n \"name\": \"telegram\",\n \"message\": \"Do you want to enable Telegram?\",\n \"default\": False,\n },\n {\n \"type\": \"password\",\n \"name\": \"telegram_token\",\n \"message\": \"Insert Telegram token\",\n \"when\": lambda x: x['telegram']\n },\n {\n \"type\": \"text\",\n \"name\": \"telegram_chat_id\",\n \"message\": \"Insert Telegram chat id\",\n \"when\": lambda x: x['telegram']\n },\n {\n \"type\": \"confirm\",\n \"name\": \"api_server\",\n \"message\": \"Do you want to enable the Rest API (includes FreqUI)?\",\n \"default\": False,\n },\n {\n \"type\": \"text\",\n \"name\": \"api_server_listen_addr\",\n \"message\": \"Insert Api server Listen Address (best left untouched default!)\",\n \"default\": \"127.0.0.1\",\n \"when\": lambda x: x['api_server']\n },\n {\n \"type\": \"text\",\n \"name\": \"api_server_username\",\n \"message\": \"Insert api-server username\",\n \"default\": \"freqtrader\",\n \"when\": lambda x: x['api_server']\n },\n {\n \"type\": \"text\",\n \"name\": \"api_server_password\",\n \"message\": \"Insert api-server password\",\n \"when\": lambda x: x['api_server']\n },\n ]\n answers = prompt(questions)\n\n if not answers:\n # Interrupted questionary sessions return an empty dict.\n raise OperationalException(\"User interrupted interactive questions.\")\n\n # 
Force JWT token to be a random string\n answers['api_server_jwt_key'] = secrets.token_hex()\n\n return answers\n\n\ndef deploy_new_config(config_path: Path, selections: Dict[str, Any]) -> None:\n \"\"\"\n Applies selections to the template and writes the result to config_path\n :param config_path: Path object for new config file. Should not exist yet\n :param selecions: Dict containing selections taken by the user.\n \"\"\"\n from jinja2.exceptions import TemplateNotFound\n try:\n exchange_template = MAP_EXCHANGE_CHILDCLASS.get(\n selections['exchange_name'], selections['exchange_name'])\n\n selections['exchange'] = render_template(\n templatefile=f\"subtemplates/exchange_{exchange_template}.j2\",\n arguments=selections\n )\n except TemplateNotFound:\n selections['exchange'] = render_template(\n templatefile=\"subtemplates/exchange_generic.j2\",\n arguments=selections\n )\n\n config_text = render_template(templatefile='base_config.json.j2',\n arguments=selections)\n\n logger.info(f\"Writing config to `{config_path}`.\")\n logger.info(\n \"Please make sure to check the configuration contents and adjust settings to your needs.\")\n\n config_path.write_text(config_text)\n\n\ndef start_new_config(args: Dict[str, Any]) -> None:\n \"\"\"\n Create a new strategy from a template\n Asking the user questions to fill out the templateaccordingly.\n \"\"\"\n\n config_path = Path(args['config'][0])\n if config_path.exists():\n overwrite = ask_user_overwrite(config_path)\n if overwrite:\n config_path.unlink()\n else:\n raise OperationalException(\n f\"Configuration file `{config_path}` already exists. \"\n \"Please delete it or use a different configuration file name.\")\n selections = ask_user_config()\n deploy_new_config(config_path, selections)\n", "path": "freqtrade/commands/build_config_commands.py"}, {"content": "import logging\nimport shutil\nfrom pathlib import Path\nfrom typing import Any, Dict, Optional\n\nfrom freqtrade.constants import USER_DATA_FILES\nfrom freqtrade.exceptions import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_datadir(config: Dict[str, Any], datadir: Optional[str] = None) -> Path:\n\n folder = Path(datadir) if datadir else Path(f\"{config['user_data_dir']}/data\")\n if not datadir:\n # set datadir\n exchange_name = config.get('exchange', {}).get('name').lower()\n folder = folder.joinpath(exchange_name)\n\n if not folder.is_dir():\n folder.mkdir(parents=True)\n logger.info(f'Created data directory: {datadir}')\n return folder\n\n\ndef create_userdata_dir(directory: str, create_dir: bool = False) -> Path:\n \"\"\"\n Create userdata directory structure.\n if create_dir is True, then the parent-directory will be created if it does not exist.\n Sub-directories will always be created if the parent directory exists.\n Raises OperationalException if given a non-existing directory.\n :param directory: Directory to check\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"logs\",\n \"notebooks\", \"plot\", \"strategies\", ]\n folder = Path(directory)\n if not folder.is_dir():\n if create_dir:\n folder.mkdir(parents=True)\n logger.info(f'Created user-data directory: {folder}')\n else:\n raise OperationalException(\n f\"Directory `{folder}` does not exist. 
\"\n \"Please use `freqtrade create-userdir` to create a user directory\")\n\n # Create required subdirectories\n for f in sub_dirs:\n subfolder = folder / f\n if not subfolder.is_dir():\n subfolder.mkdir(parents=False)\n return folder\n\n\ndef copy_sample_files(directory: Path, overwrite: bool = False) -> None:\n \"\"\"\n Copy files from templates to User data directory.\n :param directory: Directory to copy data to\n :param overwrite: Overwrite existing sample files\n \"\"\"\n if not directory.is_dir():\n raise OperationalException(f\"Directory `{directory}` does not exist.\")\n sourcedir = Path(__file__).parents[1] / \"templates\"\n for source, target in USER_DATA_FILES.items():\n targetdir = directory / target\n if not targetdir.is_dir():\n raise OperationalException(f\"Directory `{targetdir}` does not exist.\")\n targetfile = targetdir / source\n if targetfile.exists():\n if not overwrite:\n logger.warning(f\"File `{targetfile}` exists already, not deploying sample file.\")\n continue\n logger.warning(f\"File `{targetfile}` exists already, overwriting.\")\n shutil.copy(str(sourcedir / source), str(targetfile))\n", "path": "freqtrade/configuration/directory_operations.py"}]}
| 3,517 | 462 |
gh_patches_debug_300
|
rasdani/github-patches
|
git_diff
|
mlcommons__GaNDLF-477
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add histology exception tests
**Is your feature request related to a problem? Please describe.**
Currently, the histology inference pipeline contains a lot of exceptions, but they aren't being tested.
**Describe the solution you'd like**
See title.
**Describe alternatives you've considered**
N.A.
**Additional context**
N.A.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import os
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 with open("README.md") as readme_file:
13 readme = readme_file.read()
14
15
16 def git_submodule_update():
17 ## submodule update
18 os.system("git submodule update --init --recursive")
19
20
21 class CustomInstallCommand(install):
22 def run(self):
23 install.run(self)
24 git_submodule_update()
25
26
27 class CustomDevelopCommand(develop):
28 def run(self):
29 develop.run(self)
30 git_submodule_update()
31
32
33 class CustomEggInfoCommand(egg_info):
34 def run(self):
35 egg_info.run(self)
36 git_submodule_update()
37
38
39 # read version.py
40 import sys, re
41
42 try:
43 filepath = "GANDLF/version.py"
44 version_file = open(filepath)
45 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
46
47 except Exception as error:
48 __version__ = "0.0.1"
49 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
50
51 requirements = [
52 "black",
53 "numpy==1.22.0",
54 "scipy",
55 "SimpleITK!=2.0.*",
56 "torchvision",
57 "tqdm",
58 "torchio==0.18.57",
59 "pandas",
60 "pylint",
61 "scikit-learn>=0.23.2",
62 "scikit-image>=0.19.1",
63 'pickle5>=0.0.11; python_version < "3.8.0"',
64 "setuptools",
65 "seaborn",
66 "pyyaml",
67 "tiffslide",
68 "matplotlib",
69 "requests>=2.25.0",
70 "pyvips",
71 "pytest",
72 "coverage",
73 "pytest-cov",
74 "psutil",
75 "medcam",
76 "opencv-python",
77 "torchmetrics==0.5.1", # newer versions have changed api for f1 invocation
78 "OpenPatchMiner==0.1.8",
79 "zarr==2.10.3",
80 "pydicom",
81 "onnx",
82 "torchinfo==1.7.0",
83 ]
84
85 # pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389
86 if sys.platform == "darwin":
87 requirements.append("torch==1.9.0")
88 else:
89 requirements.append("torch==1.8.2")
90
91 setup(
92 name="GANDLF",
93 version=__version__,
94 author="Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun Güley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos", # alphabetical order
95 author_email="[email protected]",
96 python_requires=">=3.7",
97 packages=find_packages(),
98 cmdclass={ # this ensures git_submodule_update is called during install
99 "install": CustomInstallCommand,
100 "develop": CustomDevelopCommand,
101 "egg_info": CustomEggInfoCommand,
102 },
103 scripts=[
104 "gandlf_run",
105 "gandlf_constructCSV",
106 "gandlf_collectStats",
107 "gandlf_patchMiner",
108 "gandlf_preprocess",
109 "gandlf_anonymizer",
110 "gandlf_verifyInstall",
111 ],
112 classifiers=[
113 "Development Status :: 3 - Alpha",
114 "Intended Audience :: Science/Research",
115 "License :: OSI Approved :: BSD License",
116 "Natural Language :: English",
117 "Operating System :: OS Independent",
118 "Programming Language :: Python :: 3.7",
119 "Programming Language :: Python :: 3.8",
120 "Programming Language :: Python :: 3.9",
121 "Topic :: Scientific/Engineering :: Medical Science Apps",
122 ],
123 description=(
124 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
125 ),
126 install_requires=requirements,
127 license="BSD-3-Clause License",
128 long_description=readme,
129 long_description_content_type="text/markdown",
130 include_package_data=True,
131 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging",
132 zip_safe=False,
133 )
134
135 ## windows vips installation
136 if os.name == "nt": # proceed for windows
137 from pathlib import Path
138
139 # download and extract if main dll is absent
140 if not Path("./vips/vips-dev-8.10/bin/libvips-42.dll").exists():
141 print("Downloading and extracting VIPS for Windows")
142 url = "https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip"
143 zip_to_extract = "./vips.zip"
144 import urllib.request, zipfile
145
146 urllib.request.urlretrieve(url, zip_to_extract)
147 z = zipfile.ZipFile(zip_to_extract)
148 z.extractall("./vips")
149 z.close()
150 os.remove(zip_to_extract)
151
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -67,7 +67,7 @@
"tiffslide",
"matplotlib",
"requests>=2.25.0",
- "pyvips",
+ "pyvips==2.2.1",
"pytest",
"coverage",
"pytest-cov",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -67,7 +67,7 @@\n \"tiffslide\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n- \"pyvips\",\n+ \"pyvips==2.2.1\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n", "issue": "Add histology exception tests\n**Is your feature request related to a problem? Please describe.**\r\nCurrently, the histology inference pipeline contains a lot of exceptions, but they aren't being tested.\r\n\r\n**Describe the solution you'd like**\r\nSee title.\r\n\r\n**Describe alternatives you've considered**\r\nN.A.\r\n\r\n**Additional context**\r\nN.A.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\nwith open(\"README.md\") as readme_file:\n readme = readme_file.read()\n\n\ndef git_submodule_update():\n ## submodule update\n os.system(\"git submodule update --init --recursive\")\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n git_submodule_update()\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n git_submodule_update()\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n git_submodule_update()\n\n\n# read version.py\nimport sys, re\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n \"pandas\",\n \"pylint\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n 'pickle5>=0.0.11; python_version < \"3.8.0\"',\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pyvips\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==0.5.1\", # newer versions have changed api for f1 invocation\n \"OpenPatchMiner==0.1.8\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n]\n\n# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389\nif sys.platform == \"darwin\":\n requirements.append(\"torch==1.9.0\")\nelse:\n requirements.append(\"torch==1.8.2\")\n\nsetup(\n name=\"GANDLF\",\n version=__version__,\n author=\"Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun G\u00fcley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos\", # alphabetical order\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n packages=find_packages(),\n cmdclass={ # this ensures git_submodule_update is called during install\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: 
Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"BSD-3-Clause License\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging\",\n zip_safe=False,\n)\n\n## windows vips installation\nif os.name == \"nt\": # proceed for windows\n from pathlib import Path\n\n # download and extract if main dll is absent\n if not Path(\"./vips/vips-dev-8.10/bin/libvips-42.dll\").exists():\n print(\"Downloading and extracting VIPS for Windows\")\n url = \"https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip\"\n zip_to_extract = \"./vips.zip\"\n import urllib.request, zipfile\n\n urllib.request.urlretrieve(url, zip_to_extract)\n z = zipfile.ZipFile(zip_to_extract)\n z.extractall(\"./vips\")\n z.close()\n os.remove(zip_to_extract)\n", "path": "setup.py"}]}
| 2,142 | 87 |
gh_patches_debug_7552
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-1132
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError with pymongo `getMore` operations
**Describe your environment**
Mac OS Catalina 10.15.6 (19G73)
Darwin-19.6.0-x86_64-i386-64bit
Python 3.7.7
Installed packages:
```
opentelemetry-api 0.12b0 OpenTelemetry Python API
opentelemetry-ext-honeycomb 0.5b0 Honeycomb Exporter for OpenTelemetry
opentelemetry-instrumentation 0.12b0 Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python
opentelemetry-instrumentation-botocore 0.12b0 OpenTelemetry Botocore instrumentation
opentelemetry-instrumentation-pymongo 0.12b0 OpenTelemetry pymongo instrumentation
opentelemetry-instrumentation-requests 0.12b0 OpenTelemetry requests instrumentation
pymongo 3.11.0 Python driver for MongoDB <http://www.mongodb.org>
```
**Steps to reproduce**
any find operation where the number of documents to be returned exceeds the batch size on the cursor:
```python
from pymongo import MongoClient
from opentelemetry import trace
from opentelemetry.trace import TracerProvider
from opentelemetry.instrumentation.pymongo import PymongoInstrumentor
trace.set_tracer_provider(TracerProvider())
PymongoInstrumentor().instrument()
client = MongoClient()
db = client["MongoDB_Database"]
collection = db["MongoDB_Collection"]
collection.find({'batch_size': 1})
```
**What is the expected behavior?**
Spans with names like `mongodb.getMore.1`
**What is the actual behavior?**
```
Traceback (most recent call last):
File "/Users/drubin/cargurus/analytics/snowblower/.venv/lib/python3.7/site-packages/pymongo/monitoring.py", line 1266, in publish_command_start
subscriber.started(event)
File "/Users/drubin/cargurus/analytics/snowblower/.venv/lib/python3.7/site-packages/opentelemetry/instrumentation/pymongo/__init__.py", line 69, in started
name += "." + command
TypeError: can only concatenate str (not "int") to str
```
</issue>
<code>
[start of instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 The integration with MongoDB supports the `pymongo`_ library, it can be
17 enabled using the ``PymongoInstrumentor``.
18
19 .. _pymongo: https://pypi.org/project/pymongo
20
21 Usage
22 -----
23
24 .. code:: python
25
26 from pymongo import MongoClient
27 from opentelemetry import trace
28 from opentelemetry.trace import TracerProvider
29 from opentelemetry.instrumentation.pymongo import PymongoInstrumentor
30
31 trace.set_tracer_provider(TracerProvider())
32
33 PymongoInstrumentor().instrument()
34 client = MongoClient()
35 db = client["MongoDB_Database"]
36 collection = db["MongoDB_Collection"]
37 collection.find_one()
38
39 API
40 ---
41 """
42
43 from pymongo import monitoring
44
45 from opentelemetry import trace
46 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
47 from opentelemetry.instrumentation.pymongo.version import __version__
48 from opentelemetry.trace import SpanKind, get_tracer
49 from opentelemetry.trace.status import Status, StatusCanonicalCode
50
51 DATABASE_TYPE = "mongodb"
52 COMMAND_ATTRIBUTES = ["filter", "sort", "skip", "limit", "pipeline"]
53
54
55 class CommandTracer(monitoring.CommandListener):
56 def __init__(self, tracer):
57 self._tracer = tracer
58 self._span_dict = {}
59 self.is_enabled = True
60
61 def started(self, event: monitoring.CommandStartedEvent):
62 """ Method to handle a pymongo CommandStartedEvent """
63 if not self.is_enabled:
64 return
65 command = event.command.get(event.command_name, "")
66 name = DATABASE_TYPE + "." + event.command_name
67 statement = event.command_name
68 if command:
69 name += "." + command
70 statement += " " + command
71
72 try:
73 span = self._tracer.start_span(name, kind=SpanKind.CLIENT)
74 span.set_attribute("component", DATABASE_TYPE)
75 span.set_attribute("db.type", DATABASE_TYPE)
76 span.set_attribute("db.instance", event.database_name)
77 span.set_attribute("db.statement", statement)
78 if event.connection_id is not None:
79 span.set_attribute("net.peer.name", event.connection_id[0])
80 span.set_attribute("net.peer.port", event.connection_id[1])
81
82 # pymongo specific, not specified by spec
83 span.set_attribute("db.mongo.operation_id", event.operation_id)
84 span.set_attribute("db.mongo.request_id", event.request_id)
85
86 for attr in COMMAND_ATTRIBUTES:
87 _attr = event.command.get(attr)
88 if _attr is not None:
89 span.set_attribute("db.mongo." + attr, str(_attr))
90
91 # Add Span to dictionary
92 self._span_dict[_get_span_dict_key(event)] = span
93 except Exception as ex: # noqa pylint: disable=broad-except
94 if span is not None:
95 span.set_status(Status(StatusCanonicalCode.INTERNAL, str(ex)))
96 span.end()
97 self._pop_span(event)
98
99 def succeeded(self, event: monitoring.CommandSucceededEvent):
100 """ Method to handle a pymongo CommandSucceededEvent """
101 if not self.is_enabled:
102 return
103 span = self._pop_span(event)
104 if span is None:
105 return
106 span.set_attribute("db.mongo.duration_micros", event.duration_micros)
107 span.set_status(Status(StatusCanonicalCode.OK, event.reply))
108 span.end()
109
110 def failed(self, event: monitoring.CommandFailedEvent):
111 """ Method to handle a pymongo CommandFailedEvent """
112 if not self.is_enabled:
113 return
114 span = self._pop_span(event)
115 if span is None:
116 return
117 span.set_attribute("db.mongo.duration_micros", event.duration_micros)
118 span.set_status(Status(StatusCanonicalCode.UNKNOWN, event.failure))
119 span.end()
120
121 def _pop_span(self, event):
122 return self._span_dict.pop(_get_span_dict_key(event), None)
123
124
125 def _get_span_dict_key(event):
126 if event.connection_id is not None:
127 return (event.request_id, event.connection_id)
128 return event.request_id
129
130
131 class PymongoInstrumentor(BaseInstrumentor):
132 _commandtracer_instance = None # type CommandTracer
133 # The instrumentation for PyMongo is based on the event listener interface
134 # https://api.mongodb.com/python/current/api/pymongo/monitoring.html.
135 # This interface only allows to register listeners and does not provide
136 # an unregister API. In order to provide a mechanishm to disable
137 # instrumentation an enabled flag is implemented in CommandTracer,
138 # it's checked in the different listeners.
139
140 def _instrument(self, **kwargs):
141 """Integrate with pymongo to trace it using event listener.
142 https://api.mongodb.com/python/current/api/pymongo/monitoring.html
143
144 Args:
145 tracer_provider: The `TracerProvider` to use. If none is passed the
146 current configured one is used.
147 """
148
149 tracer_provider = kwargs.get("tracer_provider")
150
151 # Create and register a CommandTracer only the first time
152 if self._commandtracer_instance is None:
153 tracer = get_tracer(__name__, __version__, tracer_provider)
154
155 self._commandtracer_instance = CommandTracer(tracer)
156 monitoring.register(self._commandtracer_instance)
157
158 # If already created, just enable it
159 self._commandtracer_instance.is_enabled = True
160
161 def _uninstrument(self, **kwargs):
162 if self._commandtracer_instance is not None:
163 self._commandtracer_instance.is_enabled = False
164
[end of instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py b/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py
@@ -66,8 +66,8 @@
name = DATABASE_TYPE + "." + event.command_name
statement = event.command_name
if command:
- name += "." + command
- statement += " " + command
+ name += "." + str(command)
+ statement += " " + str(command)
try:
span = self._tracer.start_span(name, kind=SpanKind.CLIENT)
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py b/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py\n@@ -66,8 +66,8 @@\n name = DATABASE_TYPE + \".\" + event.command_name\n statement = event.command_name\n if command:\n- name += \".\" + command\n- statement += \" \" + command\n+ name += \".\" + str(command)\n+ statement += \" \" + str(command)\n \n try:\n span = self._tracer.start_span(name, kind=SpanKind.CLIENT)\n", "issue": "TypeError with pymongo `getMore` operations\n**Describe your environment** \r\nDescribe your environment\r\nMac OS Catalina 10.15.6 (19G73)\r\nDarwin-19.6.0-x86_64-i386-64bit\r\nPython 3.7.7\r\nInstalled packages:\r\n```\r\nopentelemetry-api 0.12b0 OpenTelemetry Python API\r\nopentelemetry-ext-honeycomb 0.5b0 Honeycomb Exporter for OpenTelemetry\r\nopentelemetry-instrumentation 0.12b0 Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python\r\nopentelemetry-instrumentation-botocore 0.12b0 OpenTelemetry Botocore instrumentation\r\nopentelemetry-instrumentation-pymongo 0.12b0 OpenTelemetry pymongo instrumentation\r\nopentelemetry-instrumentation-requests 0.12b0 OpenTelemetry requests instrumentation\r\npymongo 3.11.0 Python driver for MongoDB <http://www.mongodb.org>\r\n```\r\n\r\n**Steps to reproduce**\r\nany find operation where the number of documents to be returned exceeds the batch size on the cursor:\r\n```pyhton\r\nfrom pymongo import MongoClient\r\nfrom opentelemetry import trace\r\nfrom opentelemetry.trace import TracerProvider\r\nfrom opentelemetry.instrumentation.pymongo import PymongoInstrumentor\r\n\r\ntrace.set_tracer_provider(TracerProvider())\r\n\r\nPymongoInstrumentor().instrument()\r\nclient = MongoClient()\r\ndb = client[\"MongoDB_Database\"]\r\ncollection = db[\"MongoDB_Collection\"]\r\ncollection.find({'batch_size': 1})\r\n```\r\n**What is the expected behavior?**\r\nSpans with names like `mongodb.getMore.1`\r\n\r\n**What is the actual behavior?**\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/drubin/cargurus/analytics/snowblower/.venv/lib/python3.7/site-packages/pymongo/monitoring.py\", line 1266, in publish_command_start\r\n subscriber.started(event)\r\n File \"/Users/drubin/cargurus/analytics/snowblower/.venv/lib/python3.7/site-packages/opentelemetry/instrumentation/pymongo/__init__.py\", line 69, in started\r\n name += \".\" + command\r\nTypeError: can only concatenate str (not \"int\") to str\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThe integration with MongoDB supports the `pymongo`_ library, it can be\nenabled using the 
``PymongoInstrumentor``.\n\n.. _pymongo: https://pypi.org/project/pymongo\n\nUsage\n-----\n\n.. code:: python\n\n from pymongo import MongoClient\n from opentelemetry import trace\n from opentelemetry.trace import TracerProvider\n from opentelemetry.instrumentation.pymongo import PymongoInstrumentor\n\n trace.set_tracer_provider(TracerProvider())\n\n PymongoInstrumentor().instrument()\n client = MongoClient()\n db = client[\"MongoDB_Database\"]\n collection = db[\"MongoDB_Collection\"]\n collection.find_one()\n\nAPI\n---\n\"\"\"\n\nfrom pymongo import monitoring\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.pymongo.version import __version__\nfrom opentelemetry.trace import SpanKind, get_tracer\nfrom opentelemetry.trace.status import Status, StatusCanonicalCode\n\nDATABASE_TYPE = \"mongodb\"\nCOMMAND_ATTRIBUTES = [\"filter\", \"sort\", \"skip\", \"limit\", \"pipeline\"]\n\n\nclass CommandTracer(monitoring.CommandListener):\n def __init__(self, tracer):\n self._tracer = tracer\n self._span_dict = {}\n self.is_enabled = True\n\n def started(self, event: monitoring.CommandStartedEvent):\n \"\"\" Method to handle a pymongo CommandStartedEvent \"\"\"\n if not self.is_enabled:\n return\n command = event.command.get(event.command_name, \"\")\n name = DATABASE_TYPE + \".\" + event.command_name\n statement = event.command_name\n if command:\n name += \".\" + command\n statement += \" \" + command\n\n try:\n span = self._tracer.start_span(name, kind=SpanKind.CLIENT)\n span.set_attribute(\"component\", DATABASE_TYPE)\n span.set_attribute(\"db.type\", DATABASE_TYPE)\n span.set_attribute(\"db.instance\", event.database_name)\n span.set_attribute(\"db.statement\", statement)\n if event.connection_id is not None:\n span.set_attribute(\"net.peer.name\", event.connection_id[0])\n span.set_attribute(\"net.peer.port\", event.connection_id[1])\n\n # pymongo specific, not specified by spec\n span.set_attribute(\"db.mongo.operation_id\", event.operation_id)\n span.set_attribute(\"db.mongo.request_id\", event.request_id)\n\n for attr in COMMAND_ATTRIBUTES:\n _attr = event.command.get(attr)\n if _attr is not None:\n span.set_attribute(\"db.mongo.\" + attr, str(_attr))\n\n # Add Span to dictionary\n self._span_dict[_get_span_dict_key(event)] = span\n except Exception as ex: # noqa pylint: disable=broad-except\n if span is not None:\n span.set_status(Status(StatusCanonicalCode.INTERNAL, str(ex)))\n span.end()\n self._pop_span(event)\n\n def succeeded(self, event: monitoring.CommandSucceededEvent):\n \"\"\" Method to handle a pymongo CommandSucceededEvent \"\"\"\n if not self.is_enabled:\n return\n span = self._pop_span(event)\n if span is None:\n return\n span.set_attribute(\"db.mongo.duration_micros\", event.duration_micros)\n span.set_status(Status(StatusCanonicalCode.OK, event.reply))\n span.end()\n\n def failed(self, event: monitoring.CommandFailedEvent):\n \"\"\" Method to handle a pymongo CommandFailedEvent \"\"\"\n if not self.is_enabled:\n return\n span = self._pop_span(event)\n if span is None:\n return\n span.set_attribute(\"db.mongo.duration_micros\", event.duration_micros)\n span.set_status(Status(StatusCanonicalCode.UNKNOWN, event.failure))\n span.end()\n\n def _pop_span(self, event):\n return self._span_dict.pop(_get_span_dict_key(event), None)\n\n\ndef _get_span_dict_key(event):\n if event.connection_id is not None:\n return (event.request_id, event.connection_id)\n return event.request_id\n\n\nclass 
PymongoInstrumentor(BaseInstrumentor):\n _commandtracer_instance = None # type CommandTracer\n # The instrumentation for PyMongo is based on the event listener interface\n # https://api.mongodb.com/python/current/api/pymongo/monitoring.html.\n # This interface only allows to register listeners and does not provide\n # an unregister API. In order to provide a mechanishm to disable\n # instrumentation an enabled flag is implemented in CommandTracer,\n # it's checked in the different listeners.\n\n def _instrument(self, **kwargs):\n \"\"\"Integrate with pymongo to trace it using event listener.\n https://api.mongodb.com/python/current/api/pymongo/monitoring.html\n\n Args:\n tracer_provider: The `TracerProvider` to use. If none is passed the\n current configured one is used.\n \"\"\"\n\n tracer_provider = kwargs.get(\"tracer_provider\")\n\n # Create and register a CommandTracer only the first time\n if self._commandtracer_instance is None:\n tracer = get_tracer(__name__, __version__, tracer_provider)\n\n self._commandtracer_instance = CommandTracer(tracer)\n monitoring.register(self._commandtracer_instance)\n\n # If already created, just enable it\n self._commandtracer_instance.is_enabled = True\n\n def _uninstrument(self, **kwargs):\n if self._commandtracer_instance is not None:\n self._commandtracer_instance.is_enabled = False\n", "path": "instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py"}]}
| 2,783 | 207 |
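A side note on the record that ends here: its patch wraps the MongoDB command value in `str()` because, for `getMore` operations, pymongo reports the cursor id as an integer rather than a string. Below is a rough sketch of that failure mode, with a made-up cursor id standing in for what a real `CommandStartedEvent` would carry:

```python
# Hypothetical values; a real pymongo CommandStartedEvent would supply these.
command_name = "getMore"
command_value = 8000221371642529236  # cursor id -- an int, not a str

name = "mongodb." + command_name
try:
    name += "." + command_value        # TypeError: can only concatenate str (not "int") to str
except TypeError:
    name += "." + str(command_value)   # the guarded form used in the patch
print(name)
```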
gh_patches_debug_38652
|
rasdani/github-patches
|
git_diff
|
sagemath__sage-37422
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
make sage.parallel.ncpus.ncpus() use os.cpu_count()
<div id="comment:0"></div>
Currently, `sage.parallel.ncpus.ncpus()` uses platform-specific code to determine the number of available CPUs for some specific systems. This functionality is now available in the standard `os` module as `cpu_count()`.
Component: **misc**
Author: **Lorenz Panny**
Branch/Commit: **[public/change_ncpus_to_os_module](https://github.com/sagemath/sagetrac-mirror/tree/public/change_ncpus_to_os_module) @ [`a509210`](https://github.com/sagemath/sagetrac-mirror/commit/a509210125fc50baf72dcb7f2248e96cddf61c8f)**
_Issue created by migration from https://trac.sagemath.org/ticket/34328_
</issue>
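For context (this is not part of the ticket), a minimal sketch of the standard-library calls the ticket refers to: `os.cpu_count()` reports the CPUs in the machine, while `os.sched_getaffinity(0)`, on platforms that provide it, reports the CPUs the current process is actually allowed to use.

```python
import os

# sched_getaffinity() only exists on some platforms (e.g. Linux), and
# cpu_count() may return None, hence the guarded fallback to 1.
if hasattr(os, "sched_getaffinity"):
    n = len(os.sched_getaffinity(0))
else:
    n = os.cpu_count() or 1
print(n)
```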
<code>
[start of src/sage/parallel/ncpus.py]
1 """
2 CPU Detection
3 """
4 # Parallel Python Software: http://www.parallelpython.com
5 # Copyright (c) 2005-2008, Vitalii Vanovschi
6 # All rights reserved.
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 # * Redistributions of source code must retain the above copyright notice,
10 # this list of conditions and the following disclaimer.
11 # * Redistributions in binary form must reproduce the above copyright
12 # notice, this list of conditions and the following disclaimer in the
13 # documentation and/or other materials provided with the distribution.
14 # * Neither the name of the author nor the names of its contributors
15 # may be used to endorse or promote products derived from this software
16 # without specific prior written permission.
17 #
18 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
19 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
20 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
21 # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
22 # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
23 # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
24 # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
25 # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
26 # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
27 # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
28 # THE POSSIBILITY OF SUCH DAMAGE.
29
30 ######
31 # This is from ParallelPython (the pp.py file).
32
33 import os
34 import subprocess
35
36
37 def ncpus():
38 """
39 Detects the number of effective CPUs in the system.
40
41 EXAMPLES::
42
43 sage: sage.parallel.ncpus.ncpus() # random output -- depends on machine.
44 2
45 """
46 # Support Sage environment variable SAGE_NUM_THREADS
47 # NOTE: while doctesting, this is forced to be 2 by the
48 # sage-runtests script
49 try:
50 n = os.environ["SAGE_NUM_THREADS"]
51 except KeyError:
52 pass
53 else:
54 return int(n)
55
56 # for Linux, Unix and MacOS
57 if hasattr(os, "sysconf"):
58 if "SC_NPROCESSORS_ONLN" in os.sysconf_names:
59 # Linux and Unix
60 ncpus = os.sysconf("SC_NPROCESSORS_ONLN")
61 if isinstance(ncpus, int) and ncpus > 0:
62 return ncpus
63 else:
64 # MacOS X
65 # deprecated: return int(os.popen2("sysctl -n hw.ncpu")[1].read())
66 process = subprocess.Popen("sysctl -n hw.ncpu", shell=True,
67 stdin=subprocess.PIPE,
68 stdout=subprocess.PIPE,
69 stderr=subprocess.PIPE, close_fds=True)
70 return int(process.stdout.read())
71 # for Windows
72 if "NUMBER_OF_PROCESSORS" in os.environ:
73 ncpus = int(os.environ["NUMBER_OF_PROCESSORS"])
74 if ncpus > 0:
75 return ncpus
76 # return the default value
77 return 1
78
[end of src/sage/parallel/ncpus.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sage/parallel/ncpus.py b/src/sage/parallel/ncpus.py
--- a/src/sage/parallel/ncpus.py
+++ b/src/sage/parallel/ncpus.py
@@ -1,46 +1,19 @@
"""
CPU Detection
"""
-# Parallel Python Software: http://www.parallelpython.com
-# Copyright (c) 2005-2008, Vitalii Vanovschi
-# All rights reserved.
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution.
-# * Neither the name of the author nor the names of its contributors
-# may be used to endorse or promote products derived from this software
-# without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
-# THE POSSIBILITY OF SUCH DAMAGE.
-
-######
-# This is from ParallelPython (the pp.py file).
import os
-import subprocess
def ncpus():
"""
- Detects the number of effective CPUs in the system.
+ Return the number of available CPUs in the system.
+
+ ALGORITHM: :func:`os.sched_getaffinity` or :func:`os.cpu_count`
EXAMPLES::
- sage: sage.parallel.ncpus.ncpus() # random output -- depends on machine.
+ sage: sage.parallel.ncpus.ncpus() # random output -- depends on machine
2
"""
# Support Sage environment variable SAGE_NUM_THREADS
@@ -53,25 +26,9 @@
else:
return int(n)
- # for Linux, Unix and MacOS
- if hasattr(os, "sysconf"):
- if "SC_NPROCESSORS_ONLN" in os.sysconf_names:
- # Linux and Unix
- ncpus = os.sysconf("SC_NPROCESSORS_ONLN")
- if isinstance(ncpus, int) and ncpus > 0:
- return ncpus
- else:
- # MacOS X
- # deprecated: return int(os.popen2("sysctl -n hw.ncpu")[1].read())
- process = subprocess.Popen("sysctl -n hw.ncpu", shell=True,
- stdin=subprocess.PIPE,
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, close_fds=True)
- return int(process.stdout.read())
- # for Windows
- if "NUMBER_OF_PROCESSORS" in os.environ:
- ncpus = int(os.environ["NUMBER_OF_PROCESSORS"])
- if ncpus > 0:
- return ncpus
- # return the default value
- return 1
+ n = None
+
+ if hasattr(os, 'sched_getaffinity'):
+ n = len(os.sched_getaffinity(0))
+
+ return n or os.cpu_count() or 1
|
{"golden_diff": "diff --git a/src/sage/parallel/ncpus.py b/src/sage/parallel/ncpus.py\n--- a/src/sage/parallel/ncpus.py\n+++ b/src/sage/parallel/ncpus.py\n@@ -1,46 +1,19 @@\n \"\"\"\n CPU Detection\n \"\"\"\n-# Parallel Python Software: http://www.parallelpython.com\n-# Copyright (c) 2005-2008, Vitalii Vanovschi\n-# All rights reserved.\n-# Redistribution and use in source and binary forms, with or without\n-# modification, are permitted provided that the following conditions are met:\n-# * Redistributions of source code must retain the above copyright notice,\n-# this list of conditions and the following disclaimer.\n-# * Redistributions in binary form must reproduce the above copyright\n-# notice, this list of conditions and the following disclaimer in the\n-# documentation and/or other materials provided with the distribution.\n-# * Neither the name of the author nor the names of its contributors\n-# may be used to endorse or promote products derived from this software\n-# without specific prior written permission.\n-#\n-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF\n-# THE POSSIBILITY OF SUCH DAMAGE.\n-\n-######\n-# This is from ParallelPython (the pp.py file).\n \n import os\n-import subprocess\n \n \n def ncpus():\n \"\"\"\n- Detects the number of effective CPUs in the system.\n+ Return the number of available CPUs in the system.\n+\n+ ALGORITHM: :func:`os.sched_getaffinity` or :func:`os.cpu_count`\n \n EXAMPLES::\n \n- sage: sage.parallel.ncpus.ncpus() # random output -- depends on machine.\n+ sage: sage.parallel.ncpus.ncpus() # random output -- depends on machine\n 2\n \"\"\"\n # Support Sage environment variable SAGE_NUM_THREADS\n@@ -53,25 +26,9 @@\n else:\n return int(n)\n \n- # for Linux, Unix and MacOS\n- if hasattr(os, \"sysconf\"):\n- if \"SC_NPROCESSORS_ONLN\" in os.sysconf_names:\n- # Linux and Unix\n- ncpus = os.sysconf(\"SC_NPROCESSORS_ONLN\")\n- if isinstance(ncpus, int) and ncpus > 0:\n- return ncpus\n- else:\n- # MacOS X\n- # deprecated: return int(os.popen2(\"sysctl -n hw.ncpu\")[1].read())\n- process = subprocess.Popen(\"sysctl -n hw.ncpu\", shell=True,\n- stdin=subprocess.PIPE,\n- stdout=subprocess.PIPE,\n- stderr=subprocess.PIPE, close_fds=True)\n- return int(process.stdout.read())\n- # for Windows\n- if \"NUMBER_OF_PROCESSORS\" in os.environ:\n- ncpus = int(os.environ[\"NUMBER_OF_PROCESSORS\"])\n- if ncpus > 0:\n- return ncpus\n- # return the default value\n- return 1\n+ n = None\n+\n+ if hasattr(os, 'sched_getaffinity'):\n+ n = len(os.sched_getaffinity(0))\n+\n+ return n or os.cpu_count() or 1\n", "issue": "make sage.parallel.ncpus.ncpus() use os.cpu_count()\n<div id=\"comment:0\"></div>\n\nCurrently, `sage.parallel.ncpus.ncpus()` uses platform-specific code to determine the number of available CPUs for some specific systems. 
This functionality is now available in the standard `os` module as `cpu_count()`.\n\nComponent: **misc**\n\nAuthor: **Lorenz Panny**\n\nBranch/Commit: **[public/change_ncpus_to_os_module](https://github.com/sagemath/sagetrac-mirror/tree/public/change_ncpus_to_os_module) @ [`a509210`](https://github.com/sagemath/sagetrac-mirror/commit/a509210125fc50baf72dcb7f2248e96cddf61c8f)**\n\n_Issue created by migration from https://trac.sagemath.org/ticket/34328_\n\n\n", "before_files": [{"content": "\"\"\"\nCPU Detection\n\"\"\"\n# Parallel Python Software: http://www.parallelpython.com\n# Copyright (c) 2005-2008, Vitalii Vanovschi\n# All rights reserved.\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright notice,\n# this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of the author nor the names of its contributors\n# may be used to endorse or promote products derived from this software\n# without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF\n# THE POSSIBILITY OF SUCH DAMAGE.\n\n######\n# This is from ParallelPython (the pp.py file).\n\nimport os\nimport subprocess\n\n\ndef ncpus():\n \"\"\"\n Detects the number of effective CPUs in the system.\n\n EXAMPLES::\n\n sage: sage.parallel.ncpus.ncpus() # random output -- depends on machine.\n 2\n \"\"\"\n # Support Sage environment variable SAGE_NUM_THREADS\n # NOTE: while doctesting, this is forced to be 2 by the\n # sage-runtests script\n try:\n n = os.environ[\"SAGE_NUM_THREADS\"]\n except KeyError:\n pass\n else:\n return int(n)\n\n # for Linux, Unix and MacOS\n if hasattr(os, \"sysconf\"):\n if \"SC_NPROCESSORS_ONLN\" in os.sysconf_names:\n # Linux and Unix\n ncpus = os.sysconf(\"SC_NPROCESSORS_ONLN\")\n if isinstance(ncpus, int) and ncpus > 0:\n return ncpus\n else:\n # MacOS X\n # deprecated: return int(os.popen2(\"sysctl -n hw.ncpu\")[1].read())\n process = subprocess.Popen(\"sysctl -n hw.ncpu\", shell=True,\n stdin=subprocess.PIPE,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE, close_fds=True)\n return int(process.stdout.read())\n # for Windows\n if \"NUMBER_OF_PROCESSORS\" in os.environ:\n ncpus = int(os.environ[\"NUMBER_OF_PROCESSORS\"])\n if ncpus > 0:\n return ncpus\n # return the default value\n return 1\n", "path": "src/sage/parallel/ncpus.py"}]}
| 1,582 | 867 |
gh_patches_debug_8709
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-817
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
last_event_id() returns None after SentryAsgiMiddleware.__call__() finished
I'm unable to access the `last_event_id` of `SentryAsgiMiddleware` from an exception handler in the `Starlette` framework.
```python
from sentry_sdk import last_event_id
from sentry_sdk.integrations.asgi import SentryAsgiMiddleware
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
async def test_endpoint(request):
raise RuntimeError("test")
def exception_handler(*args, **kwargs):
return JSONResponse({"last_event_id": last_event_id()})
app = Starlette(
routes=[Route('/', test_endpoint)],
exception_handlers={
Exception: exception_handler,
}
)
app.add_middleware(SentryAsgiMiddleware)
```
The problem is probably the use of the Hub's context manager in `SentryAsgiMiddleware._run_app()`: after an exception is thrown, the local `ContextVar` is cleared, so the `last_event_id` function ends up reading the wrong Hub instance.
</issue>
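To illustrate the scoping described above, here is a rough sketch against the classic `Hub`-based SDK API, with a made-up DSN (it is not taken from the report): events captured while the middleware's temporary hub is active are recorded on that hub, and once the `with` block exits, the module-level `last_event_id()` reads the restored outer hub instead.

```python
import sentry_sdk
from sentry_sdk import Hub, last_event_id

sentry_sdk.init("https://examplekey@o0.ingest.sentry.io/0")  # hypothetical DSN

hub = Hub(Hub.current)            # the same pattern SentryAsgiMiddleware._run_app() uses
with hub:
    hub.capture_message("boom")   # the event id is remembered on this temporary hub
    inner_id = hub.last_event_id()

outer_id = last_event_id()        # the outer hub never captured anything
print(inner_id, outer_id)         # expected: an event id, then None
```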
<code>
[start of sentry_sdk/integrations/asgi.py]
1 """
2 An ASGI middleware.
3
4 Based on Tom Christie's `sentry-asgi <https://github.com/encode/sentry-asgi>`_.
5 """
6
7 import asyncio
8 import inspect
9 import urllib
10
11 from sentry_sdk._functools import partial
12 from sentry_sdk._types import MYPY
13 from sentry_sdk.hub import Hub, _should_send_default_pii
14 from sentry_sdk.integrations._wsgi_common import _filter_headers
15 from sentry_sdk.utils import (
16 ContextVar,
17 event_from_exception,
18 transaction_from_function,
19 HAS_REAL_CONTEXTVARS,
20 CONTEXTVARS_ERROR_MESSAGE,
21 )
22 from sentry_sdk.tracing import Transaction
23
24 if MYPY:
25 from typing import Dict
26 from typing import Any
27 from typing import Optional
28 from typing import Callable
29
30 from typing_extensions import Literal
31
32 from sentry_sdk._types import Event, Hint
33
34
35 _asgi_middleware_applied = ContextVar("sentry_asgi_middleware_applied")
36
37 _DEFAULT_TRANSACTION_NAME = "generic ASGI request"
38
39
40 def _capture_exception(hub, exc):
41 # type: (Hub, Any) -> None
42
43 # Check client here as it might have been unset while streaming response
44 if hub.client is not None:
45 event, hint = event_from_exception(
46 exc,
47 client_options=hub.client.options,
48 mechanism={"type": "asgi", "handled": False},
49 )
50 hub.capture_event(event, hint=hint)
51
52
53 def _looks_like_asgi3(app):
54 # type: (Any) -> bool
55 """
56 Try to figure out if an application object supports ASGI3.
57
58 This is how uvicorn figures out the application version as well.
59 """
60 if inspect.isclass(app):
61 return hasattr(app, "__await__")
62 elif inspect.isfunction(app):
63 return asyncio.iscoroutinefunction(app)
64 else:
65 call = getattr(app, "__call__", None) # noqa
66 return asyncio.iscoroutinefunction(call)
67
68
69 class SentryAsgiMiddleware:
70 __slots__ = ("app", "__call__")
71
72 def __init__(self, app, unsafe_context_data=False):
73 # type: (Any, bool) -> None
74 """
75 Instrument an ASGI application with Sentry. Provides HTTP/websocket
76 data to sent events and basic handling for exceptions bubbling up
77 through the middleware.
78
79 :param unsafe_context_data: Disable errors when a proper contextvars installation could not be found. We do not recommend changing this from the default.
80 """
81
82 if not unsafe_context_data and not HAS_REAL_CONTEXTVARS:
83 # We better have contextvars or we're going to leak state between
84 # requests.
85 raise RuntimeError(
86 "The ASGI middleware for Sentry requires Python 3.7+ "
87 "or the aiocontextvars package." + CONTEXTVARS_ERROR_MESSAGE
88 )
89 self.app = app
90
91 if _looks_like_asgi3(app):
92 self.__call__ = self._run_asgi3 # type: Callable[..., Any]
93 else:
94 self.__call__ = self._run_asgi2
95
96 def _run_asgi2(self, scope):
97 # type: (Any) -> Any
98 async def inner(receive, send):
99 # type: (Any, Any) -> Any
100 return await self._run_app(scope, lambda: self.app(scope)(receive, send))
101
102 return inner
103
104 async def _run_asgi3(self, scope, receive, send):
105 # type: (Any, Any, Any) -> Any
106 return await self._run_app(scope, lambda: self.app(scope, receive, send))
107
108 async def _run_app(self, scope, callback):
109 # type: (Any, Any) -> Any
110 if _asgi_middleware_applied.get(False):
111 return await callback()
112
113 _asgi_middleware_applied.set(True)
114 try:
115 hub = Hub(Hub.current)
116 with hub:
117 with hub.configure_scope() as sentry_scope:
118 sentry_scope.clear_breadcrumbs()
119 sentry_scope._name = "asgi"
120 processor = partial(self.event_processor, asgi_scope=scope)
121 sentry_scope.add_event_processor(processor)
122
123 ty = scope["type"]
124
125 if ty in ("http", "websocket"):
126 transaction = Transaction.continue_from_headers(
127 dict(scope["headers"]),
128 op="{}.server".format(ty),
129 )
130 else:
131 transaction = Transaction(op="asgi.server")
132
133 transaction.name = _DEFAULT_TRANSACTION_NAME
134 transaction.set_tag("asgi.type", ty)
135
136 with hub.start_transaction(transaction):
137 # XXX: Would be cool to have correct span status, but we
138 # would have to wrap send(). That is a bit hard to do with
139 # the current abstraction over ASGI 2/3.
140 try:
141 return await callback()
142 except Exception as exc:
143 _capture_exception(hub, exc)
144 raise exc from None
145 finally:
146 _asgi_middleware_applied.set(False)
147
148 def event_processor(self, event, hint, asgi_scope):
149 # type: (Event, Hint, Any) -> Optional[Event]
150 request_info = event.get("request", {})
151
152 ty = asgi_scope["type"]
153 if ty in ("http", "websocket"):
154 request_info["method"] = asgi_scope.get("method")
155 request_info["headers"] = headers = _filter_headers(
156 self._get_headers(asgi_scope)
157 )
158 request_info["query_string"] = self._get_query(asgi_scope)
159
160 request_info["url"] = self._get_url(
161 asgi_scope, "http" if ty == "http" else "ws", headers.get("host")
162 )
163
164 client = asgi_scope.get("client")
165 if client and _should_send_default_pii():
166 request_info["env"] = {"REMOTE_ADDR": client[0]}
167
168 if (
169 event.get("transaction", _DEFAULT_TRANSACTION_NAME)
170 == _DEFAULT_TRANSACTION_NAME
171 ):
172 endpoint = asgi_scope.get("endpoint")
173 # Webframeworks like Starlette mutate the ASGI env once routing is
174 # done, which is sometime after the request has started. If we have
175 # an endpoint, overwrite our generic transaction name.
176 if endpoint:
177 event["transaction"] = transaction_from_function(endpoint)
178
179 event["request"] = request_info
180
181 return event
182
183 # Helper functions for extracting request data.
184 #
185 # Note: Those functions are not public API. If you want to mutate request
186 # data to your liking it's recommended to use the `before_send` callback
187 # for that.
188
189 def _get_url(self, scope, default_scheme, host):
190 # type: (Dict[str, Any], Literal["ws", "http"], Optional[str]) -> str
191 """
192 Extract URL from the ASGI scope, without also including the querystring.
193 """
194 scheme = scope.get("scheme", default_scheme)
195
196 server = scope.get("server", None)
197 path = scope.get("root_path", "") + scope.get("path", "")
198
199 if host:
200 return "%s://%s%s" % (scheme, host, path)
201
202 if server is not None:
203 host, port = server
204 default_port = {"http": 80, "https": 443, "ws": 80, "wss": 443}[scheme]
205 if port != default_port:
206 return "%s://%s:%s%s" % (scheme, host, port, path)
207 return "%s://%s%s" % (scheme, host, path)
208 return path
209
210 def _get_query(self, scope):
211 # type: (Any) -> Any
212 """
213 Extract querystring from the ASGI scope, in the format that the Sentry protocol expects.
214 """
215 qs = scope.get("query_string")
216 if not qs:
217 return None
218 return urllib.parse.unquote(qs.decode("latin-1"))
219
220 def _get_headers(self, scope):
221 # type: (Any) -> Dict[str, str]
222 """
223 Extract headers from the ASGI scope, in the format that the Sentry protocol expects.
224 """
225 headers = {} # type: Dict[str, str]
226 for raw_key, raw_value in scope["headers"]:
227 key = raw_key.decode("latin-1")
228 value = raw_value.decode("latin-1")
229 if key in headers:
230 headers[key] = headers[key] + ", " + value
231 else:
232 headers[key] = value
233 return headers
234
[end of sentry_sdk/integrations/asgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/asgi.py b/sentry_sdk/integrations/asgi.py
--- a/sentry_sdk/integrations/asgi.py
+++ b/sentry_sdk/integrations/asgi.py
@@ -107,8 +107,14 @@
async def _run_app(self, scope, callback):
# type: (Any, Any) -> Any
- if _asgi_middleware_applied.get(False):
- return await callback()
+ is_recursive_asgi_middleware = _asgi_middleware_applied.get(False)
+
+ if is_recursive_asgi_middleware:
+ try:
+ return await callback()
+ except Exception as exc:
+ _capture_exception(Hub.current, exc)
+ raise exc from None
_asgi_middleware_applied.set(True)
try:
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/asgi.py b/sentry_sdk/integrations/asgi.py\n--- a/sentry_sdk/integrations/asgi.py\n+++ b/sentry_sdk/integrations/asgi.py\n@@ -107,8 +107,14 @@\n \n async def _run_app(self, scope, callback):\n # type: (Any, Any) -> Any\n- if _asgi_middleware_applied.get(False):\n- return await callback()\n+ is_recursive_asgi_middleware = _asgi_middleware_applied.get(False)\n+\n+ if is_recursive_asgi_middleware:\n+ try:\n+ return await callback()\n+ except Exception as exc:\n+ _capture_exception(Hub.current, exc)\n+ raise exc from None\n \n _asgi_middleware_applied.set(True)\n try:\n", "issue": "last_event_id() returns None after SentryAsgiMiddleware.__call__() finished\nI'm unable to access `last_event_id` of `SentryAsgiMiddleware` on exception handler in `Starlette` framework.\r\n```python\r\nfrom sentry_sdk import last_event_id\r\nfrom sentry_sdk.integrations.asgi import SentryAsgiMiddleware\r\nfrom starlette.applications import Starlette\r\nfrom starlette.responses import JSONResponse\r\nfrom starlette.routing import Route\r\n\r\n\r\nasync def test_endpoint(request):\r\n raise RuntimeError(\"test\")\r\n\r\n\r\ndef exception_handler(*args, **kwargs):\r\n return JSONResponse({\"last_event_id\": last_event_id()})\r\n\r\n\r\napp = Starlette(\r\n routes=[Route('/', test_endpoint)],\r\n exception_handlers={\r\n Exception: exception_handler,\r\n }\r\n)\r\napp.add_middleware(SentryAsgiMiddleware)\r\n``` \r\n\r\nthe problem is probably with usage of Hub's context manager in `SentryAsgiMiddleware._run_app()` - after throwing exception you are clearing local `ContextVar` so `last_event_id` function tries to access wrong Hub instance.\n", "before_files": [{"content": "\"\"\"\nAn ASGI middleware.\n\nBased on Tom Christie's `sentry-asgi <https://github.com/encode/sentry-asgi>`_.\n\"\"\"\n\nimport asyncio\nimport inspect\nimport urllib\n\nfrom sentry_sdk._functools import partial\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.integrations._wsgi_common import _filter_headers\nfrom sentry_sdk.utils import (\n ContextVar,\n event_from_exception,\n transaction_from_function,\n HAS_REAL_CONTEXTVARS,\n CONTEXTVARS_ERROR_MESSAGE,\n)\nfrom sentry_sdk.tracing import Transaction\n\nif MYPY:\n from typing import Dict\n from typing import Any\n from typing import Optional\n from typing import Callable\n\n from typing_extensions import Literal\n\n from sentry_sdk._types import Event, Hint\n\n\n_asgi_middleware_applied = ContextVar(\"sentry_asgi_middleware_applied\")\n\n_DEFAULT_TRANSACTION_NAME = \"generic ASGI request\"\n\n\ndef _capture_exception(hub, exc):\n # type: (Hub, Any) -> None\n\n # Check client here as it might have been unset while streaming response\n if hub.client is not None:\n event, hint = event_from_exception(\n exc,\n client_options=hub.client.options,\n mechanism={\"type\": \"asgi\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n\ndef _looks_like_asgi3(app):\n # type: (Any) -> bool\n \"\"\"\n Try to figure out if an application object supports ASGI3.\n\n This is how uvicorn figures out the application version as well.\n \"\"\"\n if inspect.isclass(app):\n return hasattr(app, \"__await__\")\n elif inspect.isfunction(app):\n return asyncio.iscoroutinefunction(app)\n else:\n call = getattr(app, \"__call__\", None) # noqa\n return asyncio.iscoroutinefunction(call)\n\n\nclass SentryAsgiMiddleware:\n __slots__ = (\"app\", \"__call__\")\n\n def __init__(self, app, 
unsafe_context_data=False):\n # type: (Any, bool) -> None\n \"\"\"\n Instrument an ASGI application with Sentry. Provides HTTP/websocket\n data to sent events and basic handling for exceptions bubbling up\n through the middleware.\n\n :param unsafe_context_data: Disable errors when a proper contextvars installation could not be found. We do not recommend changing this from the default.\n \"\"\"\n\n if not unsafe_context_data and not HAS_REAL_CONTEXTVARS:\n # We better have contextvars or we're going to leak state between\n # requests.\n raise RuntimeError(\n \"The ASGI middleware for Sentry requires Python 3.7+ \"\n \"or the aiocontextvars package.\" + CONTEXTVARS_ERROR_MESSAGE\n )\n self.app = app\n\n if _looks_like_asgi3(app):\n self.__call__ = self._run_asgi3 # type: Callable[..., Any]\n else:\n self.__call__ = self._run_asgi2\n\n def _run_asgi2(self, scope):\n # type: (Any) -> Any\n async def inner(receive, send):\n # type: (Any, Any) -> Any\n return await self._run_app(scope, lambda: self.app(scope)(receive, send))\n\n return inner\n\n async def _run_asgi3(self, scope, receive, send):\n # type: (Any, Any, Any) -> Any\n return await self._run_app(scope, lambda: self.app(scope, receive, send))\n\n async def _run_app(self, scope, callback):\n # type: (Any, Any) -> Any\n if _asgi_middleware_applied.get(False):\n return await callback()\n\n _asgi_middleware_applied.set(True)\n try:\n hub = Hub(Hub.current)\n with hub:\n with hub.configure_scope() as sentry_scope:\n sentry_scope.clear_breadcrumbs()\n sentry_scope._name = \"asgi\"\n processor = partial(self.event_processor, asgi_scope=scope)\n sentry_scope.add_event_processor(processor)\n\n ty = scope[\"type\"]\n\n if ty in (\"http\", \"websocket\"):\n transaction = Transaction.continue_from_headers(\n dict(scope[\"headers\"]),\n op=\"{}.server\".format(ty),\n )\n else:\n transaction = Transaction(op=\"asgi.server\")\n\n transaction.name = _DEFAULT_TRANSACTION_NAME\n transaction.set_tag(\"asgi.type\", ty)\n\n with hub.start_transaction(transaction):\n # XXX: Would be cool to have correct span status, but we\n # would have to wrap send(). That is a bit hard to do with\n # the current abstraction over ASGI 2/3.\n try:\n return await callback()\n except Exception as exc:\n _capture_exception(hub, exc)\n raise exc from None\n finally:\n _asgi_middleware_applied.set(False)\n\n def event_processor(self, event, hint, asgi_scope):\n # type: (Event, Hint, Any) -> Optional[Event]\n request_info = event.get(\"request\", {})\n\n ty = asgi_scope[\"type\"]\n if ty in (\"http\", \"websocket\"):\n request_info[\"method\"] = asgi_scope.get(\"method\")\n request_info[\"headers\"] = headers = _filter_headers(\n self._get_headers(asgi_scope)\n )\n request_info[\"query_string\"] = self._get_query(asgi_scope)\n\n request_info[\"url\"] = self._get_url(\n asgi_scope, \"http\" if ty == \"http\" else \"ws\", headers.get(\"host\")\n )\n\n client = asgi_scope.get(\"client\")\n if client and _should_send_default_pii():\n request_info[\"env\"] = {\"REMOTE_ADDR\": client[0]}\n\n if (\n event.get(\"transaction\", _DEFAULT_TRANSACTION_NAME)\n == _DEFAULT_TRANSACTION_NAME\n ):\n endpoint = asgi_scope.get(\"endpoint\")\n # Webframeworks like Starlette mutate the ASGI env once routing is\n # done, which is sometime after the request has started. 
If we have\n # an endpoint, overwrite our generic transaction name.\n if endpoint:\n event[\"transaction\"] = transaction_from_function(endpoint)\n\n event[\"request\"] = request_info\n\n return event\n\n # Helper functions for extracting request data.\n #\n # Note: Those functions are not public API. If you want to mutate request\n # data to your liking it's recommended to use the `before_send` callback\n # for that.\n\n def _get_url(self, scope, default_scheme, host):\n # type: (Dict[str, Any], Literal[\"ws\", \"http\"], Optional[str]) -> str\n \"\"\"\n Extract URL from the ASGI scope, without also including the querystring.\n \"\"\"\n scheme = scope.get(\"scheme\", default_scheme)\n\n server = scope.get(\"server\", None)\n path = scope.get(\"root_path\", \"\") + scope.get(\"path\", \"\")\n\n if host:\n return \"%s://%s%s\" % (scheme, host, path)\n\n if server is not None:\n host, port = server\n default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}[scheme]\n if port != default_port:\n return \"%s://%s:%s%s\" % (scheme, host, port, path)\n return \"%s://%s%s\" % (scheme, host, path)\n return path\n\n def _get_query(self, scope):\n # type: (Any) -> Any\n \"\"\"\n Extract querystring from the ASGI scope, in the format that the Sentry protocol expects.\n \"\"\"\n qs = scope.get(\"query_string\")\n if not qs:\n return None\n return urllib.parse.unquote(qs.decode(\"latin-1\"))\n\n def _get_headers(self, scope):\n # type: (Any) -> Dict[str, str]\n \"\"\"\n Extract headers from the ASGI scope, in the format that the Sentry protocol expects.\n \"\"\"\n headers = {} # type: Dict[str, str]\n for raw_key, raw_value in scope[\"headers\"]:\n key = raw_key.decode(\"latin-1\")\n value = raw_value.decode(\"latin-1\")\n if key in headers:\n headers[key] = headers[key] + \", \" + value\n else:\n headers[key] = value\n return headers\n", "path": "sentry_sdk/integrations/asgi.py"}]}
| 3,219 | 186 |
gh_patches_debug_9132
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-10307
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] cElementTree has been deprecated and will be removed in favor of ElementTree
Reference: https://bugs.python.org/issue36543
```
bokeh/sampledata/us_states.py
33:import xml.etree.cElementTree as et
bokeh/sampledata/us_counties.py
40:import xml.etree.cElementTree as et
```
</issue>
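For reference (not part of the report), `xml.etree.ElementTree` is a drop-in replacement for the deprecated `cElementTree` alias, so only the import line needs to change. A small sketch with a toy KML-like fragment:

```python
import xml.etree.ElementTree as et  # instead of xml.etree.cElementTree

geometry = ("<outerBoundaryIs><LinearRing>"
            "<coordinates>1,2 3,4</coordinates>"
            "</LinearRing></outerBoundaryIs>")
xml = et.fromstring(geometry)
for poly in xml.findall(".//LinearRing/coordinates"):
    print(poly.text)  # 1,2 3,4
```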
<code>
[start of bokeh/sampledata/us_counties.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' This modules exposes geometry data for Unites States. It exposes a
8 dictionary ``data``, which is indexed by the two-tuples:
9
10 .. code-block:: python
11
12 (state_id, county_id)
13
14 that have the following dictionaries as the associated value:
15
16 .. code-block:: python
17
18 data[(1,1)]['name']
19 data[(1,1)]['state']
20 data[(1,1)]['detailed name']
21 data[(1,1)]['lats']
22 data[(1,1)]['lons']
23
24 Entries for ``'name'`` can have duplicates for certain states (e.g. Virginia).
25 The combination of ``'detailed name'`` and ``'state'`` will always be unique.
26
27 '''
28 #-----------------------------------------------------------------------------
29 # Boilerplate
30 #-----------------------------------------------------------------------------
31 import logging # isort:skip
32 log = logging.getLogger(__name__)
33
34 #-----------------------------------------------------------------------------
35 # Imports
36 #-----------------------------------------------------------------------------
37
38 # Standard library imports
39 import csv
40 import xml.etree.cElementTree as et
41
42 # Bokeh imports
43 from ..util.sampledata import external_path, open_csv
44
45 #-----------------------------------------------------------------------------
46 # Globals and constants
47 #-----------------------------------------------------------------------------
48
49 __all__ = (
50 'data',
51 )
52
53 #-----------------------------------------------------------------------------
54 # General API
55 #-----------------------------------------------------------------------------
56
57 #-----------------------------------------------------------------------------
58 # Dev API
59 #-----------------------------------------------------------------------------
60
61 #-----------------------------------------------------------------------------
62 # Private API
63 #-----------------------------------------------------------------------------
64
65 def _read_data():
66 '''
67
68 '''
69 nan = float('NaN')
70
71 data = {}
72
73 with open_csv(external_path('US_Counties.csv')) as f:
74 next(f)
75 reader = csv.reader(f, delimiter=str(','), quotechar=str('"'))
76 for row in reader:
77 name, dummy, state, dummy, geometry, dummy, dummy, dummy, det_name, state_id, county_id, dummy, dummy = row
78 xml = et.fromstring(geometry)
79 lats = []
80 lons = []
81 for i, poly in enumerate(xml.findall('.//outerBoundaryIs/LinearRing/coordinates')):
82 if i > 0:
83 lats.append(nan)
84 lons.append(nan)
85 coords = (c.split(',')[:2] for c in poly.text.split())
86 lat, lon = list(zip(*[(float(lat), float(lon)) for lon, lat in
87 coords]))
88 lats.extend(lat)
89 lons.extend(lon)
90 data[(int(state_id), int(county_id))] = {
91 'name' : name,
92 'detailed name' : det_name,
93 'state' : state,
94 'lats' : lats,
95 'lons' : lons,
96 }
97
98 return data
99
100 #-----------------------------------------------------------------------------
101 # Code
102 #-----------------------------------------------------------------------------
103
104 data = _read_data()
105
[end of bokeh/sampledata/us_counties.py]
[start of bokeh/sampledata/us_states.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 '''
8 This modules exposes geometry data for Unites States. It exposes a dictionary 'data' which is
9 indexed by the two letter state code (e.g., 'CA', 'TX') and has the following dictionary as the
10 associated value:
11
12 data['CA']['name']
13 data['CA']['region']
14 data['CA']['lats']
15 data['CA']['lons']
16
17 '''
18
19 #-----------------------------------------------------------------------------
20 # Boilerplate
21 #-----------------------------------------------------------------------------
22 import logging # isort:skip
23 log = logging.getLogger(__name__)
24
25 #-----------------------------------------------------------------------------
26 # Imports
27 #-----------------------------------------------------------------------------
28
29 # Standard library imports
30 import codecs
31 import csv
32 import gzip
33 import xml.etree.cElementTree as et
34
35 # Bokeh imports
36 from ..util.sampledata import package_path
37
38 #-----------------------------------------------------------------------------
39 # Globals and constants
40 #-----------------------------------------------------------------------------
41
42 __all__ = (
43 'data',
44 )
45
46 #-----------------------------------------------------------------------------
47 # General API
48 #-----------------------------------------------------------------------------
49
50 #-----------------------------------------------------------------------------
51 # Dev API
52 #-----------------------------------------------------------------------------
53
54 #-----------------------------------------------------------------------------
55 # Private API
56 #-----------------------------------------------------------------------------
57
58 def _read_data():
59 '''
60
61 '''
62 nan = float('NaN')
63
64 data = {}
65
66 with gzip.open(package_path('US_Regions_State_Boundaries.csv.gz')) as f:
67 decoded = codecs.iterdecode(f, "utf-8")
68 next(decoded)
69 reader = csv.reader(decoded, delimiter=str(','), quotechar=str('"'))
70 for row in reader:
71 region, name, code, geometry, dummy = row
72 xml = et.fromstring(geometry)
73 lats = []
74 lons = []
75 for i, poly in enumerate(xml.findall('.//outerBoundaryIs/LinearRing/coordinates')):
76 if i > 0:
77 lats.append(nan)
78 lons.append(nan)
79 coords = (c.split(',')[:2] for c in poly.text.split())
80 lat, lon = list(zip(*[(float(lat), float(lon)) for lon, lat in
81 coords]))
82 lats.extend(lat)
83 lons.extend(lon)
84 data[code] = {
85 'name' : name,
86 'region' : region,
87 'lats' : lats,
88 'lons' : lons,
89 }
90
91 return data
92
93 #-----------------------------------------------------------------------------
94 # Code
95 #-----------------------------------------------------------------------------
96
97 data = _read_data()
98
[end of bokeh/sampledata/us_states.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bokeh/sampledata/us_counties.py b/bokeh/sampledata/us_counties.py
--- a/bokeh/sampledata/us_counties.py
+++ b/bokeh/sampledata/us_counties.py
@@ -37,7 +37,7 @@
# Standard library imports
import csv
-import xml.etree.cElementTree as et
+import xml.etree.ElementTree as et
# Bokeh imports
from ..util.sampledata import external_path, open_csv
diff --git a/bokeh/sampledata/us_states.py b/bokeh/sampledata/us_states.py
--- a/bokeh/sampledata/us_states.py
+++ b/bokeh/sampledata/us_states.py
@@ -30,7 +30,7 @@
import codecs
import csv
import gzip
-import xml.etree.cElementTree as et
+import xml.etree.ElementTree as et
# Bokeh imports
from ..util.sampledata import package_path
|
{"golden_diff": "diff --git a/bokeh/sampledata/us_counties.py b/bokeh/sampledata/us_counties.py\n--- a/bokeh/sampledata/us_counties.py\n+++ b/bokeh/sampledata/us_counties.py\n@@ -37,7 +37,7 @@\n \n # Standard library imports\n import csv\n-import xml.etree.cElementTree as et\n+import xml.etree.ElementTree as et\n \n # Bokeh imports\n from ..util.sampledata import external_path, open_csv\ndiff --git a/bokeh/sampledata/us_states.py b/bokeh/sampledata/us_states.py\n--- a/bokeh/sampledata/us_states.py\n+++ b/bokeh/sampledata/us_states.py\n@@ -30,7 +30,7 @@\n import codecs\n import csv\n import gzip\n-import xml.etree.cElementTree as et\n+import xml.etree.ElementTree as et\n \n # Bokeh imports\n from ..util.sampledata import package_path\n", "issue": "[BUG] cElementTree has been deprecated and will be removed in favor of ElementTree\nReference : https://bugs.python.org/issue36543\r\n\r\n```\r\nbokeh/sampledata/us_states.py\r\n33:import xml.etree.cElementTree as et\r\n\r\nbokeh/sampledata/us_counties.py\r\n40:import xml.etree.cElementTree as et\r\n```\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' This modules exposes geometry data for Unites States. It exposes a\ndictionary ``data``, which is indexed by the two-tuples:\n\n.. code-block:: python\n\n (state_id, county_id)\n\nthat have the following dictionaries as the associated value:\n\n.. code-block:: python\n\n data[(1,1)]['name']\n data[(1,1)]['state']\n data[(1,1)]['detailed name']\n data[(1,1)]['lats']\n data[(1,1)]['lons']\n\nEntries for ``'name'`` can have duplicates for certain states (e.g. 
Virginia).\nThe combination of ``'detailed name'`` and ``'state'`` will always be unique.\n\n'''\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nimport csv\nimport xml.etree.cElementTree as et\n\n# Bokeh imports\nfrom ..util.sampledata import external_path, open_csv\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'data',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\ndef _read_data():\n '''\n\n '''\n nan = float('NaN')\n\n data = {}\n\n with open_csv(external_path('US_Counties.csv')) as f:\n next(f)\n reader = csv.reader(f, delimiter=str(','), quotechar=str('\"'))\n for row in reader:\n name, dummy, state, dummy, geometry, dummy, dummy, dummy, det_name, state_id, county_id, dummy, dummy = row\n xml = et.fromstring(geometry)\n lats = []\n lons = []\n for i, poly in enumerate(xml.findall('.//outerBoundaryIs/LinearRing/coordinates')):\n if i > 0:\n lats.append(nan)\n lons.append(nan)\n coords = (c.split(',')[:2] for c in poly.text.split())\n lat, lon = list(zip(*[(float(lat), float(lon)) for lon, lat in\n coords]))\n lats.extend(lat)\n lons.extend(lon)\n data[(int(state_id), int(county_id))] = {\n 'name' : name,\n 'detailed name' : det_name,\n 'state' : state,\n 'lats' : lats,\n 'lons' : lons,\n }\n\n return data\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\ndata = _read_data()\n", "path": "bokeh/sampledata/us_counties.py"}, {"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n'''\nThis modules exposes geometry data for Unites States. 
It exposes a dictionary 'data' which is\nindexed by the two letter state code (e.g., 'CA', 'TX') and has the following dictionary as the\nassociated value:\n\n data['CA']['name']\n data['CA']['region']\n data['CA']['lats']\n data['CA']['lons']\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nimport codecs\nimport csv\nimport gzip\nimport xml.etree.cElementTree as et\n\n# Bokeh imports\nfrom ..util.sampledata import package_path\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'data',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\ndef _read_data():\n '''\n\n '''\n nan = float('NaN')\n\n data = {}\n\n with gzip.open(package_path('US_Regions_State_Boundaries.csv.gz')) as f:\n decoded = codecs.iterdecode(f, \"utf-8\")\n next(decoded)\n reader = csv.reader(decoded, delimiter=str(','), quotechar=str('\"'))\n for row in reader:\n region, name, code, geometry, dummy = row\n xml = et.fromstring(geometry)\n lats = []\n lons = []\n for i, poly in enumerate(xml.findall('.//outerBoundaryIs/LinearRing/coordinates')):\n if i > 0:\n lats.append(nan)\n lons.append(nan)\n coords = (c.split(',')[:2] for c in poly.text.split())\n lat, lon = list(zip(*[(float(lat), float(lon)) for lon, lat in\n coords]))\n lats.extend(lat)\n lons.extend(lon)\n data[code] = {\n 'name' : name,\n 'region' : region,\n 'lats' : lats,\n 'lons' : lons,\n }\n\n return data\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\ndata = _read_data()\n", "path": "bokeh/sampledata/us_states.py"}]}
| 2,213 | 205 |
gh_patches_debug_8327
|
rasdani/github-patches
|
git_diff
|
qutebrowser__qutebrowser-3652
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Empty completion.timestamp_format crashes
After `:set completion.timestamp_format ''`:
```
17:26:29 ERROR misc crashsignal:exception_hook:216 Uncaught exception
Traceback (most recent call last):
File "/home/florian/proj/qutebrowser/git/qutebrowser/completion/completer.py", line 260, in _update_completion
completion.set_pattern(pattern)
File "/home/florian/proj/qutebrowser/git/qutebrowser/completion/completionwidget.py", line 320, in set_pattern
self.model().set_pattern(pattern)
File "/home/florian/proj/qutebrowser/git/qutebrowser/completion/models/completionmodel.py", line 185, in set_pattern
cat.set_pattern(pattern)
File "/home/florian/proj/qutebrowser/git/qutebrowser/completion/models/histcategory.py", line 85, in set_pattern
.format(timestamp_format.replace("'", "`")))
AttributeError: 'NoneType' object has no attribute 'replace'
```
cc @rcorre and @erikdsjostrom who reported this
</issue>
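A minimal standalone sketch of the guard the traceback suggests (the variable values are made up; this is not qutebrowser code): treat a `None` setting as an empty format string before calling `.replace()` on it.

```python
timestamp_format = None  # what an emptied completion.timestamp_format apparently yields

safe_format = timestamp_format or ''   # guard against None
timefmt = ("strftime('{}', last_atime, 'unixepoch', 'localtime')"
           .format(safe_format.replace("'", "`")))
print(timefmt)  # strftime('', last_atime, 'unixepoch', 'localtime')
```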
<code>
[start of qutebrowser/completion/models/histcategory.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2017-2018 Ryan Roden-Corrent (rcorre) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """A completion category that queries the SQL History store."""
21
22 from PyQt5.QtSql import QSqlQueryModel
23
24 from qutebrowser.misc import sql
25 from qutebrowser.utils import debug
26 from qutebrowser.config import config
27
28
29 class HistoryCategory(QSqlQueryModel):
30
31 """A completion category that queries the SQL History store."""
32
33 def __init__(self, *, delete_func=None, parent=None):
34 """Create a new History completion category."""
35 super().__init__(parent=parent)
36 self.name = "History"
37 self._query = None
38
39 # advertise that this model filters by URL and title
40 self.columns_to_filter = [0, 1]
41 self.delete_func = delete_func
42
43 def _atime_expr(self):
44 """If max_items is set, return an expression to limit the query."""
45 max_items = config.val.completion.web_history_max_items
46 # HistoryCategory should not be added to the completion in that case.
47 assert max_items != 0
48
49 if max_items < 0:
50 return ''
51
52 min_atime = sql.Query(' '.join([
53 'SELECT min(last_atime) FROM',
54 '(SELECT last_atime FROM CompletionHistory',
55 'ORDER BY last_atime DESC LIMIT :limit)',
56 ])).run(limit=max_items).value()
57
58 if not min_atime:
59 # if there are no history items, min_atime may be '' (issue #2849)
60 return ''
61
62 return "AND last_atime >= {}".format(min_atime)
63
64 def set_pattern(self, pattern):
65 """Set the pattern used to filter results.
66
67 Args:
68 pattern: string pattern to filter by.
69 """
70 # escape to treat a user input % or _ as a literal, not a wildcard
71 pattern = pattern.replace('%', '\\%')
72 pattern = pattern.replace('_', '\\_')
73 words = ['%{}%'.format(w) for w in pattern.split(' ')]
74
75 # build a where clause to match all of the words in any order
76 # given the search term "a b", the WHERE clause would be:
77 # ((url || title) LIKE '%a%') AND ((url || title) LIKE '%b%')
78 where_clause = ' AND '.join(
79 "(url || title) LIKE :{} escape '\\'".format(i)
80 for i in range(len(words)))
81
82 # replace ' in timestamp-format to avoid breaking the query
83 timestamp_format = config.val.completion.timestamp_format
84 timefmt = ("strftime('{}', last_atime, 'unixepoch', 'localtime')"
85 .format(timestamp_format.replace("'", "`")))
86
87 if not self._query or len(words) != len(self._query.boundValues()):
88 # if the number of words changed, we need to generate a new query
89 # otherwise, we can reuse the prepared query for performance
90 self._query = sql.Query(' '.join([
91 "SELECT url, title, {}".format(timefmt),
92 "FROM CompletionHistory",
93 # the incoming pattern will have literal % and _ escaped
94 # we need to tell sql to treat '\' as an escape character
95 'WHERE ({})'.format(where_clause),
96 self._atime_expr(),
97 "ORDER BY last_atime DESC",
98 ]), forward_only=False)
99
100 with debug.log_time('sql', 'Running completion query'):
101 self._query.run(**{
102 str(i): w for i, w in enumerate(words)})
103 self.setQuery(self._query)
104
105 def removeRows(self, row, _count, _parent=None):
106 """Override QAbstractItemModel::removeRows to re-run sql query."""
107 # re-run query to reload updated table
108 with debug.log_time('sql', 'Re-running completion query post-delete'):
109 self._query.run()
110 self.setQuery(self._query)
111 while self.rowCount() < row:
112 self.fetchMore()
113 return True
114
[end of qutebrowser/completion/models/histcategory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/qutebrowser/completion/models/histcategory.py b/qutebrowser/completion/models/histcategory.py
--- a/qutebrowser/completion/models/histcategory.py
+++ b/qutebrowser/completion/models/histcategory.py
@@ -80,7 +80,7 @@
for i in range(len(words)))
# replace ' in timestamp-format to avoid breaking the query
- timestamp_format = config.val.completion.timestamp_format
+ timestamp_format = config.val.completion.timestamp_format or ''
timefmt = ("strftime('{}', last_atime, 'unixepoch', 'localtime')"
.format(timestamp_format.replace("'", "`")))
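
Why the one-line change is enough, per the traceback in this row: clearing `completion.timestamp_format` makes the config lookup come back as `None`, and `.replace` was being called on that value directly. Coercing with `or ''` keeps the strftime pattern a plain string. A minimal standalone sketch of the guarded expression; `build_timefmt` is a made-up helper for illustration, not qutebrowser code:

```python
# Standalone sketch; build_timefmt mirrors the patched expression only.
def build_timefmt(timestamp_format):
    timestamp_format = timestamp_format or ''   # None (cleared option) and '' both become ''
    return ("strftime('{}', last_atime, 'unixepoch', 'localtime')"
            .format(timestamp_format.replace("'", "`")))

print(build_timefmt(None))              # no AttributeError, just an empty strftime pattern
print(build_timefmt('%Y-%m-%d %H:%M'))  # normal case unchanged
```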
|
{"golden_diff": "diff --git a/qutebrowser/completion/models/histcategory.py b/qutebrowser/completion/models/histcategory.py\n--- a/qutebrowser/completion/models/histcategory.py\n+++ b/qutebrowser/completion/models/histcategory.py\n@@ -80,7 +80,7 @@\n for i in range(len(words)))\n \n # replace ' in timestamp-format to avoid breaking the query\n- timestamp_format = config.val.completion.timestamp_format\n+ timestamp_format = config.val.completion.timestamp_format or ''\n timefmt = (\"strftime('{}', last_atime, 'unixepoch', 'localtime')\"\n .format(timestamp_format.replace(\"'\", \"`\")))\n", "issue": "Empty completion.timestamp_format crashes\nAfter `:set completion.timestamp_format ''`:\r\n\r\n```\r\n17:26:29 ERROR misc crashsignal:exception_hook:216 Uncaught exception\r\nTraceback (most recent call last):\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/completion/completer.py\", line 260, in _update_completion\r\n completion.set_pattern(pattern)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/completion/completionwidget.py\", line 320, in set_pattern\r\n self.model().set_pattern(pattern)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/completion/models/completionmodel.py\", line 185, in set_pattern\r\n cat.set_pattern(pattern)\r\n File \"/home/florian/proj/qutebrowser/git/qutebrowser/completion/models/histcategory.py\", line 85, in set_pattern\r\n .format(timestamp_format.replace(\"'\", \"`\")))\r\nAttributeError: 'NoneType' object has no attribute 'replace'\r\n```\r\n\r\ncc @rcorre and @erikdsjostrom who reported this\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2017-2018 Ryan Roden-Corrent (rcorre) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"A completion category that queries the SQL History store.\"\"\"\n\nfrom PyQt5.QtSql import QSqlQueryModel\n\nfrom qutebrowser.misc import sql\nfrom qutebrowser.utils import debug\nfrom qutebrowser.config import config\n\n\nclass HistoryCategory(QSqlQueryModel):\n\n \"\"\"A completion category that queries the SQL History store.\"\"\"\n\n def __init__(self, *, delete_func=None, parent=None):\n \"\"\"Create a new History completion category.\"\"\"\n super().__init__(parent=parent)\n self.name = \"History\"\n self._query = None\n\n # advertise that this model filters by URL and title\n self.columns_to_filter = [0, 1]\n self.delete_func = delete_func\n\n def _atime_expr(self):\n \"\"\"If max_items is set, return an expression to limit the query.\"\"\"\n max_items = config.val.completion.web_history_max_items\n # HistoryCategory should not be added to the completion in that case.\n assert max_items != 0\n\n if max_items < 0:\n return ''\n\n min_atime = sql.Query(' '.join([\n 'SELECT min(last_atime) FROM',\n '(SELECT last_atime FROM CompletionHistory',\n 'ORDER BY last_atime DESC LIMIT :limit)',\n ])).run(limit=max_items).value()\n\n if not min_atime:\n # if there are no history items, min_atime may be '' (issue #2849)\n return ''\n\n return \"AND last_atime >= {}\".format(min_atime)\n\n def set_pattern(self, pattern):\n \"\"\"Set the pattern used to filter results.\n\n Args:\n pattern: string pattern to filter by.\n \"\"\"\n # escape to treat a user input % or _ as a literal, not a wildcard\n pattern = pattern.replace('%', '\\\\%')\n pattern = pattern.replace('_', '\\\\_')\n words = ['%{}%'.format(w) for w in pattern.split(' ')]\n\n # build a where clause to match all of the words in any order\n # given the search term \"a b\", the WHERE clause would be:\n # ((url || title) LIKE '%a%') AND ((url || title) LIKE '%b%')\n where_clause = ' AND '.join(\n \"(url || title) LIKE :{} escape '\\\\'\".format(i)\n for i in range(len(words)))\n\n # replace ' in timestamp-format to avoid breaking the query\n timestamp_format = config.val.completion.timestamp_format\n timefmt = (\"strftime('{}', last_atime, 'unixepoch', 'localtime')\"\n .format(timestamp_format.replace(\"'\", \"`\")))\n\n if not self._query or len(words) != len(self._query.boundValues()):\n # if the number of words changed, we need to generate a new query\n # otherwise, we can reuse the prepared query for performance\n self._query = sql.Query(' '.join([\n \"SELECT url, title, {}\".format(timefmt),\n \"FROM CompletionHistory\",\n # the incoming pattern will have literal % and _ escaped\n # we need to tell sql to treat '\\' as an escape character\n 'WHERE ({})'.format(where_clause),\n self._atime_expr(),\n \"ORDER BY last_atime DESC\",\n ]), forward_only=False)\n\n with debug.log_time('sql', 'Running completion query'):\n self._query.run(**{\n str(i): w for i, w in enumerate(words)})\n self.setQuery(self._query)\n\n def removeRows(self, row, _count, _parent=None):\n \"\"\"Override QAbstractItemModel::removeRows to re-run sql query.\"\"\"\n # re-run query to reload updated table\n with debug.log_time('sql', 'Re-running completion query post-delete'):\n self._query.run()\n self.setQuery(self._query)\n while self.rowCount() < row:\n self.fetchMore()\n return True\n", "path": "qutebrowser/completion/models/histcategory.py"}]}
| 2,058 | 140 |
gh_patches_debug_38751
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-116
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python Model download for GCS and S3
Downloading from GCS and S3 needs to be completed.
https://github.com/kubeflow/kfserving/blob/2f8d33d1a9773c5694a22ba749192163251fe287/python/kfserving/kfserving/storage.py#L27-L33
</issue>
<code>
[start of python/kfserving/kfserving/storage.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16 import tempfile
17 import os
18
19 _GCS_PREFIX = "gs://"
20 _S3_PREFIX = "s3://"
21 _LOCAL_PREFIX = "file://"
22
23
24 class Storage(object):
25 @staticmethod
26 def download(uri: str) -> str:
27 logging.info("Copying contents of %s to local" % uri)
28 if uri.startswith(_LOCAL_PREFIX) or os.path.exists(uri):
29 return Storage._download_local(uri)
30
31 temp_dir = tempfile.mkdtemp()
32 if uri.startswith(_GCS_PREFIX):
33 Storage._download_gcs(uri, temp_dir)
34 elif uri.startswith(_S3_PREFIX):
35 Storage._download_s3(uri, temp_dir)
36 else:
37 raise Exception("Cannot recognize storage type for " + uri +
38 "\n'%s', '%s', and '%s' are the current available storage type." %
39 (_GCS_PREFIX, _S3_PREFIX, _LOCAL_PREFIX))
40
41 logging.info("Successfully copied %s to %s" % (uri, temp_dir))
42 return temp_dir
43
44 @staticmethod
45 def _download_s3(uri, temp_dir: str):
46 raise NotImplementedError
47
48 @staticmethod
49 def _download_gcs(uri, temp_dir: str):
50 raise NotImplementedError
51
52 @staticmethod
53 def _download_local(uri):
54 local_path = uri.replace(_LOCAL_PREFIX, "", 1)
55 if not os.path.exists(local_path):
56 raise Exception("Local path %s does not exist." % (uri))
57 return local_path
58
[end of python/kfserving/kfserving/storage.py]
[start of python/kfserving/setup.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-tornasync',
20 'mypy'
21 ]
22
23 setup(
24 name='kfserver',
25 version='0.1.0',
26 author_email='[email protected]',
27 license='../../LICENSE.txt',
28 url='https://github.com/kubeflow/kfserving/python/kfserving/kfserving',
29 description='Model Server for arbitrary python ML frameworks.',
30 long_description=open('README.md').read(),
31 python_requires='>3.4',
32 packages=find_packages("kfserving"),
33 install_requires=[
34 "tornado >= 1.4.1",
35 "argparse >= 1.4.0",
36 "numpy"
37 ],
38 tests_require=tests_require,
39 extras_require={'test': tests_require}
40 )
41
[end of python/kfserving/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/kfserving/kfserving/storage.py b/python/kfserving/kfserving/storage.py
--- a/python/kfserving/kfserving/storage.py
+++ b/python/kfserving/kfserving/storage.py
@@ -15,6 +15,10 @@
import logging
import tempfile
import os
+import re
+from minio import Minio
+from google.cloud import storage
+from google.auth import exceptions
_GCS_PREFIX = "gs://"
_S3_PREFIX = "s3://"
@@ -43,11 +47,36 @@
@staticmethod
def _download_s3(uri, temp_dir: str):
- raise NotImplementedError
+ client = Storage._create_minio_client()
+ bucket_args = uri.replace(_S3_PREFIX, "", 1).split("/", 1)
+ bucket_name = bucket_args[0]
+ bucket_path = bucket_args[1] if len(bucket_args) > 1 else ""
+ objects = client.list_objects(bucket_name, prefix=bucket_path, recursive=True)
+ for obj in objects:
+ # Replace any prefix from the object key with temp_dir
+ subdir_object_key = obj.object_name.replace(bucket_path, "", 1).strip("/")
+ client.fget_object(bucket_name, obj.object_name, os.path.join(temp_dir, subdir_object_key))
@staticmethod
def _download_gcs(uri, temp_dir: str):
- raise NotImplementedError
+ try:
+ storage_client = storage.Client()
+ except exceptions.DefaultCredentialsError as e:
+ storage_client = storage.Client.create_anonymous_client()
+ bucket_args = uri.replace(_GCS_PREFIX, "", 1).split("/", 1)
+ bucket_name = bucket_args[0]
+ bucket_path = bucket_args[1] if len(bucket_args) > 1 else ""
+ bucket = storage_client.bucket(bucket_name)
+ blobs = bucket.list_blobs(prefix=bucket_path)
+ for blob in blobs:
+ # Replace any prefix from the object key with temp_dir
+ subdir_object_key = blob.name.replace(bucket_path, "", 1).strip("/")
+ # Create necessary subdirectory to store the object locally
+ if "/" in subdir_object_key:
+ local_object_dir = os.path.join(temp_dir, subdir_object_key.rsplit("/", 1)[0])
+ if not os.path.isdir(local_object_dir):
+ os.makedirs(local_object_dir, exist_ok=True)
+ blob.download_to_filename(os.path.join(temp_dir, subdir_object_key))
@staticmethod
def _download_local(uri):
@@ -55,3 +84,13 @@
if not os.path.exists(local_path):
raise Exception("Local path %s does not exist." % (uri))
return local_path
+
+ @staticmethod
+ def _create_minio_client():
+ # Remove possible http scheme for Minio
+ url = re.compile(r"https?://")
+ minioClient = Minio(url.sub("", os.getenv("S3_ENDPOINT", "")),
+ access_key=os.getenv("AWS_ACCESS_KEY_ID", ""),
+ secret_key=os.getenv("AWS_SECRET_ACCESS_KEY", ""),
+ secure=True)
+ return minioClient
diff --git a/python/kfserving/setup.py b/python/kfserving/setup.py
--- a/python/kfserving/setup.py
+++ b/python/kfserving/setup.py
@@ -33,6 +33,8 @@
install_requires=[
"tornado >= 1.4.1",
"argparse >= 1.4.0",
+ "minio >= 4.0.9",
+ "google-cloud-storage >= 1.16.0",
"numpy"
],
tests_require=tests_require,
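
A hypothetical usage sketch of the completed download paths, assuming the package is importable as `kfserving` and that credentials are supplied through the environment variables the new `_create_minio_client()` helper reads; bucket names and values below are placeholders:

```python
# Sketch only; URIs, bucket names, and credential values are placeholders.
import os
from kfserving.storage import Storage

# S3/Minio credentials are read from the environment by _create_minio_client()
os.environ.setdefault("S3_ENDPOINT", "https://s3.amazonaws.com")
os.environ.setdefault("AWS_ACCESS_KEY_ID", "<access-key>")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "<secret-key>")

s3_dir = Storage.download("s3://my-bucket/models/my-model")   # objects copied into a temp dir
gcs_dir = Storage.download("gs://my-bucket/models/my-model")  # falls back to an anonymous GCS client
print(s3_dir, gcs_dir)                                        # each call returns the local temp dir path
```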
|
{"golden_diff": "diff --git a/python/kfserving/kfserving/storage.py b/python/kfserving/kfserving/storage.py\n--- a/python/kfserving/kfserving/storage.py\n+++ b/python/kfserving/kfserving/storage.py\n@@ -15,6 +15,10 @@\n import logging\n import tempfile\n import os\n+import re\n+from minio import Minio\n+from google.cloud import storage\n+from google.auth import exceptions\n \n _GCS_PREFIX = \"gs://\"\n _S3_PREFIX = \"s3://\"\n@@ -43,11 +47,36 @@\n \n @staticmethod\n def _download_s3(uri, temp_dir: str):\n- raise NotImplementedError\n+ client = Storage._create_minio_client()\n+ bucket_args = uri.replace(_S3_PREFIX, \"\", 1).split(\"/\", 1)\n+ bucket_name = bucket_args[0]\n+ bucket_path = bucket_args[1] if len(bucket_args) > 1 else \"\"\n+ objects = client.list_objects(bucket_name, prefix=bucket_path, recursive=True)\n+ for obj in objects:\n+ # Replace any prefix from the object key with temp_dir\n+ subdir_object_key = obj.object_name.replace(bucket_path, \"\", 1).strip(\"/\")\n+ client.fget_object(bucket_name, obj.object_name, os.path.join(temp_dir, subdir_object_key))\n \n @staticmethod\n def _download_gcs(uri, temp_dir: str):\n- raise NotImplementedError\n+ try:\n+ storage_client = storage.Client()\n+ except exceptions.DefaultCredentialsError as e:\n+ storage_client = storage.Client.create_anonymous_client()\n+ bucket_args = uri.replace(_GCS_PREFIX, \"\", 1).split(\"/\", 1)\n+ bucket_name = bucket_args[0]\n+ bucket_path = bucket_args[1] if len(bucket_args) > 1 else \"\"\n+ bucket = storage_client.bucket(bucket_name)\n+ blobs = bucket.list_blobs(prefix=bucket_path)\n+ for blob in blobs:\n+ # Replace any prefix from the object key with temp_dir\n+ subdir_object_key = blob.name.replace(bucket_path, \"\", 1).strip(\"/\")\n+ # Create necessary subdirectory to store the object locally\n+ if \"/\" in subdir_object_key:\n+ local_object_dir = os.path.join(temp_dir, subdir_object_key.rsplit(\"/\", 1)[0])\n+ if not os.path.isdir(local_object_dir):\n+ os.makedirs(local_object_dir, exist_ok=True)\n+ blob.download_to_filename(os.path.join(temp_dir, subdir_object_key))\n \n @staticmethod\n def _download_local(uri):\n@@ -55,3 +84,13 @@\n if not os.path.exists(local_path):\n raise Exception(\"Local path %s does not exist.\" % (uri))\n return local_path\n+\n+ @staticmethod\n+ def _create_minio_client():\n+ # Remove possible http scheme for Minio\n+ url = re.compile(r\"https?://\")\n+ minioClient = Minio(url.sub(\"\", os.getenv(\"S3_ENDPOINT\", \"\")),\n+ access_key=os.getenv(\"AWS_ACCESS_KEY_ID\", \"\"),\n+ secret_key=os.getenv(\"AWS_SECRET_ACCESS_KEY\", \"\"),\n+ secure=True)\n+ return minioClient\ndiff --git a/python/kfserving/setup.py b/python/kfserving/setup.py\n--- a/python/kfserving/setup.py\n+++ b/python/kfserving/setup.py\n@@ -33,6 +33,8 @@\n install_requires=[\n \"tornado >= 1.4.1\",\n \"argparse >= 1.4.0\",\n+ \"minio >= 4.0.9\",\n+ \"google-cloud-storage >= 1.16.0\",\n \"numpy\"\n ],\n tests_require=tests_require,\n", "issue": "Python Model download for GCS and S3\nDownloading from GCS and S3 needs to be completed.\r\n\r\nhttps://github.com/kubeflow/kfserving/blob/2f8d33d1a9773c5694a22ba749192163251fe287/python/kfserving/kfserving/storage.py#L27-L33\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in 
writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport tempfile\nimport os\n\n_GCS_PREFIX = \"gs://\"\n_S3_PREFIX = \"s3://\"\n_LOCAL_PREFIX = \"file://\"\n\n\nclass Storage(object):\n @staticmethod\n def download(uri: str) -> str:\n logging.info(\"Copying contents of %s to local\" % uri)\n if uri.startswith(_LOCAL_PREFIX) or os.path.exists(uri):\n return Storage._download_local(uri)\n\n temp_dir = tempfile.mkdtemp()\n if uri.startswith(_GCS_PREFIX):\n Storage._download_gcs(uri, temp_dir)\n elif uri.startswith(_S3_PREFIX):\n Storage._download_s3(uri, temp_dir)\n else:\n raise Exception(\"Cannot recognize storage type for \" + uri +\n \"\\n'%s', '%s', and '%s' are the current available storage type.\" %\n (_GCS_PREFIX, _S3_PREFIX, _LOCAL_PREFIX))\n\n logging.info(\"Successfully copied %s to %s\" % (uri, temp_dir))\n return temp_dir\n\n @staticmethod\n def _download_s3(uri, temp_dir: str):\n raise NotImplementedError\n\n @staticmethod\n def _download_gcs(uri, temp_dir: str):\n raise NotImplementedError\n\n @staticmethod\n def _download_local(uri):\n local_path = uri.replace(_LOCAL_PREFIX, \"\", 1)\n if not os.path.exists(local_path):\n raise Exception(\"Local path %s does not exist.\" % (uri))\n return local_path\n", "path": "python/kfserving/kfserving/storage.py"}, {"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='kfserver',\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/kfserving',\n description='Model Server for arbitrary python ML frameworks.',\n long_description=open('README.md').read(),\n python_requires='>3.4',\n packages=find_packages(\"kfserving\"),\n install_requires=[\n \"tornado >= 1.4.1\",\n \"argparse >= 1.4.0\",\n \"numpy\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/kfserving/setup.py"}]}
| 1,600 | 816 |
gh_patches_debug_17010
|
rasdani/github-patches
|
git_diff
|
mlflow__mlflow-4880
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Alembic migration for metrics table uses incorrect server default
## MLflow Roadmap Item
This is an MLflow Roadmap item that has been prioritized by the MLflow maintainers. We're seeking help with the implementation of roadmap items tagged with the `help wanted` label.
For requirements clarifications and implementation questions, or to request a PR review, please tag @harupy in your communications related to this issue.
### System information
- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: No
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: miniconda container - debian buster
- **MLflow installed from (source or binary)**: mlflow from pypi
- **MLflow version (run ``mlflow --version``)**: mlflow 1.14.1 trying to upgrade to 1.16 or 1.17
- **Python version**: Python 3.9.2
- **npm version, if running the dev UI**: NA
- **Exact command to reproduce**: mlflow db upgrade <MSSQL connection string >
- Tracking server DB: Azure Microsoft SQL DB
### Describe the problem
When I upgrade the database from 1.14.1 to a higher version I get an error. Currently use an Azure MSFT DB. Would like to upgrade to 1.16 or 1.17
### Code to reproduce issue
mlflow db upgrade "mssql+pyodbc://_rest_of_conn_string"
### Other info / logs
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Column already has a DEFAULT bound to it. (1781) (SQLExecDirectW)')
[SQL: ALTER TABLE metrics ADD DEFAULT '0' FOR is_nan]
(Background on this error at: http://sqlalche.me/e/14/f405)
### What component(s), interfaces, languages, and integrations does this bug affect?
Components
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [x] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs
- [ ] `area/server-infra`: MLflow server, JavaScript dev server
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
Interface
- [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [x] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
Language
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
Integrations
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
</issue>
<code>
[start of mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py]
1 """reset_default_value_for_is_nan_in_metrics_table_for_mysql
2
3 Revision ID: c48cb773bb87
4 Revises: 39d1c3be5f05
5 Create Date: 2021-04-02 15:43:28.466043
6
7 """
8 from alembic import op
9 import sqlalchemy as sa
10
11
12 # revision identifiers, used by Alembic.
13 revision = "c48cb773bb87"
14 down_revision = "39d1c3be5f05"
15 branch_labels = None
16 depends_on = None
17
18
19 def upgrade():
20 # This part of the migration is only relevant for MySQL.
21 # In 39d1c3be5f05_add_is_nan_constraint_for_metrics_tables_if_necessary.py
22 # (added in MLflow 1.15.0), `alter_column` is called on the `is_nan` column in the `metrics`
23 # table without specifying `existing_server_default`. This alters the column default value to
24 # NULL in MySQL (see the doc below).
25 #
26 # https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.alter_column
27 #
28 # To revert this change, set the default column value to "0" by specifying `server_default`
29 with op.batch_alter_table("metrics") as batch_op:
30 batch_op.alter_column(
31 "is_nan",
32 type_=sa.types.Boolean(create_constraint=True),
33 nullable=False,
34 server_default="0",
35 )
36
37
38 def downgrade():
39 pass
40
[end of mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py b/mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py
--- a/mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py
+++ b/mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py
@@ -26,13 +26,15 @@
# https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.alter_column
#
# To revert this change, set the default column value to "0" by specifying `server_default`
- with op.batch_alter_table("metrics") as batch_op:
- batch_op.alter_column(
- "is_nan",
- type_=sa.types.Boolean(create_constraint=True),
- nullable=False,
- server_default="0",
- )
+ bind = op.get_bind()
+ if bind.engine.name == "mysql":
+ with op.batch_alter_table("metrics") as batch_op:
+ batch_op.alter_column(
+ "is_nan",
+ type_=sa.types.Boolean(create_constraint=True),
+ nullable=False,
+ server_default="0",
+ )
def downgrade():
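
The important part of the fix is the dialect gate: `op.get_bind().engine.name` is `"mysql"` only when the tracking database is MySQL, so on SQL Server (where the column already carries a bound DEFAULT and adding a second one raises error 1781) the ALTER is skipped entirely. A small sketch for checking which backend a connection string resolves to before running `mlflow db upgrade`, assuming SQLAlchemy 1.4+ and a placeholder URI:

```python
# Sketch: confirm which backend a tracking URI maps to; the URI is a placeholder.
from sqlalchemy.engine import make_url

url = make_url("mssql+pyodbc://user:***@myserver.database.windows.net/mlflow"
               "?driver=ODBC+Driver+17+for+SQL+Server")
print(url.get_backend_name())   # -> "mssql", so the MySQL-only default reset above is skipped
```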
|
{"golden_diff": "diff --git a/mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py b/mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py\n--- a/mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py\n+++ b/mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py\n@@ -26,13 +26,15 @@\n # https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.alter_column\n #\n # To revert this change, set the default column value to \"0\" by specifying `server_default`\n- with op.batch_alter_table(\"metrics\") as batch_op:\n- batch_op.alter_column(\n- \"is_nan\",\n- type_=sa.types.Boolean(create_constraint=True),\n- nullable=False,\n- server_default=\"0\",\n- )\n+ bind = op.get_bind()\n+ if bind.engine.name == \"mysql\":\n+ with op.batch_alter_table(\"metrics\") as batch_op:\n+ batch_op.alter_column(\n+ \"is_nan\",\n+ type_=sa.types.Boolean(create_constraint=True),\n+ nullable=False,\n+ server_default=\"0\",\n+ )\n \n \n def downgrade():\n", "issue": "[BUG] Alembic migration for metrics table uses incorrect server default\n## MLflow Roadmap Item\r\n\r\nThis is an MLflow Roadmap item that has been prioritized by the MLflow maintainers. We're seeking help with the implementation of roadmap items tagged with the `help wanted` label.\r\n\r\nFor requirements clarifications and implementation questions, or to request a PR review, please tag @harupy in your communications related to this issue.\r\n\r\n### System information\r\n- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: No\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: miniconda container - debian buster\r\n- **MLflow installed from (source or binary)**: mflow from pypi\r\n- **MLflow version (run ``mlflow --version``)**: mlflow 1.14.1 trying to upgrade to 1.16 or 1.17\r\n- **Python version**: Python 3.9.2\r\n- **npm version, if running the dev UI**: NA\r\n- **Exact command to reproduce**: mlflow db upgrade <MSSQL connection string >\r\n- Tracking server DB: Azure Microsoft SQL DB\r\n\r\n### Describe the problem\r\nWhen I upgrade the database from 1.14.1 to a higher version I get an error. Currently use an Azure MSFT DB. Would like to upgrade to 1.16 or 1.17\r\n\r\n### Code to reproduce issue\r\nmlflow db upgrade \"mssql+pyodbc://_rest_of_conn_string\"\r\n\r\n### Other info / logs\r\nsqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Column already has a DEFAULT bound to it. 
(1781) (SQLExecDirectW)')\r\n[SQL: ALTER TABLE metrics ADD DEFAULT '0' FOR is_nan]\r\n(Background on this error at: http://sqlalche.me/e/14/f405)\r\n\r\n\r\n### What component(s), interfaces, languages, and integrations does this bug affect?\r\nComponents \r\n- [ ] `area/artifacts`: Artifact stores and artifact logging\r\n- [ ] `area/build`: Build and test infrastructure for MLflow\r\n- [ ] `area/docs`: MLflow documentation pages\r\n- [ ] `area/examples`: Example code\r\n- [x] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry\r\n- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors\r\n- [ ] `area/projects`: MLproject format, project running backends\r\n- [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs\r\n- [ ] `area/server-infra`: MLflow server, JavaScript dev server\r\n- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging\r\n\r\nInterface \r\n- [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting\r\n- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models\r\n- [x] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry\r\n- [ ] `area/windows`: Windows support\r\n\r\nLanguage \r\n- [ ] `language/r`: R APIs and clients\r\n- [ ] `language/java`: Java APIs and clients\r\n- [ ] `language/new`: Proposals for new client languages\r\n\r\nIntegrations\r\n- [ ] `integrations/azure`: Azure and Azure ML integrations\r\n- [ ] `integrations/sagemaker`: SageMaker integrations\r\n- [ ] `integrations/databricks`: Databricks integrations\r\n\n", "before_files": [{"content": "\"\"\"reset_default_value_for_is_nan_in_metrics_table_for_mysql\n\nRevision ID: c48cb773bb87\nRevises: 39d1c3be5f05\nCreate Date: 2021-04-02 15:43:28.466043\n\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = \"c48cb773bb87\"\ndown_revision = \"39d1c3be5f05\"\nbranch_labels = None\ndepends_on = None\n\n\ndef upgrade():\n # This part of the migration is only relevant for MySQL.\n # In 39d1c3be5f05_add_is_nan_constraint_for_metrics_tables_if_necessary.py\n # (added in MLflow 1.15.0), `alter_column` is called on the `is_nan` column in the `metrics`\n # table without specifying `existing_server_default`. This alters the column default value to\n # NULL in MySQL (see the doc below).\n #\n # https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.alter_column\n #\n # To revert this change, set the default column value to \"0\" by specifying `server_default`\n with op.batch_alter_table(\"metrics\") as batch_op:\n batch_op.alter_column(\n \"is_nan\",\n type_=sa.types.Boolean(create_constraint=True),\n nullable=False,\n server_default=\"0\",\n )\n\n\ndef downgrade():\n pass\n", "path": "mlflow/store/db_migrations/versions/c48cb773bb87_reset_default_value_for_is_nan_in_metrics_table_for_mysql.py"}]}
| 1,814 | 328 |
gh_patches_debug_2329
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-5539
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scim_sync_all task fails
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Logs**
<details>
<summary>Stacktrace from authentik</summary>
```
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/celery.py", line 231, in _inner
reraise(*exc_info)
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/_compat.py", line 60, in reraise
raise value
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/celery.py", line 226, in _inner
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/authentik/providers/scim/tasks.py", line 38, in scim_sync_all
for provider in SCIMProvider.objects.all(backchannel_application__isnull=False):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
builtins.TypeError: BaseManager.all() got an unexpected keyword argument 'backchannel_application__isnull'
```
</details>
**Version and Deployment (please complete the following information):**
- authentik version: 2023.4.1
- Deployment: [e.g. docker-compose, helm]
**Additional context**
Add any other context about the problem here.
</issue>
<code>
[start of authentik/providers/scim/tasks.py]
1 """SCIM Provider tasks"""
2 from typing import Any, Optional
3
4 from celery.result import allow_join_result
5 from django.core.paginator import Paginator
6 from django.db.models import Model, QuerySet
7 from django.utils.text import slugify
8 from django.utils.translation import gettext_lazy as _
9 from pydanticscim.responses import PatchOp
10 from structlog.stdlib import get_logger
11
12 from authentik.core.models import Group, User
13 from authentik.events.monitored_tasks import MonitoredTask, TaskResult, TaskResultStatus
14 from authentik.lib.utils.reflection import path_to_class
15 from authentik.providers.scim.clients import PAGE_SIZE
16 from authentik.providers.scim.clients.base import SCIMClient
17 from authentik.providers.scim.clients.exceptions import SCIMRequestException, StopSync
18 from authentik.providers.scim.clients.group import SCIMGroupClient
19 from authentik.providers.scim.clients.user import SCIMUserClient
20 from authentik.providers.scim.models import SCIMProvider
21 from authentik.root.celery import CELERY_APP
22
23 LOGGER = get_logger(__name__)
24
25
26 def client_for_model(provider: SCIMProvider, model: Model) -> SCIMClient:
27 """Get SCIM client for model"""
28 if isinstance(model, User):
29 return SCIMUserClient(provider)
30 if isinstance(model, Group):
31 return SCIMGroupClient(provider)
32 raise ValueError(f"Invalid model {model}")
33
34
35 @CELERY_APP.task()
36 def scim_sync_all():
37 """Run sync for all providers"""
38 for provider in SCIMProvider.objects.all(backchannel_application__isnull=False):
39 scim_sync.delay(provider.pk)
40
41
42 @CELERY_APP.task(bind=True, base=MonitoredTask)
43 def scim_sync(self: MonitoredTask, provider_pk: int) -> None:
44 """Run SCIM full sync for provider"""
45 provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()
46 if not provider:
47 return
48 self.set_uid(slugify(provider.name))
49 result = TaskResult(TaskResultStatus.SUCCESSFUL, [])
50 result.messages.append(_("Starting full SCIM sync"))
51 LOGGER.debug("Starting SCIM sync")
52 users_paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)
53 groups_paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)
54 with allow_join_result():
55 try:
56 for page in users_paginator.page_range:
57 result.messages.append(_("Syncing page %(page)d of users" % {"page": page}))
58 for msg in scim_sync_users.delay(page, provider_pk).get():
59 result.messages.append(msg)
60 for page in groups_paginator.page_range:
61 result.messages.append(_("Syncing page %(page)d of groups" % {"page": page}))
62 for msg in scim_sync_group.delay(page, provider_pk).get():
63 result.messages.append(msg)
64 except StopSync as exc:
65 self.set_status(TaskResult(TaskResultStatus.ERROR).with_error(exc))
66 return
67 self.set_status(result)
68
69
70 @CELERY_APP.task()
71 def scim_sync_users(page: int, provider_pk: int):
72 """Sync single or multiple users to SCIM"""
73 messages = []
74 provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()
75 if not provider:
76 return messages
77 try:
78 client = SCIMUserClient(provider)
79 except SCIMRequestException:
80 return messages
81 paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)
82 LOGGER.debug("starting user sync for page", page=page)
83 for user in paginator.page(page).object_list:
84 try:
85 client.write(user)
86 except SCIMRequestException as exc:
87 LOGGER.warning("failed to sync user", exc=exc, user=user)
88 messages.append(
89 _(
90 "Failed to sync user due to remote error %(name)s: %(error)s"
91 % {
92 "name": user.username,
93 "error": str(exc),
94 }
95 )
96 )
97 except StopSync as exc:
98 LOGGER.warning("Stopping sync", exc=exc)
99 messages.append(
100 _(
101 "Stopping sync due to error: %(error)s"
102 % {
103 "error": str(exc),
104 }
105 )
106 )
107 break
108 return messages
109
110
111 @CELERY_APP.task()
112 def scim_sync_group(page: int, provider_pk: int):
113 """Sync single or multiple groups to SCIM"""
114 messages = []
115 provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()
116 if not provider:
117 return messages
118 try:
119 client = SCIMGroupClient(provider)
120 except SCIMRequestException:
121 return messages
122 paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)
123 LOGGER.debug("starting group sync for page", page=page)
124 for group in paginator.page(page).object_list:
125 try:
126 client.write(group)
127 except SCIMRequestException as exc:
128 LOGGER.warning("failed to sync group", exc=exc, group=group)
129 messages.append(
130 _(
131 "Failed to sync group due to remote error %(name)s: %(error)s"
132 % {
133 "name": group.name,
134 "error": str(exc),
135 }
136 )
137 )
138 except StopSync as exc:
139 LOGGER.warning("Stopping sync", exc=exc)
140 messages.append(
141 _(
142 "Stopping sync due to error: %(error)s"
143 % {
144 "error": str(exc),
145 }
146 )
147 )
148 break
149 return messages
150
151
152 @CELERY_APP.task()
153 def scim_signal_direct(model: str, pk: Any, raw_op: str):
154 """Handler for post_save and pre_delete signal"""
155 model_class: type[Model] = path_to_class(model)
156 instance = model_class.objects.filter(pk=pk).first()
157 if not instance:
158 return
159 operation = PatchOp(raw_op)
160 for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):
161 client = client_for_model(provider, instance)
162 # Check if the object is allowed within the provider's restrictions
163 queryset: Optional[QuerySet] = None
164 if isinstance(instance, User):
165 queryset = provider.get_user_qs()
166 if isinstance(instance, Group):
167 queryset = provider.get_group_qs()
168 if not queryset:
169 continue
170
171 # The queryset we get from the provider must include the instance we've got given
172 # otherwise ignore this provider
173 if not queryset.filter(pk=instance.pk).exists():
174 continue
175
176 try:
177 if operation == PatchOp.add:
178 client.write(instance)
179 if operation == PatchOp.remove:
180 client.delete(instance)
181 except (StopSync, SCIMRequestException) as exc:
182 LOGGER.warning(exc)
183
184
185 @CELERY_APP.task()
186 def scim_signal_m2m(group_pk: str, action: str, pk_set: list[int]):
187 """Update m2m (group membership)"""
188 group = Group.objects.filter(pk=group_pk).first()
189 if not group:
190 return
191 for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):
192 # Check if the object is allowed within the provider's restrictions
193 queryset: QuerySet = provider.get_group_qs()
194 # The queryset we get from the provider must include the instance we've got given
195 # otherwise ignore this provider
196 if not queryset.filter(pk=group_pk).exists():
197 continue
198
199 client = SCIMGroupClient(provider)
200 try:
201 operation = None
202 if action == "post_add":
203 operation = PatchOp.add
204 if action == "post_remove":
205 operation = PatchOp.remove
206 client.update_group(group, operation, pk_set)
207 except (StopSync, SCIMRequestException) as exc:
208 LOGGER.warning(exc)
209
[end of authentik/providers/scim/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/providers/scim/tasks.py b/authentik/providers/scim/tasks.py
--- a/authentik/providers/scim/tasks.py
+++ b/authentik/providers/scim/tasks.py
@@ -35,7 +35,7 @@
@CELERY_APP.task()
def scim_sync_all():
"""Run sync for all providers"""
- for provider in SCIMProvider.objects.all(backchannel_application__isnull=False):
+ for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):
scim_sync.delay(provider.pk)
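
The one-word change works because Django's manager `.all()` accepts no arguments at all; field lookups such as `backchannel_application__isnull=False` are only accepted by `.filter()` and related methods, which is exactly what the TypeError in the report says. A short sketch, assuming a configured authentik/Django environment:

```python
# Requires a configured authentik/Django environment.
from authentik.providers.scim.models import SCIMProvider

SCIMProvider.objects.all()            # every provider; no lookup kwargs accepted here
SCIMProvider.objects.filter(          # lookups go through .filter()
    backchannel_application__isnull=False
)
```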
|
{"golden_diff": "diff --git a/authentik/providers/scim/tasks.py b/authentik/providers/scim/tasks.py\n--- a/authentik/providers/scim/tasks.py\n+++ b/authentik/providers/scim/tasks.py\n@@ -35,7 +35,7 @@\n @CELERY_APP.task()\n def scim_sync_all():\n \"\"\"Run sync for all providers\"\"\"\n- for provider in SCIMProvider.objects.all(backchannel_application__isnull=False):\n+ for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n scim_sync.delay(provider.pk)\n", "issue": "scim_sync_all task fails\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to '...'\r\n2. Click on '....'\r\n3. Scroll down to '....'\r\n4. See error\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Logs**\r\n<details>\r\n <summary>Stacktrace from authentik</summary>\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 451, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/celery.py\", line 231, in _inner\r\n reraise(*exc_info)\r\n File \"/usr/local/lib/python3.11/site-packages/sentry_sdk/_compat.py\", line 60, in reraise\r\n raise value\r\n File \"/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/celery.py\", line 226, in _inner\r\n return f(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 734, in __protected_call__\r\n return self.run(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/authentik/providers/scim/tasks.py\", line 38, in scim_sync_all\r\n for provider in SCIMProvider.objects.all(backchannel_application__isnull=False):\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbuiltins.TypeError: BaseManager.all() got an unexpected keyword argument 'backchannel_application__isnull'\r\n```\r\n</details>\r\n\r\n\r\n**Version and Deployment (please complete the following information):**\r\n- authentik version: 2023.4.1\r\n- Deployment: [e.g. 
docker-compose, helm]\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n \n", "before_files": [{"content": "\"\"\"SCIM Provider tasks\"\"\"\nfrom typing import Any, Optional\n\nfrom celery.result import allow_join_result\nfrom django.core.paginator import Paginator\nfrom django.db.models import Model, QuerySet\nfrom django.utils.text import slugify\nfrom django.utils.translation import gettext_lazy as _\nfrom pydanticscim.responses import PatchOp\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import Group, User\nfrom authentik.events.monitored_tasks import MonitoredTask, TaskResult, TaskResultStatus\nfrom authentik.lib.utils.reflection import path_to_class\nfrom authentik.providers.scim.clients import PAGE_SIZE\nfrom authentik.providers.scim.clients.base import SCIMClient\nfrom authentik.providers.scim.clients.exceptions import SCIMRequestException, StopSync\nfrom authentik.providers.scim.clients.group import SCIMGroupClient\nfrom authentik.providers.scim.clients.user import SCIMUserClient\nfrom authentik.providers.scim.models import SCIMProvider\nfrom authentik.root.celery import CELERY_APP\n\nLOGGER = get_logger(__name__)\n\n\ndef client_for_model(provider: SCIMProvider, model: Model) -> SCIMClient:\n \"\"\"Get SCIM client for model\"\"\"\n if isinstance(model, User):\n return SCIMUserClient(provider)\n if isinstance(model, Group):\n return SCIMGroupClient(provider)\n raise ValueError(f\"Invalid model {model}\")\n\n\n@CELERY_APP.task()\ndef scim_sync_all():\n \"\"\"Run sync for all providers\"\"\"\n for provider in SCIMProvider.objects.all(backchannel_application__isnull=False):\n scim_sync.delay(provider.pk)\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\ndef scim_sync(self: MonitoredTask, provider_pk: int) -> None:\n \"\"\"Run SCIM full sync for provider\"\"\"\n provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n if not provider:\n return\n self.set_uid(slugify(provider.name))\n result = TaskResult(TaskResultStatus.SUCCESSFUL, [])\n result.messages.append(_(\"Starting full SCIM sync\"))\n LOGGER.debug(\"Starting SCIM sync\")\n users_paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)\n groups_paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)\n with allow_join_result():\n try:\n for page in users_paginator.page_range:\n result.messages.append(_(\"Syncing page %(page)d of users\" % {\"page\": page}))\n for msg in scim_sync_users.delay(page, provider_pk).get():\n result.messages.append(msg)\n for page in groups_paginator.page_range:\n result.messages.append(_(\"Syncing page %(page)d of groups\" % {\"page\": page}))\n for msg in scim_sync_group.delay(page, provider_pk).get():\n result.messages.append(msg)\n except StopSync as exc:\n self.set_status(TaskResult(TaskResultStatus.ERROR).with_error(exc))\n return\n self.set_status(result)\n\n\n@CELERY_APP.task()\ndef scim_sync_users(page: int, provider_pk: int):\n \"\"\"Sync single or multiple users to SCIM\"\"\"\n messages = []\n provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n if not provider:\n return messages\n try:\n client = SCIMUserClient(provider)\n except SCIMRequestException:\n return messages\n paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)\n LOGGER.debug(\"starting user sync for page\", page=page)\n for user in paginator.page(page).object_list:\n try:\n client.write(user)\n except SCIMRequestException as exc:\n LOGGER.warning(\"failed to sync user\", exc=exc, user=user)\n messages.append(\n 
_(\n \"Failed to sync user due to remote error %(name)s: %(error)s\"\n % {\n \"name\": user.username,\n \"error\": str(exc),\n }\n )\n )\n except StopSync as exc:\n LOGGER.warning(\"Stopping sync\", exc=exc)\n messages.append(\n _(\n \"Stopping sync due to error: %(error)s\"\n % {\n \"error\": str(exc),\n }\n )\n )\n break\n return messages\n\n\n@CELERY_APP.task()\ndef scim_sync_group(page: int, provider_pk: int):\n \"\"\"Sync single or multiple groups to SCIM\"\"\"\n messages = []\n provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n if not provider:\n return messages\n try:\n client = SCIMGroupClient(provider)\n except SCIMRequestException:\n return messages\n paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)\n LOGGER.debug(\"starting group sync for page\", page=page)\n for group in paginator.page(page).object_list:\n try:\n client.write(group)\n except SCIMRequestException as exc:\n LOGGER.warning(\"failed to sync group\", exc=exc, group=group)\n messages.append(\n _(\n \"Failed to sync group due to remote error %(name)s: %(error)s\"\n % {\n \"name\": group.name,\n \"error\": str(exc),\n }\n )\n )\n except StopSync as exc:\n LOGGER.warning(\"Stopping sync\", exc=exc)\n messages.append(\n _(\n \"Stopping sync due to error: %(error)s\"\n % {\n \"error\": str(exc),\n }\n )\n )\n break\n return messages\n\n\n@CELERY_APP.task()\ndef scim_signal_direct(model: str, pk: Any, raw_op: str):\n \"\"\"Handler for post_save and pre_delete signal\"\"\"\n model_class: type[Model] = path_to_class(model)\n instance = model_class.objects.filter(pk=pk).first()\n if not instance:\n return\n operation = PatchOp(raw_op)\n for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n client = client_for_model(provider, instance)\n # Check if the object is allowed within the provider's restrictions\n queryset: Optional[QuerySet] = None\n if isinstance(instance, User):\n queryset = provider.get_user_qs()\n if isinstance(instance, Group):\n queryset = provider.get_group_qs()\n if not queryset:\n continue\n\n # The queryset we get from the provider must include the instance we've got given\n # otherwise ignore this provider\n if not queryset.filter(pk=instance.pk).exists():\n continue\n\n try:\n if operation == PatchOp.add:\n client.write(instance)\n if operation == PatchOp.remove:\n client.delete(instance)\n except (StopSync, SCIMRequestException) as exc:\n LOGGER.warning(exc)\n\n\n@CELERY_APP.task()\ndef scim_signal_m2m(group_pk: str, action: str, pk_set: list[int]):\n \"\"\"Update m2m (group membership)\"\"\"\n group = Group.objects.filter(pk=group_pk).first()\n if not group:\n return\n for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n # Check if the object is allowed within the provider's restrictions\n queryset: QuerySet = provider.get_group_qs()\n # The queryset we get from the provider must include the instance we've got given\n # otherwise ignore this provider\n if not queryset.filter(pk=group_pk).exists():\n continue\n\n client = SCIMGroupClient(provider)\n try:\n operation = None\n if action == \"post_add\":\n operation = PatchOp.add\n if action == \"post_remove\":\n operation = PatchOp.remove\n client.update_group(group, operation, pk_set)\n except (StopSync, SCIMRequestException) as exc:\n LOGGER.warning(exc)\n", "path": "authentik/providers/scim/tasks.py"}]}
| 3,195 | 120 |
gh_patches_debug_4527
|
rasdani/github-patches
|
git_diff
|
ethereum__web3.py-321
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Changes to web3.eth.estimateGas
Check whether 'from' parameter is already in transaction and use it. Otherwise use defaultAccount.
Basically to implement the same behavior as for 'sendTransaction'.
</issue>
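
For reference while reading the code below: `sendTransaction` already shows the intended pattern (fall back to `defaultAccount` only when the caller did not pass `from`). One possible shape for `estimateGas`, sketched with the same helpers (`assoc`, `is_address`) that the module imports; this is an illustration, not necessarily the patch the maintainers landed:

```python
def estimateGas(self, transaction):
    # keep an explicit 'from' untouched; otherwise fall back to eth.defaultAccount,
    # mirroring sendTransaction
    if 'from' not in transaction and is_address(self.defaultAccount):
        transaction = assoc(transaction, 'from', self.defaultAccount)
    return self.web3.manager.request_blocking(
        "eth_estimateGas",
        [transaction],
    )
```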
<code>
[start of web3/eth.py]
1 from __future__ import unicode_literals
2
3 from cytoolz import compose
4 from cytoolz.dicttoolz import (
5 assoc,
6 )
7
8 from eth_utils import (
9 is_address,
10 is_string,
11 keccak,
12 )
13
14 from web3.iban import Iban
15
16 from web3.contract import (
17 Contract,
18 )
19 from web3.module import (
20 Module,
21 )
22
23 from web3.utils.blocks import (
24 select_method_for_block_identifier,
25 )
26 from web3.utils.signing import (
27 signature_wrapper,
28 )
29 from web3.utils.empty import (
30 empty,
31 )
32 from web3.utils.encoding import (
33 to_bytes,
34 to_hex,
35 )
36 from web3.utils.filters import (
37 BlockFilter,
38 TransactionFilter,
39 LogFilter,
40 )
41 from web3.utils.transactions import (
42 get_buffered_gas_estimate,
43 )
44 from web3.utils.validation import (
45 validate_address,
46 validate_address_checksum,
47 )
48
49
50 class Eth(Module):
51 defaultAccount = empty
52 defaultBlock = "latest"
53 defaultContractFactory = Contract
54 iban = Iban
55
56 def namereg(self):
57 raise NotImplementedError()
58
59 def icapNamereg(self):
60 raise NotImplementedError()
61
62 @property
63 def protocolVersion(self):
64 return self.web3.manager.request_blocking("eth_protocolVersion", [])
65
66 @property
67 def syncing(self):
68 return self.web3.manager.request_blocking("eth_syncing", [])
69
70 @property
71 def coinbase(self):
72 return self.web3.manager.request_blocking("eth_coinbase", [])
73
74 @property
75 def mining(self):
76 return self.web3.manager.request_blocking("eth_mining", [])
77
78 @property
79 def hashrate(self):
80 return self.web3.manager.request_blocking("eth_hashrate", [])
81
82 @property
83 def gasPrice(self):
84 return self.web3.manager.request_blocking("eth_gasPrice", [])
85
86 @property
87 def accounts(self):
88 return self.web3.manager.request_blocking("eth_accounts", [])
89
90 @property
91 def blockNumber(self):
92 return self.web3.manager.request_blocking("eth_blockNumber", [])
93
94 def getBalance(self, account, block_identifier=None):
95 if block_identifier is None:
96 block_identifier = self.defaultBlock
97 return self.web3.manager.request_blocking(
98 "eth_getBalance",
99 [account, block_identifier],
100 )
101
102 def getStorageAt(self, account, position, block_identifier=None):
103 if block_identifier is None:
104 block_identifier = self.defaultBlock
105 return self.web3.manager.request_blocking(
106 "eth_getStorageAt",
107 [account, position, block_identifier]
108 )
109
110 def getCode(self, account, block_identifier=None):
111 if block_identifier is None:
112 block_identifier = self.defaultBlock
113 return self.web3.manager.request_blocking(
114 "eth_getCode",
115 [account, block_identifier],
116 )
117
118 def getBlock(self, block_identifier, full_transactions=False):
119 """
120 `eth_getBlockByHash`
121 `eth_getBlockByNumber`
122 """
123 method = select_method_for_block_identifier(
124 block_identifier,
125 if_predefined='eth_getBlockByNumber',
126 if_hash='eth_getBlockByHash',
127 if_number='eth_getBlockByNumber',
128 )
129
130 return self.web3.manager.request_blocking(
131 method,
132 [block_identifier, full_transactions],
133 )
134
135 def getBlockTransactionCount(self, block_identifier):
136 """
137 `eth_getBlockTransactionCountByHash`
138 `eth_getBlockTransactionCountByNumber`
139 """
140 method = select_method_for_block_identifier(
141 block_identifier,
142 if_predefined='eth_getBlockTransactionCountByNumber',
143 if_hash='eth_getBlockTransactionCountByHash',
144 if_number='eth_getBlockTransactionCountByNumber',
145 )
146 return self.web3.manager.request_blocking(
147 method,
148 [block_identifier],
149 )
150
151 def getUncleCount(self, block_identifier):
152 """
153 `eth_getUncleCountByBlockHash`
154 `eth_getUncleCountByBlockNumber`
155 """
156 method = select_method_for_block_identifier(
157 block_identifier,
158 if_predefined='eth_getUncleCountByBlockNumber',
159 if_hash='eth_getUncleCountByBlockHash',
160 if_number='eth_getUncleCountByBlockNumber',
161 )
162 return self.web3.manager.request_blocking(
163 method,
164 [block_identifier],
165 )
166
167 def getTransaction(self, transaction_hash):
168 return self.web3.manager.request_blocking(
169 "eth_getTransactionByHash",
170 [transaction_hash],
171 )
172
173 def getTransactionFromBlock(self, block_identifier, transaction_index):
174 """
175 `eth_getTransactionByBlockHashAndIndex`
176 `eth_getTransactionByBlockNumberAndIndex`
177 """
178 method = select_method_for_block_identifier(
179 block_identifier,
180 if_predefined='eth_getTransactionByBlockNumberAndIndex',
181 if_hash='eth_getTransactionByBlockHashAndIndex',
182 if_number='eth_getTransactionByBlockNumberAndIndex',
183 )
184 return self.web3.manager.request_blocking(
185 method,
186 [block_identifier, transaction_index],
187 )
188
189 def getTransactionReceipt(self, transaction_hash):
190 return self.web3.manager.request_blocking(
191 "eth_getTransactionReceipt",
192 [transaction_hash],
193 )
194
195 def getTransactionCount(self, account, block_identifier=None):
196 if block_identifier is None:
197 block_identifier = self.defaultBlock
198 return self.web3.manager.request_blocking(
199 "eth_getTransactionCount",
200 [
201 account,
202 block_identifier,
203 ],
204 )
205
206 def sendTransaction(self, transaction):
207 # TODO: move to middleware
208 if 'from' not in transaction and is_address(self.defaultAccount):
209 transaction = assoc(transaction, 'from', self.defaultAccount)
210
211 # TODO: move gas estimation in middleware
212 if 'gas' not in transaction:
213 transaction = assoc(
214 transaction,
215 'gas',
216 get_buffered_gas_estimate(self.web3, transaction),
217 )
218
219 return self.web3.manager.request_blocking(
220 "eth_sendTransaction",
221 [transaction],
222 )
223
224 def sendRawTransaction(self, raw_transaction):
225 return self.web3.manager.request_blocking(
226 "eth_sendRawTransaction",
227 [raw_transaction],
228 )
229
230 def sign(self, account, data=None, hexstr=None, text=None):
231 message_hex = to_hex(data, hexstr=hexstr, text=text)
232 return self.web3.manager.request_blocking(
233 "eth_sign", [account, message_hex],
234 )
235
236 @staticmethod
237 def _recoveryMessageHash(data=None, hexstr=None, text=None):
238 message_bytes = to_bytes(data, hexstr=hexstr, text=text)
239 recovery_hasher = compose(to_hex, keccak, signature_wrapper)
240 return recovery_hasher(message_bytes)
241
242 def call(self, transaction, block_identifier=None):
243 # TODO: move to middleware
244 if 'from' not in transaction and is_address(self.defaultAccount):
245 transaction = assoc(transaction, 'from', self.defaultAccount)
246
247 # TODO: move to middleware
248 if block_identifier is None:
249 block_identifier = self.defaultBlock
250
251 return self.web3.manager.request_blocking(
252 "eth_call",
253 [transaction, block_identifier],
254 )
255
256 def estimateGas(self, transaction):
257 # TODO: move to middleware
258 if is_address(self.defaultAccount):
259 transaction = assoc(transaction, 'from', self.defaultAccount)
260
261 return self.web3.manager.request_blocking(
262 "eth_estimateGas",
263 [transaction],
264 )
265
266 def filter(self, filter_params):
267 if is_string(filter_params):
268 if filter_params == "latest":
269 filter_id = self.web3.manager.request_blocking(
270 "eth_newBlockFilter", [],
271 )
272 return BlockFilter(self.web3, filter_id)
273 elif filter_params == "pending":
274 filter_id = self.web3.manager.request_blocking(
275 "eth_newPendingTransactionFilter", [],
276 )
277 return TransactionFilter(self.web3, filter_id)
278 else:
279 raise ValueError(
280 "The filter API only accepts the values of `pending` or "
281 "`latest` for string based filters"
282 )
283 elif isinstance(filter_params, dict):
284 filter_id = self.web3.manager.request_blocking(
285 "eth_newFilter",
286 [filter_params],
287 )
288 return LogFilter(self.web3, filter_id)
289 else:
290 raise ValueError("Must provide either a string or a valid filter object")
291
292 def getFilterChanges(self, filter_id):
293 return self.web3.manager.request_blocking(
294 "eth_getFilterChanges", [filter_id],
295 )
296
297 def getFilterLogs(self, filter_id):
298 return self.web3.manager.request_blocking(
299 "eth_getFilterLogs", [filter_id],
300 )
301
302 def getLogs(self, filter_params):
303 raise NotImplementedError("Not yet implemented")
304
305 def uninstallFilter(self, filter_id):
306 return self.web3.manager.request_blocking(
307 "eth_uninstallFilter", [filter_id],
308 )
309
310 def contract(self,
311 *args,
312 **kwargs):
313 ContractFactoryClass = kwargs.pop('ContractFactoryClass', self.defaultContractFactory)
314 contract_name = kwargs.pop('contract_name', None)
315
316 has_address = any((
317 'address' in kwargs,
318 len(args) >= 1 and is_address(args[0]),
319 len(args) >= 2 and is_address(args[1]),
320 ))
321
322 for potential_address in args:
323 validate_address_checksum(potential_address)
324
325 if has_address:
326 if 'address' in kwargs:
327 address = kwargs.pop('address')
328 elif is_address(args[0]):
329 address = args[0]
330 elif is_address(args[1]):
331 address = args[1]
332 kwargs['abi'] = args[0]
333 validate_address(address)
334
335 return ContractFactoryClass.factory(self.web3, contract_name, **kwargs)(address)
336 else:
337 try:
338 kwargs['abi'] = args[0]
339 except IndexError:
340 pass
341 return ContractFactoryClass.factory(self.web3, contract_name, **kwargs)
342
343 def setContractFactory(self, contractFactory):
344 self.defaultContractFactory = contractFactory
345
346 def getCompilers(self):
347 return self.web3.manager.request_blocking("eth_getCompilers", [])
348
349 def getWork(self):
350 return self.web3.manager.request_blocking("eth_getWork", [])
351
[end of web3/eth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/web3/eth.py b/web3/eth.py
--- a/web3/eth.py
+++ b/web3/eth.py
@@ -255,7 +255,7 @@
def estimateGas(self, transaction):
# TODO: move to middleware
- if is_address(self.defaultAccount):
+ if 'from' not in transaction and is_address(self.defaultAccount):
transaction = assoc(transaction, 'from', self.defaultAccount)
return self.web3.manager.request_blocking(
|
{"golden_diff": "diff --git a/web3/eth.py b/web3/eth.py\n--- a/web3/eth.py\n+++ b/web3/eth.py\n@@ -255,7 +255,7 @@\n \n def estimateGas(self, transaction):\n # TODO: move to middleware\n- if is_address(self.defaultAccount):\n+ if 'from' not in transaction and is_address(self.defaultAccount):\n transaction = assoc(transaction, 'from', self.defaultAccount)\n \n return self.web3.manager.request_blocking(\n", "issue": "Changes to web3.eth.estimateGas\nCheck whether 'from' parameter is already in transaction and use it. Otherwise use defaultAccount.\r\nBasically to implement the same behavior as for 'sendTransaction'.\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom cytoolz import compose\nfrom cytoolz.dicttoolz import (\n assoc,\n)\n\nfrom eth_utils import (\n is_address,\n is_string,\n keccak,\n)\n\nfrom web3.iban import Iban\n\nfrom web3.contract import (\n Contract,\n)\nfrom web3.module import (\n Module,\n)\n\nfrom web3.utils.blocks import (\n select_method_for_block_identifier,\n)\nfrom web3.utils.signing import (\n signature_wrapper,\n)\nfrom web3.utils.empty import (\n empty,\n)\nfrom web3.utils.encoding import (\n to_bytes,\n to_hex,\n)\nfrom web3.utils.filters import (\n BlockFilter,\n TransactionFilter,\n LogFilter,\n)\nfrom web3.utils.transactions import (\n get_buffered_gas_estimate,\n)\nfrom web3.utils.validation import (\n validate_address,\n validate_address_checksum,\n)\n\n\nclass Eth(Module):\n defaultAccount = empty\n defaultBlock = \"latest\"\n defaultContractFactory = Contract\n iban = Iban\n\n def namereg(self):\n raise NotImplementedError()\n\n def icapNamereg(self):\n raise NotImplementedError()\n\n @property\n def protocolVersion(self):\n return self.web3.manager.request_blocking(\"eth_protocolVersion\", [])\n\n @property\n def syncing(self):\n return self.web3.manager.request_blocking(\"eth_syncing\", [])\n\n @property\n def coinbase(self):\n return self.web3.manager.request_blocking(\"eth_coinbase\", [])\n\n @property\n def mining(self):\n return self.web3.manager.request_blocking(\"eth_mining\", [])\n\n @property\n def hashrate(self):\n return self.web3.manager.request_blocking(\"eth_hashrate\", [])\n\n @property\n def gasPrice(self):\n return self.web3.manager.request_blocking(\"eth_gasPrice\", [])\n\n @property\n def accounts(self):\n return self.web3.manager.request_blocking(\"eth_accounts\", [])\n\n @property\n def blockNumber(self):\n return self.web3.manager.request_blocking(\"eth_blockNumber\", [])\n\n def getBalance(self, account, block_identifier=None):\n if block_identifier is None:\n block_identifier = self.defaultBlock\n return self.web3.manager.request_blocking(\n \"eth_getBalance\",\n [account, block_identifier],\n )\n\n def getStorageAt(self, account, position, block_identifier=None):\n if block_identifier is None:\n block_identifier = self.defaultBlock\n return self.web3.manager.request_blocking(\n \"eth_getStorageAt\",\n [account, position, block_identifier]\n )\n\n def getCode(self, account, block_identifier=None):\n if block_identifier is None:\n block_identifier = self.defaultBlock\n return self.web3.manager.request_blocking(\n \"eth_getCode\",\n [account, block_identifier],\n )\n\n def getBlock(self, block_identifier, full_transactions=False):\n \"\"\"\n `eth_getBlockByHash`\n `eth_getBlockByNumber`\n \"\"\"\n method = select_method_for_block_identifier(\n block_identifier,\n if_predefined='eth_getBlockByNumber',\n if_hash='eth_getBlockByHash',\n if_number='eth_getBlockByNumber',\n )\n\n return 
self.web3.manager.request_blocking(\n method,\n [block_identifier, full_transactions],\n )\n\n def getBlockTransactionCount(self, block_identifier):\n \"\"\"\n `eth_getBlockTransactionCountByHash`\n `eth_getBlockTransactionCountByNumber`\n \"\"\"\n method = select_method_for_block_identifier(\n block_identifier,\n if_predefined='eth_getBlockTransactionCountByNumber',\n if_hash='eth_getBlockTransactionCountByHash',\n if_number='eth_getBlockTransactionCountByNumber',\n )\n return self.web3.manager.request_blocking(\n method,\n [block_identifier],\n )\n\n def getUncleCount(self, block_identifier):\n \"\"\"\n `eth_getUncleCountByBlockHash`\n `eth_getUncleCountByBlockNumber`\n \"\"\"\n method = select_method_for_block_identifier(\n block_identifier,\n if_predefined='eth_getUncleCountByBlockNumber',\n if_hash='eth_getUncleCountByBlockHash',\n if_number='eth_getUncleCountByBlockNumber',\n )\n return self.web3.manager.request_blocking(\n method,\n [block_identifier],\n )\n\n def getTransaction(self, transaction_hash):\n return self.web3.manager.request_blocking(\n \"eth_getTransactionByHash\",\n [transaction_hash],\n )\n\n def getTransactionFromBlock(self, block_identifier, transaction_index):\n \"\"\"\n `eth_getTransactionByBlockHashAndIndex`\n `eth_getTransactionByBlockNumberAndIndex`\n \"\"\"\n method = select_method_for_block_identifier(\n block_identifier,\n if_predefined='eth_getTransactionByBlockNumberAndIndex',\n if_hash='eth_getTransactionByBlockHashAndIndex',\n if_number='eth_getTransactionByBlockNumberAndIndex',\n )\n return self.web3.manager.request_blocking(\n method,\n [block_identifier, transaction_index],\n )\n\n def getTransactionReceipt(self, transaction_hash):\n return self.web3.manager.request_blocking(\n \"eth_getTransactionReceipt\",\n [transaction_hash],\n )\n\n def getTransactionCount(self, account, block_identifier=None):\n if block_identifier is None:\n block_identifier = self.defaultBlock\n return self.web3.manager.request_blocking(\n \"eth_getTransactionCount\",\n [\n account,\n block_identifier,\n ],\n )\n\n def sendTransaction(self, transaction):\n # TODO: move to middleware\n if 'from' not in transaction and is_address(self.defaultAccount):\n transaction = assoc(transaction, 'from', self.defaultAccount)\n\n # TODO: move gas estimation in middleware\n if 'gas' not in transaction:\n transaction = assoc(\n transaction,\n 'gas',\n get_buffered_gas_estimate(self.web3, transaction),\n )\n\n return self.web3.manager.request_blocking(\n \"eth_sendTransaction\",\n [transaction],\n )\n\n def sendRawTransaction(self, raw_transaction):\n return self.web3.manager.request_blocking(\n \"eth_sendRawTransaction\",\n [raw_transaction],\n )\n\n def sign(self, account, data=None, hexstr=None, text=None):\n message_hex = to_hex(data, hexstr=hexstr, text=text)\n return self.web3.manager.request_blocking(\n \"eth_sign\", [account, message_hex],\n )\n\n @staticmethod\n def _recoveryMessageHash(data=None, hexstr=None, text=None):\n message_bytes = to_bytes(data, hexstr=hexstr, text=text)\n recovery_hasher = compose(to_hex, keccak, signature_wrapper)\n return recovery_hasher(message_bytes)\n\n def call(self, transaction, block_identifier=None):\n # TODO: move to middleware\n if 'from' not in transaction and is_address(self.defaultAccount):\n transaction = assoc(transaction, 'from', self.defaultAccount)\n\n # TODO: move to middleware\n if block_identifier is None:\n block_identifier = self.defaultBlock\n\n return self.web3.manager.request_blocking(\n \"eth_call\",\n [transaction, 
block_identifier],\n )\n\n def estimateGas(self, transaction):\n # TODO: move to middleware\n if is_address(self.defaultAccount):\n transaction = assoc(transaction, 'from', self.defaultAccount)\n\n return self.web3.manager.request_blocking(\n \"eth_estimateGas\",\n [transaction],\n )\n\n def filter(self, filter_params):\n if is_string(filter_params):\n if filter_params == \"latest\":\n filter_id = self.web3.manager.request_blocking(\n \"eth_newBlockFilter\", [],\n )\n return BlockFilter(self.web3, filter_id)\n elif filter_params == \"pending\":\n filter_id = self.web3.manager.request_blocking(\n \"eth_newPendingTransactionFilter\", [],\n )\n return TransactionFilter(self.web3, filter_id)\n else:\n raise ValueError(\n \"The filter API only accepts the values of `pending` or \"\n \"`latest` for string based filters\"\n )\n elif isinstance(filter_params, dict):\n filter_id = self.web3.manager.request_blocking(\n \"eth_newFilter\",\n [filter_params],\n )\n return LogFilter(self.web3, filter_id)\n else:\n raise ValueError(\"Must provide either a string or a valid filter object\")\n\n def getFilterChanges(self, filter_id):\n return self.web3.manager.request_blocking(\n \"eth_getFilterChanges\", [filter_id],\n )\n\n def getFilterLogs(self, filter_id):\n return self.web3.manager.request_blocking(\n \"eth_getFilterLogs\", [filter_id],\n )\n\n def getLogs(self, filter_params):\n raise NotImplementedError(\"Not yet implemented\")\n\n def uninstallFilter(self, filter_id):\n return self.web3.manager.request_blocking(\n \"eth_uninstallFilter\", [filter_id],\n )\n\n def contract(self,\n *args,\n **kwargs):\n ContractFactoryClass = kwargs.pop('ContractFactoryClass', self.defaultContractFactory)\n contract_name = kwargs.pop('contract_name', None)\n\n has_address = any((\n 'address' in kwargs,\n len(args) >= 1 and is_address(args[0]),\n len(args) >= 2 and is_address(args[1]),\n ))\n\n for potential_address in args:\n validate_address_checksum(potential_address)\n\n if has_address:\n if 'address' in kwargs:\n address = kwargs.pop('address')\n elif is_address(args[0]):\n address = args[0]\n elif is_address(args[1]):\n address = args[1]\n kwargs['abi'] = args[0]\n validate_address(address)\n\n return ContractFactoryClass.factory(self.web3, contract_name, **kwargs)(address)\n else:\n try:\n kwargs['abi'] = args[0]\n except IndexError:\n pass\n return ContractFactoryClass.factory(self.web3, contract_name, **kwargs)\n\n def setContractFactory(self, contractFactory):\n self.defaultContractFactory = contractFactory\n\n def getCompilers(self):\n return self.web3.manager.request_blocking(\"eth_getCompilers\", [])\n\n def getWork(self):\n return self.web3.manager.request_blocking(\"eth_getWork\", [])\n", "path": "web3/eth.py"}]}
| 3,712 | 111 |
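
The golden diff above brings `estimateGas` in line with `sendTransaction` and `call`: the default account is only injected when the transaction has no `from` key of its own. A minimal sketch of that guard, assuming a plain transaction dict and the same `cytoolz`/`eth_utils` helpers the module already imports:

```python
from cytoolz.dicttoolz import assoc
from eth_utils import is_address


def with_default_from(transaction, default_account):
    # Mirror sendTransaction/call: keep an explicit 'from', otherwise
    # fall back to the default account if one is configured.
    if 'from' not in transaction and is_address(default_account):
        return assoc(transaction, 'from', default_account)
    return transaction


# An explicit sender is preserved; a missing one is filled in.
explicit = with_default_from({'from': '0x' + 'aa' * 20}, '0x' + 'bb' * 20)
implicit = with_default_from({'value': 1}, '0x' + 'bb' * 20)
assert explicit['from'] == '0x' + 'aa' * 20
assert implicit['from'] == '0x' + 'bb' * 20
```
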
gh_patches_debug_24334
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-3688
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[reply refactor] Allow journalists to download replies from journalist interface
After #3673 is implemented, we should allow journalists to download replies from the journalist interface UI. Note that for long-running SecureDrop instances, there will be old replies encrypted only to the source key that should be unavailable for download.
Epic: #3097
</issue>
<code>
[start of securedrop/journalist_app/col.py]
1 # -*- coding: utf-8 -*-
2
3 from flask import (Blueprint, redirect, url_for, render_template, flash,
4 request, abort, send_file, current_app)
5 from flask_babel import gettext
6 from sqlalchemy.orm.exc import NoResultFound
7
8 from db import db
9 from models import Submission
10 from journalist_app.forms import ReplyForm
11 from journalist_app.utils import (make_star_true, make_star_false, get_source,
12 delete_collection, col_download_unread,
13 col_download_all, col_star, col_un_star,
14 col_delete)
15
16
17 def make_blueprint(config):
18 view = Blueprint('col', __name__)
19
20 @view.route('/add_star/<filesystem_id>', methods=('POST',))
21 def add_star(filesystem_id):
22 make_star_true(filesystem_id)
23 db.session.commit()
24 return redirect(url_for('main.index'))
25
26 @view.route("/remove_star/<filesystem_id>", methods=('POST',))
27 def remove_star(filesystem_id):
28 make_star_false(filesystem_id)
29 db.session.commit()
30 return redirect(url_for('main.index'))
31
32 @view.route('/<filesystem_id>')
33 def col(filesystem_id):
34 form = ReplyForm()
35 source = get_source(filesystem_id)
36 source.has_key = current_app.crypto_util.getkey(filesystem_id)
37 return render_template("col.html", filesystem_id=filesystem_id,
38 source=source, form=form)
39
40 @view.route('/delete/<filesystem_id>', methods=('POST',))
41 def delete_single(filesystem_id):
42 """deleting a single collection from its /col page"""
43 source = get_source(filesystem_id)
44 delete_collection(filesystem_id)
45 flash(gettext("{source_name}'s collection deleted")
46 .format(source_name=source.journalist_designation),
47 "notification")
48 return redirect(url_for('main.index'))
49
50 @view.route('/process', methods=('POST',))
51 def process():
52 actions = {'download-unread': col_download_unread,
53 'download-all': col_download_all, 'star': col_star,
54 'un-star': col_un_star, 'delete': col_delete}
55 if 'cols_selected' not in request.form:
56 flash(gettext('No collections selected.'), 'error')
57 return redirect(url_for('main.index'))
58
59 # getlist is cgi.FieldStorage.getlist
60 cols_selected = request.form.getlist('cols_selected')
61 action = request.form['action']
62
63 if action not in actions:
64 return abort(500)
65
66 method = actions[action]
67 return method(cols_selected)
68
69 @view.route('/<filesystem_id>/<fn>')
70 def download_single_submission(filesystem_id, fn):
71 """Sends a client the contents of a single submission."""
72 if '..' in fn or fn.startswith('/'):
73 abort(404)
74
75 try:
76 Submission.query.filter(
77 Submission.filename == fn).one().downloaded = True
78 db.session.commit()
79 except NoResultFound as e:
80 current_app.logger.error(
81 "Could not mark " + fn + " as downloaded: %s" % (e,))
82
83 return send_file(current_app.storage.path(filesystem_id, fn),
84 mimetype="application/pgp-encrypted")
85
86 return view
87
[end of securedrop/journalist_app/col.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/securedrop/journalist_app/col.py b/securedrop/journalist_app/col.py
--- a/securedrop/journalist_app/col.py
+++ b/securedrop/journalist_app/col.py
@@ -67,18 +67,21 @@
return method(cols_selected)
@view.route('/<filesystem_id>/<fn>')
- def download_single_submission(filesystem_id, fn):
- """Sends a client the contents of a single submission."""
+ def download_single_file(filesystem_id, fn):
+ """Sends a client the contents of a single file, either a submission
+ or a journalist reply"""
if '..' in fn or fn.startswith('/'):
abort(404)
- try:
- Submission.query.filter(
- Submission.filename == fn).one().downloaded = True
- db.session.commit()
- except NoResultFound as e:
- current_app.logger.error(
- "Could not mark " + fn + " as downloaded: %s" % (e,))
+ # only mark as read when it's a submission (and not a journalist reply)
+ if not fn.endswith('reply.gpg'):
+ try:
+ Submission.query.filter(
+ Submission.filename == fn).one().downloaded = True
+ db.session.commit()
+ except NoResultFound as e:
+ current_app.logger.error(
+ "Could not mark " + fn + " as downloaded: %s" % (e,))
return send_file(current_app.storage.path(filesystem_id, fn),
mimetype="application/pgp-encrypted")
|
{"golden_diff": "diff --git a/securedrop/journalist_app/col.py b/securedrop/journalist_app/col.py\n--- a/securedrop/journalist_app/col.py\n+++ b/securedrop/journalist_app/col.py\n@@ -67,18 +67,21 @@\n return method(cols_selected)\n \n @view.route('/<filesystem_id>/<fn>')\n- def download_single_submission(filesystem_id, fn):\n- \"\"\"Sends a client the contents of a single submission.\"\"\"\n+ def download_single_file(filesystem_id, fn):\n+ \"\"\"Sends a client the contents of a single file, either a submission\n+ or a journalist reply\"\"\"\n if '..' in fn or fn.startswith('/'):\n abort(404)\n \n- try:\n- Submission.query.filter(\n- Submission.filename == fn).one().downloaded = True\n- db.session.commit()\n- except NoResultFound as e:\n- current_app.logger.error(\n- \"Could not mark \" + fn + \" as downloaded: %s\" % (e,))\n+ # only mark as read when it's a submission (and not a journalist reply)\n+ if not fn.endswith('reply.gpg'):\n+ try:\n+ Submission.query.filter(\n+ Submission.filename == fn).one().downloaded = True\n+ db.session.commit()\n+ except NoResultFound as e:\n+ current_app.logger.error(\n+ \"Could not mark \" + fn + \" as downloaded: %s\" % (e,))\n \n return send_file(current_app.storage.path(filesystem_id, fn),\n mimetype=\"application/pgp-encrypted\")\n", "issue": "[reply refactor] Allow journalists to download replies from journalist interface\nAfter #3673 is implemented, we should allow journalists to download replies from the journalist interface UI. Note that for long-running SecureDrop instances, there will be old replies encrypted only to the source key that should be unavailable for download.\r\n\r\nEpic: #3097\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom flask import (Blueprint, redirect, url_for, render_template, flash,\n request, abort, send_file, current_app)\nfrom flask_babel import gettext\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom db import db\nfrom models import Submission\nfrom journalist_app.forms import ReplyForm\nfrom journalist_app.utils import (make_star_true, make_star_false, get_source,\n delete_collection, col_download_unread,\n col_download_all, col_star, col_un_star,\n col_delete)\n\n\ndef make_blueprint(config):\n view = Blueprint('col', __name__)\n\n @view.route('/add_star/<filesystem_id>', methods=('POST',))\n def add_star(filesystem_id):\n make_star_true(filesystem_id)\n db.session.commit()\n return redirect(url_for('main.index'))\n\n @view.route(\"/remove_star/<filesystem_id>\", methods=('POST',))\n def remove_star(filesystem_id):\n make_star_false(filesystem_id)\n db.session.commit()\n return redirect(url_for('main.index'))\n\n @view.route('/<filesystem_id>')\n def col(filesystem_id):\n form = ReplyForm()\n source = get_source(filesystem_id)\n source.has_key = current_app.crypto_util.getkey(filesystem_id)\n return render_template(\"col.html\", filesystem_id=filesystem_id,\n source=source, form=form)\n\n @view.route('/delete/<filesystem_id>', methods=('POST',))\n def delete_single(filesystem_id):\n \"\"\"deleting a single collection from its /col page\"\"\"\n source = get_source(filesystem_id)\n delete_collection(filesystem_id)\n flash(gettext(\"{source_name}'s collection deleted\")\n .format(source_name=source.journalist_designation),\n \"notification\")\n return redirect(url_for('main.index'))\n\n @view.route('/process', methods=('POST',))\n def process():\n actions = {'download-unread': col_download_unread,\n 'download-all': col_download_all, 'star': col_star,\n 'un-star': col_un_star, 'delete': col_delete}\n 
if 'cols_selected' not in request.form:\n flash(gettext('No collections selected.'), 'error')\n return redirect(url_for('main.index'))\n\n # getlist is cgi.FieldStorage.getlist\n cols_selected = request.form.getlist('cols_selected')\n action = request.form['action']\n\n if action not in actions:\n return abort(500)\n\n method = actions[action]\n return method(cols_selected)\n\n @view.route('/<filesystem_id>/<fn>')\n def download_single_submission(filesystem_id, fn):\n \"\"\"Sends a client the contents of a single submission.\"\"\"\n if '..' in fn or fn.startswith('/'):\n abort(404)\n\n try:\n Submission.query.filter(\n Submission.filename == fn).one().downloaded = True\n db.session.commit()\n except NoResultFound as e:\n current_app.logger.error(\n \"Could not mark \" + fn + \" as downloaded: %s\" % (e,))\n\n return send_file(current_app.storage.path(filesystem_id, fn),\n mimetype=\"application/pgp-encrypted\")\n\n return view\n", "path": "securedrop/journalist_app/col.py"}]}
| 1,461 | 354 |
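
The patch above generalizes the download route to serve journalist replies as well as submissions, and only records the "downloaded" flag for submissions. A stripped-down sketch of that branching, assuming reply filenames keep the `reply.gpg` suffix the diff checks for (the example filenames are illustrative):

```python
def classify_download(fn):
    """Return 'reply' or 'submission' for a stored filename,
    rejecting anything that looks like path traversal."""
    if '..' in fn or fn.startswith('/'):
        raise ValueError('invalid filename')
    return 'reply' if fn.endswith('reply.gpg') else 'submission'


# Only submissions should be marked as downloaded before sending the file.
assert classify_download('1-somber-tongue-reply.gpg') == 'reply'
assert classify_download('1-somber-tongue-msg.gpg') == 'submission'
```
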
gh_patches_debug_42292
|
rasdani/github-patches
|
git_diff
|
azavea__raster-vision-178
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add metrics for counting to eval.py
We should add a new metric to the detection evaluation script so that it computes how close the counts are compared to ground truth.
</issue>
<code>
[start of src/rv/detection/commands/eval_predictions.py]
1 import json
2 from os.path import join
3
4 import numpy as np
5 import rasterio
6 import click
7
8 from object_detection.utils import object_detection_evaluation, label_map_util
9
10 from rv.utils import (
11 download_if_needed, make_empty_dir, get_local_path, upload_if_needed,
12 get_boxes_from_geojson, download_and_build_vrt)
13 from rv.detection.commands.settings import max_num_classes, temp_root_dir
14
15
16 def get_eval_result(ground_truth_path, predictions_path, image_dataset):
17 gt_boxes, gt_classes, _ = \
18 get_boxes_from_geojson(ground_truth_path, image_dataset)
19 # Subtract one because class id's start at 1, but evaluation api assumes
20 # the start at 0. You might think we could just write the label_map.pbtxt
21 # so the class ids start at 0, but that throws an exception.
22 gt_classes -= 1
23
24 pred_boxes, pred_classes, pred_scores = \
25 get_boxes_from_geojson(predictions_path, image_dataset)
26 pred_classes -= 1
27
28 nb_gt_classes = len(set(gt_classes))
29 od_eval = object_detection_evaluation.ObjectDetectionEvaluation(
30 nb_gt_classes, matching_iou_threshold=0.1)
31 image_key = 'image'
32 od_eval.add_single_ground_truth_image_info(
33 image_key, gt_boxes, gt_classes)
34 od_eval.add_single_detected_image_info(
35 image_key, pred_boxes, pred_scores, pred_classes)
36
37 od_eval.evaluate()
38 return od_eval.get_eval_result()
39
40
41 def write_results(output_path, label_map_path, eval_result):
42 label_map = label_map_util.load_labelmap(label_map_path)
43 categories = label_map_util.convert_label_map_to_categories(
44 label_map, max_num_classes=max_num_classes, use_display_name=True)
45 category_index = label_map_util.create_category_index(categories)
46
47 results = []
48 for class_id in range(1, len(category_index) + 1):
49 class_name = category_index[class_id]['name']
50 # Subtract one to account for fact that class id's start at 1.
51 # precisions and recalls are lists with one element for each
52 # predicted box, assuming they are sorted by score. Each element is
53 # the precision or recall assuming that all predicted boxes with that
54 # score or above are used. So, the last element is the value assuming
55 # that all predictions are used.
56
57 precisions = eval_result.precisions[class_id - 1]
58 recalls = eval_result.recalls[class_id - 1]
59 # Get precision and recall assuming all predicted boxes are used.
60 class_results = {
61 'name': class_name,
62 'precision': precisions[-1],
63 'recall': recalls[-1]
64 }
65 results.append(class_results)
66
67 with open(output_path, 'w') as output_file:
68 output_file.write(json.dumps(results, indent=4))
69
70
71 def _eval_predictions(image_uris, label_map_uri, ground_truth_uri,
72 predictions_uri, output_uri):
73 temp_dir = join(temp_root_dir, 'eval_predictions')
74 make_empty_dir(temp_dir)
75
76 image_path = download_and_build_vrt(temp_dir, image_uris)
77 image_dataset = rasterio.open(image_path)
78
79 ground_truth_path = download_if_needed(temp_dir, ground_truth_uri)
80 predictions_path = download_if_needed(temp_dir, predictions_uri)
81 label_map_path = download_if_needed(temp_dir, label_map_uri)
82
83 eval_result = get_eval_result(
84 ground_truth_path, predictions_path, image_dataset)
85
86 output_path = get_local_path(temp_dir, output_uri)
87 write_results(output_path, label_map_path, eval_result)
88 upload_if_needed(output_path, output_uri)
89
90
91 @click.command()
92 @click.argument('image_uris', nargs=-1)
93 @click.argument('label_map_uri')
94 @click.argument('ground_truth_uri')
95 @click.argument('predictions_uri')
96 @click.argument('output_uri')
97 def eval_predictions(image_uris, label_map_uri, ground_truth_uri,
98 predictions_uri, output_uri):
99 """Evaluate predictions against ground truth for a single predictions file.
100
101 Args:
102 ground_truth_uri: GeoJSON file with ground truth bounding boxes
103 predictions_uri: GeoJSON file with predicted bounding boxes
104 output_uri: JSON file with metrics
105 """
106 _eval_predictions(image_uris, label_map_uri, ground_truth_uri,
107 predictions_uri, output_uri)
108
109
110 if __name__ == '__main__':
111 eval_predictions()
112
[end of src/rv/detection/commands/eval_predictions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/rv/detection/commands/eval_predictions.py b/src/rv/detection/commands/eval_predictions.py
--- a/src/rv/detection/commands/eval_predictions.py
+++ b/src/rv/detection/commands/eval_predictions.py
@@ -1,7 +1,6 @@
import json
-from os.path import join
+from os.path import join, dirname
-import numpy as np
import rasterio
import click
@@ -13,7 +12,7 @@
from rv.detection.commands.settings import max_num_classes, temp_root_dir
-def get_eval_result(ground_truth_path, predictions_path, image_dataset):
+def get_od_eval(ground_truth_path, predictions_path, image_dataset):
gt_boxes, gt_classes, _ = \
get_boxes_from_geojson(ground_truth_path, image_dataset)
# Subtract one because class id's start at 1, but evaluation api assumes
@@ -35,10 +34,12 @@
image_key, pred_boxes, pred_scores, pred_classes)
od_eval.evaluate()
- return od_eval.get_eval_result()
+ return od_eval
-def write_results(output_path, label_map_path, eval_result):
+def write_results(output_path, label_map_path, od_eval):
+ make_empty_dir(dirname(output_path), empty_dir=False)
+
label_map = label_map_util.load_labelmap(label_map_path)
categories = label_map_util.convert_label_map_to_categories(
label_map, max_num_classes=max_num_classes, use_display_name=True)
@@ -53,19 +54,30 @@
# the precision or recall assuming that all predicted boxes with that
# score or above are used. So, the last element is the value assuming
# that all predictions are used.
-
+ eval_result = od_eval.get_eval_result()
precisions = eval_result.precisions[class_id - 1]
recalls = eval_result.recalls[class_id - 1]
# Get precision and recall assuming all predicted boxes are used.
+ precision = precisions[-1]
+ recall = recalls[-1]
+ f1 = (2 * precision * recall) / (precision + recall)
+
+ gt_count = od_eval.num_gt_instances_per_class[class_id -1]
+ pred_count = len(recalls)
+ count_error = pred_count - gt_count
+ norm_count_error = count_error / gt_count
+
class_results = {
'name': class_name,
- 'precision': precisions[-1],
- 'recall': recalls[-1]
+ 'precision': precision,
+ 'recall': recall,
+ 'f1': f1,
+ 'norm_count_error': norm_count_error
}
results.append(class_results)
with open(output_path, 'w') as output_file:
- output_file.write(json.dumps(results, indent=4))
+ output_file.write(json.dumps(results, indent=4, sort_keys=True))
def _eval_predictions(image_uris, label_map_uri, ground_truth_uri,
@@ -80,11 +92,11 @@
predictions_path = download_if_needed(temp_dir, predictions_uri)
label_map_path = download_if_needed(temp_dir, label_map_uri)
- eval_result = get_eval_result(
+ od_eval = get_od_eval(
ground_truth_path, predictions_path, image_dataset)
output_path = get_local_path(temp_dir, output_uri)
- write_results(output_path, label_map_path, eval_result)
+ write_results(output_path, label_map_path, od_eval)
upload_if_needed(output_path, output_uri)
|
{"golden_diff": "diff --git a/src/rv/detection/commands/eval_predictions.py b/src/rv/detection/commands/eval_predictions.py\n--- a/src/rv/detection/commands/eval_predictions.py\n+++ b/src/rv/detection/commands/eval_predictions.py\n@@ -1,7 +1,6 @@\n import json\n-from os.path import join\n+from os.path import join, dirname\n \n-import numpy as np\n import rasterio\n import click\n \n@@ -13,7 +12,7 @@\n from rv.detection.commands.settings import max_num_classes, temp_root_dir\n \n \n-def get_eval_result(ground_truth_path, predictions_path, image_dataset):\n+def get_od_eval(ground_truth_path, predictions_path, image_dataset):\n gt_boxes, gt_classes, _ = \\\n get_boxes_from_geojson(ground_truth_path, image_dataset)\n # Subtract one because class id's start at 1, but evaluation api assumes\n@@ -35,10 +34,12 @@\n image_key, pred_boxes, pred_scores, pred_classes)\n \n od_eval.evaluate()\n- return od_eval.get_eval_result()\n+ return od_eval\n \n \n-def write_results(output_path, label_map_path, eval_result):\n+def write_results(output_path, label_map_path, od_eval):\n+ make_empty_dir(dirname(output_path), empty_dir=False)\n+\n label_map = label_map_util.load_labelmap(label_map_path)\n categories = label_map_util.convert_label_map_to_categories(\n label_map, max_num_classes=max_num_classes, use_display_name=True)\n@@ -53,19 +54,30 @@\n # the precision or recall assuming that all predicted boxes with that\n # score or above are used. So, the last element is the value assuming\n # that all predictions are used.\n-\n+ eval_result = od_eval.get_eval_result()\n precisions = eval_result.precisions[class_id - 1]\n recalls = eval_result.recalls[class_id - 1]\n # Get precision and recall assuming all predicted boxes are used.\n+ precision = precisions[-1]\n+ recall = recalls[-1]\n+ f1 = (2 * precision * recall) / (precision + recall)\n+\n+ gt_count = od_eval.num_gt_instances_per_class[class_id -1]\n+ pred_count = len(recalls)\n+ count_error = pred_count - gt_count\n+ norm_count_error = count_error / gt_count\n+\n class_results = {\n 'name': class_name,\n- 'precision': precisions[-1],\n- 'recall': recalls[-1]\n+ 'precision': precision,\n+ 'recall': recall,\n+ 'f1': f1,\n+ 'norm_count_error': norm_count_error\n }\n results.append(class_results)\n \n with open(output_path, 'w') as output_file:\n- output_file.write(json.dumps(results, indent=4))\n+ output_file.write(json.dumps(results, indent=4, sort_keys=True))\n \n \n def _eval_predictions(image_uris, label_map_uri, ground_truth_uri,\n@@ -80,11 +92,11 @@\n predictions_path = download_if_needed(temp_dir, predictions_uri)\n label_map_path = download_if_needed(temp_dir, label_map_uri)\n \n- eval_result = get_eval_result(\n+ od_eval = get_od_eval(\n ground_truth_path, predictions_path, image_dataset)\n \n output_path = get_local_path(temp_dir, output_uri)\n- write_results(output_path, label_map_path, eval_result)\n+ write_results(output_path, label_map_path, od_eval)\n upload_if_needed(output_path, output_uri)\n", "issue": "Add metrics for counting to eval.py\nWe should add a new metric to the detection evaluation script so that it computes how close the counts are compared to ground truth.\n", "before_files": [{"content": "import json\nfrom os.path import join\n\nimport numpy as np\nimport rasterio\nimport click\n\nfrom object_detection.utils import object_detection_evaluation, label_map_util\n\nfrom rv.utils import (\n download_if_needed, make_empty_dir, get_local_path, upload_if_needed,\n get_boxes_from_geojson, download_and_build_vrt)\nfrom 
rv.detection.commands.settings import max_num_classes, temp_root_dir\n\n\ndef get_eval_result(ground_truth_path, predictions_path, image_dataset):\n gt_boxes, gt_classes, _ = \\\n get_boxes_from_geojson(ground_truth_path, image_dataset)\n # Subtract one because class id's start at 1, but evaluation api assumes\n # the start at 0. You might think we could just write the label_map.pbtxt\n # so the class ids start at 0, but that throws an exception.\n gt_classes -= 1\n\n pred_boxes, pred_classes, pred_scores = \\\n get_boxes_from_geojson(predictions_path, image_dataset)\n pred_classes -= 1\n\n nb_gt_classes = len(set(gt_classes))\n od_eval = object_detection_evaluation.ObjectDetectionEvaluation(\n nb_gt_classes, matching_iou_threshold=0.1)\n image_key = 'image'\n od_eval.add_single_ground_truth_image_info(\n image_key, gt_boxes, gt_classes)\n od_eval.add_single_detected_image_info(\n image_key, pred_boxes, pred_scores, pred_classes)\n\n od_eval.evaluate()\n return od_eval.get_eval_result()\n\n\ndef write_results(output_path, label_map_path, eval_result):\n label_map = label_map_util.load_labelmap(label_map_path)\n categories = label_map_util.convert_label_map_to_categories(\n label_map, max_num_classes=max_num_classes, use_display_name=True)\n category_index = label_map_util.create_category_index(categories)\n\n results = []\n for class_id in range(1, len(category_index) + 1):\n class_name = category_index[class_id]['name']\n # Subtract one to account for fact that class id's start at 1.\n # precisions and recalls are lists with one element for each\n # predicted box, assuming they are sorted by score. Each element is\n # the precision or recall assuming that all predicted boxes with that\n # score or above are used. So, the last element is the value assuming\n # that all predictions are used.\n\n precisions = eval_result.precisions[class_id - 1]\n recalls = eval_result.recalls[class_id - 1]\n # Get precision and recall assuming all predicted boxes are used.\n class_results = {\n 'name': class_name,\n 'precision': precisions[-1],\n 'recall': recalls[-1]\n }\n results.append(class_results)\n\n with open(output_path, 'w') as output_file:\n output_file.write(json.dumps(results, indent=4))\n\n\ndef _eval_predictions(image_uris, label_map_uri, ground_truth_uri,\n predictions_uri, output_uri):\n temp_dir = join(temp_root_dir, 'eval_predictions')\n make_empty_dir(temp_dir)\n\n image_path = download_and_build_vrt(temp_dir, image_uris)\n image_dataset = rasterio.open(image_path)\n\n ground_truth_path = download_if_needed(temp_dir, ground_truth_uri)\n predictions_path = download_if_needed(temp_dir, predictions_uri)\n label_map_path = download_if_needed(temp_dir, label_map_uri)\n\n eval_result = get_eval_result(\n ground_truth_path, predictions_path, image_dataset)\n\n output_path = get_local_path(temp_dir, output_uri)\n write_results(output_path, label_map_path, eval_result)\n upload_if_needed(output_path, output_uri)\n\n\[email protected]()\[email protected]('image_uris', nargs=-1)\[email protected]('label_map_uri')\[email protected]('ground_truth_uri')\[email protected]('predictions_uri')\[email protected]('output_uri')\ndef eval_predictions(image_uris, label_map_uri, ground_truth_uri,\n predictions_uri, output_uri):\n \"\"\"Evaluate predictions against ground truth for a single predictions file.\n\n Args:\n ground_truth_uri: GeoJSON file with ground truth bounding boxes\n predictions_uri: GeoJSON file with predicted bounding boxes\n output_uri: JSON file with metrics\n \"\"\"\n 
_eval_predictions(image_uris, label_map_uri, ground_truth_uri,\n predictions_uri, output_uri)\n\n\nif __name__ == '__main__':\n eval_predictions()\n", "path": "src/rv/detection/commands/eval_predictions.py"}]}
| 1,757 | 780 |
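
The metrics introduced by the diff above are an F1 score and a normalized count error per class: predicted-box count minus ground-truth count, divided by the ground-truth count, with precision and recall taken at the all-predictions operating point. A small numeric sketch of those two formulas:

```python
def f1_score(precision, recall):
    return (2 * precision * recall) / (precision + recall)


def norm_count_error(pred_count, gt_count):
    # Positive when the detector over-counts, negative when it under-counts.
    return (pred_count - gt_count) / gt_count


# Example: 110 predicted boxes against 100 ground-truth boxes.
print(norm_count_error(110, 100))    # 0.1
print(round(f1_score(0.8, 0.9), 3))  # 0.847
```
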
gh_patches_debug_7082
|
rasdani/github-patches
|
git_diff
|
paperless-ngx__paperless-ngx-2494
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Rollback doesn't work as described in the documentation
### Description
Rollback doesn't work properly as described in the documentation.
### Steps to reproduce
1. Go to 'https://docs.paperless-ngx.com/setup/#moving-back-to-paperless'
2. Open a terminal from the container and execute the command like mentioned at 'Or without docker:'
3. Error
### Webserver logs
```bash
root@ad747fe4e9e8:/usr/src/paperless/src# python3 manage.py migrate documents 0023
Operations to perform:
Target specific migration: 0023_document_current_filename, from documents
Running migrations:
Rendering model states... DONE
Unapplying paperless_mail.0016_mailrule_consumption_scope... OK
Unapplying paperless_mail.0015_alter_mailrule_action... OK
Unapplying paperless_mail.0014_alter_mailrule_action... OK
Unapplying paperless_mail.0013_merge_20220412_1051... OK
Unapplying paperless_mail.0012_alter_mailrule_assign_tags... OK
Unapplying paperless_mail.0011_remove_mailrule_assign_tag... OK
Unapplying paperless_mail.0010_auto_20220311_1602... OK
Unapplying paperless_mail.0009_mailrule_assign_tags... OK
Unapplying paperless_mail.0009_alter_mailrule_action_alter_mailrule_folder... OK
Unapplying paperless_mail.0008_auto_20210516_0940... OK
Unapplying paperless_mail.0007_auto_20210106_0138... OK
Unapplying paperless_mail.0006_auto_20210101_2340... OK
Unapplying paperless_mail.0005_help_texts... OK
Unapplying paperless_mail.0004_mailrule_order... OK
Unapplying paperless_mail.0003_auto_20201118_1940... OK
Unapplying paperless_mail.0002_auto_20201117_1334... OK
Unapplying paperless_mail.0001_initial... OK
Unapplying documents.1028_remove_paperlesstask_task_args_and_more... OK
Unapplying documents.1027_remove_paperlesstask_attempted_task_and_more... OK
Unapplying documents.1026_transition_to_celery...Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 357, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.IntegrityError: NOT NULL constraint failed: new__documents_paperlesstask.name
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 96, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 349, in handle
post_migrate_state = executor.migrate(
File "/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py", line 141, in migrate
state = self._migrate_all_backwards(plan, full_plan, fake=fake)
File "/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py", line 219, in _migrate_all_backwards
self.unapply_migration(states[migration], migration, fake=fake)
File "/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py", line 279, in unapply_migration
state = migration.unapply(state, schema_editor)
File "/usr/local/lib/python3.9/site-packages/django/db/migrations/migration.py", line 191, in unapply
operation.database_backwards(
File "/usr/local/lib/python3.9/site-packages/django/db/migrations/operations/fields.py", line 178, in database_backwards
schema_editor.add_field(from_model, to_model._meta.get_field(self.name))
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/schema.py", line 397, in add_field
self._remake_table(model, create_field=field)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/schema.py", line 333, in _remake_table
self.execute(
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/schema.py", line 199, in execute
cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 357, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: NOT NULL constraint failed: new__documents_paperlesstask.name
```
### Browser logs
_No response_
### Paperless-ngx version
1.11.3
### Host OS
Unraid 6.11.5
### Installation method
Docker - official image
### Browser
Firefox
### Configuration changes
_No response_
### Other
_No response_
</issue>
<code>
[start of src/documents/migrations/1022_paperlesstask.py]
1 # Generated by Django 4.0.4 on 2022-05-23 07:14
2
3 from django.db import migrations, models
4 import django.db.models.deletion
5
6
7 class Migration(migrations.Migration):
8
9 dependencies = [
10 ("documents", "1021_webp_thumbnail_conversion"),
11 ]
12
13 operations = [
14 migrations.CreateModel(
15 name="PaperlessTask",
16 fields=[
17 (
18 "id",
19 models.AutoField(
20 auto_created=True,
21 primary_key=True,
22 serialize=False,
23 verbose_name="ID",
24 ),
25 ),
26 ("task_id", models.CharField(max_length=128)),
27 ("name", models.CharField(max_length=256)),
28 (
29 "created",
30 models.DateTimeField(auto_now=True, verbose_name="created"),
31 ),
32 (
33 "started",
34 models.DateTimeField(null=True, verbose_name="started"),
35 ),
36 ("acknowledged", models.BooleanField(default=False)),
37 (
38 "attempted_task",
39 models.OneToOneField(
40 blank=True,
41 null=True,
42 on_delete=django.db.models.deletion.CASCADE,
43 related_name="attempted_task",
44 # This is a dummy field, 1026 will fix up the column
45 # This manual change is required, as django doesn't django doesn't really support
46 # removing an app which has migration deps like this
47 to="documents.document",
48 ),
49 ),
50 ],
51 )
52 ]
53
[end of src/documents/migrations/1022_paperlesstask.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/documents/migrations/1022_paperlesstask.py b/src/documents/migrations/1022_paperlesstask.py
--- a/src/documents/migrations/1022_paperlesstask.py
+++ b/src/documents/migrations/1022_paperlesstask.py
@@ -24,7 +24,7 @@
),
),
("task_id", models.CharField(max_length=128)),
- ("name", models.CharField(max_length=256)),
+ ("name", models.CharField(max_length=256, null=True)),
(
"created",
models.DateTimeField(auto_now=True, verbose_name="created"),
|
{"golden_diff": "diff --git a/src/documents/migrations/1022_paperlesstask.py b/src/documents/migrations/1022_paperlesstask.py\n--- a/src/documents/migrations/1022_paperlesstask.py\n+++ b/src/documents/migrations/1022_paperlesstask.py\n@@ -24,7 +24,7 @@\n ),\n ),\n (\"task_id\", models.CharField(max_length=128)),\n- (\"name\", models.CharField(max_length=256)),\n+ (\"name\", models.CharField(max_length=256, null=True)),\n (\n \"created\",\n models.DateTimeField(auto_now=True, verbose_name=\"created\"),\n", "issue": "[BUG] Rollback doesn't work as described in the documentation\n### Description\n\nRollback doesn't work properly as described in the documentation.\n\n### Steps to reproduce\n\n1. Go to 'https://docs.paperless-ngx.com/setup/#moving-back-to-paperless'\r\n2. Open a terminal from the container and execute the command like mentioned at 'Or without docker:'\r\n3. Error\n\n### Webserver logs\n\n```bash\nroot@ad747fe4e9e8:/usr/src/paperless/src# python3 manage.py migrate documents 0023\r\nOperations to perform:\r\n Target specific migration: 0023_document_current_filename, from documents\r\nRunning migrations:\r\n Rendering model states... DONE\r\n Unapplying paperless_mail.0016_mailrule_consumption_scope... OK\r\n Unapplying paperless_mail.0015_alter_mailrule_action... OK\r\n Unapplying paperless_mail.0014_alter_mailrule_action... OK\r\n Unapplying paperless_mail.0013_merge_20220412_1051... OK\r\n Unapplying paperless_mail.0012_alter_mailrule_assign_tags... OK\r\n Unapplying paperless_mail.0011_remove_mailrule_assign_tag... OK\r\n Unapplying paperless_mail.0010_auto_20220311_1602... OK\r\n Unapplying paperless_mail.0009_mailrule_assign_tags... OK\r\n Unapplying paperless_mail.0009_alter_mailrule_action_alter_mailrule_folder... OK\r\n Unapplying paperless_mail.0008_auto_20210516_0940... OK\r\n Unapplying paperless_mail.0007_auto_20210106_0138... OK\r\n Unapplying paperless_mail.0006_auto_20210101_2340... OK\r\n Unapplying paperless_mail.0005_help_texts... OK\r\n Unapplying paperless_mail.0004_mailrule_order... OK\r\n Unapplying paperless_mail.0003_auto_20201118_1940... OK\r\n Unapplying paperless_mail.0002_auto_20201117_1334... OK\r\n Unapplying paperless_mail.0001_initial... OK\r\n Unapplying documents.1028_remove_paperlesstask_task_args_and_more... OK\r\n Unapplying documents.1027_remove_paperlesstask_attempted_task_and_more... 
OK\r\n Unapplying documents.1026_transition_to_celery...Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py\", line 357, in execute\r\n return Database.Cursor.execute(self, query, params)\r\nsqlite3.IntegrityError: NOT NULL constraint failed: new__documents_paperlesstask.name\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/src/paperless/src/manage.py\", line 11, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 446, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py\", line 440, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 402, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 448, in execute\r\n output = self.handle(*args, **options)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/base.py\", line 96, in wrapped\r\n res = handle_func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/django/core/management/commands/migrate.py\", line 349, in handle\r\n post_migrate_state = executor.migrate(\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 141, in migrate\r\n state = self._migrate_all_backwards(plan, full_plan, fake=fake)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 219, in _migrate_all_backwards\r\n self.unapply_migration(states[migration], migration, fake=fake)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 279, in unapply_migration\r\n state = migration.unapply(state, schema_editor)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/migrations/migration.py\", line 191, in unapply\r\n operation.database_backwards(\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/migrations/operations/fields.py\", line 178, in database_backwards\r\n schema_editor.add_field(from_model, to_model._meta.get_field(self.name))\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/schema.py\", line 397, in add_field\r\n self._remake_table(model, create_field=field)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/schema.py\", line 333, in _remake_table\r\n self.execute(\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/base/schema.py\", line 199, in execute\r\n cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py\", line 67, in execute\r\n return self._execute_with_wrappers(\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py\", line 80, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/utils.py\", line 91, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from 
exc_value\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py\", line 357, in execute\r\n return Database.Cursor.execute(self, query, params)\r\ndjango.db.utils.IntegrityError: NOT NULL constraint failed: new__documents_paperlesstask.name\n```\n\n\n### Browser logs\n\n_No response_\n\n### Paperless-ngx version\n\n1.11.3\n\n### Host OS\n\nUnraid 6.11.5\n\n### Installation method\n\nDocker - official image\n\n### Browser\n\nFirefox\n\n### Configuration changes\n\n_No response_\n\n### Other\n\n_No response_\n", "before_files": [{"content": "# Generated by Django 4.0.4 on 2022-05-23 07:14\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"documents\", \"1021_webp_thumbnail_conversion\"),\n ]\n\n operations = [\n migrations.CreateModel(\n name=\"PaperlessTask\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"task_id\", models.CharField(max_length=128)),\n (\"name\", models.CharField(max_length=256)),\n (\n \"created\",\n models.DateTimeField(auto_now=True, verbose_name=\"created\"),\n ),\n (\n \"started\",\n models.DateTimeField(null=True, verbose_name=\"started\"),\n ),\n (\"acknowledged\", models.BooleanField(default=False)),\n (\n \"attempted_task\",\n models.OneToOneField(\n blank=True,\n null=True,\n on_delete=django.db.models.deletion.CASCADE,\n related_name=\"attempted_task\",\n # This is a dummy field, 1026 will fix up the column\n # This manual change is required, as django doesn't django doesn't really support\n # removing an app which has migration deps like this\n to=\"documents.document\",\n ),\n ),\n ],\n )\n ]\n", "path": "src/documents/migrations/1022_paperlesstask.py"}]}
| 2,660 | 147 |
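
The golden diff above fixes the rollback by making the historical `name` column nullable. As a hedged aside, the snippet below is a minimal standard-library `sqlite3` sketch of that failure mode (table and column names are invented; this is not Paperless's migration code): copying existing rows into a remade table fails when the new column is NOT NULL with no default, and succeeds once the column is nullable. Django's SQLite backend remakes the whole table on `add_field`, which is why the unapply step in the traceback hits the same constraint.

```python
# Minimal sqlite3 sketch (standard library only) of the rollback failure above.
# Table and column names are invented; this is not Paperless's migration code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task_old (id INTEGER PRIMARY KEY, task_id TEXT)")
conn.execute("INSERT INTO task_old (id, task_id) VALUES (1, 'abc123')")

# Remake the table with a NOT NULL "name" column: copying the existing row fails.
conn.execute(
    "CREATE TABLE task_notnull (id INTEGER PRIMARY KEY, task_id TEXT, name TEXT NOT NULL)"
)
try:
    conn.execute("INSERT INTO task_notnull (id, task_id) SELECT id, task_id FROM task_old")
except sqlite3.IntegrityError as exc:
    print("NOT NULL variant failed:", exc)

# Remake it with a nullable "name" column instead: the same copy succeeds.
conn.execute("CREATE TABLE task_nullable (id INTEGER PRIMARY KEY, task_id TEXT, name TEXT)")
conn.execute("INSERT INTO task_nullable (id, task_id) SELECT id, task_id FROM task_old")
print("nullable variant copied",
      conn.execute("SELECT COUNT(*) FROM task_nullable").fetchone()[0], "row(s)")
```
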
gh_patches_debug_524
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-660
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reload Behavior: Documentation != Code, Unnecessary Reloads
Hey! I upgraded to 0.11.4 shortly after release today, and it has a couple of quirks.
1. The documentation [here](https://github.com/encode/uvicorn/compare/0.11.3...0.11.4#diff-1819b1daaccb3d358620ade9c67e9118R22) says "python file changes" but the code says ["all non-dotfile changes"](https://github.com/encode/uvicorn/compare/0.11.3...0.11.4#diff-b0da863c7164698a2ef0fa805e4a9197R40).
2. That behavior, while from the test cases seems to be intended to roll up things like `.graphql` files, also unfortunately rolls up `.pyc` files, meaning every restart is a double restart:
```
WARNING: Detected file change in 'app/main.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [87024]
INFO: Started server process [87080]
INFO: Waiting for application startup.
INFO: Application startup complete.
WARNING: Detected file change in 'app/__pycache__/main.cpython-37.pyc'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [87080]
INFO: Started server process [87093]
INFO: Waiting for application startup.
INFO: Application startup complete.
```
It might be better to use [Path.glob](https://docs.python.org/3/library/pathlib.html#pathlib.Path.glob) so users can specify file extensions and paths more explicitly than with `os.walk`, but it's published already so maybe as another flag?
3. A minor point, but worth noting in the docs: `--reload_dir` on the CLI becomes `reload_dirs=['my_dir']` in code: `uvicorn.run('app.main:app', host="0.0.0.0", port=8000, reload=True, reload_dirs=['app'])`
Thanks for making this great library!
</issue>
<code>
[start of uvicorn/__init__.py]
1 from uvicorn.config import Config
2 from uvicorn.main import Server, main, run
3
4 __version__ = "0.11.4"
5 __all__ = ["main", "run", "Config", "Server"]
6
[end of uvicorn/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/uvicorn/__init__.py b/uvicorn/__init__.py
--- a/uvicorn/__init__.py
+++ b/uvicorn/__init__.py
@@ -1,5 +1,5 @@
from uvicorn.config import Config
from uvicorn.main import Server, main, run
-__version__ = "0.11.4"
+__version__ = "0.11.5"
__all__ = ["main", "run", "Config", "Server"]
|
{"golden_diff": "diff --git a/uvicorn/__init__.py b/uvicorn/__init__.py\n--- a/uvicorn/__init__.py\n+++ b/uvicorn/__init__.py\n@@ -1,5 +1,5 @@\n from uvicorn.config import Config\n from uvicorn.main import Server, main, run\n \n-__version__ = \"0.11.4\"\n+__version__ = \"0.11.5\"\n __all__ = [\"main\", \"run\", \"Config\", \"Server\"]\n", "issue": "Reload Behavior: Documentation != Code, Unnecessary Reloads\nHey! I upgraded to 0.11.4 shortly after release today, and it has a couple of quirks.\r\n\r\n1. The documentation [here](https://github.com/encode/uvicorn/compare/0.11.3...0.11.4#diff-1819b1daaccb3d358620ade9c67e9118R22) says \"python file changes\" but the code says [\"all non-dotfile changes\"](https://github.com/encode/uvicorn/compare/0.11.3...0.11.4#diff-b0da863c7164698a2ef0fa805e4a9197R40).\r\n2. That behavior, while from the test cases seems to be intended to roll up things like `.graphql` files, also unfortunately rolls up `.pyc` files, meaning every restart is a double restart:\r\n\r\n```\r\nWARNING: Detected file change in 'app/main.py'. Reloading...\r\nINFO: Shutting down\r\nINFO: Waiting for application shutdown.\r\nINFO: Application shutdown complete.\r\nINFO: Finished server process [87024]\r\nINFO: Started server process [87080]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\nWARNING: Detected file change in 'app/__pycache__/main.cpython-37.pyc'. Reloading...\r\nINFO: Shutting down\r\nINFO: Waiting for application shutdown.\r\nINFO: Application shutdown complete.\r\nINFO: Finished server process [87080]\r\nINFO: Started server process [87093]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\n```\r\n\r\nIt might be better to use [Path.glob](https://docs.python.org/3/library/pathlib.html#pathlib.Path.glob) so users can specify file extensions and paths more explicitly than with `os.walk`, but it's published already so maybe as another flag?\r\n\r\n3. A minor point, but worth noting in the docs: `--reload_dir` on the CLI becomes `reload_dirs=['my_dir']` in code: `uvicorn.run('app.main:app', host=\"0.0.0.0\", port=8000, reload=True, reload_dirs=['app'])`\r\n\r\nThanks for making this great library!\n", "before_files": [{"content": "from uvicorn.config import Config\nfrom uvicorn.main import Server, main, run\n\n__version__ = \"0.11.4\"\n__all__ = [\"main\", \"run\", \"Config\", \"Server\"]\n", "path": "uvicorn/__init__.py"}]}
| 1,117 | 110 |
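
The golden diff shown for this record only bumps the version string. Purely as an illustration of the suffix-based filtering the reporter suggests, here is a stand-alone sketch (it is not uvicorn's actual reloader; `WATCHED_SUFFIXES` and both function names are invented) that snapshots modification times only for whitelisted suffixes and skips dotfiles and `__pycache__`, so a rewritten `.pyc` cannot trigger a second restart.

```python
# Stand-alone sketch, not uvicorn's reloader: WATCHED_SUFFIXES and both function
# names are invented for illustration.
from pathlib import Path
from typing import Dict, Iterable, List

WATCHED_SUFFIXES = {".py", ".graphql", ".html"}  # assumed configurable whitelist


def snapshot_mtimes(reload_dirs: Iterable[str]) -> Dict[Path, float]:
    """Return {path: mtime} for watched files, skipping dotfiles and __pycache__."""
    mtimes: Dict[Path, float] = {}
    for reload_dir in reload_dirs:
        for path in Path(reload_dir).rglob("*"):
            if any(part == "__pycache__" or part.startswith(".") for part in path.parts):
                continue  # byte-code caches and hidden files never trigger a reload
            if path.is_file() and path.suffix in WATCHED_SUFFIXES:
                mtimes[path] = path.stat().st_mtime
    return mtimes


def changed_files(before: Dict[Path, float], after: Dict[Path, float]) -> List[Path]:
    """Files added, removed, or modified between two snapshots."""
    return [p for p in before.keys() | after.keys() if before.get(p) != after.get(p)]


if __name__ == "__main__":
    snapshot = snapshot_mtimes(["."])
    print(f"watching {len(snapshot)} files")
```
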
gh_patches_debug_6220
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-5955
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix the count of total projects shown on the homepage
On the homepage (https://readthedocs.org/), we show the total count of the projects.

This number also includes projects that are identified as spam.
It would be good if we excluded spam projects from it.
</issue>
<code>
[start of readthedocs/core/views/__init__.py]
1 """
2 Core views.
3
4 Including the main homepage, documentation and header rendering,
5 and server errors.
6 """
7
8 import os
9 import logging
10 from urllib.parse import urlparse
11
12 from django.conf import settings
13 from django.http import HttpResponseRedirect, Http404, JsonResponse
14 from django.shortcuts import render, get_object_or_404, redirect
15 from django.views.generic import TemplateView
16 from django.views.static import serve as static_serve
17
18 from readthedocs.builds.models import Version
19 from readthedocs.core.utils.general import wipe_version_via_slugs
20 from readthedocs.core.resolver import resolve_path
21 from readthedocs.core.symlink import PrivateSymlink, PublicSymlink
22 from readthedocs.projects.constants import PRIVATE
23 from readthedocs.projects.models import HTMLFile, Project
24 from readthedocs.redirects.utils import (
25 get_redirect_response,
26 project_and_path_from_request,
27 language_and_version_from_path
28 )
29
30 log = logging.getLogger(__name__)
31
32
33 class NoProjectException(Exception):
34 pass
35
36
37 class HomepageView(TemplateView):
38
39 template_name = 'homepage.html'
40
41 def get_context_data(self, **kwargs):
42 """Add latest builds and featured projects."""
43 context = super().get_context_data(**kwargs)
44 context['featured_list'] = Project.objects.filter(featured=True)
45 context['projects_count'] = Project.objects.count()
46 return context
47
48
49 class SupportView(TemplateView):
50 template_name = 'support.html'
51
52 def get_context_data(self, **kwargs):
53 context = super().get_context_data(**kwargs)
54 support_email = settings.SUPPORT_EMAIL
55 if not support_email:
56 support_email = 'support@{domain}'.format(
57 domain=settings.PRODUCTION_DOMAIN
58 )
59
60 context['support_email'] = support_email
61 return context
62
63
64 def random_page(request, project_slug=None): # pylint: disable=unused-argument
65 html_file = HTMLFile.objects.internal().order_by('?')
66 if project_slug:
67 html_file = html_file.filter(project__slug=project_slug)
68 html_file = html_file.first()
69 if html_file is None:
70 raise Http404
71 url = html_file.get_absolute_url()
72 return HttpResponseRedirect(url)
73
74
75 def wipe_version(request, project_slug, version_slug):
76 version = get_object_or_404(
77 Version.internal.all(),
78 project__slug=project_slug,
79 slug=version_slug,
80 )
81 # We need to check by ``for_admin_user`` here to allow members of the
82 # ``Admin`` team (which doesn't own the project) under the corporate site.
83 if version.project not in Project.objects.for_admin_user(user=request.user):
84 raise Http404('You must own this project to wipe it.')
85
86 if request.method == 'POST':
87 wipe_version_via_slugs(
88 version_slug=version_slug,
89 project_slug=project_slug,
90 )
91 return redirect('project_version_list', project_slug)
92 return render(
93 request,
94 'wipe_version.html',
95 {'version': version, 'project': version.project},
96 )
97
98
99 def server_error_500(request, template_name='500.html'):
100 """A simple 500 handler so we get media."""
101 r = render(request, template_name)
102 r.status_code = 500
103 return r
104
105
106 def server_error_404(request, exception=None, template_name='404.html'): # pylint: disable=unused-argument # noqa
107 """
108 A simple 404 handler so we get media.
109
110 .. note::
111
112 Marking exception as optional to make /404/ testing page to work.
113 """
114 response = get_redirect_response(request, full_path=request.get_full_path())
115
116 # Return a redirect response if there is one
117 if response:
118 if response.url == request.build_absolute_uri():
119 # check that we do have a response and avoid infinite redirect
120 log.warning(
121 'Infinite Redirect: FROM URL is the same than TO URL. url=%s',
122 response.url,
123 )
124 else:
125 return response
126
127 # Try to serve custom 404 pages if it's a subdomain/cname
128 if getattr(request, 'subdomain', False) or getattr(request, 'cname', False):
129 return server_error_404_subdomain(request, template_name)
130
131 # Return the default 404 page generated by Read the Docs
132 r = render(request, template_name)
133 r.status_code = 404
134 return r
135
136
137 def server_error_404_subdomain(request, template_name='404.html'):
138 """
139 Handler for 404 pages on subdomains.
140
141 Check if the project associated has a custom ``404.html`` and serve this
142 page. First search for a 404 page in the current version, then continues
143 with the default version and finally, if none of them are found, the Read
144 the Docs default page (Maze Found) is rendered by Django and served.
145 """
146
147 def resolve_404_path(project, version_slug=None, language=None, filename='404.html'):
148 """
149 Helper to resolve the path of ``404.html`` for project.
150
151 The resolution is based on ``project`` object, version slug and
152 language.
153
154 :returns: tuple containing the (basepath, filename)
155 :rtype: tuple
156 """
157 filename = resolve_path(
158 project,
159 version_slug=version_slug,
160 language=language,
161 filename=filename,
162 subdomain=True, # subdomain will make it a "full" path without a URL prefix
163 )
164
165 # This breaks path joining, by ignoring the root when given an "absolute" path
166 if filename[0] == '/':
167 filename = filename[1:]
168
169 version = None
170 if version_slug:
171 version_qs = project.versions.filter(slug=version_slug)
172 if version_qs.exists():
173 version = version_qs.first()
174
175 private = any([
176 version and version.privacy_level == PRIVATE,
177 not version and project.privacy_level == PRIVATE,
178 ])
179 if private:
180 symlink = PrivateSymlink(project)
181 else:
182 symlink = PublicSymlink(project)
183 basepath = symlink.project_root
184 fullpath = os.path.join(basepath, filename)
185 return (basepath, filename, fullpath)
186
187 project, full_path = project_and_path_from_request(request, request.get_full_path())
188
189 if project:
190 language = None
191 version_slug = None
192 schema, netloc, path, params, query, fragments = urlparse(full_path)
193 if not project.single_version:
194 language, version_slug, path = language_and_version_from_path(path)
195
196 # Firstly, attempt to serve the 404 of the current version (version_slug)
197 # Secondly, try to serve the 404 page for the default version
198 # (project.get_default_version())
199 for slug in (version_slug, project.get_default_version()):
200 for tryfile in ('404.html', '404/index.html'):
201 basepath, filename, fullpath = resolve_404_path(project, slug, language, tryfile)
202 if os.path.exists(fullpath):
203 log.debug(
204 'serving 404.html page current version: [project: %s] [version: %s]',
205 project.slug,
206 slug,
207 )
208 r = static_serve(request, filename, basepath)
209 r.status_code = 404
210 return r
211
212 # Finally, return the default 404 page generated by Read the Docs
213 r = render(request, template_name)
214 r.status_code = 404
215 return r
216
217
218 def do_not_track(request):
219 dnt_header = request.META.get('HTTP_DNT')
220
221 # https://w3c.github.io/dnt/drafts/tracking-dnt.html#status-representation
222 return JsonResponse( # pylint: disable=redundant-content-type-for-json-response
223 {
224 'policy': 'https://docs.readthedocs.io/en/latest/privacy-policy.html',
225 'same-party': [
226 'readthedocs.org',
227 'readthedocs.com',
228 'readthedocs.io', # .org Documentation Sites
229 'readthedocs-hosted.com', # .com Documentation Sites
230 ],
231 'tracking': 'N' if dnt_header == '1' else 'T',
232 }, content_type='application/tracking-status+json',
233 )
234
[end of readthedocs/core/views/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/readthedocs/core/views/__init__.py b/readthedocs/core/views/__init__.py
--- a/readthedocs/core/views/__init__.py
+++ b/readthedocs/core/views/__init__.py
@@ -42,7 +42,7 @@
"""Add latest builds and featured projects."""
context = super().get_context_data(**kwargs)
context['featured_list'] = Project.objects.filter(featured=True)
- context['projects_count'] = Project.objects.count()
+ context['projects_count'] = Project.objects.exclude(users__profile__banned=True).count()
return context
|
{"golden_diff": "diff --git a/readthedocs/core/views/__init__.py b/readthedocs/core/views/__init__.py\n--- a/readthedocs/core/views/__init__.py\n+++ b/readthedocs/core/views/__init__.py\n@@ -42,7 +42,7 @@\n \"\"\"Add latest builds and featured projects.\"\"\"\n context = super().get_context_data(**kwargs)\n context['featured_list'] = Project.objects.filter(featured=True)\n- context['projects_count'] = Project.objects.count()\n+ context['projects_count'] = Project.objects.exclude(users__profile__banned=True).count()\n return context\n", "issue": "Fix the count of total projects shown on the homepage\nOn the hompage (https://readthedocs.org/), we show the total count of the projects.\r\n\r\n\r\n\r\nThis number also includes projects that are identified as spam.\r\nIt would be good if we exclude spam projects from it.\n", "before_files": [{"content": "\"\"\"\nCore views.\n\nIncluding the main homepage, documentation and header rendering,\nand server errors.\n\"\"\"\n\nimport os\nimport logging\nfrom urllib.parse import urlparse\n\nfrom django.conf import settings\nfrom django.http import HttpResponseRedirect, Http404, JsonResponse\nfrom django.shortcuts import render, get_object_or_404, redirect\nfrom django.views.generic import TemplateView\nfrom django.views.static import serve as static_serve\n\nfrom readthedocs.builds.models import Version\nfrom readthedocs.core.utils.general import wipe_version_via_slugs\nfrom readthedocs.core.resolver import resolve_path\nfrom readthedocs.core.symlink import PrivateSymlink, PublicSymlink\nfrom readthedocs.projects.constants import PRIVATE\nfrom readthedocs.projects.models import HTMLFile, Project\nfrom readthedocs.redirects.utils import (\n get_redirect_response,\n project_and_path_from_request,\n language_and_version_from_path\n)\n\nlog = logging.getLogger(__name__)\n\n\nclass NoProjectException(Exception):\n pass\n\n\nclass HomepageView(TemplateView):\n\n template_name = 'homepage.html'\n\n def get_context_data(self, **kwargs):\n \"\"\"Add latest builds and featured projects.\"\"\"\n context = super().get_context_data(**kwargs)\n context['featured_list'] = Project.objects.filter(featured=True)\n context['projects_count'] = Project.objects.count()\n return context\n\n\nclass SupportView(TemplateView):\n template_name = 'support.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n support_email = settings.SUPPORT_EMAIL\n if not support_email:\n support_email = 'support@{domain}'.format(\n domain=settings.PRODUCTION_DOMAIN\n )\n\n context['support_email'] = support_email\n return context\n\n\ndef random_page(request, project_slug=None): # pylint: disable=unused-argument\n html_file = HTMLFile.objects.internal().order_by('?')\n if project_slug:\n html_file = html_file.filter(project__slug=project_slug)\n html_file = html_file.first()\n if html_file is None:\n raise Http404\n url = html_file.get_absolute_url()\n return HttpResponseRedirect(url)\n\n\ndef wipe_version(request, project_slug, version_slug):\n version = get_object_or_404(\n Version.internal.all(),\n project__slug=project_slug,\n slug=version_slug,\n )\n # We need to check by ``for_admin_user`` here to allow members of the\n # ``Admin`` team (which doesn't own the project) under the corporate site.\n if version.project not in Project.objects.for_admin_user(user=request.user):\n raise Http404('You must own this project to wipe it.')\n\n if request.method == 'POST':\n wipe_version_via_slugs(\n version_slug=version_slug,\n project_slug=project_slug,\n )\n 
return redirect('project_version_list', project_slug)\n return render(\n request,\n 'wipe_version.html',\n {'version': version, 'project': version.project},\n )\n\n\ndef server_error_500(request, template_name='500.html'):\n \"\"\"A simple 500 handler so we get media.\"\"\"\n r = render(request, template_name)\n r.status_code = 500\n return r\n\n\ndef server_error_404(request, exception=None, template_name='404.html'): # pylint: disable=unused-argument # noqa\n \"\"\"\n A simple 404 handler so we get media.\n\n .. note::\n\n Marking exception as optional to make /404/ testing page to work.\n \"\"\"\n response = get_redirect_response(request, full_path=request.get_full_path())\n\n # Return a redirect response if there is one\n if response:\n if response.url == request.build_absolute_uri():\n # check that we do have a response and avoid infinite redirect\n log.warning(\n 'Infinite Redirect: FROM URL is the same than TO URL. url=%s',\n response.url,\n )\n else:\n return response\n\n # Try to serve custom 404 pages if it's a subdomain/cname\n if getattr(request, 'subdomain', False) or getattr(request, 'cname', False):\n return server_error_404_subdomain(request, template_name)\n\n # Return the default 404 page generated by Read the Docs\n r = render(request, template_name)\n r.status_code = 404\n return r\n\n\ndef server_error_404_subdomain(request, template_name='404.html'):\n \"\"\"\n Handler for 404 pages on subdomains.\n\n Check if the project associated has a custom ``404.html`` and serve this\n page. First search for a 404 page in the current version, then continues\n with the default version and finally, if none of them are found, the Read\n the Docs default page (Maze Found) is rendered by Django and served.\n \"\"\"\n\n def resolve_404_path(project, version_slug=None, language=None, filename='404.html'):\n \"\"\"\n Helper to resolve the path of ``404.html`` for project.\n\n The resolution is based on ``project`` object, version slug and\n language.\n\n :returns: tuple containing the (basepath, filename)\n :rtype: tuple\n \"\"\"\n filename = resolve_path(\n project,\n version_slug=version_slug,\n language=language,\n filename=filename,\n subdomain=True, # subdomain will make it a \"full\" path without a URL prefix\n )\n\n # This breaks path joining, by ignoring the root when given an \"absolute\" path\n if filename[0] == '/':\n filename = filename[1:]\n\n version = None\n if version_slug:\n version_qs = project.versions.filter(slug=version_slug)\n if version_qs.exists():\n version = version_qs.first()\n\n private = any([\n version and version.privacy_level == PRIVATE,\n not version and project.privacy_level == PRIVATE,\n ])\n if private:\n symlink = PrivateSymlink(project)\n else:\n symlink = PublicSymlink(project)\n basepath = symlink.project_root\n fullpath = os.path.join(basepath, filename)\n return (basepath, filename, fullpath)\n\n project, full_path = project_and_path_from_request(request, request.get_full_path())\n\n if project:\n language = None\n version_slug = None\n schema, netloc, path, params, query, fragments = urlparse(full_path)\n if not project.single_version:\n language, version_slug, path = language_and_version_from_path(path)\n\n # Firstly, attempt to serve the 404 of the current version (version_slug)\n # Secondly, try to serve the 404 page for the default version\n # (project.get_default_version())\n for slug in (version_slug, project.get_default_version()):\n for tryfile in ('404.html', '404/index.html'):\n basepath, filename, fullpath = resolve_404_path(project, 
slug, language, tryfile)\n if os.path.exists(fullpath):\n log.debug(\n 'serving 404.html page current version: [project: %s] [version: %s]',\n project.slug,\n slug,\n )\n r = static_serve(request, filename, basepath)\n r.status_code = 404\n return r\n\n # Finally, return the default 404 page generated by Read the Docs\n r = render(request, template_name)\n r.status_code = 404\n return r\n\n\ndef do_not_track(request):\n dnt_header = request.META.get('HTTP_DNT')\n\n # https://w3c.github.io/dnt/drafts/tracking-dnt.html#status-representation\n return JsonResponse( # pylint: disable=redundant-content-type-for-json-response\n {\n 'policy': 'https://docs.readthedocs.io/en/latest/privacy-policy.html',\n 'same-party': [\n 'readthedocs.org',\n 'readthedocs.com',\n 'readthedocs.io', # .org Documentation Sites\n 'readthedocs-hosted.com', # .com Documentation Sites\n ],\n 'tracking': 'N' if dnt_header == '1' else 'T',\n }, content_type='application/tracking-status+json',\n )\n", "path": "readthedocs/core/views/__init__.py"}]}
| 3,113 | 131 |
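
The fix above narrows the homepage count with `exclude(users__profile__banned=True)`. Since a bare Django queryset cannot run outside a configured project, the snippet below is an in-memory analogue built from invented dataclasses (they are not the Read the Docs models): it counts only projects that have no banned owner, which is the same predicate the ORM call expresses.

```python
# Invented in-memory stand-ins, not the Read the Docs models.
from dataclasses import dataclass, field
from typing import List


@dataclass
class User:
    username: str
    banned: bool = False  # plays the role of users__profile__banned


@dataclass
class Project:
    slug: str
    users: List[User] = field(default_factory=list)


def visible_project_count(projects: List[Project]) -> int:
    """Count projects with no banned owner, mirroring exclude(users__profile__banned=True)."""
    return sum(1 for project in projects if not any(user.banned for user in project.users))


if __name__ == "__main__":
    legitimate = Project("docs", [User("alice")])
    spam = Project("buy-stuff-now", [User("spammer", banned=True)])
    print(visible_project_count([legitimate, spam]))  # prints 1
```
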
gh_patches_debug_35579
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-2202
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add the COMBO token to the airdrops
## Abstract
Furucombo has launched a token and airdropped it to their users and donors on January 15th.
These tokens can be claimed periodically over a vesting period of 8 weeks.
## Motivation
The airdrop is either 350 or 900 COMBO, which at the time of writing is worth 560 and 1450 USD respectively. It would be nice to have an easy way to track these tokens so the user can claim them.
<!-- Why do you think this feature should be addressed. What is the value added to the users of Rotki and why would they want to have it implemented? -->
## Specification
Add the COMBO token to the airdrop section and update the balance of the COMBO tokens in each vesting cycle.
Each vesting cycle unlocks every Friday for 8 weeks, as far as I know.
I don't know how much will be distributed each cycle, but those who had 350 were able to unlock 87.5 today.
A list of eligible addresses can be found here:
https://docs.google.com/spreadsheets/d/113AiPrGJ-yp7g-Kdo_IofMWiftc7msEoBMmKNKxJ4fo/edit#gid=0
The announcement of the token airdrop can be found here:
https://medium.com/furucombo/first-furucombo-grant-7b1e48175c99
<!-- If the feature is techical in nature please write as detailed as possible a specification of what needs to be built. -->
</issue>
<code>
[start of rotkehlchen/constants/assets.py]
1 from rotkehlchen.assets.asset import Asset, EthereumToken
2
3 A_USD = Asset('USD')
4 A_EUR = Asset('EUR')
5 A_GBP = Asset('GBP')
6 A_JPY = Asset('JPY')
7 A_CNY = Asset('CNY')
8 A_CAD = Asset('CAD')
9 A_KRW = Asset('KRW')
10 A_RUB = Asset('RUB')
11 A_CHF = Asset('CHF')
12 A_TRY = Asset('TRY')
13 A_ZAR = Asset('ZAR')
14 A_AUD = Asset('AUD')
15 A_NZD = Asset('NZD')
16 A_BRL = Asset('BRL')
17 FIAT_CURRENCIES = (
18 A_USD,
19 A_EUR,
20 A_GBP,
21 A_JPY,
22 A_CNY,
23 A_CAD,
24 A_KRW,
25 A_RUB,
26 A_CHF,
27 A_TRY,
28 A_ZAR,
29 A_AUD,
30 A_NZD,
31 A_BRL,
32 )
33
34 S_BTC = 'BTC'
35 S_ETH = 'ETH'
36 S_KSM = 'KSM'
37
38 A_BTC = Asset(S_BTC)
39 A_BCH = Asset('BCH')
40 A_BAL = Asset('BAL')
41 A_BSV = Asset('BSV')
42 A_ETH = Asset(S_ETH)
43 A_ETH2 = Asset('ETH2')
44 A_ETC = Asset('ETC')
45 A_KSM = Asset(S_KSM)
46 A_BAT = EthereumToken('BAT')
47 A_UNI = EthereumToken('UNI')
48 A_1INCH = EthereumToken('1INCH')
49 A_DAI = EthereumToken('DAI')
50 A_SAI = EthereumToken('SAI')
51 A_YFI = EthereumToken('YFI')
52 A_USDT = EthereumToken('USDT')
53 A_USDC = EthereumToken('USDC')
54 A_TUSD = EthereumToken('TUSD')
55 A_ALINK = EthereumToken('aLINK')
56 A_GUSD = EthereumToken('GUSD')
57 A_CRV = EthereumToken('CRV')
58 A_KNC = EthereumToken('KNC')
59 A_WBTC = EthereumToken('WBTC')
60 A_WETH = EthereumToken('WETH')
61 A_ZRX = EthereumToken('ZRX')
62 A_MANA = EthereumToken('MANA')
63 A_PAX = EthereumToken('PAX')
64 A_COMP = EthereumToken('COMP')
65 A_LRC = EthereumToken('LRC')
66 A_LINK = EthereumToken('LINK')
67 A_ADX = EthereumToken('ADX')
68 A_TORN = EthereumToken('TORN')
69 A_CORN = EthereumToken('CORN-2')
70 A_GRAIN = EthereumToken('GRAIN')
71
[end of rotkehlchen/constants/assets.py]
[start of rotkehlchen/chain/ethereum/airdrops.py]
1 from rotkehlchen.typing import ChecksumEthAddress
2 from typing import List, Dict, TextIO, Iterator, Tuple
3 from rotkehlchen.constants.assets import A_UNI, A_1INCH, A_TORN, A_CORN, A_GRAIN
4 import csv
5 import requests
6 from pathlib import Path
7 from collections import defaultdict
8 from rotkehlchen.errors import RemoteError
9 from rotkehlchen.chain.ethereum.utils import token_normalized_value_decimals
10
11 AIRDROPS = {
12 'uniswap': (
13 # is checksummed
14 'https://gist.githubusercontent.com/LefterisJP/d883cb7187a7c4fcf98c7a62f45568e7/raw/3718c95d572a29b9c3906d7c64726d3bd7524bfd/uniswap.csv', # noqa: E501
15 A_UNI,
16 'https://app.uniswap.org/',
17 ),
18 '1inch': (
19 # is checksummed
20 'https://gist.githubusercontent.com/LefterisJP/8f41d1511bf354d7e56810188116a410/raw/87d967e86e1435aa3a9ddb97ce20531e4e52dbad/1inch.csv', # noqa: E501
21 A_1INCH,
22 'https://1inch.exchange/',
23 ),
24 'tornado': (
25 # is checksummed
26 'https://raw.githubusercontent.com/tornadocash/airdrop/master/airdrop.csv',
27 A_TORN, # Don't have TORN token yet?
28 'https://tornado.cash/',
29 ),
30 'cornichon': (
31 # is checksummed
32 'https://gist.githubusercontent.com/LefterisJP/5199d8bc6caa3253c343cd5084489088/raw/7e9ca4c4772fc50780bfe9997e1c43525e1b7445/cornichon_airdrop.csv', # noqa: E501
33 A_CORN,
34 'https://cornichon.ape.tax/',
35 ),
36 'grain': (
37 # is checksummed
38 'https://gist.githubusercontent.com/LefterisJP/08d7a5b28876741b300c944650c89280/raw/987ab4a92d5363fdbe262f639565732bd1fd3921/grain_iou.csv', # noqa: E501
39 A_GRAIN,
40 'https://claim.harvest.finance/',
41 ),
42 }
43
44
45 def get_airdrop_data(name: str, data_dir: Path) -> Tuple[Iterator, TextIO]:
46 airdrops_dir = data_dir / 'airdrops'
47 airdrops_dir.mkdir(parents=True, exist_ok=True)
48 filename = airdrops_dir / f'{name}.csv'
49 if not filename.is_file():
50 # if not cached, get it from the gist
51 try:
52 request = requests.get(AIRDROPS[name][0])
53 except requests.exceptions.RequestException as e:
54 raise RemoteError(f'Airdrops Gist request failed due to {str(e)}') from e
55
56 with open(filename, 'w') as f:
57 f.write(request.content.decode('utf-8'))
58
59 csvfile = open(filename, 'r')
60 iterator = csv.reader(csvfile)
61 next(iterator) # skip header
62 return iterator, csvfile
63
64
65 def check_airdrops(
66 addresses: List[ChecksumEthAddress],
67 data_dir: Path,
68 ) -> Dict[ChecksumEthAddress, Dict]:
69 """Checks airdrop data for the given list of ethereum addresses
70
71 May raise:
72 - RemoteError if the remote request fails
73 """
74 found_data: Dict[ChecksumEthAddress, Dict] = defaultdict(lambda: defaultdict(dict))
75 for protocol_name, airdrop_data in AIRDROPS.items():
76 data, csvfile = get_airdrop_data(protocol_name, data_dir)
77 for addr, amount, *_ in data:
78 # not doing to_checksum_address() here since the file addresses are checksummed
79 # and doing to_checksum_address() so many times hits performance
80 if protocol_name in ('cornichon', 'tornado', 'grain'):
81 amount = token_normalized_value_decimals(int(amount), 18)
82 if addr in addresses:
83 found_data[addr][protocol_name] = {
84 'amount': str(amount),
85 'asset': airdrop_data[1],
86 'link': airdrop_data[2],
87 }
88 csvfile.close()
89
90 return dict(found_data)
91
[end of rotkehlchen/chain/ethereum/airdrops.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rotkehlchen/chain/ethereum/airdrops.py b/rotkehlchen/chain/ethereum/airdrops.py
--- a/rotkehlchen/chain/ethereum/airdrops.py
+++ b/rotkehlchen/chain/ethereum/airdrops.py
@@ -1,12 +1,14 @@
-from rotkehlchen.typing import ChecksumEthAddress
-from typing import List, Dict, TextIO, Iterator, Tuple
-from rotkehlchen.constants.assets import A_UNI, A_1INCH, A_TORN, A_CORN, A_GRAIN
import csv
-import requests
-from pathlib import Path
from collections import defaultdict
-from rotkehlchen.errors import RemoteError
+from pathlib import Path
+from typing import Dict, Iterator, List, TextIO, Tuple
+
+import requests
+
from rotkehlchen.chain.ethereum.utils import token_normalized_value_decimals
+from rotkehlchen.constants.assets import A_1INCH, A_COMBO, A_CORN, A_GRAIN, A_LDO, A_TORN, A_UNI
+from rotkehlchen.errors import RemoteError
+from rotkehlchen.typing import ChecksumEthAddress
AIRDROPS = {
'uniswap': (
@@ -39,6 +41,18 @@
A_GRAIN,
'https://claim.harvest.finance/',
),
+ 'furucombo': (
+ # is checksummed
+ 'https://gist.githubusercontent.com/LefterisJP/69612e155e8063fd6b3422d4efbf22a3/raw/b9023960ab1c478ee2620c456e208e5124115c19/furucombo_airdrop.csv', # noqa: E501
+ A_COMBO,
+ 'https://furucombo.app/',
+ ),
+ 'lido': (
+ # is checksummed
+ 'https://gist.githubusercontent.com/LefterisJP/57a8d65280a482fed6f3e2cc00c0e540/raw/e6ebac56c438cc8a882585c5f5bfba64eb57c424/lido_airdrop.csv', # noqa: E501
+ A_LDO,
+ 'https://lido.fi/',
+ ),
}
@@ -77,7 +91,7 @@
for addr, amount, *_ in data:
# not doing to_checksum_address() here since the file addresses are checksummed
# and doing to_checksum_address() so many times hits performance
- if protocol_name in ('cornichon', 'tornado', 'grain'):
+ if protocol_name in ('cornichon', 'tornado', 'grain', 'lido'):
amount = token_normalized_value_decimals(int(amount), 18)
if addr in addresses:
found_data[addr][protocol_name] = {
diff --git a/rotkehlchen/constants/assets.py b/rotkehlchen/constants/assets.py
--- a/rotkehlchen/constants/assets.py
+++ b/rotkehlchen/constants/assets.py
@@ -68,3 +68,5 @@
A_TORN = EthereumToken('TORN')
A_CORN = EthereumToken('CORN-2')
A_GRAIN = EthereumToken('GRAIN')
+A_COMBO = EthereumToken('COMBO')
+A_LDO = EthereumToken('LDO')
|
{"golden_diff": "diff --git a/rotkehlchen/chain/ethereum/airdrops.py b/rotkehlchen/chain/ethereum/airdrops.py\n--- a/rotkehlchen/chain/ethereum/airdrops.py\n+++ b/rotkehlchen/chain/ethereum/airdrops.py\n@@ -1,12 +1,14 @@\n-from rotkehlchen.typing import ChecksumEthAddress\n-from typing import List, Dict, TextIO, Iterator, Tuple\n-from rotkehlchen.constants.assets import A_UNI, A_1INCH, A_TORN, A_CORN, A_GRAIN\n import csv\n-import requests\n-from pathlib import Path\n from collections import defaultdict\n-from rotkehlchen.errors import RemoteError\n+from pathlib import Path\n+from typing import Dict, Iterator, List, TextIO, Tuple\n+\n+import requests\n+\n from rotkehlchen.chain.ethereum.utils import token_normalized_value_decimals\n+from rotkehlchen.constants.assets import A_1INCH, A_COMBO, A_CORN, A_GRAIN, A_LDO, A_TORN, A_UNI\n+from rotkehlchen.errors import RemoteError\n+from rotkehlchen.typing import ChecksumEthAddress\n \n AIRDROPS = {\n 'uniswap': (\n@@ -39,6 +41,18 @@\n A_GRAIN,\n 'https://claim.harvest.finance/',\n ),\n+ 'furucombo': (\n+ # is checksummed\n+ 'https://gist.githubusercontent.com/LefterisJP/69612e155e8063fd6b3422d4efbf22a3/raw/b9023960ab1c478ee2620c456e208e5124115c19/furucombo_airdrop.csv', # noqa: E501\n+ A_COMBO,\n+ 'https://furucombo.app/',\n+ ),\n+ 'lido': (\n+ # is checksummed\n+ 'https://gist.githubusercontent.com/LefterisJP/57a8d65280a482fed6f3e2cc00c0e540/raw/e6ebac56c438cc8a882585c5f5bfba64eb57c424/lido_airdrop.csv', # noqa: E501\n+ A_LDO,\n+ 'https://lido.fi/',\n+ ),\n }\n \n \n@@ -77,7 +91,7 @@\n for addr, amount, *_ in data:\n # not doing to_checksum_address() here since the file addresses are checksummed\n # and doing to_checksum_address() so many times hits performance\n- if protocol_name in ('cornichon', 'tornado', 'grain'):\n+ if protocol_name in ('cornichon', 'tornado', 'grain', 'lido'):\n amount = token_normalized_value_decimals(int(amount), 18)\n if addr in addresses:\n found_data[addr][protocol_name] = {\ndiff --git a/rotkehlchen/constants/assets.py b/rotkehlchen/constants/assets.py\n--- a/rotkehlchen/constants/assets.py\n+++ b/rotkehlchen/constants/assets.py\n@@ -68,3 +68,5 @@\n A_TORN = EthereumToken('TORN')\n A_CORN = EthereumToken('CORN-2')\n A_GRAIN = EthereumToken('GRAIN')\n+A_COMBO = EthereumToken('COMBO')\n+A_LDO = EthereumToken('LDO')\n", "issue": "Add the COMBO token to the airdrops\n## Abstract\r\n\r\nFurucombo has launched a token and airdropped to their users and donors on January 15th. \r\nThese tokens can be claimed periodically over a vesting period of 8 weeks.\r\n\r\n## Motivation\r\n\r\nThe airdrop ranges with either 350 or 900 COMBO which at the time of writing it worth 560 and 1450 usd. It would be nice have a easy way to track these tokes so the user can claim them.\r\n<!-- Why do you think this feature should be addressed. What is the value added to the users of Rotki and why would they want to have it implemented? -->\r\n\r\n## Specification\r\nAdd the COMBO token to the airdrop section and update the balance of the COMBO tokens in each vesting cycle.\r\nEach vesting cycle is at every friday for 8 weeks afaik. \r\nIdk how much will be distributed each cycle but those who had 350 where able to unlock 87.5 today. 
\r\n\r\nA list of eligible addresses can be found here:\r\nhttps://docs.google.com/spreadsheets/d/113AiPrGJ-yp7g-Kdo_IofMWiftc7msEoBMmKNKxJ4fo/edit#gid=0\r\nThe announcement of the token airdrop can be found here: \r\nhttps://medium.com/furucombo/first-furucombo-grant-7b1e48175c99\r\n<!-- If the feature is techical in nature please write as detailed as possible a specification of what needs to be built. -->\r\n\n", "before_files": [{"content": "from rotkehlchen.assets.asset import Asset, EthereumToken\n\nA_USD = Asset('USD')\nA_EUR = Asset('EUR')\nA_GBP = Asset('GBP')\nA_JPY = Asset('JPY')\nA_CNY = Asset('CNY')\nA_CAD = Asset('CAD')\nA_KRW = Asset('KRW')\nA_RUB = Asset('RUB')\nA_CHF = Asset('CHF')\nA_TRY = Asset('TRY')\nA_ZAR = Asset('ZAR')\nA_AUD = Asset('AUD')\nA_NZD = Asset('NZD')\nA_BRL = Asset('BRL')\nFIAT_CURRENCIES = (\n A_USD,\n A_EUR,\n A_GBP,\n A_JPY,\n A_CNY,\n A_CAD,\n A_KRW,\n A_RUB,\n A_CHF,\n A_TRY,\n A_ZAR,\n A_AUD,\n A_NZD,\n A_BRL,\n)\n\nS_BTC = 'BTC'\nS_ETH = 'ETH'\nS_KSM = 'KSM'\n\nA_BTC = Asset(S_BTC)\nA_BCH = Asset('BCH')\nA_BAL = Asset('BAL')\nA_BSV = Asset('BSV')\nA_ETH = Asset(S_ETH)\nA_ETH2 = Asset('ETH2')\nA_ETC = Asset('ETC')\nA_KSM = Asset(S_KSM)\nA_BAT = EthereumToken('BAT')\nA_UNI = EthereumToken('UNI')\nA_1INCH = EthereumToken('1INCH')\nA_DAI = EthereumToken('DAI')\nA_SAI = EthereumToken('SAI')\nA_YFI = EthereumToken('YFI')\nA_USDT = EthereumToken('USDT')\nA_USDC = EthereumToken('USDC')\nA_TUSD = EthereumToken('TUSD')\nA_ALINK = EthereumToken('aLINK')\nA_GUSD = EthereumToken('GUSD')\nA_CRV = EthereumToken('CRV')\nA_KNC = EthereumToken('KNC')\nA_WBTC = EthereumToken('WBTC')\nA_WETH = EthereumToken('WETH')\nA_ZRX = EthereumToken('ZRX')\nA_MANA = EthereumToken('MANA')\nA_PAX = EthereumToken('PAX')\nA_COMP = EthereumToken('COMP')\nA_LRC = EthereumToken('LRC')\nA_LINK = EthereumToken('LINK')\nA_ADX = EthereumToken('ADX')\nA_TORN = EthereumToken('TORN')\nA_CORN = EthereumToken('CORN-2')\nA_GRAIN = EthereumToken('GRAIN')\n", "path": "rotkehlchen/constants/assets.py"}, {"content": "from rotkehlchen.typing import ChecksumEthAddress\nfrom typing import List, Dict, TextIO, Iterator, Tuple\nfrom rotkehlchen.constants.assets import A_UNI, A_1INCH, A_TORN, A_CORN, A_GRAIN\nimport csv\nimport requests\nfrom pathlib import Path\nfrom collections import defaultdict\nfrom rotkehlchen.errors import RemoteError\nfrom rotkehlchen.chain.ethereum.utils import token_normalized_value_decimals\n\nAIRDROPS = {\n 'uniswap': (\n # is checksummed\n 'https://gist.githubusercontent.com/LefterisJP/d883cb7187a7c4fcf98c7a62f45568e7/raw/3718c95d572a29b9c3906d7c64726d3bd7524bfd/uniswap.csv', # noqa: E501\n A_UNI,\n 'https://app.uniswap.org/',\n ),\n '1inch': (\n # is checksummed\n 'https://gist.githubusercontent.com/LefterisJP/8f41d1511bf354d7e56810188116a410/raw/87d967e86e1435aa3a9ddb97ce20531e4e52dbad/1inch.csv', # noqa: E501\n A_1INCH,\n 'https://1inch.exchange/',\n ),\n 'tornado': (\n # is checksummed\n 'https://raw.githubusercontent.com/tornadocash/airdrop/master/airdrop.csv',\n A_TORN, # Don't have TORN token yet?\n 'https://tornado.cash/',\n ),\n 'cornichon': (\n # is checksummed\n 'https://gist.githubusercontent.com/LefterisJP/5199d8bc6caa3253c343cd5084489088/raw/7e9ca4c4772fc50780bfe9997e1c43525e1b7445/cornichon_airdrop.csv', # noqa: E501\n A_CORN,\n 'https://cornichon.ape.tax/',\n ),\n 'grain': (\n # is checksummed\n 'https://gist.githubusercontent.com/LefterisJP/08d7a5b28876741b300c944650c89280/raw/987ab4a92d5363fdbe262f639565732bd1fd3921/grain_iou.csv', # noqa: E501\n A_GRAIN,\n 
'https://claim.harvest.finance/',\n ),\n}\n\n\ndef get_airdrop_data(name: str, data_dir: Path) -> Tuple[Iterator, TextIO]:\n airdrops_dir = data_dir / 'airdrops'\n airdrops_dir.mkdir(parents=True, exist_ok=True)\n filename = airdrops_dir / f'{name}.csv'\n if not filename.is_file():\n # if not cached, get it from the gist\n try:\n request = requests.get(AIRDROPS[name][0])\n except requests.exceptions.RequestException as e:\n raise RemoteError(f'Airdrops Gist request failed due to {str(e)}') from e\n\n with open(filename, 'w') as f:\n f.write(request.content.decode('utf-8'))\n\n csvfile = open(filename, 'r')\n iterator = csv.reader(csvfile)\n next(iterator) # skip header\n return iterator, csvfile\n\n\ndef check_airdrops(\n addresses: List[ChecksumEthAddress],\n data_dir: Path,\n) -> Dict[ChecksumEthAddress, Dict]:\n \"\"\"Checks airdrop data for the given list of ethereum addresses\n\n May raise:\n - RemoteError if the remote request fails\n \"\"\"\n found_data: Dict[ChecksumEthAddress, Dict] = defaultdict(lambda: defaultdict(dict))\n for protocol_name, airdrop_data in AIRDROPS.items():\n data, csvfile = get_airdrop_data(protocol_name, data_dir)\n for addr, amount, *_ in data:\n # not doing to_checksum_address() here since the file addresses are checksummed\n # and doing to_checksum_address() so many times hits performance\n if protocol_name in ('cornichon', 'tornado', 'grain'):\n amount = token_normalized_value_decimals(int(amount), 18)\n if addr in addresses:\n found_data[addr][protocol_name] = {\n 'amount': str(amount),\n 'asset': airdrop_data[1],\n 'link': airdrop_data[2],\n }\n csvfile.close()\n\n return dict(found_data)\n", "path": "rotkehlchen/chain/ethereum/airdrops.py"}]}
| 2,880 | 800 |
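
The golden diff above registers two new entries (COMBO and LDO) in the `AIRDROPS` mapping and adds `lido` to the protocols whose CSV amounts are divided by 10**18. The sketch below shows the same registry pattern in isolation; the URLs, the `AirdropEntry` tuple, and the `RAW_18_DECIMALS` set are placeholders rather than rotki's real constants.

```python
# Registry-pattern sketch; URLs, AirdropEntry and RAW_18_DECIMALS are placeholders,
# not rotki's real constants.
from typing import Dict, NamedTuple


class AirdropEntry(NamedTuple):
    csv_url: str
    asset_symbol: str
    claim_url: str


AIRDROPS: Dict[str, AirdropEntry] = {
    "uniswap": AirdropEntry("https://example.invalid/uniswap.csv", "UNI", "https://app.uniswap.org/"),
    # the change the issue asks for: one extra entry for Furucombo's token
    "furucombo": AirdropEntry("https://example.invalid/furucombo.csv", "COMBO", "https://furucombo.app/"),
}

# Protocols whose CSVs store raw 1e18-scaled integers (assumed; furucombo's CSV in the
# golden diff is treated as already being in token units).
RAW_18_DECIMALS = {"cornichon", "tornado", "grain"}


def normalized_amount(protocol: str, amount: str) -> float:
    """Convert a CSV amount to token units, scaling down raw-integer protocols."""
    if protocol in RAW_18_DECIMALS:
        return int(amount) / 10**18
    return float(amount)


if __name__ == "__main__":
    print(normalized_amount("grain", "350000000000000000000"))  # 350.0
    print(normalized_amount("furucombo", "350"))                # 350.0 (already token units)
```
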
gh_patches_debug_7927
|
rasdani/github-patches
|
git_diff
|
huggingface__transformers-4747
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Loading config file bug
Hi guys.
I have released a new (XLNet-based) transformer model for the low-resource language Tigrinya
[(TigXLNet)](https://github.com/abryeemessi/Transferring-Monolingual-Model-to-Low-Resource-Language) and found a bug when loading a pre-trained config file:
My config file looks like:
https://s3.amazonaws.com/models.huggingface.co/bert/abryee/TigXLNet/config.json
config = AutoConfig.from_pretrained("abryee/TigXLNet")
print(config.d_head) #prints 48 even though d_head in the given config file is 64.
</issue>
<code>
[start of src/transformers/configuration_xlnet.py]
1 # coding=utf-8
2 # Copyright 2018 Google AI, Google Brain and Carnegie Mellon University Authors and the HuggingFace Inc. team.
3 # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 """ XLNet configuration """
17
18
19 import logging
20
21 from .configuration_utils import PretrainedConfig
22
23
24 logger = logging.getLogger(__name__)
25
26 XLNET_PRETRAINED_CONFIG_ARCHIVE_MAP = {
27 "xlnet-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json",
28 "xlnet-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-large-cased-config.json",
29 }
30
31
32 class XLNetConfig(PretrainedConfig):
33 """
34 This is the configuration class to store the configuration of a :class:`~transformers.XLNetModel`.
35 It is used to instantiate an XLNet model according to the specified arguments, defining the model
36 architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of
37 the `xlnet-large-cased <https://huggingface.co/xlnet-large-cased>`__ architecture.
38
39 Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used
40 to control the model outputs. Read the documentation from :class:`~transformers.PretrainedConfig`
41 for more information.
42
43 Args:
44 vocab_size (:obj:`int`, optional, defaults to 32000):
45 Vocabulary size of the XLNet model. Defines the different tokens that
46 can be represented by the `inputs_ids` passed to the forward method of :class:`~transformers.XLNetModel`.
47 d_model (:obj:`int`, optional, defaults to 1024):
48 Dimensionality of the encoder layers and the pooler layer.
49 n_layer (:obj:`int`, optional, defaults to 24):
50 Number of hidden layers in the Transformer encoder.
51 n_head (:obj:`int`, optional, defaults to 16):
52 Number of attention heads for each attention layer in the Transformer encoder.
53 d_inner (:obj:`int`, optional, defaults to 4096):
54 Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
55 ff_activation (:obj:`string`, optional, defaults to "gelu"):
56 The non-linear activation function (function or string) in the
57 encoder and pooler. If string, "gelu", "relu" and "swish" are supported.
58 untie_r (:obj:`boolean`, optional, defaults to :obj:`True`):
59 Untie relative position biases
60 attn_type (:obj:`string`, optional, defaults to "bi"):
61 The attention type used by the model. Set 'bi' for XLNet, 'uni' for Transformer-XL.
62 initializer_range (:obj:`float`, optional, defaults to 0.02):
63 The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
64 layer_norm_eps (:obj:`float`, optional, defaults to 1e-12):
65 The epsilon used by the layer normalization layers.
66 dropout (:obj:`float`, optional, defaults to 0.1):
67 The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
68 mem_len (:obj:`int` or :obj:`None`, optional, defaults to :obj:`None`):
69 The number of tokens to cache. The key/value pairs that have already been pre-computed
70 in a previous forward pass won't be re-computed. See the
71 `quickstart <https://huggingface.co/transformers/quickstart.html#using-the-past>`__
72 for more information.
73 reuse_len (:obj:`int` or :obj:`None`, optional, defaults to :obj:`None`):
74 The number of tokens in the current batch to be cached and reused in the future.
75 bi_data (:obj:`boolean`, optional, defaults to :obj:`False`):
76 Whether to use bidirectional input pipeline. Usually set to `True` during
77 pretraining and `False` during finetuning.
78 clamp_len (:obj:`int`, optional, defaults to -1):
79 Clamp all relative distances larger than clamp_len.
80 Setting this attribute to -1 means no clamping.
81 same_length (:obj:`boolean`, optional, defaults to :obj:`False`):
82 Whether to use the same attention length for each token.
83 summary_type (:obj:`string`, optional, defaults to "last"):
84 Argument used when doing sequence summary. Used in for the multiple choice head in
85 :class:transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.
86 Is one of the following options:
87 - 'last' => take the last token hidden state (like XLNet)
88 - 'first' => take the first token hidden state (like Bert)
89 - 'mean' => take the mean of all tokens hidden states
90 - 'cls_index' => supply a Tensor of classification token position (GPT/GPT-2)
91 - 'attn' => Not implemented now, use multi-head attention
92 summary_use_proj (:obj:`boolean`, optional, defaults to :obj:`True`):
93 Argument used when doing sequence summary. Used in for the multiple choice head in
94 :class:`~transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.
95 Add a projection after the vector extraction
96 summary_activation (:obj:`string` or :obj:`None`, optional, defaults to :obj:`None`):
97 Argument used when doing sequence summary. Used in for the multiple choice head in
98 :class:`~transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.
99 'tanh' => add a tanh activation to the output, Other => no activation.
100 summary_proj_to_labels (:obj:`boolean`, optional, defaults to :obj:`True`):
101 Argument used when doing sequence summary. Used in for the multiple choice head in
102 :class:`~transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.
103 If True, the projection outputs to config.num_labels classes (otherwise to hidden_size). Default: False.
104 summary_last_dropout (:obj:`float`, optional, defaults to 0.1):
105 Argument used when doing sequence summary. Used in for the multiple choice head in
106 :class:`~transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.
107 Add a dropout after the projection and activation
108 start_n_top (:obj:`int`, optional, defaults to 5):
109 Used in the SQuAD evaluation script for XLM and XLNet.
110 end_n_top (:obj:`int`, optional, defaults to 5):
111 Used in the SQuAD evaluation script for XLM and XLNet.
112
113 Example::
114
115 from transformers import XLNetConfig, XLNetModel
116
117 # Initializing a XLNet configuration
118 configuration = XLNetConfig()
119
120 # Initializing a model from the configuration
121 model = XLNetModel(configuration)
122
123 # Accessing the model configuration
124 configuration = model.config
125 """
126
127 model_type = "xlnet"
128
129 def __init__(
130 self,
131 vocab_size=32000,
132 d_model=1024,
133 n_layer=24,
134 n_head=16,
135 d_inner=4096,
136 ff_activation="gelu",
137 untie_r=True,
138 attn_type="bi",
139 initializer_range=0.02,
140 layer_norm_eps=1e-12,
141 dropout=0.1,
142 mem_len=None,
143 reuse_len=None,
144 bi_data=False,
145 clamp_len=-1,
146 same_length=False,
147 summary_type="last",
148 summary_use_proj=True,
149 summary_activation="tanh",
150 summary_last_dropout=0.1,
151 start_n_top=5,
152 end_n_top=5,
153 pad_token_id=5,
154 bos_token_id=1,
155 eos_token_id=2,
156 **kwargs
157 ):
158 """Constructs XLNetConfig.
159 """
160 super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
161 self.vocab_size = vocab_size
162 self.d_model = d_model
163 self.n_layer = n_layer
164 self.n_head = n_head
165 assert d_model % n_head == 0
166 self.d_head = d_model // n_head
167 self.ff_activation = ff_activation
168 self.d_inner = d_inner
169 self.untie_r = untie_r
170 self.attn_type = attn_type
171
172 self.initializer_range = initializer_range
173 self.layer_norm_eps = layer_norm_eps
174
175 self.dropout = dropout
176 self.mem_len = mem_len
177 self.reuse_len = reuse_len
178 self.bi_data = bi_data
179 self.clamp_len = clamp_len
180 self.same_length = same_length
181
182 self.summary_type = summary_type
183 self.summary_use_proj = summary_use_proj
184 self.summary_activation = summary_activation
185 self.summary_last_dropout = summary_last_dropout
186 self.start_n_top = start_n_top
187 self.end_n_top = end_n_top
188
189 self.bos_token_id = bos_token_id
190 self.pad_token_id = pad_token_id
191 self.eos_token_id = eos_token_id
192
193 @property
194 def max_position_embeddings(self):
195 return -1
196
197 @property
198 def n_token(self): # Backward compatibility
199 return self.vocab_size
200
201 @n_token.setter
202 def n_token(self, value): # Backward compatibility
203 self.vocab_size = value
204
205 @property
206 def hidden_size(self):
207 return self.d_model
208
209 @property
210 def num_attention_heads(self):
211 return self.n_head
212
213 @property
214 def num_hidden_layers(self):
215 return self.n_layer
216
[end of src/transformers/configuration_xlnet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/transformers/configuration_xlnet.py b/src/transformers/configuration_xlnet.py
--- a/src/transformers/configuration_xlnet.py
+++ b/src/transformers/configuration_xlnet.py
@@ -163,6 +163,10 @@
self.n_layer = n_layer
self.n_head = n_head
assert d_model % n_head == 0
+ if "d_head" in kwargs:
+ assert kwargs["d_head"] == d_model // n_head, (
+ f"`d_head` ({kwargs['d_head']}) should be equal to `d_model // n_head` ({d_model // n_head})"
+ )
self.d_head = d_model // n_head
self.ff_activation = ff_activation
self.d_inner = d_inner
|
{"golden_diff": "diff --git a/src/transformers/configuration_xlnet.py b/src/transformers/configuration_xlnet.py\n--- a/src/transformers/configuration_xlnet.py\n+++ b/src/transformers/configuration_xlnet.py\n@@ -163,6 +163,10 @@\n self.n_layer = n_layer\n self.n_head = n_head\n assert d_model % n_head == 0\n+ if \"d_head\" in kwargs:\n+ assert kwargs[\"d_head\"] == d_model // n_head, (\n+ f\"`d_head` ({kwargs['d_head']}) should be equal to `d_model // n_head` ({d_model // n_head})\"\n+ )\n self.d_head = d_model // n_head\n self.ff_activation = ff_activation\n self.d_inner = d_inner\n", "issue": "Loading config file bug\nHi guys.\r\n\r\nI have released a new (XLNet based) transformer model for low-resource language Tigrinya \r\n [(TigXLNet)](https://github.com/abryeemessi/Transferring-Monolingual-Model-to-Low-Resource-Language) and found a bug when loading a pre-trained config file: \r\n\r\nMy config file looks like:\r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/abryee/TigXLNet/config.json\r\n\r\n\r\n config = AutoConfig.from_pretrained(\"abryee/TigXLNet\")\r\n print(config.d_head) #prints 48 even though d_head in the given config file is 64.\r\n\r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2018 Google AI, Google Brain and Carnegie Mellon University Authors and the HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\" XLNet configuration \"\"\"\n\n\nimport logging\n\nfrom .configuration_utils import PretrainedConfig\n\n\nlogger = logging.getLogger(__name__)\n\nXLNET_PRETRAINED_CONFIG_ARCHIVE_MAP = {\n \"xlnet-base-cased\": \"https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json\",\n \"xlnet-large-cased\": \"https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-large-cased-config.json\",\n}\n\n\nclass XLNetConfig(PretrainedConfig):\n \"\"\"\n This is the configuration class to store the configuration of a :class:`~transformers.XLNetModel`.\n It is used to instantiate an XLNet model according to the specified arguments, defining the model\n architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of\n the `xlnet-large-cased <https://huggingface.co/xlnet-large-cased>`__ architecture.\n\n Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used\n to control the model outputs. Read the documentation from :class:`~transformers.PretrainedConfig`\n for more information.\n\n Args:\n vocab_size (:obj:`int`, optional, defaults to 32000):\n Vocabulary size of the XLNet model. 
Defines the different tokens that\n can be represented by the `inputs_ids` passed to the forward method of :class:`~transformers.XLNetModel`.\n d_model (:obj:`int`, optional, defaults to 1024):\n Dimensionality of the encoder layers and the pooler layer.\n n_layer (:obj:`int`, optional, defaults to 24):\n Number of hidden layers in the Transformer encoder.\n n_head (:obj:`int`, optional, defaults to 16):\n Number of attention heads for each attention layer in the Transformer encoder.\n d_inner (:obj:`int`, optional, defaults to 4096):\n Dimensionality of the \"intermediate\" (i.e., feed-forward) layer in the Transformer encoder.\n ff_activation (:obj:`string`, optional, defaults to \"gelu\"):\n The non-linear activation function (function or string) in the\n encoder and pooler. If string, \"gelu\", \"relu\" and \"swish\" are supported.\n untie_r (:obj:`boolean`, optional, defaults to :obj:`True`):\n Untie relative position biases\n attn_type (:obj:`string`, optional, defaults to \"bi\"):\n The attention type used by the model. Set 'bi' for XLNet, 'uni' for Transformer-XL.\n initializer_range (:obj:`float`, optional, defaults to 0.02):\n The standard deviation of the truncated_normal_initializer for initializing all weight matrices.\n layer_norm_eps (:obj:`float`, optional, defaults to 1e-12):\n The epsilon used by the layer normalization layers.\n dropout (:obj:`float`, optional, defaults to 0.1):\n The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.\n mem_len (:obj:`int` or :obj:`None`, optional, defaults to :obj:`None`):\n The number of tokens to cache. The key/value pairs that have already been pre-computed\n in a previous forward pass won't be re-computed. See the\n `quickstart <https://huggingface.co/transformers/quickstart.html#using-the-past>`__\n for more information.\n reuse_len (:obj:`int` or :obj:`None`, optional, defaults to :obj:`None`):\n The number of tokens in the current batch to be cached and reused in the future.\n bi_data (:obj:`boolean`, optional, defaults to :obj:`False`):\n Whether to use bidirectional input pipeline. Usually set to `True` during\n pretraining and `False` during finetuning.\n clamp_len (:obj:`int`, optional, defaults to -1):\n Clamp all relative distances larger than clamp_len.\n Setting this attribute to -1 means no clamping.\n same_length (:obj:`boolean`, optional, defaults to :obj:`False`):\n Whether to use the same attention length for each token.\n summary_type (:obj:`string`, optional, defaults to \"last\"):\n Argument used when doing sequence summary. Used in for the multiple choice head in\n :class:transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.\n Is one of the following options:\n - 'last' => take the last token hidden state (like XLNet)\n - 'first' => take the first token hidden state (like Bert)\n - 'mean' => take the mean of all tokens hidden states\n - 'cls_index' => supply a Tensor of classification token position (GPT/GPT-2)\n - 'attn' => Not implemented now, use multi-head attention\n summary_use_proj (:obj:`boolean`, optional, defaults to :obj:`True`):\n Argument used when doing sequence summary. Used in for the multiple choice head in\n :class:`~transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.\n Add a projection after the vector extraction\n summary_activation (:obj:`string` or :obj:`None`, optional, defaults to :obj:`None`):\n Argument used when doing sequence summary. 
Used in for the multiple choice head in\n :class:`~transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.\n 'tanh' => add a tanh activation to the output, Other => no activation.\n summary_proj_to_labels (:obj:`boolean`, optional, defaults to :obj:`True`):\n Argument used when doing sequence summary. Used in for the multiple choice head in\n :class:`~transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.\n If True, the projection outputs to config.num_labels classes (otherwise to hidden_size). Default: False.\n summary_last_dropout (:obj:`float`, optional, defaults to 0.1):\n Argument used when doing sequence summary. Used in for the multiple choice head in\n :class:`~transformers.XLNetForSequenceClassification` and :class:`~transformers.XLNetForMultipleChoice`.\n Add a dropout after the projection and activation\n start_n_top (:obj:`int`, optional, defaults to 5):\n Used in the SQuAD evaluation script for XLM and XLNet.\n end_n_top (:obj:`int`, optional, defaults to 5):\n Used in the SQuAD evaluation script for XLM and XLNet.\n\n Example::\n\n from transformers import XLNetConfig, XLNetModel\n\n # Initializing a XLNet configuration\n configuration = XLNetConfig()\n\n # Initializing a model from the configuration\n model = XLNetModel(configuration)\n\n # Accessing the model configuration\n configuration = model.config\n \"\"\"\n\n model_type = \"xlnet\"\n\n def __init__(\n self,\n vocab_size=32000,\n d_model=1024,\n n_layer=24,\n n_head=16,\n d_inner=4096,\n ff_activation=\"gelu\",\n untie_r=True,\n attn_type=\"bi\",\n initializer_range=0.02,\n layer_norm_eps=1e-12,\n dropout=0.1,\n mem_len=None,\n reuse_len=None,\n bi_data=False,\n clamp_len=-1,\n same_length=False,\n summary_type=\"last\",\n summary_use_proj=True,\n summary_activation=\"tanh\",\n summary_last_dropout=0.1,\n start_n_top=5,\n end_n_top=5,\n pad_token_id=5,\n bos_token_id=1,\n eos_token_id=2,\n **kwargs\n ):\n \"\"\"Constructs XLNetConfig.\n \"\"\"\n super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)\n self.vocab_size = vocab_size\n self.d_model = d_model\n self.n_layer = n_layer\n self.n_head = n_head\n assert d_model % n_head == 0\n self.d_head = d_model // n_head\n self.ff_activation = ff_activation\n self.d_inner = d_inner\n self.untie_r = untie_r\n self.attn_type = attn_type\n\n self.initializer_range = initializer_range\n self.layer_norm_eps = layer_norm_eps\n\n self.dropout = dropout\n self.mem_len = mem_len\n self.reuse_len = reuse_len\n self.bi_data = bi_data\n self.clamp_len = clamp_len\n self.same_length = same_length\n\n self.summary_type = summary_type\n self.summary_use_proj = summary_use_proj\n self.summary_activation = summary_activation\n self.summary_last_dropout = summary_last_dropout\n self.start_n_top = start_n_top\n self.end_n_top = end_n_top\n\n self.bos_token_id = bos_token_id\n self.pad_token_id = pad_token_id\n self.eos_token_id = eos_token_id\n\n @property\n def max_position_embeddings(self):\n return -1\n\n @property\n def n_token(self): # Backward compatibility\n return self.vocab_size\n\n @n_token.setter\n def n_token(self, value): # Backward compatibility\n self.vocab_size = value\n\n @property\n def hidden_size(self):\n return self.d_model\n\n @property\n def num_attention_heads(self):\n return self.n_head\n\n @property\n def num_hidden_layers(self):\n return self.n_layer\n", "path": "src/transformers/configuration_xlnet.py"}]}
| 3,524 | 178 |
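The golden diff above only accepts a user-supplied `d_head` when it matches `d_model // n_head`, which is what makes a stale value in a saved config file fail loudly instead of being silently overridden. Below is a minimal sketch of that validation idea, assuming nothing beyond what the diff shows — the class name and the example numbers are hypothetical (chosen so the derived value is 48, mirroring the 64-vs-48 mismatch in the report), not the Transformers implementation:

```python
# Hypothetical mini-config illustrating the consistency check added by the
# diff above: d_head must always equal d_model // n_head.
class MiniXLNetStyleConfig:
    def __init__(self, d_model=1024, n_head=16, **kwargs):
        if d_model % n_head != 0:
            raise ValueError("d_model must be divisible by n_head")
        derived_d_head = d_model // n_head
        # Reject a contradictory d_head coming from an old config file.
        if "d_head" in kwargs and kwargs["d_head"] != derived_d_head:
            raise ValueError(
                f"`d_head` ({kwargs['d_head']}) should equal "
                f"`d_model // n_head` ({derived_d_head})"
            )
        self.d_model = d_model
        self.n_head = n_head
        self.d_head = derived_d_head


# A config that stores d_head=64 while d_model // n_head works out to 48
# now raises instead of silently reporting 48.
try:
    MiniXLNetStyleConfig(d_model=768, n_head=16, d_head=64)
except ValueError as exc:
    print(exc)
```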
gh_patches_debug_64390
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2150
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider tmobile_us is broken
During the global build at 2021-06-30-14-42-26, spider **tmobile_us** failed with **7563 features** and **2 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/logs/tmobile_us.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/tmobile_us.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/tmobile_us.geojson))
</issue>
<code>
[start of locations/spiders/tmobile_us.py]
1 # -*- coding: utf-8 -*-
2 import json
3 from urllib.parse import urlencode
4
5 import scrapy
6
7 from locations.items import GeojsonPointItem
8 from locations.hours import OpeningHours
9
10 DAY_MAPPING = {'Monday': 'Mo',
11 'Tuesday': 'Tu',
12 'Wednesday': 'We',
13 'Thursday': 'Th',
14 'Friday': 'Fr',
15 'Saturday': 'Sa',
16 'Sunday': 'Su'}
17
18 BASE_URL = 'https://onmyj41p3c.execute-api.us-west-2.amazonaws.com/prod/getStoresByCoordinates?'
19
20
21 class TMobileUSSpider(scrapy.Spider):
22 name = "tmobile_us"
23 item_attributes = { 'brand': "T-Mobile" }
24 allowed_domains = ["www.t-mobile.com"]
25 download_delay = 0.2
26
27 def parse_hours(self, store_hours):
28 opening_hours = OpeningHours()
29 if store_hours is None:
30 return
31
32 for store_day in store_hours:
33 day = DAY_MAPPING[store_day.get("day")]
34 open_time = store_day.get("opens")
35 close_time = store_day.get("closes")
36 if open_time is None and close_time is None:
37 continue
38 opening_hours.add_range(day=day,
39 open_time=open_time,
40 close_time=close_time,
41 time_format='%H:%M'
42 )
43
44 return opening_hours.as_opening_hours()
45
46 def start_requests(self):
47 url = BASE_URL
48
49 with open('./locations/searchable_points/us_centroids_25mile_radius.csv') as points:
50
51 next(points) # Ignore the header
52 for point in points:
53 _, lat, lon = point.strip().split(',')
54
55 params = {
56 'latitude': '{}'.format(lat),
57 'longitude': '{}'.format(lon),
58 'count': '1000',
59 'radius': '25',
60 'ignoreLoadingBar': 'false'
61 }
62
63 yield scrapy.http.Request(url + urlencode(params), callback=self.parse)
64
65 def parse(self, response):
66 data = json.loads(response.body_as_unicode())
67
68 for store in data:
69 properties = {
70 'name': store["name"],
71 'ref': store["id"],
72 'addr_full': store["location"]["address"]["streetAddress"],
73 'city': store["location"]["address"]["addressLocality"],
74 'state': store["location"]["address"]["addressRegion"],
75 'postcode': store["location"]["address"]["postalCode"],
76 'phone': store.get("telephone"),
77 'website': store.get("url") or response.url,
78 'lat': float(store["location"]["latitude"]),
79 'lon': float(store["location"]["longitude"]),
80 }
81
82 hours = self.parse_hours(store.get("hours", []))
83 if hours:
84 properties["opening_hours"] = hours
85
86 yield GeojsonPointItem(**properties)
87
[end of locations/spiders/tmobile_us.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/tmobile_us.py b/locations/spiders/tmobile_us.py
--- a/locations/spiders/tmobile_us.py
+++ b/locations/spiders/tmobile_us.py
@@ -15,7 +15,7 @@
'Saturday': 'Sa',
'Sunday': 'Su'}
-BASE_URL = 'https://onmyj41p3c.execute-api.us-west-2.amazonaws.com/prod/getStoresByCoordinates?'
+BASE_URL = 'https://onmyj41p3c.execute-api.us-west-2.amazonaws.com/prod/v2.1/getStoresByCoordinates?'
class TMobileUSSpider(scrapy.Spider):
|
{"golden_diff": "diff --git a/locations/spiders/tmobile_us.py b/locations/spiders/tmobile_us.py\n--- a/locations/spiders/tmobile_us.py\n+++ b/locations/spiders/tmobile_us.py\n@@ -15,7 +15,7 @@\n 'Saturday': 'Sa',\n 'Sunday': 'Su'}\n \n-BASE_URL = 'https://onmyj41p3c.execute-api.us-west-2.amazonaws.com/prod/getStoresByCoordinates?'\n+BASE_URL = 'https://onmyj41p3c.execute-api.us-west-2.amazonaws.com/prod/v2.1/getStoresByCoordinates?'\n \n \n class TMobileUSSpider(scrapy.Spider):\n", "issue": "Spider tmobile_us is broken\nDuring the global build at 2021-06-30-14-42-26, spider **tmobile_us** failed with **7563 features** and **2 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/logs/tmobile_us.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/tmobile_us.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/tmobile_us.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nfrom urllib.parse import urlencode\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAY_MAPPING = {'Monday': 'Mo',\n 'Tuesday': 'Tu',\n 'Wednesday': 'We',\n 'Thursday': 'Th',\n 'Friday': 'Fr',\n 'Saturday': 'Sa',\n 'Sunday': 'Su'}\n\nBASE_URL = 'https://onmyj41p3c.execute-api.us-west-2.amazonaws.com/prod/getStoresByCoordinates?'\n\n\nclass TMobileUSSpider(scrapy.Spider):\n name = \"tmobile_us\"\n item_attributes = { 'brand': \"T-Mobile\" }\n allowed_domains = [\"www.t-mobile.com\"]\n download_delay = 0.2\n\n def parse_hours(self, store_hours):\n opening_hours = OpeningHours()\n if store_hours is None:\n return\n\n for store_day in store_hours:\n day = DAY_MAPPING[store_day.get(\"day\")]\n open_time = store_day.get(\"opens\")\n close_time = store_day.get(\"closes\")\n if open_time is None and close_time is None:\n continue\n opening_hours.add_range(day=day,\n open_time=open_time,\n close_time=close_time,\n time_format='%H:%M'\n )\n\n return opening_hours.as_opening_hours()\n\n def start_requests(self):\n url = BASE_URL\n\n with open('./locations/searchable_points/us_centroids_25mile_radius.csv') as points:\n\n next(points) # Ignore the header\n for point in points:\n _, lat, lon = point.strip().split(',')\n\n params = {\n 'latitude': '{}'.format(lat),\n 'longitude': '{}'.format(lon),\n 'count': '1000',\n 'radius': '25',\n 'ignoreLoadingBar': 'false'\n }\n\n yield scrapy.http.Request(url + urlencode(params), callback=self.parse)\n\n def parse(self, response):\n data = json.loads(response.body_as_unicode())\n\n for store in data:\n properties = {\n 'name': store[\"name\"],\n 'ref': store[\"id\"],\n 'addr_full': store[\"location\"][\"address\"][\"streetAddress\"],\n 'city': store[\"location\"][\"address\"][\"addressLocality\"],\n 'state': store[\"location\"][\"address\"][\"addressRegion\"],\n 'postcode': store[\"location\"][\"address\"][\"postalCode\"],\n 'phone': store.get(\"telephone\"),\n 'website': store.get(\"url\") or response.url,\n 'lat': float(store[\"location\"][\"latitude\"]),\n 'lon': float(store[\"location\"][\"longitude\"]),\n }\n\n hours = self.parse_hours(store.get(\"hours\", []))\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/tmobile_us.py"}]}
| 1,505 | 150 |
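The tmobile_us fix above only inserts a `v2.1` path segment into the store-search endpoint. A small, hedged sketch of how the spider's request URLs are assembled, with the API version pulled into one constant so the next endpoint bump stays a one-line edit; the host and parameter names below are simply the ones quoted in the record and are not otherwise verified:

```python
# Sketch: build the same query string the spider uses, keeping the API
# version in a single constant (assumed values copied from the record above).
from urllib.parse import urlencode

API_VERSION = "v2.1"
BASE_URL = (
    "https://onmyj41p3c.execute-api.us-west-2.amazonaws.com/"
    f"prod/{API_VERSION}/getStoresByCoordinates?"
)


def build_store_search_url(lat, lon, radius_miles=25, count=1000):
    params = {
        "latitude": str(lat),
        "longitude": str(lon),
        "count": str(count),
        "radius": str(radius_miles),
        "ignoreLoadingBar": "false",
    }
    return BASE_URL + urlencode(params)


# Example point; the spider iterates these from a centroid CSV.
print(build_store_search_url(37.7749, -122.4194))
```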
gh_patches_debug_25740
|
rasdani/github-patches
|
git_diff
|
HypothesisWorks__hypothesis-2605
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`hypothesis --version` does not work
When I use `pip install hypothesis` in a clear env and run `hypothesis --version`, this happens:
```python
» hypothesis --version
Traceback (most recent call last):
File "/Users/sobolev/Documents/github/returns/.venv/bin/hypothesis", line 6, in <module>
from hypothesis.extra.cli import main
File "/Users/sobolev/Documents/github/returns/.venv/lib/python3.7/site-packages/hypothesis/extra/cli.py", line 35, in <module>
from hypothesis.extra import ghostwriter
File "/Users/sobolev/Documents/github/returns/.venv/lib/python3.7/site-packages/hypothesis/extra/ghostwriter.py", line 57, in <module>
import black
ModuleNotFoundError: No module named 'black'
```
Looks like `black` is treated as a required dependency right now.
After installing `black`:
```
» hypothesis --version
hypothesis, version 5.33.0
```
</issue>
<code>
[start of hypothesis-python/src/hypothesis/extra/cli.py]
1 # This file is part of Hypothesis, which may be found at
2 # https://github.com/HypothesisWorks/hypothesis/
3 #
4 # Most of this work is copyright (C) 2013-2020 David R. MacIver
5 # ([email protected]), but it contains contributions by others. See
6 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
7 # consult the git log if you need to determine who owns an individual
8 # contribution.
9 #
10 # This Source Code Form is subject to the terms of the Mozilla Public License,
11 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
12 # obtain one at https://mozilla.org/MPL/2.0/.
13 #
14 # END HEADER
15
16 """
17 .. _hypothesis-cli:
18
19 ----------------
20 hypothesis[cli]
21 ----------------
22
23 This module provides Hypothesis' command-line interface, for e.g.
24 :doc:`'ghostwriting' tests <ghostwriter>` via the terminal.
25 It requires the :pypi:`click` package.
26
27 Run :command:`hypothesis --help` in your terminal for details.
28 """
29
30 import builtins
31 import importlib
32 import sys
33 from difflib import get_close_matches
34
35 from hypothesis.extra import ghostwriter
36
37 try:
38 import click
39 except ImportError:
40
41 def main():
42 """If `click` is not installed, tell the user to install it then exit."""
43 sys.stderr.write(
44 """
45 The Hypothesis command-line interface requires the `click` package,
46 which you do not have installed. Run:
47
48 python -m pip install --upgrade hypothesis[cli]
49
50 and try again.
51 """
52 )
53 sys.exit(1)
54
55
56 else:
57 # Ensure that Python scripts in the current working directory are importable,
58 # on the principle that Ghostwriter should 'just work' for novice users. Note
59 # that we append rather than prepend to the module search path, so this will
60 # never shadow the stdlib or installed packages.
61 sys.path.append(".")
62
63 @click.group(context_settings={"help_option_names": ("-h", "--help")})
64 @click.version_option()
65 def main():
66 pass
67
68 def obj_name(s: str) -> object:
69 """This "type" imports whatever object is named by a dotted string."""
70 s = s.strip()
71 try:
72 return importlib.import_module(s)
73 except ImportError:
74 pass
75 if "." not in s:
76 modulename, module, funcname = "builtins", builtins, s
77 else:
78 modulename, funcname = s.rsplit(".", 1)
79 try:
80 module = importlib.import_module(modulename)
81 except ImportError:
82 raise click.UsageError(
83 f"Failed to import the {modulename} module for introspection. "
84 "Check spelling and your Python import path, or use the Python API?"
85 )
86 try:
87 return getattr(module, funcname)
88 except AttributeError:
89 public_names = [name for name in vars(module) if not name.startswith("_")]
90 matches = get_close_matches(funcname, public_names)
91 raise click.UsageError(
92 f"Found the {modulename!r} module, but it doesn't have a "
93 f"{funcname!r} attribute."
94 + (f" Closest matches: {matches!r}" if matches else "")
95 )
96
97 @main.command() # type: ignore # Click adds the .command attribute
98 @click.argument("func", type=obj_name, required=True, nargs=-1)
99 @click.option("--idempotent", "writer", flag_value="idempotent")
100 @click.option("--binary-op", "writer", flag_value="binary_operation")
101 @click.option("--equivalent", "writer", flag_value="equivalent")
102 @click.option("--roundtrip", "writer", flag_value="roundtrip")
103 # Note: we deliberately omit a --ufunc flag, because the magic()
104 # detection of ufuncs is both precise and complete.
105 @click.option(
106 "--style",
107 type=click.Choice(["pytest", "unittest"]),
108 default="pytest",
109 help="pytest-style function, or unittest-style method?",
110 )
111 @click.option(
112 "-e",
113 "--except",
114 "except_",
115 type=obj_name,
116 multiple=True,
117 help="dotted name of exception(s) to ignore",
118 )
119 def write(func, writer, except_, style): # noqa: D301 # \b disables autowrap
120 """`hypothesis write` writes property-based tests for you!
121
122 Type annotations are helpful but not required for our advanced introspection
123 and templating logic. Try running the examples below to see how it works:
124
125 \b
126 hypothesis write gzip
127 hypothesis write re.compile --except re.error
128 hypothesis write --style=unittest --idempotent sorted
129 hypothesis write --binary-op operator.add
130 hypothesis write --equivalent ast.literal_eval eval
131 hypothesis write --roundtrip json.dumps json.loads
132 """
133 # NOTE: if you want to call this function from Python, look instead at the
134 # ``hypothesis.extra.ghostwriter`` module. Click-decorated functions have
135 # a different calling convention, and raise SystemExit instead of returning.
136 if writer is None:
137 writer = "magic"
138 elif writer == "idempotent" and len(func) > 1:
139 raise click.UsageError("Test functions for idempotence one at a time.")
140 elif writer == "roundtrip" and len(func) == 1:
141 writer = "idempotent"
142 elif writer == "equivalent" and len(func) == 1:
143 writer = "fuzz"
144
145 print(getattr(ghostwriter, writer)(*func, except_=except_ or (), style=style))
146
[end of hypothesis-python/src/hypothesis/extra/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/hypothesis-python/src/hypothesis/extra/cli.py b/hypothesis-python/src/hypothesis/extra/cli.py
--- a/hypothesis-python/src/hypothesis/extra/cli.py
+++ b/hypothesis-python/src/hypothesis/extra/cli.py
@@ -32,7 +32,14 @@
import sys
from difflib import get_close_matches
-from hypothesis.extra import ghostwriter
+MESSAGE = """
+The Hypothesis command-line interface requires the `{}` package,
+which you do not have installed. Run:
+
+ python -m pip install --upgrade hypothesis[cli]
+
+and try again.
+"""
try:
import click
@@ -40,16 +47,7 @@
def main():
"""If `click` is not installed, tell the user to install it then exit."""
- sys.stderr.write(
- """
-The Hypothesis command-line interface requires the `click` package,
-which you do not have installed. Run:
-
- python -m pip install --upgrade hypothesis[cli]
-
-and try again.
-"""
- )
+ sys.stderr.write(MESSAGE.format("click"))
sys.exit(1)
@@ -142,4 +140,10 @@
elif writer == "equivalent" and len(func) == 1:
writer = "fuzz"
+ try:
+ from hypothesis.extra import ghostwriter
+ except ImportError:
+ sys.stderr.write(MESSAGE.format("black"))
+ sys.exit(1)
+
print(getattr(ghostwriter, writer)(*func, except_=except_ or (), style=style))
|
{"golden_diff": "diff --git a/hypothesis-python/src/hypothesis/extra/cli.py b/hypothesis-python/src/hypothesis/extra/cli.py\n--- a/hypothesis-python/src/hypothesis/extra/cli.py\n+++ b/hypothesis-python/src/hypothesis/extra/cli.py\n@@ -32,7 +32,14 @@\n import sys\n from difflib import get_close_matches\n \n-from hypothesis.extra import ghostwriter\n+MESSAGE = \"\"\"\n+The Hypothesis command-line interface requires the `{}` package,\n+which you do not have installed. Run:\n+\n+ python -m pip install --upgrade hypothesis[cli]\n+\n+and try again.\n+\"\"\"\n \n try:\n import click\n@@ -40,16 +47,7 @@\n \n def main():\n \"\"\"If `click` is not installed, tell the user to install it then exit.\"\"\"\n- sys.stderr.write(\n- \"\"\"\n-The Hypothesis command-line interface requires the `click` package,\n-which you do not have installed. Run:\n-\n- python -m pip install --upgrade hypothesis[cli]\n-\n-and try again.\n-\"\"\"\n- )\n+ sys.stderr.write(MESSAGE.format(\"click\"))\n sys.exit(1)\n \n \n@@ -142,4 +140,10 @@\n elif writer == \"equivalent\" and len(func) == 1:\n writer = \"fuzz\"\n \n+ try:\n+ from hypothesis.extra import ghostwriter\n+ except ImportError:\n+ sys.stderr.write(MESSAGE.format(\"black\"))\n+ sys.exit(1)\n+\n print(getattr(ghostwriter, writer)(*func, except_=except_ or (), style=style))\n", "issue": "`hypothesis --version` does not work\nWhen I use `pip install hypothesis` in a clear env and run `hypothesis --version`, this happens:\r\n\r\n```python\r\n\u00bb hypothesis --version\r\nTraceback (most recent call last):\r\n File \"/Users/sobolev/Documents/github/returns/.venv/bin/hypothesis\", line 6, in <module>\r\n from hypothesis.extra.cli import main\r\n File \"/Users/sobolev/Documents/github/returns/.venv/lib/python3.7/site-packages/hypothesis/extra/cli.py\", line 35, in <module>\r\n from hypothesis.extra import ghostwriter\r\n File \"/Users/sobolev/Documents/github/returns/.venv/lib/python3.7/site-packages/hypothesis/extra/ghostwriter.py\", line 57, in <module>\r\n import black\r\nModuleNotFoundError: No module named 'black'\r\n```\r\n\r\nLooks like `black` is treated as a required dependency right now.\r\n\r\nAfter installing `black`:\r\n\r\n```\r\n\u00bb hypothesis --version\r\nhypothesis, version 5.33.0\r\n```\n", "before_files": [{"content": "# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis/\n#\n# Most of this work is copyright (C) 2013-2020 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at https://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\n\"\"\"\n.. 
_hypothesis-cli:\n\n----------------\nhypothesis[cli]\n----------------\n\nThis module provides Hypothesis' command-line interface, for e.g.\n:doc:`'ghostwriting' tests <ghostwriter>` via the terminal.\nIt requires the :pypi:`click` package.\n\nRun :command:`hypothesis --help` in your terminal for details.\n\"\"\"\n\nimport builtins\nimport importlib\nimport sys\nfrom difflib import get_close_matches\n\nfrom hypothesis.extra import ghostwriter\n\ntry:\n import click\nexcept ImportError:\n\n def main():\n \"\"\"If `click` is not installed, tell the user to install it then exit.\"\"\"\n sys.stderr.write(\n \"\"\"\nThe Hypothesis command-line interface requires the `click` package,\nwhich you do not have installed. Run:\n\n python -m pip install --upgrade hypothesis[cli]\n\nand try again.\n\"\"\"\n )\n sys.exit(1)\n\n\nelse:\n # Ensure that Python scripts in the current working directory are importable,\n # on the principle that Ghostwriter should 'just work' for novice users. Note\n # that we append rather than prepend to the module search path, so this will\n # never shadow the stdlib or installed packages.\n sys.path.append(\".\")\n\n @click.group(context_settings={\"help_option_names\": (\"-h\", \"--help\")})\n @click.version_option()\n def main():\n pass\n\n def obj_name(s: str) -> object:\n \"\"\"This \"type\" imports whatever object is named by a dotted string.\"\"\"\n s = s.strip()\n try:\n return importlib.import_module(s)\n except ImportError:\n pass\n if \".\" not in s:\n modulename, module, funcname = \"builtins\", builtins, s\n else:\n modulename, funcname = s.rsplit(\".\", 1)\n try:\n module = importlib.import_module(modulename)\n except ImportError:\n raise click.UsageError(\n f\"Failed to import the {modulename} module for introspection. \"\n \"Check spelling and your Python import path, or use the Python API?\"\n )\n try:\n return getattr(module, funcname)\n except AttributeError:\n public_names = [name for name in vars(module) if not name.startswith(\"_\")]\n matches = get_close_matches(funcname, public_names)\n raise click.UsageError(\n f\"Found the {modulename!r} module, but it doesn't have a \"\n f\"{funcname!r} attribute.\"\n + (f\" Closest matches: {matches!r}\" if matches else \"\")\n )\n\n @main.command() # type: ignore # Click adds the .command attribute\n @click.argument(\"func\", type=obj_name, required=True, nargs=-1)\n @click.option(\"--idempotent\", \"writer\", flag_value=\"idempotent\")\n @click.option(\"--binary-op\", \"writer\", flag_value=\"binary_operation\")\n @click.option(\"--equivalent\", \"writer\", flag_value=\"equivalent\")\n @click.option(\"--roundtrip\", \"writer\", flag_value=\"roundtrip\")\n # Note: we deliberately omit a --ufunc flag, because the magic()\n # detection of ufuncs is both precise and complete.\n @click.option(\n \"--style\",\n type=click.Choice([\"pytest\", \"unittest\"]),\n default=\"pytest\",\n help=\"pytest-style function, or unittest-style method?\",\n )\n @click.option(\n \"-e\",\n \"--except\",\n \"except_\",\n type=obj_name,\n multiple=True,\n help=\"dotted name of exception(s) to ignore\",\n )\n def write(func, writer, except_, style): # noqa: D301 # \\b disables autowrap\n \"\"\"`hypothesis write` writes property-based tests for you!\n\n Type annotations are helpful but not required for our advanced introspection\n and templating logic. 
Try running the examples below to see how it works:\n\n \\b\n hypothesis write gzip\n hypothesis write re.compile --except re.error\n hypothesis write --style=unittest --idempotent sorted\n hypothesis write --binary-op operator.add\n hypothesis write --equivalent ast.literal_eval eval\n hypothesis write --roundtrip json.dumps json.loads\n \"\"\"\n # NOTE: if you want to call this function from Python, look instead at the\n # ``hypothesis.extra.ghostwriter`` module. Click-decorated functions have\n # a different calling convention, and raise SystemExit instead of returning.\n if writer is None:\n writer = \"magic\"\n elif writer == \"idempotent\" and len(func) > 1:\n raise click.UsageError(\"Test functions for idempotence one at a time.\")\n elif writer == \"roundtrip\" and len(func) == 1:\n writer = \"idempotent\"\n elif writer == \"equivalent\" and len(func) == 1:\n writer = \"fuzz\"\n\n print(getattr(ghostwriter, writer)(*func, except_=except_ or (), style=style))\n", "path": "hypothesis-python/src/hypothesis/extra/cli.py"}]}
| 2,377 | 361 |
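The hypothesis fix above works by deferring the `ghostwriter` import (and with it the optional `black` dependency) into the one subcommand that needs it, so `hypothesis --version` no longer pulls it in at module load time. A generic sketch of that pattern, assuming nothing about Hypothesis' internals beyond what the diff shows — the tool name, message text, and command names below are placeholders:

```python
# Generic "defer the optional import into the command that needs it" pattern.
import sys

MISSING_DEP_MESSAGE = """
This command requires the `{}` package, which is not installed. Run:

    python -m pip install --upgrade <your-package>[cli]

and try again.
"""


def version_command():
    # Safe even when optional extras are absent: nothing heavy is imported.
    print("mytool, version 1.0.0")


def write_command():
    try:
        import black  # heavyweight optional dependency, imported lazily
    except ImportError:
        sys.stderr.write(MISSING_DEP_MESSAGE.format("black"))
        sys.exit(1)
    print("black is available; code generation could proceed")


if __name__ == "__main__":
    version_command() if "--version" in sys.argv else write_command()
```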
gh_patches_debug_1059
|
rasdani/github-patches
|
git_diff
|
internetarchive__openlibrary-5645
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Image uploader does not recognise uploaded file
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
As of today (8-09-2021) the image uploader does not recognise that an image has been selected and uploaded. Instead, it displays "Please provide an image URL" after hitting submit.
### Evidence / Screenshot (if possible)
### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to ...any edition
2. Do ...upload an image as a cover and submit.
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: "Please provide an image URL"
* Expected: Image should be added as cover.
### Details
- **Logged in (Y/N)?** y
- **Browser type/version?** Chrome Version 92.0.4515.159 (Official Build) (x86_64)
- **Operating system?** MacOS
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
</issue>
<code>
[start of openlibrary/plugins/upstream/covers.py]
1 """Handle book cover/author photo upload.
2 """
3 from logging import getLogger
4
5 import requests
6 import six
7 import web
8 from six import BytesIO
9
10 from infogami.utils import delegate
11 from infogami.utils.view import safeint
12 from openlibrary import accounts
13 from openlibrary.plugins.upstream.models import Image
14 from openlibrary.plugins.upstream.utils import get_coverstore_url, render_template
15
16 logger = getLogger("openlibrary.plugins.upstream.covers")
17 def setup():
18 pass
19
20 class add_cover(delegate.page):
21 path = "(/books/OL\d+M)/add-cover"
22 cover_category = "b"
23
24 def GET(self, key):
25 book = web.ctx.site.get(key)
26 return render_template('covers/add', book)
27
28 def POST(self, key):
29 book = web.ctx.site.get(key)
30 if not book:
31 raise web.notfound("")
32
33 i = web.input(file={}, url="")
34
35 # remove references to field storage objects
36 web.ctx.pop("_fieldstorage", None)
37
38 data = self.upload(key, i)
39 coverid = data.get('id')
40
41 if coverid:
42 self.save(book, coverid, url=i.url)
43 cover = Image(web.ctx.site, "b", coverid)
44 return render_template("covers/saved", cover)
45 else:
46 return render_template("covers/add", book, {'url': i.url}, data)
47
48 def upload(self, key, i):
49 """Uploads a cover to coverstore and returns the response."""
50 olid = key.split("/")[-1]
51
52 if i.file is not None and hasattr(i.file, 'value'):
53 data = i.file.value
54 else:
55 data = None
56
57 if i.url and i.url.strip() == "http://":
58 i.url = ""
59
60 user = accounts.get_current_user()
61 params = {
62 "author": user and user.key,
63 "source_url": i.url,
64 "olid": olid,
65 "ip": web.ctx.ip
66 }
67
68 upload_url = '%s/%s/upload2' % (
69 get_coverstore_url(), self.cover_category)
70
71 if upload_url.startswith("//"):
72 upload_url = "http:" + upload_url
73
74 try:
75 files = {'data': BytesIO(data)}
76 response = requests.post(upload_url, data=params, files=files)
77 return web.storage(response.json())
78 except requests.HTTPError as e:
79 logger.exception("Covers upload failed")
80 return web.storage({'error': str(e)})
81
82 def save(self, book, coverid, url=None):
83 book.covers = [coverid] + [cover.id for cover in book.get_covers()]
84 book._save("Added new cover", action="add-cover", data={"url": url})
85
86 class add_work_cover(add_cover):
87 path = "(/works/OL\d+W)/add-cover"
88 cover_category = "w"
89
90 def upload(self, key, i):
91 if "coverid" in i and safeint(i.coverid):
92 return web.storage(id=int(i.coverid))
93 else:
94 return add_cover.upload(self, key, i)
95
96 class add_photo(add_cover):
97 path = "(/authors/OL\d+A)/add-photo"
98 cover_category = "a"
99
100 def save(self, author, photoid, url=None):
101 author.photos = [photoid] + [photo.id for photo in author.get_photos()]
102 author._save("Added new photo", action="add-photo", data={"url": url})
103
104 class manage_covers(delegate.page):
105 path = "(/books/OL\d+M)/manage-covers"
106 def GET(self, key):
107 book = web.ctx.site.get(key)
108 if not book:
109 raise web.notfound()
110 return render_template("covers/manage", key, self.get_images(book))
111
112 def get_images(self, book):
113 return book.get_covers()
114
115 def get_image(self, book):
116 return book.get_cover()
117
118 def save_images(self, book, covers):
119 book.covers = covers
120 book._save('Update covers')
121
122 def POST(self, key):
123 book = web.ctx.site.get(key)
124 if not book:
125 raise web.notfound()
126
127 images = web.input(image=[]).image
128 if '-' in images:
129 images = [int(id) for id in images[:images.index('-')]]
130 self.save_images(book, images)
131 return render_template("covers/saved", self.get_image(book), showinfo=False)
132 else:
133 # ERROR
134 pass
135
136 class manage_work_covers(manage_covers):
137 path = "(/works/OL\d+W)/manage-covers"
138
139
140 class manage_photos(manage_covers):
141 path = "(/authors/OL\d+A)/manage-photos"
142
143 def get_images(self, author):
144 return author.get_photos()
145
146 def get_image(self, author):
147 return author.get_photo()
148
149 def save_images(self, author, photos):
150 author.photos = photos
151 author._save('Update photos')
152
[end of openlibrary/plugins/upstream/covers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openlibrary/plugins/upstream/covers.py b/openlibrary/plugins/upstream/covers.py
--- a/openlibrary/plugins/upstream/covers.py
+++ b/openlibrary/plugins/upstream/covers.py
@@ -54,7 +54,7 @@
else:
data = None
- if i.url and i.url.strip() == "http://":
+ if i.url and i.url.strip() == "https://":
i.url = ""
user = accounts.get_current_user()
|
{"golden_diff": "diff --git a/openlibrary/plugins/upstream/covers.py b/openlibrary/plugins/upstream/covers.py\n--- a/openlibrary/plugins/upstream/covers.py\n+++ b/openlibrary/plugins/upstream/covers.py\n@@ -54,7 +54,7 @@\n else:\n data = None\n \n- if i.url and i.url.strip() == \"http://\":\n+ if i.url and i.url.strip() == \"https://\":\n i.url = \"\"\n \n user = accounts.get_current_user()\n", "issue": "Image uploader does not recognise uploaded file\n<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->\r\nAs of today (8-09-2021) the image uploader does not recognise that an image has been selected and uploaded. Instead, it displays \"Please provide an image URL\" after hitting submit.\r\n\r\n### Evidence / Screenshot (if possible)\r\n\r\n### Relevant url?\r\n<!-- `https://openlibrary.org/...` -->\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to ...any edition\r\n2. Do ...upload an image as a cover and submit.\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: \"Please provide an image URL\"\r\n* Expected: Image should be added as cover.\r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?** y\r\n- **Browser type/version?** Chrome Version 92.0.4515.159 (Official Build) (x86_64)\r\n- **Operating system?** MacOS\r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. 
-->\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n\n", "before_files": [{"content": "\"\"\"Handle book cover/author photo upload.\n\"\"\"\nfrom logging import getLogger\n\nimport requests\nimport six\nimport web\nfrom six import BytesIO\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import safeint\nfrom openlibrary import accounts\nfrom openlibrary.plugins.upstream.models import Image\nfrom openlibrary.plugins.upstream.utils import get_coverstore_url, render_template\n\nlogger = getLogger(\"openlibrary.plugins.upstream.covers\")\ndef setup():\n pass\n\nclass add_cover(delegate.page):\n path = \"(/books/OL\\d+M)/add-cover\"\n cover_category = \"b\"\n\n def GET(self, key):\n book = web.ctx.site.get(key)\n return render_template('covers/add', book)\n\n def POST(self, key):\n book = web.ctx.site.get(key)\n if not book:\n raise web.notfound(\"\")\n\n i = web.input(file={}, url=\"\")\n\n # remove references to field storage objects\n web.ctx.pop(\"_fieldstorage\", None)\n\n data = self.upload(key, i)\n coverid = data.get('id')\n\n if coverid:\n self.save(book, coverid, url=i.url)\n cover = Image(web.ctx.site, \"b\", coverid)\n return render_template(\"covers/saved\", cover)\n else:\n return render_template(\"covers/add\", book, {'url': i.url}, data)\n\n def upload(self, key, i):\n \"\"\"Uploads a cover to coverstore and returns the response.\"\"\"\n olid = key.split(\"/\")[-1]\n\n if i.file is not None and hasattr(i.file, 'value'):\n data = i.file.value\n else:\n data = None\n\n if i.url and i.url.strip() == \"http://\":\n i.url = \"\"\n\n user = accounts.get_current_user()\n params = {\n \"author\": user and user.key,\n \"source_url\": i.url,\n \"olid\": olid,\n \"ip\": web.ctx.ip\n }\n\n upload_url = '%s/%s/upload2' % (\n get_coverstore_url(), self.cover_category)\n\n if upload_url.startswith(\"//\"):\n upload_url = \"http:\" + upload_url\n\n try:\n files = {'data': BytesIO(data)}\n response = requests.post(upload_url, data=params, files=files)\n return web.storage(response.json())\n except requests.HTTPError as e:\n logger.exception(\"Covers upload failed\")\n return web.storage({'error': str(e)})\n\n def save(self, book, coverid, url=None):\n book.covers = [coverid] + [cover.id for cover in book.get_covers()]\n book._save(\"Added new cover\", action=\"add-cover\", data={\"url\": url})\n\nclass add_work_cover(add_cover):\n path = \"(/works/OL\\d+W)/add-cover\"\n cover_category = \"w\"\n\n def upload(self, key, i):\n if \"coverid\" in i and safeint(i.coverid):\n return web.storage(id=int(i.coverid))\n else:\n return add_cover.upload(self, key, i)\n\nclass add_photo(add_cover):\n path = \"(/authors/OL\\d+A)/add-photo\"\n cover_category = \"a\"\n\n def save(self, author, photoid, url=None):\n author.photos = [photoid] + [photo.id for photo in author.get_photos()]\n author._save(\"Added new photo\", action=\"add-photo\", data={\"url\": url})\n\nclass manage_covers(delegate.page):\n path = \"(/books/OL\\d+M)/manage-covers\"\n def GET(self, key):\n book = web.ctx.site.get(key)\n if not book:\n raise web.notfound()\n return render_template(\"covers/manage\", key, self.get_images(book))\n\n def get_images(self, book):\n return book.get_covers()\n\n def get_image(self, book):\n return book.get_cover()\n\n def save_images(self, book, covers):\n book.covers = covers\n book._save('Update covers')\n\n def POST(self, key):\n book = web.ctx.site.get(key)\n if not book:\n raise web.notfound()\n\n images = web.input(image=[]).image\n if '-' in images:\n images = 
[int(id) for id in images[:images.index('-')]]\n self.save_images(book, images)\n return render_template(\"covers/saved\", self.get_image(book), showinfo=False)\n else:\n # ERROR\n pass\n\nclass manage_work_covers(manage_covers):\n path = \"(/works/OL\\d+W)/manage-covers\"\n\n\nclass manage_photos(manage_covers):\n path = \"(/authors/OL\\d+A)/manage-photos\"\n\n def get_images(self, author):\n return author.get_photos()\n\n def get_image(self, author):\n return author.get_photo()\n\n def save_images(self, author, photos):\n author.photos = photos\n author._save('Update photos')\n", "path": "openlibrary/plugins/upstream/covers.py"}]}
| 2,294 | 108 |
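The Open Library fix above updates a placeholder check that had fallen out of sync with the form's default value (`http://` vs `https://`). A small sketch of a slightly more defensive variant that treats either scheme-only placeholder as "no URL supplied"; this is an illustration of the idea only, not Open Library's actual helper:

```python
# Sketch: normalize the cover-URL field so scheme-only placeholders are
# treated as empty, regardless of which default the form template uses.
def normalize_cover_url(url):
    if url is None:
        return ""
    cleaned = url.strip()
    if cleaned in ("http://", "https://"):
        return ""
    return cleaned


assert normalize_cover_url("https://") == ""
assert normalize_cover_url("http://") == ""
assert normalize_cover_url(" https://example.org/cover.jpg ") == "https://example.org/cover.jpg"
```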
gh_patches_debug_36368
|
rasdani/github-patches
|
git_diff
|
secdev__scapy-4017
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scapy.layers.tls.crypto.hkdf.TLS13_HKDF crashes when cryptography module is missing
### Brief description
scapy.layers.tls.crypto.hkdf.TLS13_HKDF crashes in multiple environments
### Scapy version
2.5.0
### Python version
3.11.2
### Operating system
macOS Ventura 13.3.1 (with M1 chip)
### Additional environment information
_No response_
### How to reproduce
```
$ python --version
Python 3.11.2
$ pip install scapy
$ pip show scapy
Name: scapy
Version: 2.5.0
...
$ python
Python 3.11.2 (main, Feb 16 2023, 02:55:59) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from scapy.layers.tls.crypto.hkdf import TLS13_HKDF
>>> TLS13_HKDF("sha256")
```
We can also reproduce from the default python docker image:
```
$ docker run -it --entrypoint bash python:latest
# pip install scapy
# python
>>> from scapy.layers.tls.crypto.hkdf import TLS13_HKDF
>>> TLS13_HKDF("sha256")
```
### Actual result
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.11/site-packages/scapy/layers/tls/crypto/hkdf.py", line 23, in __init__
self.hash = _get_hash(hash_name)
^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
### Expected result
<scapy.layers.tls.crypto.hkdf.TLS13_HKDF object at 0x...>
### Related resources
_No response_
</issue>
<code>
[start of scapy/layers/tls/crypto/hkdf.py]
1 # SPDX-License-Identifier: GPL-2.0-only
2 # This file is part of Scapy
3 # See https://scapy.net/ for more information
4 # Copyright (C) 2017 Maxence Tury
5
6 """
7 Stateless HKDF for TLS 1.3.
8 """
9
10 import struct
11
12 from scapy.config import conf
13 from scapy.layers.tls.crypto.pkcs1 import _get_hash
14
15 if conf.crypto_valid:
16 from cryptography.hazmat.backends import default_backend
17 from cryptography.hazmat.primitives.kdf.hkdf import HKDF, HKDFExpand
18 from cryptography.hazmat.primitives.hashes import Hash
19 from cryptography.hazmat.primitives.hmac import HMAC
20
21
22 class TLS13_HKDF(object):
23 def __init__(self, hash_name="sha256"):
24 self.hash = _get_hash(hash_name)
25
26 def extract(self, salt, ikm):
27 h = self.hash
28 hkdf = HKDF(h, h.digest_size, salt, None, default_backend())
29 if ikm is None:
30 ikm = b"\x00" * h.digest_size
31 return hkdf._extract(ikm)
32
33 def expand(self, prk, info, L):
34 h = self.hash
35 hkdf = HKDFExpand(h, L, info, default_backend())
36 return hkdf.derive(prk)
37
38 def expand_label(self, secret, label, hash_value, length):
39 hkdf_label = struct.pack("!H", length)
40 hkdf_label += struct.pack("B", 6 + len(label))
41 hkdf_label += b"tls13 "
42 hkdf_label += label
43 hkdf_label += struct.pack("B", len(hash_value))
44 hkdf_label += hash_value
45 return self.expand(secret, hkdf_label, length)
46
47 def derive_secret(self, secret, label, messages):
48 h = Hash(self.hash, backend=default_backend())
49 h.update(messages)
50 hash_messages = h.finalize()
51 hash_len = self.hash.digest_size
52 return self.expand_label(secret, label, hash_messages, hash_len)
53
54 def compute_verify_data(self, basekey, handshake_context):
55 hash_len = self.hash.digest_size
56 finished_key = self.expand_label(basekey, b"finished", b"", hash_len)
57
58 h = Hash(self.hash, backend=default_backend())
59 h.update(handshake_context)
60 hash_value = h.finalize()
61
62 hm = HMAC(finished_key, self.hash, default_backend())
63 hm.update(hash_value)
64 return hm.finalize()
65
[end of scapy/layers/tls/crypto/hkdf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scapy/layers/tls/crypto/hkdf.py b/scapy/layers/tls/crypto/hkdf.py
--- a/scapy/layers/tls/crypto/hkdf.py
+++ b/scapy/layers/tls/crypto/hkdf.py
@@ -9,7 +9,7 @@
import struct
-from scapy.config import conf
+from scapy.config import conf, crypto_validator
from scapy.layers.tls.crypto.pkcs1 import _get_hash
if conf.crypto_valid:
@@ -20,9 +20,11 @@
class TLS13_HKDF(object):
+ @crypto_validator
def __init__(self, hash_name="sha256"):
self.hash = _get_hash(hash_name)
+ @crypto_validator
def extract(self, salt, ikm):
h = self.hash
hkdf = HKDF(h, h.digest_size, salt, None, default_backend())
@@ -30,11 +32,13 @@
ikm = b"\x00" * h.digest_size
return hkdf._extract(ikm)
+ @crypto_validator
def expand(self, prk, info, L):
h = self.hash
hkdf = HKDFExpand(h, L, info, default_backend())
return hkdf.derive(prk)
+ @crypto_validator
def expand_label(self, secret, label, hash_value, length):
hkdf_label = struct.pack("!H", length)
hkdf_label += struct.pack("B", 6 + len(label))
@@ -44,6 +48,7 @@
hkdf_label += hash_value
return self.expand(secret, hkdf_label, length)
+ @crypto_validator
def derive_secret(self, secret, label, messages):
h = Hash(self.hash, backend=default_backend())
h.update(messages)
@@ -51,6 +56,7 @@
hash_len = self.hash.digest_size
return self.expand_label(secret, label, hash_messages, hash_len)
+ @crypto_validator
def compute_verify_data(self, basekey, handshake_context):
hash_len = self.hash.digest_size
finished_key = self.expand_label(basekey, b"finished", b"", hash_len)
|
{"golden_diff": "diff --git a/scapy/layers/tls/crypto/hkdf.py b/scapy/layers/tls/crypto/hkdf.py\n--- a/scapy/layers/tls/crypto/hkdf.py\n+++ b/scapy/layers/tls/crypto/hkdf.py\n@@ -9,7 +9,7 @@\n \n import struct\n \n-from scapy.config import conf\n+from scapy.config import conf, crypto_validator\n from scapy.layers.tls.crypto.pkcs1 import _get_hash\n \n if conf.crypto_valid:\n@@ -20,9 +20,11 @@\n \n \n class TLS13_HKDF(object):\n+ @crypto_validator\n def __init__(self, hash_name=\"sha256\"):\n self.hash = _get_hash(hash_name)\n \n+ @crypto_validator\n def extract(self, salt, ikm):\n h = self.hash\n hkdf = HKDF(h, h.digest_size, salt, None, default_backend())\n@@ -30,11 +32,13 @@\n ikm = b\"\\x00\" * h.digest_size\n return hkdf._extract(ikm)\n \n+ @crypto_validator\n def expand(self, prk, info, L):\n h = self.hash\n hkdf = HKDFExpand(h, L, info, default_backend())\n return hkdf.derive(prk)\n \n+ @crypto_validator\n def expand_label(self, secret, label, hash_value, length):\n hkdf_label = struct.pack(\"!H\", length)\n hkdf_label += struct.pack(\"B\", 6 + len(label))\n@@ -44,6 +48,7 @@\n hkdf_label += hash_value\n return self.expand(secret, hkdf_label, length)\n \n+ @crypto_validator\n def derive_secret(self, secret, label, messages):\n h = Hash(self.hash, backend=default_backend())\n h.update(messages)\n@@ -51,6 +56,7 @@\n hash_len = self.hash.digest_size\n return self.expand_label(secret, label, hash_messages, hash_len)\n \n+ @crypto_validator\n def compute_verify_data(self, basekey, handshake_context):\n hash_len = self.hash.digest_size\n finished_key = self.expand_label(basekey, b\"finished\", b\"\", hash_len)\n", "issue": "scapy.layers.tls.crypto.hkdf.TLS13_HKDF crashes when cryptography module is missing\n### Brief description\n\nscapy.layers.tls.crypto.hkdf.TLS13_HKDF crashes in multiple environments\n\n### Scapy version\n\n2.5.0\n\n### Python version\n\n3.11.2\n\n### Operating system\n\nmacOS Ventura 13.3.1 (with M1 chip)\n\n### Additional environment information\n\n_No response_\n\n### How to reproduce\n\n```\r\n$ python --version\r\nPython 3.11.2\r\n$ pip install scapy\r\n$ pip show scapy\r\nName: scapy\r\nVersion: 2.5.0\r\n...\r\n$ python\r\nPython 3.11.2 (main, Feb 16 2023, 02:55:59) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from scapy.layers.tls.crypto.hkdf import TLS13_HKDF\r\n>>> TLS13_HKDF(\"sha256\")\r\n```\r\n\r\nWe can also reproduce from the default python docker image:\r\n```\r\n$ docker run -it --entrypoint bash python:latest\r\n# pip install scapy\r\n# python\r\n>>> from scapy.layers.tls.crypto.hkdf import TLS13_HKDF\r\n>>> TLS13_HKDF(\"sha256\")\r\n```\n\n### Actual result\n\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/opt/homebrew/lib/python3.11/site-packages/scapy/layers/tls/crypto/hkdf.py\", line 23, in __init__\r\n self.hash = _get_hash(hash_name)\r\n ^^^^^^^^^^^^^^^^^^^^\r\nTypeError: 'NoneType' object is not callable\n\n### Expected result\n\n<scapy.layers.tls.crypto.hkdf.TLS13_HKDF object at 0x...>\n\n### Related resources\n\n_No response_\n", "before_files": [{"content": "# SPDX-License-Identifier: GPL-2.0-only\n# This file is part of Scapy\n# See https://scapy.net/ for more information\n# Copyright (C) 2017 Maxence Tury\n\n\"\"\"\nStateless HKDF for TLS 1.3.\n\"\"\"\n\nimport struct\n\nfrom scapy.config import conf\nfrom scapy.layers.tls.crypto.pkcs1 import _get_hash\n\nif conf.crypto_valid:\n from 
cryptography.hazmat.backends import default_backend\n from cryptography.hazmat.primitives.kdf.hkdf import HKDF, HKDFExpand\n from cryptography.hazmat.primitives.hashes import Hash\n from cryptography.hazmat.primitives.hmac import HMAC\n\n\nclass TLS13_HKDF(object):\n def __init__(self, hash_name=\"sha256\"):\n self.hash = _get_hash(hash_name)\n\n def extract(self, salt, ikm):\n h = self.hash\n hkdf = HKDF(h, h.digest_size, salt, None, default_backend())\n if ikm is None:\n ikm = b\"\\x00\" * h.digest_size\n return hkdf._extract(ikm)\n\n def expand(self, prk, info, L):\n h = self.hash\n hkdf = HKDFExpand(h, L, info, default_backend())\n return hkdf.derive(prk)\n\n def expand_label(self, secret, label, hash_value, length):\n hkdf_label = struct.pack(\"!H\", length)\n hkdf_label += struct.pack(\"B\", 6 + len(label))\n hkdf_label += b\"tls13 \"\n hkdf_label += label\n hkdf_label += struct.pack(\"B\", len(hash_value))\n hkdf_label += hash_value\n return self.expand(secret, hkdf_label, length)\n\n def derive_secret(self, secret, label, messages):\n h = Hash(self.hash, backend=default_backend())\n h.update(messages)\n hash_messages = h.finalize()\n hash_len = self.hash.digest_size\n return self.expand_label(secret, label, hash_messages, hash_len)\n\n def compute_verify_data(self, basekey, handshake_context):\n hash_len = self.hash.digest_size\n finished_key = self.expand_label(basekey, b\"finished\", b\"\", hash_len)\n\n h = Hash(self.hash, backend=default_backend())\n h.update(handshake_context)\n hash_value = h.finalize()\n\n hm = HMAC(finished_key, self.hash, default_backend())\n hm.update(hash_value)\n return hm.finalize()\n", "path": "scapy/layers/tls/crypto/hkdf.py"}]}
| 1,678 | 494 |
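The Scapy fix above guards each `TLS13_HKDF` method with `crypto_validator`, so a missing `cryptography` package surfaces as a clear error instead of `TypeError: 'NoneType' object is not callable`. Below is a generic stand-in for that decorator, written from scratch as a sketch — the names `requires_crypto` and `Sketch_TLS13_HKDF` are invented here and are not Scapy APIs:

```python
# Sketch: fail loudly at call time when an optional crypto dependency is missing.
import functools

try:
    import cryptography  # noqa: F401
    HAS_CRYPTO = True
except ImportError:
    HAS_CRYPTO = False


def requires_crypto(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not HAS_CRYPTO:
            raise ImportError(
                f"{func.__qualname__} requires the 'cryptography' package; "
                "install it with: pip install cryptography"
            )
        return func(*args, **kwargs)
    return wrapper


class Sketch_TLS13_HKDF:
    @requires_crypto
    def __init__(self, hash_name="sha256"):
        self.hash_name = hash_name


# Sketch_TLS13_HKDF("sha256") now either succeeds or raises a clear ImportError.
```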
gh_patches_debug_38118
|
rasdani/github-patches
|
git_diff
|
ethereum__web3.py-171
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support rlp==0.4.7
* Version: 3.7.2
* Python: 2.7
* OS: osx
### What was wrong?
https://github.com/ethereum/pydevp2p requires `rlp==0.4.7` ([here](https://github.com/ethereum/pydevp2p/blob/develop/requirements.txt#L11)), but `web3.py` requires `rlp>=0.4.6,<0.4.7`. This causes headaches when a project requires both packages as well as `rlp` independently.
I've created a repo with updated requirements here: https://github.com/karlfloersch/web3.py, but I haven't verified that everything is working properly.
#### Cute Animal Picture

Thank you!
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 import os
4 import sys
5
6 from setuptools import (
7 setup,
8 find_packages,
9 )
10
11
12 DIR = os.path.dirname(os.path.abspath(__file__))
13
14
15 readme = open(os.path.join(DIR, 'README.md')).read()
16
17 install_requires=[
18 "ethereum-abi-utils>=0.4.0",
19 "ethereum-utils>=0.2.0",
20 "pylru>=1.0.9",
21 "pysha3>=0.3",
22 "requests>=2.12.4",
23 "rlp>=0.4.6,<0.4.7",
24 ]
25
26 if sys.platform == 'win32':
27 install_requires.append('pypiwin32')
28
29 setup(
30 name='web3',
31 version='3.7.2',
32 description="""Web3.py""",
33 long_description=readme,
34 author='Piper Merriam',
35 author_email='[email protected]',
36 url='https://github.com/pipermerriam/web3.py',
37 include_package_data=True,
38 install_requires=install_requires,
39 extras_require={
40 'Tester': ["eth-testrpc>=1.1.0"],
41 'tester': ["eth-testrpc>=1.1.0"],
42 'gevent': [
43 "gevent>=1.1.1,<1.2.0",
44 "geventhttpclient>=1.3.1",
45 ],
46 },
47 py_modules=['web3'],
48 license="MIT",
49 zip_safe=False,
50 keywords='ethereum',
51 packages=find_packages(exclude=["tests", "tests.*"]),
52 classifiers=[
53 'Development Status :: 2 - Pre-Alpha',
54 'Intended Audience :: Developers',
55 'License :: OSI Approved :: MIT License',
56 'Natural Language :: English',
57 'Programming Language :: Python :: 2',
58 'Programming Language :: Python :: 2.7',
59 'Programming Language :: Python :: 3',
60 'Programming Language :: Python :: 3.4',
61 'Programming Language :: Python :: 3.5',
62 ],
63 )
64
[end of setup.py]
[start of web3/providers/manager.py]
1 import uuid
2 import json
3 import collections
4
5 import rlp
6
7 from eth_utils import (
8 force_text,
9 to_normalized_address,
10 is_string,
11 is_dict,
12 encode_hex,
13 decode_hex,
14 keccak,
15 )
16
17 from web3.utils.encoding import (
18 to_decimal,
19 )
20 from web3.utils.transactions import (
21 is_bitcoin_available,
22 Transaction,
23 serialize_transaction,
24 add_signature_to_transaction,
25 )
26 from web3.utils.compat import (
27 spawn,
28 )
29
30
31 class RequestManager(object):
32 def __init__(self, provider):
33 self.pending_requests = {}
34 self.provider = provider
35
36 def setProvider(self, provider):
37 self.provider = provider
38
39 def request_blocking(self, method, params):
40 """
41 Make a synchronous request using the provider
42 """
43 response_raw = self.provider.make_request(method, params)
44
45 if is_string(response_raw):
46 response = json.loads(force_text(response_raw))
47 elif is_dict(response_raw):
48 response = response_raw
49
50 if "error" in response:
51 raise ValueError(response["error"])
52
53 return response['result']
54
55 def request_async(self, method, params):
56 request_id = uuid.uuid4()
57 self.pending_requests[request_id] = spawn(
58 self.request_blocking,
59 method,
60 params,
61 )
62 return request_id
63
64 def receive_blocking(self, request_id, timeout=None):
65 try:
66 request = self.pending_requests.pop(request_id)
67 except KeyError:
68 raise KeyError("Request for id:{0} not found".format(request_id))
69 else:
70 response_raw = request.get(timeout=timeout)
71
72 response = json.loads(response_raw)
73
74 if "error" in response:
75 raise ValueError(response["error"])
76
77 return response['result']
78
79 def receive_async(self, request_id, *args, **kwargs):
80 raise NotImplementedError("Callback pattern not implemented")
81
82
83 class ManagerWrapper(object):
84 def __init__(self, wrapped_manager):
85 self.wrapped_manager = wrapped_manager
86
87 @property
88 def provider(self):
89 return self.wrapped_manager.provider
90
91 @property
92 def pending_requests(self):
93 return self.wrapped_manager.pending_requests
94
95 def setProvider(self, provider):
96 self.wrapped_manager.provider = provider
97
98 def request_blocking(self, *args, **kwargs):
99 return self.wrapped_manager.request_blocking(*args, **kwargs)
100
101 def request_async(self, *args, **kwargs):
102 return self.wrapped_manager.request_async(*args, **kwargs)
103
104 def receive_blocking(self, *args, **kwargs):
105 return self.wrapped_manager.receive_blocking(*args, **kwargs)
106
107 def receive_async(self, *args, **kwargs):
108 return self.wrapped_manager.receive_async(*args, **kwargs)
109
110
111 class BaseSendRawTransactionMixin(ManagerWrapper):
112 _known_transactions = None
113 _known_nonces = None
114
115 def __init__(self, *args, **kwargs):
116 self._known_transactions = collections.defaultdict(set)
117 self._known_nonces = collections.defaultdict(set)
118 super(BaseSendRawTransactionMixin, self).__init__(*args, **kwargs)
119
120 def _get_nonces_and_cleanup(self, addr, chain_nonce):
121 all_txns = {
122 txn_hash: self.request_blocking(
123 'eth_getTransactionByHash',
124 [txn_hash],
125 ) for txn_hash in self._known_transactions[addr]
126 }
127 for txn_hash, txn in all_txns.items():
128 if txn is None:
129 continue
130 txn_nonce = to_decimal(txn['nonce'])
131 if txn_nonce < chain_nonce:
132 self._known_transactions[addr].discard(txn_hash)
133 else:
134 yield txn_nonce
135
136 all_known_nonces = tuple(self._known_nonces[addr])
137 for nonce in all_known_nonces:
138 if nonce < chain_nonce:
139 self._known_nonces[addr].discard(nonce)
140 else:
141 yield nonce
142
143 def get_chain_nonce(self, addr):
144 chain_nonce = to_decimal(self.request_blocking(
145 'eth_getTransactionCount',
146 [addr, 'pending']
147 ))
148 return chain_nonce
149
150 def get_nonce(self, addr):
151 chain_nonce = self.get_chain_nonce(addr)
152 tracked_txn_nonces = tuple(self._get_nonces_and_cleanup(addr, chain_nonce))
153 nonce = max(0, chain_nonce, *tracked_txn_nonces)
154 if nonce == 0 and not tracked_txn_nonces:
155 return -1
156 else:
157 return nonce
158
159 def get_transaction_signature(self, serialized_txn):
160 raise NotImplementedError("Must be implemented by subclasses")
161
162 def sign_and_serialize_transaction(self, transaction):
163 serialized_txn = serialize_transaction(transaction)
164 signature = self.get_transaction_signature(transaction)
165 signed_transaction = add_signature_to_transaction(
166 serialized_txn,
167 signature,
168 )
169 signed_and_serialized_txn = rlp.encode(signed_transaction, Transaction)
170 return signed_and_serialized_txn
171
172 def construct_full_transaction(self, base_transaction):
173 txn_from = base_transaction['from']
174 full_txn = dict(**base_transaction)
175 full_txn.setdefault('nonce', self.get_nonce(txn_from) + 1)
176 full_txn.setdefault('gasPrice', self.request_blocking(
177 'eth_gasPrice', []
178 ))
179 full_txn.setdefault('gas', hex(90000))
180 full_txn.setdefault('value', '0x0')
181 full_txn.setdefault('to', '')
182 full_txn.setdefault('data', '')
183 return full_txn
184
185 TXN_SENDING_METHODS = {
186 'eth_sendTransaction',
187 'eth_sendRawTransaction',
188 'personal_signAndSendTransaction',
189 'personal_sendTransaction',
190 }
191
192 def request_blocking(self, method, params):
193 if method == 'eth_sendTransaction':
194 base_transaction = params[0]
195 # create a fully signed transaction and send through the
196 # `eth_sendRawTransaction` endpoint instead.
197 full_transaction = self.construct_full_transaction(base_transaction)
198 raw_transaction_bytes = self.sign_and_serialize_transaction(
199 full_transaction,
200 )
201 raw_transaction_bytes_as_hex = encode_hex(raw_transaction_bytes)
202 return self.request_blocking(
203 'eth_sendRawTransaction', [raw_transaction_bytes_as_hex],
204 )
205
206 result = super(BaseSendRawTransactionMixin, self).request_blocking(
207 method, params,
208 )
209 if method in self.TXN_SENDING_METHODS:
210 if method == 'eth_sendRawTransaction':
211 txn = rlp.decode(decode_hex(params[0]), Transaction)
212 self._known_transactions[to_normalized_address(txn.sender)].add(result)
213 self._known_nonces[to_normalized_address(txn.sender)].add(txn.nonce)
214 else:
215 txn = params[0]
216 self._known_transactions[to_normalized_address(txn['from'])].add(result)
217 if 'nonce' in txn:
218 self._known_nonces[to_normalized_address(txn['from'])].add(
219 to_decimal(txn['nonce'])
220 )
221 return result
222
223
224 class DelegatedSigningManager(BaseSendRawTransactionMixin):
225 def __init__(self, *args, **kwargs):
226 self.signing_manager = kwargs.pop('signing_manager')
227 super(DelegatedSigningManager, self).__init__(*args, **kwargs)
228
229 def get_chain_nonce(self, addr):
230 signer_nonce = to_decimal(self.signing_manager.request_blocking(
231 'eth_getTransactionCount',
232 [addr, 'pending']
233 ))
234 wrapped_nonce = to_decimal(self.wrapped_manager.request_blocking(
235 'eth_getTransactionCount',
236 [addr, 'pending']
237 ))
238 return max(signer_nonce, wrapped_nonce)
239
240 def get_transaction_signature(self, transaction):
241 serialized_txn = serialize_transaction(transaction)
242 hash_to_sign = self.signing_manager.request_blocking(
243 'web3_sha3', [encode_hex(serialized_txn)],
244 )
245 signature_hex = self.signing_manager.request_blocking(
246 'eth_sign',
247 [
248 transaction['from'],
249 hash_to_sign,
250 ],
251 )
252 signature = decode_hex(signature_hex)
253 return signature
254
255
256 class PrivateKeySigningManager(BaseSendRawTransactionMixin):
257 def __init__(self, *args, **kwargs):
258 if not is_bitcoin_available():
259 raise ImportError(
260 "In order to use the `PrivateKeySigningManager` the "
261 "`bitcoin` and `secp256k1` packages must be installed."
262 )
263 self.keys = kwargs.pop('keys', {})
264 super(PrivateKeySigningManager, self).__init__(*args, **kwargs)
265
266 def register_private_key(self, key):
267 from bitcoin import privtopub
268 address = to_normalized_address(keccak(privtopub(key)[1:])[-20:])
269 self.keys[address] = key
270
271 def sign_and_serialize_transaction(self, transaction):
272 txn_from = to_normalized_address(transaction['from'])
273 if txn_from not in self.keys:
274 raise KeyError("No signing key registered for from address: {0}".format(txn_from))
275 transaction = Transaction(
276 nonce=to_decimal(transaction['nonce']),
277 gasprice=to_decimal(transaction['gasPrice']),
278 startgas=to_decimal(transaction['gas']),
279 to=transaction['to'],
280 value=to_decimal(transaction['value']),
281 data=decode_hex(transaction['data']),
282 )
283 transaction.sign(self.keys[txn_from])
284 assert to_normalized_address(transaction.sender) == txn_from
285 return rlp.encode(transaction, Transaction)
286
[end of web3/providers/manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -20,7 +20,7 @@
"pylru>=1.0.9",
"pysha3>=0.3",
"requests>=2.12.4",
- "rlp>=0.4.6,<0.4.7",
+ "rlp>=0.4.7",
]
if sys.platform == 'win32':
@@ -37,8 +37,7 @@
include_package_data=True,
install_requires=install_requires,
extras_require={
- 'Tester': ["eth-testrpc>=1.1.0"],
- 'tester': ["eth-testrpc>=1.1.0"],
+ 'tester': ["eth-testrpc>=1.2.0"],
'gevent': [
"gevent>=1.1.1,<1.2.0",
"geventhttpclient>=1.3.1",
diff --git a/web3/providers/manager.py b/web3/providers/manager.py
--- a/web3/providers/manager.py
+++ b/web3/providers/manager.py
@@ -1,17 +1,18 @@
-import uuid
-import json
import collections
+import json
+import uuid
+import warnings
import rlp
from eth_utils import (
+ decode_hex,
+ encode_hex,
force_text,
- to_normalized_address,
- is_string,
is_dict,
- encode_hex,
- decode_hex,
+ is_string,
keccak,
+ to_normalized_address,
)
from web3.utils.encoding import (
@@ -82,6 +83,10 @@
class ManagerWrapper(object):
def __init__(self, wrapped_manager):
+ warnings.warn(DeprecationWarning(
+ "ManagerWrapper has been deprecated and will be removed from"
+ "web3.py in subsequen releases."
+ ))
self.wrapped_manager = wrapped_manager
@property
@@ -223,6 +228,10 @@
class DelegatedSigningManager(BaseSendRawTransactionMixin):
def __init__(self, *args, **kwargs):
+ warnings.warn(DeprecationWarning(
+ "DelegatedSigningManager has been deprecated and will be removed from"
+ "web3.py in subsequen releases."
+ ))
self.signing_manager = kwargs.pop('signing_manager')
super(DelegatedSigningManager, self).__init__(*args, **kwargs)
@@ -255,6 +264,10 @@
class PrivateKeySigningManager(BaseSendRawTransactionMixin):
def __init__(self, *args, **kwargs):
+ warnings.warn(DeprecationWarning(
+ "PrivateKeySigningManager has been deprecated and will be removed from"
+ "web3.py in subsequen releases."
+ ))
if not is_bitcoin_available():
raise ImportError(
"In order to use the `PrivateKeySigningManager` the "
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -20,7 +20,7 @@\n \"pylru>=1.0.9\",\n \"pysha3>=0.3\",\n \"requests>=2.12.4\",\n- \"rlp>=0.4.6,<0.4.7\",\n+ \"rlp>=0.4.7\",\n ]\n \n if sys.platform == 'win32':\n@@ -37,8 +37,7 @@\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n- 'Tester': [\"eth-testrpc>=1.1.0\"],\n- 'tester': [\"eth-testrpc>=1.1.0\"],\n+ 'tester': [\"eth-testrpc>=1.2.0\"],\n 'gevent': [\n \"gevent>=1.1.1,<1.2.0\",\n \"geventhttpclient>=1.3.1\",\ndiff --git a/web3/providers/manager.py b/web3/providers/manager.py\n--- a/web3/providers/manager.py\n+++ b/web3/providers/manager.py\n@@ -1,17 +1,18 @@\n-import uuid\n-import json\n import collections\n+import json\n+import uuid\n+import warnings\n \n import rlp\n \n from eth_utils import (\n+ decode_hex,\n+ encode_hex,\n force_text,\n- to_normalized_address,\n- is_string,\n is_dict,\n- encode_hex,\n- decode_hex,\n+ is_string,\n keccak,\n+ to_normalized_address,\n )\n \n from web3.utils.encoding import (\n@@ -82,6 +83,10 @@\n \n class ManagerWrapper(object):\n def __init__(self, wrapped_manager):\n+ warnings.warn(DeprecationWarning(\n+ \"ManagerWrapper has been deprecated and will be removed from\"\n+ \"web3.py in subsequen releases.\"\n+ ))\n self.wrapped_manager = wrapped_manager\n \n @property\n@@ -223,6 +228,10 @@\n \n class DelegatedSigningManager(BaseSendRawTransactionMixin):\n def __init__(self, *args, **kwargs):\n+ warnings.warn(DeprecationWarning(\n+ \"DelegatedSigningManager has been deprecated and will be removed from\"\n+ \"web3.py in subsequen releases.\"\n+ ))\n self.signing_manager = kwargs.pop('signing_manager')\n super(DelegatedSigningManager, self).__init__(*args, **kwargs)\n \n@@ -255,6 +264,10 @@\n \n class PrivateKeySigningManager(BaseSendRawTransactionMixin):\n def __init__(self, *args, **kwargs):\n+ warnings.warn(DeprecationWarning(\n+ \"PrivateKeySigningManager has been deprecated and will be removed from\"\n+ \"web3.py in subsequen releases.\"\n+ ))\n if not is_bitcoin_available():\n raise ImportError(\n \"In order to use the `PrivateKeySigningManager` the \"\n", "issue": "Support rlp==0.4.7\n* Version: 3.7.2\r\n* Python: 2.7\r\n* OS: osx\r\n\r\n### What was wrong?\r\nhttps://github.com/ethereum/pydevp2p requires `rlp==0.4.7` ([here](https://github.com/ethereum/pydevp2p/blob/develop/requirements.txt#L11)), but `web3.py` requires `rlp>=0.4.6,<0.4.7`. 
This causes headaches when a project requires both packages as well as `rlp` independently.\r\n\r\nI've created a repo with updated requirements here: https://github.com/karlfloersch/web3.py, but I haven't verified that everything is working properly.\r\n\r\n#### Cute Animal Picture\r\n\r\n\r\nThank you!\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport os\nimport sys\n\nfrom setuptools import (\n setup,\n find_packages,\n)\n\n\nDIR = os.path.dirname(os.path.abspath(__file__))\n\n\nreadme = open(os.path.join(DIR, 'README.md')).read()\n\ninstall_requires=[\n \"ethereum-abi-utils>=0.4.0\",\n \"ethereum-utils>=0.2.0\",\n \"pylru>=1.0.9\",\n \"pysha3>=0.3\",\n \"requests>=2.12.4\",\n \"rlp>=0.4.6,<0.4.7\",\n]\n\nif sys.platform == 'win32':\n install_requires.append('pypiwin32')\n\nsetup(\n name='web3',\n version='3.7.2',\n description=\"\"\"Web3.py\"\"\",\n long_description=readme,\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/pipermerriam/web3.py',\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'Tester': [\"eth-testrpc>=1.1.0\"],\n 'tester': [\"eth-testrpc>=1.1.0\"],\n 'gevent': [\n \"gevent>=1.1.1,<1.2.0\",\n \"geventhttpclient>=1.3.1\",\n ],\n },\n py_modules=['web3'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 2 - Pre-Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],\n)\n", "path": "setup.py"}, {"content": "import uuid\nimport json\nimport collections\n\nimport rlp\n\nfrom eth_utils import (\n force_text,\n to_normalized_address,\n is_string,\n is_dict,\n encode_hex,\n decode_hex,\n keccak,\n)\n\nfrom web3.utils.encoding import (\n to_decimal,\n)\nfrom web3.utils.transactions import (\n is_bitcoin_available,\n Transaction,\n serialize_transaction,\n add_signature_to_transaction,\n)\nfrom web3.utils.compat import (\n spawn,\n)\n\n\nclass RequestManager(object):\n def __init__(self, provider):\n self.pending_requests = {}\n self.provider = provider\n\n def setProvider(self, provider):\n self.provider = provider\n\n def request_blocking(self, method, params):\n \"\"\"\n Make a synchronous request using the provider\n \"\"\"\n response_raw = self.provider.make_request(method, params)\n\n if is_string(response_raw):\n response = json.loads(force_text(response_raw))\n elif is_dict(response_raw):\n response = response_raw\n\n if \"error\" in response:\n raise ValueError(response[\"error\"])\n\n return response['result']\n\n def request_async(self, method, params):\n request_id = uuid.uuid4()\n self.pending_requests[request_id] = spawn(\n self.request_blocking,\n method,\n params,\n )\n return request_id\n\n def receive_blocking(self, request_id, timeout=None):\n try:\n request = self.pending_requests.pop(request_id)\n except KeyError:\n raise KeyError(\"Request for id:{0} not found\".format(request_id))\n else:\n response_raw = request.get(timeout=timeout)\n\n response = json.loads(response_raw)\n\n if \"error\" in response:\n raise ValueError(response[\"error\"])\n\n return response['result']\n\n def receive_async(self, request_id, *args, **kwargs):\n raise 
NotImplementedError(\"Callback pattern not implemented\")\n\n\nclass ManagerWrapper(object):\n def __init__(self, wrapped_manager):\n self.wrapped_manager = wrapped_manager\n\n @property\n def provider(self):\n return self.wrapped_manager.provider\n\n @property\n def pending_requests(self):\n return self.wrapped_manager.pending_requests\n\n def setProvider(self, provider):\n self.wrapped_manager.provider = provider\n\n def request_blocking(self, *args, **kwargs):\n return self.wrapped_manager.request_blocking(*args, **kwargs)\n\n def request_async(self, *args, **kwargs):\n return self.wrapped_manager.request_async(*args, **kwargs)\n\n def receive_blocking(self, *args, **kwargs):\n return self.wrapped_manager.receive_blocking(*args, **kwargs)\n\n def receive_async(self, *args, **kwargs):\n return self.wrapped_manager.receive_async(*args, **kwargs)\n\n\nclass BaseSendRawTransactionMixin(ManagerWrapper):\n _known_transactions = None\n _known_nonces = None\n\n def __init__(self, *args, **kwargs):\n self._known_transactions = collections.defaultdict(set)\n self._known_nonces = collections.defaultdict(set)\n super(BaseSendRawTransactionMixin, self).__init__(*args, **kwargs)\n\n def _get_nonces_and_cleanup(self, addr, chain_nonce):\n all_txns = {\n txn_hash: self.request_blocking(\n 'eth_getTransactionByHash',\n [txn_hash],\n ) for txn_hash in self._known_transactions[addr]\n }\n for txn_hash, txn in all_txns.items():\n if txn is None:\n continue\n txn_nonce = to_decimal(txn['nonce'])\n if txn_nonce < chain_nonce:\n self._known_transactions[addr].discard(txn_hash)\n else:\n yield txn_nonce\n\n all_known_nonces = tuple(self._known_nonces[addr])\n for nonce in all_known_nonces:\n if nonce < chain_nonce:\n self._known_nonces[addr].discard(nonce)\n else:\n yield nonce\n\n def get_chain_nonce(self, addr):\n chain_nonce = to_decimal(self.request_blocking(\n 'eth_getTransactionCount',\n [addr, 'pending']\n ))\n return chain_nonce\n\n def get_nonce(self, addr):\n chain_nonce = self.get_chain_nonce(addr)\n tracked_txn_nonces = tuple(self._get_nonces_and_cleanup(addr, chain_nonce))\n nonce = max(0, chain_nonce, *tracked_txn_nonces)\n if nonce == 0 and not tracked_txn_nonces:\n return -1\n else:\n return nonce\n\n def get_transaction_signature(self, serialized_txn):\n raise NotImplementedError(\"Must be implemented by subclasses\")\n\n def sign_and_serialize_transaction(self, transaction):\n serialized_txn = serialize_transaction(transaction)\n signature = self.get_transaction_signature(transaction)\n signed_transaction = add_signature_to_transaction(\n serialized_txn,\n signature,\n )\n signed_and_serialized_txn = rlp.encode(signed_transaction, Transaction)\n return signed_and_serialized_txn\n\n def construct_full_transaction(self, base_transaction):\n txn_from = base_transaction['from']\n full_txn = dict(**base_transaction)\n full_txn.setdefault('nonce', self.get_nonce(txn_from) + 1)\n full_txn.setdefault('gasPrice', self.request_blocking(\n 'eth_gasPrice', []\n ))\n full_txn.setdefault('gas', hex(90000))\n full_txn.setdefault('value', '0x0')\n full_txn.setdefault('to', '')\n full_txn.setdefault('data', '')\n return full_txn\n\n TXN_SENDING_METHODS = {\n 'eth_sendTransaction',\n 'eth_sendRawTransaction',\n 'personal_signAndSendTransaction',\n 'personal_sendTransaction',\n }\n\n def request_blocking(self, method, params):\n if method == 'eth_sendTransaction':\n base_transaction = params[0]\n # create a fully signed transaction and send through the\n # `eth_sendRawTransaction` endpoint instead.\n 
full_transaction = self.construct_full_transaction(base_transaction)\n raw_transaction_bytes = self.sign_and_serialize_transaction(\n full_transaction,\n )\n raw_transaction_bytes_as_hex = encode_hex(raw_transaction_bytes)\n return self.request_blocking(\n 'eth_sendRawTransaction', [raw_transaction_bytes_as_hex],\n )\n\n result = super(BaseSendRawTransactionMixin, self).request_blocking(\n method, params,\n )\n if method in self.TXN_SENDING_METHODS:\n if method == 'eth_sendRawTransaction':\n txn = rlp.decode(decode_hex(params[0]), Transaction)\n self._known_transactions[to_normalized_address(txn.sender)].add(result)\n self._known_nonces[to_normalized_address(txn.sender)].add(txn.nonce)\n else:\n txn = params[0]\n self._known_transactions[to_normalized_address(txn['from'])].add(result)\n if 'nonce' in txn:\n self._known_nonces[to_normalized_address(txn['from'])].add(\n to_decimal(txn['nonce'])\n )\n return result\n\n\nclass DelegatedSigningManager(BaseSendRawTransactionMixin):\n def __init__(self, *args, **kwargs):\n self.signing_manager = kwargs.pop('signing_manager')\n super(DelegatedSigningManager, self).__init__(*args, **kwargs)\n\n def get_chain_nonce(self, addr):\n signer_nonce = to_decimal(self.signing_manager.request_blocking(\n 'eth_getTransactionCount',\n [addr, 'pending']\n ))\n wrapped_nonce = to_decimal(self.wrapped_manager.request_blocking(\n 'eth_getTransactionCount',\n [addr, 'pending']\n ))\n return max(signer_nonce, wrapped_nonce)\n\n def get_transaction_signature(self, transaction):\n serialized_txn = serialize_transaction(transaction)\n hash_to_sign = self.signing_manager.request_blocking(\n 'web3_sha3', [encode_hex(serialized_txn)],\n )\n signature_hex = self.signing_manager.request_blocking(\n 'eth_sign',\n [\n transaction['from'],\n hash_to_sign,\n ],\n )\n signature = decode_hex(signature_hex)\n return signature\n\n\nclass PrivateKeySigningManager(BaseSendRawTransactionMixin):\n def __init__(self, *args, **kwargs):\n if not is_bitcoin_available():\n raise ImportError(\n \"In order to use the `PrivateKeySigningManager` the \"\n \"`bitcoin` and `secp256k1` packages must be installed.\"\n )\n self.keys = kwargs.pop('keys', {})\n super(PrivateKeySigningManager, self).__init__(*args, **kwargs)\n\n def register_private_key(self, key):\n from bitcoin import privtopub\n address = to_normalized_address(keccak(privtopub(key)[1:])[-20:])\n self.keys[address] = key\n\n def sign_and_serialize_transaction(self, transaction):\n txn_from = to_normalized_address(transaction['from'])\n if txn_from not in self.keys:\n raise KeyError(\"No signing key registered for from address: {0}\".format(txn_from))\n transaction = Transaction(\n nonce=to_decimal(transaction['nonce']),\n gasprice=to_decimal(transaction['gasPrice']),\n startgas=to_decimal(transaction['gas']),\n to=transaction['to'],\n value=to_decimal(transaction['value']),\n data=decode_hex(transaction['data']),\n )\n transaction.sign(self.keys[txn_from])\n assert to_normalized_address(transaction.sender) == txn_from\n return rlp.encode(transaction, Transaction)\n", "path": "web3/providers/manager.py"}]}
| 4,087 | 659 |
gh_patches_debug_4597
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-16277
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ThresholdedReLU crashes when the input is a list
**System information**.
- Have I written custom code (as opposed to using a stock example script provided in Keras): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.8.0
- Python version: 3.7.12
- Bazel version (if compiling from source): N/A
- GPU model and memory: N/A
- Exact command to reproduce: https://colab.research.google.com/drive/144FOk8RO-Ew_eBtGZlCmsUL6k0cEAlUO?usp=sharing
**Describe the problem**.
`keras.layers.ThresholdedReLU` fails to accept a list input by reporting the following error:
```
[/usr/local/lib/python3.7/dist-packages/keras/layers/advanced_activations.py](https://localhost:8080/#) in call(self, inputs)
262
263 def call(self, inputs):
--> 264 theta = tf.cast(self.theta, inputs.dtype)
265 return inputs * tf.cast(tf.greater(inputs, theta), inputs.dtype)
266
AttributeError: Exception encountered when calling layer "thresholded_re_lu_1" (type ThresholdedReLU).
'list' object has no attribute 'dtype'
Call arguments received:
• inputs=['tf.Tensor(shape=(None, 1, 10), dtype=float32)', 'tf.Tensor(shape=(None, 1, 10), dtype=float32)', 'tf.Tensor(shape=(None, 1, 10), dtype=float32)']
```
In contrast, `keras.layers.ReLU` and `keras.layers.LeakyReLU` can accept the list input.
**Describe the current behavior**.
`keras.layers.ThresholdedReLU` crashes when the input is a list
**Describe the expected behavior**.
ThresholdedReLU can accept the list input.
**[Contributing](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md)**.
- Do you want to contribute a PR? (yes/no):
- If yes, please read [this page](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md) for instructions
- Briefly describe your candidate solution(if contributing):
After comparing the code between `ThresholdedReLU` and `ReLU`, I think the reason is that `ReLU` directly uses the backend implementation: [keras/layers/activation/relu.py#L96](https://github.com/keras-team/keras/blob/90aa76f6c48ec5252b2db7926ff060e262dfc1cc/keras/layers/activation/relu.py#L96) while ThresholdedReLU implements it by itself: [keras/layers/activation/thresholded_relu.py#L63](https://github.com/keras-team/keras/blob/90aa76f6c48ec5252b2db7926ff060e262dfc1cc/keras/layers/activation/thresholded_relu.py#L63). Not sure why such an implementation inconsistency exists, but I think we can do something similar in thresholded_relu.py#L61-63, like [backend.relu](https://github.com/keras-team/keras/blob/90aa76f6c48ec5252b2db7926ff060e262dfc1cc/keras/backend.py#L4963) does:
```
def call(self, inputs):
    dtype = getattr(inputs, 'dtype', floatx())
    theta = tf.cast(self.theta, dtype)
    return inputs * tf.cast(tf.greater(inputs, theta), dtype)
```
Of course, we can also directly use the `backend.relu` for the implementation of `ThresholdedReLU` like `ReLU` and `LeakyReLU` do.
**Standalone code to reproduce the issue**.
You can access this [link](https://colab.research.google.com/drive/144FOk8RO-Ew_eBtGZlCmsUL6k0cEAlUO?usp=sharing) or run the following code:
```
import keras
x = keras.layers.Input(shape=(1,10))
y = keras.layers.ThresholdedReLU()([x,x,x])
model = keras.models.Model(x,y)
model.summary()
```
</issue>
<code>
[start of keras/layers/activation/thresholded_relu.py]
1 # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Thresholded Rectified Linear Unit activation layer."""
16 # pylint: disable=g-classes-have-attributes,g-direct-tensorflow-import
17
18 from keras import backend
19 from keras.engine.base_layer import Layer
20 from keras.utils import tf_utils
21 import tensorflow.compat.v2 as tf
22
23 from tensorflow.python.util.tf_export import keras_export
24
25
26 @keras_export('keras.layers.ThresholdedReLU')
27 class ThresholdedReLU(Layer):
28 """Thresholded Rectified Linear Unit.
29
30 It follows:
31
32 ```
33 f(x) = x for x > theta
34 f(x) = 0 otherwise`
35 ```
36
37 Input shape:
38 Arbitrary. Use the keyword argument `input_shape`
39 (tuple of integers, does not include the samples axis)
40 when using this layer as the first layer in a model.
41
42 Output shape:
43 Same shape as the input.
44
45 Args:
46 theta: Float >= 0. Threshold location of activation.
47 """
48
49 def __init__(self, theta=1.0, **kwargs):
50 super(ThresholdedReLU, self).__init__(**kwargs)
51 if theta is None:
52 raise ValueError(
53 'Theta of a Thresholded ReLU layer cannot be None, expecting a float.'
54 f' Received: {theta}')
55 if theta < 0:
56 raise ValueError('The theta value of a Thresholded ReLU layer '
57 f'should be >=0. Received: {theta}')
58 self.supports_masking = True
59 self.theta = backend.cast_to_floatx(theta)
60
61 def call(self, inputs):
62 theta = tf.cast(self.theta, inputs.dtype)
63 return inputs * tf.cast(tf.greater(inputs, theta), inputs.dtype)
64
65 def get_config(self):
66 config = {'theta': float(self.theta)}
67 base_config = super(ThresholdedReLU, self).get_config()
68 return dict(list(base_config.items()) + list(config.items()))
69
70 @tf_utils.shape_type_conversion
71 def compute_output_shape(self, input_shape):
72 return input_shape
73
[end of keras/layers/activation/thresholded_relu.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras/layers/activation/thresholded_relu.py b/keras/layers/activation/thresholded_relu.py
--- a/keras/layers/activation/thresholded_relu.py
+++ b/keras/layers/activation/thresholded_relu.py
@@ -59,8 +59,8 @@
self.theta = backend.cast_to_floatx(theta)
def call(self, inputs):
- theta = tf.cast(self.theta, inputs.dtype)
- return inputs * tf.cast(tf.greater(inputs, theta), inputs.dtype)
+ dtype = self.compute_dtype
+ return inputs * tf.cast(tf.greater(inputs, self.theta), dtype)
def get_config(self):
config = {'theta': float(self.theta)}
|
{"golden_diff": "diff --git a/keras/layers/activation/thresholded_relu.py b/keras/layers/activation/thresholded_relu.py\n--- a/keras/layers/activation/thresholded_relu.py\n+++ b/keras/layers/activation/thresholded_relu.py\n@@ -59,8 +59,8 @@\n self.theta = backend.cast_to_floatx(theta)\n \n def call(self, inputs):\n- theta = tf.cast(self.theta, inputs.dtype)\n- return inputs * tf.cast(tf.greater(inputs, theta), inputs.dtype)\n+ dtype = self.compute_dtype\n+ return inputs * tf.cast(tf.greater(inputs, self.theta), dtype)\n \n def get_config(self):\n config = {'theta': float(self.theta)}\n", "issue": "ThresholdedReLU crashes when the input is a list\n**System information**.\r\n- Have I written custom code (as opposed to using a stock example script provided in Keras): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): 2.8.0\r\n- Python version: 3.7.12\r\n- Bazel version (if compiling from source): N/A\r\n- GPU model and memory: N/A\r\n- Exact command to reproduce: https://colab.research.google.com/drive/144FOk8RO-Ew_eBtGZlCmsUL6k0cEAlUO?usp=sharing\r\n\r\n**Describe the problem**.\r\n`keras.layers.ThresholdedReLU` fails to accept a list input by reporting the following error:\r\n\r\n```\r\n[/usr/local/lib/python3.7/dist-packages/keras/layers/advanced_activations.py](https://localhost:8080/#) in call(self, inputs)\r\n 262 \r\n 263 def call(self, inputs):\r\n--> 264 theta = tf.cast(self.theta, inputs.dtype)\r\n 265 return inputs * tf.cast(tf.greater(inputs, theta), inputs.dtype)\r\n 266 \r\n\r\nAttributeError: Exception encountered when calling layer \"thresholded_re_lu_1\" (type ThresholdedReLU).\r\n\r\n'list' object has no attribute 'dtype'\r\n\r\nCall arguments received:\r\n \u2022 inputs=['tf.Tensor(shape=(None, 1, 10), dtype=float32)', 'tf.Tensor(shape=(None, 1, 10), dtype=float32)', 'tf.Tensor(shape=(None, 1, 10), dtype=float32)']\r\n```\r\nIn contrast, `keras.layers.ReLU` and `keras.layers.LeakyReLU` can accept the list input.\r\n\r\n**Describe the current behavior**.\r\n`keras.layers.ThresholdedReLU` crashes when the input is a list\r\n\r\n**Describe the expected behavior**.\r\nThresholdedReLU can accept the list input.\r\n\r\n**[Contributing](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md)**.\r\n\r\n- Do you want to contribute a PR? (yes/no):\r\n- If yes, please read [this page](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md) for instructions\r\n- Briefly describe your candidate solution(if contributing):\r\n\r\nAfter comparing the code between `ThresholedReLU` and `ReLU`, I think the reason is that `ReLU` directly use the backend implementation: [keras/layers/activation/relu.py#L96](https://github.com/keras-team/keras/blob/90aa76f6c48ec5252b2db7926ff060e262dfc1cc/keras/layers/activation/relu.py#L96) while ThresholdedReLU implements by itself: [keras/layers/activation/thresholded_relu.py#L63](https://github.com/keras-team/keras/blob/90aa76f6c48ec5252b2db7926ff060e262dfc1cc/keras/layers/activation/thresholded_relu.py#L63). 
Not sure why does such an implementation inconsistency exist, but I think we can do something similar in the thresholded_relu.py#L61-63 like [backend.relu](https://github.com/keras-team/keras/blob/90aa76f6c48ec5252b2db7926ff060e262dfc1cc/keras/backend.py#L4963) does:\r\n\r\n```\r\ndef call(self, inputs):\r\n dtype = getattr(inputs, 'dtype', floatx())\r\n theta = tf.cast(self.theta, dtype)\r\n return inputs * tf.cast(tf.greater(inputs, theta), dtype)\r\n```\r\n\r\nOf course, we can also directly use the `backend.relu` for the implementation of `ThresholdedReLU` like `ReLU` and `LeakyReLU` do.\r\n\r\n**Standalone code to reproduce the issue**.\r\nYou can access this [link](https://colab.research.google.com/drive/144FOk8RO-Ew_eBtGZlCmsUL6k0cEAlUO?usp=sharing) or run the following code:\r\n\r\n```\r\nimport keras\r\nx = keras.layers.Input(shape=(1,10))\r\ny = keras.layers.ThresholdedReLU()([x,x,x])\r\nmodel = keras.models.Model(x,y)\r\nmodel.summary()\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright 2015 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Thresholded Rectified Linear Unit activation layer.\"\"\"\n# pylint: disable=g-classes-have-attributes,g-direct-tensorflow-import\n\nfrom keras import backend\nfrom keras.engine.base_layer import Layer\nfrom keras.utils import tf_utils\nimport tensorflow.compat.v2 as tf\n\nfrom tensorflow.python.util.tf_export import keras_export\n\n\n@keras_export('keras.layers.ThresholdedReLU')\nclass ThresholdedReLU(Layer):\n \"\"\"Thresholded Rectified Linear Unit.\n\n It follows:\n\n ```\n f(x) = x for x > theta\n f(x) = 0 otherwise`\n ```\n\n Input shape:\n Arbitrary. Use the keyword argument `input_shape`\n (tuple of integers, does not include the samples axis)\n when using this layer as the first layer in a model.\n\n Output shape:\n Same shape as the input.\n\n Args:\n theta: Float >= 0. Threshold location of activation.\n \"\"\"\n\n def __init__(self, theta=1.0, **kwargs):\n super(ThresholdedReLU, self).__init__(**kwargs)\n if theta is None:\n raise ValueError(\n 'Theta of a Thresholded ReLU layer cannot be None, expecting a float.'\n f' Received: {theta}')\n if theta < 0:\n raise ValueError('The theta value of a Thresholded ReLU layer '\n f'should be >=0. Received: {theta}')\n self.supports_masking = True\n self.theta = backend.cast_to_floatx(theta)\n\n def call(self, inputs):\n theta = tf.cast(self.theta, inputs.dtype)\n return inputs * tf.cast(tf.greater(inputs, theta), inputs.dtype)\n\n def get_config(self):\n config = {'theta': float(self.theta)}\n base_config = super(ThresholdedReLU, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))\n\n @tf_utils.shape_type_conversion\n def compute_output_shape(self, input_shape):\n return input_shape\n", "path": "keras/layers/activation/thresholded_relu.py"}]}
| 2,273 | 162 |
gh_patches_debug_2919
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-1538
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
VS 2017 backend emits bad WindowsTargetPlatformVersion value
When I tried generating a VS 2017 solution, the generated app.vcxproj contained this:
```
<WindowsTargetPlatformVersion>10.0.14393.0\</WindowsTargetPlatformVersion>
```
Which then causes errors in other `.targets` files attempting to do a numeric comparison against that.
This value is probably taken straight from one of these environment variables:
```
WindowsSDKLibVersion=10.0.14393.0\
WindowsSDKVersion=10.0.14393.0\
```
The trailing backslash is a bit suspect, but may be there intentionally so it can be concatenated to
```
WindowsSdkDir=C:\Program Files (x86)\Windows Kits\10\
```
directly.
</issue>
<code>
[start of mesonbuild/backend/vs2017backend.py]
1 # Copyright 2014-2016 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 from .vs2010backend import Vs2010Backend
18
19
20 class Vs2017Backend(Vs2010Backend):
21 def __init__(self, build):
22 super().__init__(build)
23 self.name = 'vs2017'
24 self.platform_toolset = 'v141'
25 self.vs_version = '2017'
26 # WindowsSDKVersion should be set by command prompt.
27 self.windows_target_platform_version = os.getenv('WindowsSDKVersion', None)
28
[end of mesonbuild/backend/vs2017backend.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mesonbuild/backend/vs2017backend.py b/mesonbuild/backend/vs2017backend.py
--- a/mesonbuild/backend/vs2017backend.py
+++ b/mesonbuild/backend/vs2017backend.py
@@ -24,4 +24,4 @@
self.platform_toolset = 'v141'
self.vs_version = '2017'
# WindowsSDKVersion should be set by command prompt.
- self.windows_target_platform_version = os.getenv('WindowsSDKVersion', None)
+ self.windows_target_platform_version = os.getenv('WindowsSDKVersion', None).rstrip('\\')
|
{"golden_diff": "diff --git a/mesonbuild/backend/vs2017backend.py b/mesonbuild/backend/vs2017backend.py\n--- a/mesonbuild/backend/vs2017backend.py\n+++ b/mesonbuild/backend/vs2017backend.py\n@@ -24,4 +24,4 @@\n self.platform_toolset = 'v141'\n self.vs_version = '2017'\n # WindowsSDKVersion should be set by command prompt.\n- self.windows_target_platform_version = os.getenv('WindowsSDKVersion', None)\n+ self.windows_target_platform_version = os.getenv('WindowsSDKVersion', None).rstrip('\\\\')\n", "issue": "VS 2017 backend emits bad WindowsTargetPlatformVersion value\nWhen I tried generating a VS 2017 solution, the generated app.vcxproj contained this:\r\n\r\n```\r\n<WindowsTargetPlatformVersion>10.0.14393.0\\</WindowsTargetPlatformVersion>\r\n```\r\n\r\nWhich then causes errors in other `.targets` files attempting to do a numeric comparison against that.\r\nThis value is probably taken straight from one of these environment variables:\r\n\r\n```\r\nWindowsSDKLibVersion=10.0.14393.0\\\r\nWindowsSDKVersion=10.0.14393.0\\\r\n```\r\n\r\nThe trailing backslash is a bit suspect, but may be there intentionally so it can be concatenated to \r\n```\r\nWindowsSdkDir=C:\\Program Files (x86)\\Windows Kits\\10\\\r\n```\r\ndirectly.\n", "before_files": [{"content": "# Copyright 2014-2016 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom .vs2010backend import Vs2010Backend\n\n\nclass Vs2017Backend(Vs2010Backend):\n def __init__(self, build):\n super().__init__(build)\n self.name = 'vs2017'\n self.platform_toolset = 'v141'\n self.vs_version = '2017'\n # WindowsSDKVersion should be set by command prompt.\n self.windows_target_platform_version = os.getenv('WindowsSDKVersion', None)\n", "path": "mesonbuild/backend/vs2017backend.py"}]}
| 1,033 | 144 |
gh_patches_debug_10204
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-1098
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEAT][CV] Add a "per-class" option to property drift & heatmap comparison
In this per-class option, the drift would be shown per class for the top drifted classes.
</issue>
<code>
[start of docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py]
1 # -*- coding: utf-8 -*-
2 """
3 Image Property Drift Check
4 **************************
5 This notebooks provides an overview for using and understanding the image property drift check.
6
7 **Structure:**
8
9 * `How Does the ImagePropertyDrift Check Work? <#how-does-the-imagepropertydrift-check-work>`__
10 * `Which Image Properties Are Used? <#which-image-properties-are-used>`__
11 * `Prepare data <#prepare-data>`__
12 * `Run the check <#run-the-check>`__
13 * `Define a condition <#define-a-condition>`__
14 * `Check Parameters <#check-parameters>`__
15
16 How Does the ImagePropertyDrift Check Work?
17 =================================
18 Data drift is simply a change in the distribution of data over time. It is also one
19 of the top reasons that a machine learning model performance degrades over time.
20
21 In the context of machine learning, drift between the training set and the test set
22 will likely make the model prone to errors. In other words, if the model was trained
23 on data that is different from the current test data, it will probably make more mistakes
24 predicting the target variable.
25
26 The Image Property Drift check calculates a drift score for each image property in
27 the test dataset, by comparing its distribution to the train dataset. For this, we
28 use the Earth Movers Distance (Wasserstein distance).
29
30 Which Image Properties Are Used?
31 =================================
32 ============================== ==========
33 Property name What is it
34 ============================== ==========
35 Aspect Ratio Ratio between height and width of image (height / width)
36 Area Area of image in pixels (height * width)
37 Brightness Average intensity of image pixels. Color channels have different weights according to
38 RGB-to-Grayscale formula
39 RMS Contrast Contrast of image, calculated by standard deviation of pixels
40 Mean Red Relative Intensity Mean over all pixels of the red channel, scaled to their relative intensity in
41 comparison to the other channels [r / (r + g + b)].
42 Mean Green Relative Intensity Mean over all pixels of the green channel, scaled to their relative intensity in
43 comparison to the other channels [g / (r + g + b)].
44 Mean Blue Relative Intensity Mean over all pixels of the blue channel, scaled to their relative intensity in
45 comparison to the other channels [b / (r + g + b)].
46 ============================== ==========
47
48 Imports
49 -------
50 """
51
52 #%%
53
54 from deepchecks.vision.datasets.detection import coco
55 from deepchecks.vision.checks.distribution import ImagePropertyDrift
56
57 #%%
58 # Prepare data
59 # ------------
60 from deepchecks.vision.utils import image_properties
61
62 train_dataset = coco.load_dataset(train=True, object_type='VisionData')
63 test_dataset = coco.load_dataset(train=False, object_type='VisionData')
64
65 #%%
66 # Run the check
67 # -------------
68
69 check_result = ImagePropertyDrift().run(train_dataset, test_dataset)
70 check_result
71
72 #%%
73 # Observe the check’s output
74 # --------------------------
75 # The result value is a pandas DataFrame that contains drift score for each image property.
76
77 check_result.value
78
79 #%%
80 # Define a condition
81 # ==================
82 # We can define a condition that make sure that image properties drift scores do not
83 # exceed allowed threshold.
84
85 check_result = (
86 ImagePropertyDrift()
87 .add_condition_drift_score_not_greater_than(0.001)
88 .run(train_dataset, test_dataset)
89 )
90 check_result.show(show_additional_outputs=False)
91
92 #%%
93 # Check Parameters
94 # ----------------
95 # Image Property Drift Check accepts two parameters that allows us to control the look of the output:
96 #
97 # * `image_properties` - list of image properties that we are interested in
98 # * `max_num_categories` - Maximal number of categories to use for the calculation of drift using PSI (Population Stability Index)
99 #
100 # Only next string values are allowed for the `image_properties` parameter:
101 #
102 # * `aspect_ratio`
103 # * `area`
104 # * `brightness`
105 # * `mean_red_relative_intensity`
106 # * `mean_green_relative_intensity`
107 # * `mean_blue_relative_intensity`
108
109 from typing import List
110 import numpy as np
111
112
113 def area(images: List[np.ndarray]) -> List[int]:
114 # Return list of integers of image areas (height multiplied by width)
115 return [img.shape[0] * img.shape[1] for img in images]
116
117
118 def aspect_ratio(images: List[np.ndarray]) -> List[float]:
119 # Return list of floats of image height to width ratio
120 return [img.shape[0] / img.shape[1] for img in images]
121
122
123 properties = [
124 {'name': 'Area', 'method': area, 'output_type': 'continuous'},
125 {'name': 'Aspect Ratio', 'method': aspect_ratio, 'output_type': 'continuous'}
126 ]
127
128 check_result = ImagePropertyDrift(
129 alternative_image_properties=properties,
130 max_num_categories=20
131 ).run(train_dataset, test_dataset)
132
133 check_result
[end of docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py b/docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py
--- a/docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py
+++ b/docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py
@@ -76,6 +76,17 @@
check_result.value
+#%%
+# We can also pass the check a list of classes we wish to inspect, and the check will calculate the properties only
+# for images either belonging to the classes or containing annotations belonging to the classes. (We'll lower the
+# min_samples to 5 to tell the check to calculate drift despite having only a few images left after the class
+# filtration)
+
+check_result = ImagePropertyDrift(classes_to_display=['person', 'cat', 'cell phone', 'car'], min_samples=5
+ ).run(train_dataset, test_dataset)
+check_result
+
+
#%%
# Define a condition
# ==================
|
{"golden_diff": "diff --git a/docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py b/docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py\n--- a/docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py\n+++ b/docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py\n@@ -76,6 +76,17 @@\n \n check_result.value\n \n+#%%\n+# We can also pass the check a list of classes we wish to inspect, and the check will calculate the properties only\n+# for images either belonging to the classes or containing annotations belonging to the classes. (We'll lower the\n+# min_samples to 5 to tell the check to calculate drift despite having only a few images left after the class\n+# filtration)\n+\n+check_result = ImagePropertyDrift(classes_to_display=['person', 'cat', 'cell phone', 'car'], min_samples=5\n+ ).run(train_dataset, test_dataset)\n+check_result\n+\n+\n #%%\n # Define a condition\n # ==================\n", "issue": "[FEAT][CV] Add a \"per-class\" option to property drift & heatmap comparison\nIn this per class option, the drift would be shown per class for the top drifted classes. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nImage Property Drift Check\n**************************\nThis notebooks provides an overview for using and understanding the image property drift check.\n\n**Structure:**\n\n* `How Does the ImagePropertyDrift Check Work? <#how-does-the-imagepropertydrift-check-work>`__\n* `Which Image Properties Are Used? <#which-image-properties-are-used>`__\n* `Prepare data <#prepare-data>`__\n* `Run the check <#run-the-check>`__\n* `Define a condition <#define-a-condition>`__\n* `Check Parameters <#check-parameters>`__\n\nHow Does the ImagePropertyDrift Check Work?\n=================================\nData drift is simply a change in the distribution of data over time. It is also one\nof the top reasons that a machine learning model performance degrades over time.\n\nIn the context of machine learning, drift between the training set and the test set\nwill likely make the model prone to errors. In other words, if the model was trained\non data that is different from the current test data, it will probably make more mistakes\npredicting the target variable.\n\nThe Image Property Drift check calculates a drift score for each image property in\nthe test dataset, by comparing its distribution to the train dataset. For this, we\nuse the Earth Movers Distance (Wasserstein distance).\n\nWhich Image Properties Are Used?\n=================================\n============================== ==========\nProperty name What is it\n============================== ==========\nAspect Ratio Ratio between height and width of image (height / width)\nArea Area of image in pixels (height * width)\nBrightness Average intensity of image pixels. 
Color channels have different weights according to\n RGB-to-Grayscale formula\nRMS Contrast Contrast of image, calculated by standard deviation of pixels\nMean Red Relative Intensity Mean over all pixels of the red channel, scaled to their relative intensity in\n comparison to the other channels [r / (r + g + b)].\nMean Green Relative Intensity Mean over all pixels of the green channel, scaled to their relative intensity in\n comparison to the other channels [g / (r + g + b)].\nMean Blue Relative Intensity Mean over all pixels of the blue channel, scaled to their relative intensity in\n comparison to the other channels [b / (r + g + b)].\n============================== ==========\n\nImports\n-------\n\"\"\"\n\n#%%\n\nfrom deepchecks.vision.datasets.detection import coco\nfrom deepchecks.vision.checks.distribution import ImagePropertyDrift\n\n#%%\n# Prepare data\n# ------------\nfrom deepchecks.vision.utils import image_properties\n\ntrain_dataset = coco.load_dataset(train=True, object_type='VisionData')\ntest_dataset = coco.load_dataset(train=False, object_type='VisionData')\n\n#%%\n# Run the check \n# -------------\n\ncheck_result = ImagePropertyDrift().run(train_dataset, test_dataset)\ncheck_result\n\n#%%\n# Observe the check\u2019s output \n# --------------------------\n# The result value is a pandas DataFrame that contains drift score for each image property.\n\ncheck_result.value\n\n#%%\n# Define a condition\n# ==================\n# We can define a condition that make sure that image properties drift scores do not\n# exceed allowed threshold.\n\ncheck_result = (\n ImagePropertyDrift()\n .add_condition_drift_score_not_greater_than(0.001)\n .run(train_dataset, test_dataset)\n)\ncheck_result.show(show_additional_outputs=False)\n\n#%%\n# Check Parameters\n# ----------------\n# Image Property Drift Check accepts two parameters that allows us to control the look of the output:\n#\n# * `image_properties` - list of image properties that we are interested in\n# * `max_num_categories` - Maximal number of categories to use for the calculation of drift using PSI (Population Stability Index)\n#\n# Only next string values are allowed for the `image_properties` parameter:\n#\n# * `aspect_ratio`\n# * `area`\n# * `brightness`\n# * `mean_red_relative_intensity`\n# * `mean_green_relative_intensity`\n# * `mean_blue_relative_intensity`\n\nfrom typing import List\nimport numpy as np\n\n\ndef area(images: List[np.ndarray]) -> List[int]:\n # Return list of integers of image areas (height multiplied by width)\n return [img.shape[0] * img.shape[1] for img in images]\n\n\ndef aspect_ratio(images: List[np.ndarray]) -> List[float]:\n # Return list of floats of image height to width ratio\n return [img.shape[0] / img.shape[1] for img in images]\n\n\nproperties = [\n {'name': 'Area', 'method': area, 'output_type': 'continuous'},\n {'name': 'Aspect Ratio', 'method': aspect_ratio, 'output_type': 'continuous'}\n]\n\ncheck_result = ImagePropertyDrift(\n alternative_image_properties=properties,\n max_num_categories=20\n).run(train_dataset, test_dataset)\n\ncheck_result", "path": "docs/source/examples/vision/checks/distribution/source/plot_image_property_check.py"}]}
| 1,933 | 226 |
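An illustrative aside on the drift record above, not part of the stored prompt or diff: the check it describes scores each image property with the Earth Mover's (Wasserstein) distance between train and test samples. A minimal sketch of that scoring, assuming only NumPy and SciPy and using made-up brightness samples:

```
# Illustrative only; hypothetical per-image "brightness" values for two splits.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
train_brightness = rng.normal(loc=0.50, scale=0.10, size=1000)
test_brightness = rng.normal(loc=0.55, scale=0.12, size=1000)

# Earth Mover's distance between the two empirical distributions;
# a larger value indicates stronger drift for this property.
drift_score = wasserstein_distance(train_brightness, test_brightness)
print(f"brightness drift score: {drift_score:.4f}")
```

A larger score means the property's distribution moved further between the two splits, which is the quantity the check's condition thresholds.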
gh_patches_debug_3317
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-4934
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/nn/optimizer/cpu_adam.py]
1 import math
2 from typing import Optional
3
4 import torch
5
6 from colossalai.kernel.op_builder import CPUAdamBuilder
7
8 from .nvme_optimizer import NVMeOptimizer
9
10
11 class CPUAdam(NVMeOptimizer):
12 """Implements Adam algorithm.
13
14 Supports parameters updating on both GPU and CPU, depending on the device of parameters.
15 But the parameters and gradients should on the same device:
16 * Parameters on CPU and gradients on CPU is allowed.
17 * Parameters on GPU and gradients on GPU is allowed.
18 * Parameters on GPU and gradients on CPU is **not** allowed.
19
20 `CPUAdam` requires CUDA extensions which can be built during installation or runtime.
21
22 This version of CPU Adam accelerates parameters updating on CPU with SIMD.
23 Support of AVX2 or AVX512 is required.
24
25 The GPU part is implemented in an naive way.
26
27 CPU Adam also supports the hybrid precision calculation, eg. fp32 parameters and fp16 gradients.
28
29 :class:`colossalai.nn.optimizer.CPUAdam` may be used as a drop-in replacement for ``torch.optim.AdamW``,
30 or ``torch.optim.Adam`` with ``adamw_mode=False``
31
32 Adam was been proposed in `Adam: A Method for Stochastic Optimization`_.
33
34 Arguments:
35 model_params (iterable): iterable of parameters of dicts defining
36 parameter groups.
37 lr (float, optional): learning rate. (default: 1e-3)
38 betas (Tuple[float, float], optional): coefficients used for computing
39 running averages of gradient and its square. (default: (0.9, 0.999))
40 eps (float, optional): term added to the denominator to improve
41 numerical stability. (default: 1e-8)
42 weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
43 amsgrad (boolean, optional): whether to use the AMSGrad variant of this
44 algorithm from the paper `On the Convergence of Adam and Beyond`_
45 (default: False) NOT SUPPORTED yet in CPUAdam!
46 adamw_mode (boolean, optional): Apply L2 regularization or weight decay
47 True for decoupled weight decay(also known as AdamW) (default: True)
48 simd_log (boolean, optional): whether to show if you are using SIMD to
49 accelerate. (default: False)
50 nvme_offload_fraction (float, optional): Fraction of optimizer states to be offloaded to NVMe. Defaults to 0.0.
51 nvme_offload_dir (Optional[str], optional): Directory to save NVMe offload files.
52 If it's ``None``, a random temporary directory will be used. Defaults to None.
53
54 .. _Adam\: A Method for Stochastic Optimization:
55 https://arxiv.org/abs/1412.6980
56 .. _On the Convergence of Adam and Beyond:
57 https://openreview.net/forum?id=ryQu7f-RZ
58 """
59
60 # Number of fp32 shards for per parameter
61 # Param weight, grad, momentum and variance
62 num_fp32_shards_per_param = 4
63
64 def __init__(
65 self,
66 model_params,
67 lr=1e-3,
68 bias_correction=True,
69 betas=(0.9, 0.999),
70 eps=1e-8,
71 weight_decay=0,
72 adamw_mode=True,
73 nvme_offload_fraction: float = 0.0,
74 nvme_offload_dir: Optional[str] = None,
75 ):
76 default_args = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, bias_correction=bias_correction)
77 super(CPUAdam, self).__init__(model_params, default_args, nvme_offload_fraction, nvme_offload_dir)
78 self.adamw_mode = adamw_mode
79 cpu_adam = CPUAdamBuilder().load()
80 # if you find yourself stuck here, make sure that you install colossalai with CUDA_EXT=1 specification
81 self.cpu_adam_op = cpu_adam.CPUAdamOptimizer(lr, betas[0], betas[1], eps, weight_decay, adamw_mode)
82
83 def torch_adam_update(
84 self,
85 data,
86 grad,
87 exp_avg,
88 exp_avg_sq,
89 lr,
90 beta1,
91 beta2,
92 eps,
93 weight_decay,
94 bias_correction1,
95 bias_correction2,
96 use_adamw=False,
97 ):
98 grad = grad.to(data.dtype)
99
100 if weight_decay != 0:
101 if use_adamw:
102 data.mul_(1 - lr * weight_decay)
103 else:
104 grad = grad.add(data, alpha=weight_decay)
105
106 # Decay the first and second moment running average coefficient
107 exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
108 exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
109
110 # TODO(jiaruifang) dose not support amsgrad
111 denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)
112
113 step_size = lr / bias_correction1
114
115 data.addcdiv_(exp_avg, denom, value=-step_size)
116
117 @torch.no_grad()
118 def step(self, closure=None, div_scale: float = -1):
119 loss = None
120 if closure is not None:
121 with torch.enable_grad():
122 loss = closure()
123
124 self._pre_step("exp_avg", "exp_avg_sq")
125 for _, group in enumerate(self.param_groups):
126 for _, p in enumerate(group["params"]):
127 if p.grad is None:
128 continue
129
130 state = self.state[p]
131
132 target_device = p.device
133 if len(state) == 0:
134 state["step"] = 0
135 # gradient momentums
136 state["exp_avg"] = torch.zeros_like(p, device=target_device)
137 # gradient variances
138 state["exp_avg_sq"] = torch.zeros_like(p, device=target_device)
139 self._post_state_init(p)
140
141 state["step"] += 1
142 beta1, beta2 = group["betas"]
143
144 if target_device.type == "cpu":
145 assert p.data.numel() == p.grad.data.numel(), "parameter and gradient should have the same size"
146 assert state["exp_avg"].device.type == "cpu", "exp_avg should stay on cpu"
147 assert state["exp_avg_sq"].device.type == "cpu", "exp_avg should stay on cpu"
148 self._pre_update(p, "exp_avg", "exp_avg_sq")
149 if p.grad.dtype is torch.bfloat16:
150 # cpu adam kernel does not support bf16 now
151 bias_correction1 = 1 - beta1 ** state["step"]
152 bias_correction2 = 1 - beta2 ** state["step"]
153 self.torch_adam_update(
154 p.data,
155 p.grad.data,
156 state["exp_avg"],
157 state["exp_avg_sq"],
158 group["lr"],
159 beta1,
160 beta2,
161 group["eps"],
162 group["weight_decay"],
163 bias_correction1,
164 bias_correction2,
165 self.adamw_mode,
166 )
167 else:
168 self.cpu_adam_op.step(
169 state["step"],
170 group["lr"],
171 beta1,
172 beta2,
173 group["eps"],
174 group["weight_decay"],
175 group["bias_correction"],
176 p.data,
177 p.grad.data,
178 state["exp_avg"],
179 state["exp_avg_sq"],
180 div_scale,
181 )
182 self._post_update(p, "exp_avg", "exp_avg_sq")
183 elif target_device.type == "cuda":
184 assert div_scale == -1, "div_scale should remain default"
185 assert state["exp_avg"].device.type == "cuda", "exp_avg should stay on cuda"
186 assert state["exp_avg_sq"].device.type == "cuda", "exp_avg should stay on cuda"
187
188 bias_correction1 = 1 - beta1 ** state["step"]
189 bias_correction2 = 1 - beta2 ** state["step"]
190
191 # adam on cuda
192 self.torch_adam_update(
193 p.data,
194 p.grad.data,
195 state["exp_avg"],
196 state["exp_avg_sq"],
197 group["lr"],
198 beta1,
199 beta2,
200 group["eps"],
201 group["weight_decay"],
202 bias_correction1,
203 bias_correction2,
204 self.adamw_mode,
205 )
206 else:
207 raise RuntimeError
208 self._post_step()
209 return loss
210
[end of colossalai/nn/optimizer/cpu_adam.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/colossalai/nn/optimizer/cpu_adam.py b/colossalai/nn/optimizer/cpu_adam.py
--- a/colossalai/nn/optimizer/cpu_adam.py
+++ b/colossalai/nn/optimizer/cpu_adam.py
@@ -9,7 +9,8 @@
class CPUAdam(NVMeOptimizer):
- """Implements Adam algorithm.
+ """
+ Implements Adam algorithm.
Supports parameters updating on both GPU and CPU, depending on the device of parameters.
But the parameters and gradients should on the same device:
|
{"golden_diff": "diff --git a/colossalai/nn/optimizer/cpu_adam.py b/colossalai/nn/optimizer/cpu_adam.py\n--- a/colossalai/nn/optimizer/cpu_adam.py\n+++ b/colossalai/nn/optimizer/cpu_adam.py\n@@ -9,7 +9,8 @@\n \n \n class CPUAdam(NVMeOptimizer):\n- \"\"\"Implements Adam algorithm.\n+ \"\"\"\n+ Implements Adam algorithm.\n \n Supports parameters updating on both GPU and CPU, depending on the device of parameters.\n But the parameters and gradients should on the same device:\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import math\nfrom typing import Optional\n\nimport torch\n\nfrom colossalai.kernel.op_builder import CPUAdamBuilder\n\nfrom .nvme_optimizer import NVMeOptimizer\n\n\nclass CPUAdam(NVMeOptimizer):\n \"\"\"Implements Adam algorithm.\n\n Supports parameters updating on both GPU and CPU, depending on the device of parameters.\n But the parameters and gradients should on the same device:\n * Parameters on CPU and gradients on CPU is allowed.\n * Parameters on GPU and gradients on GPU is allowed.\n * Parameters on GPU and gradients on CPU is **not** allowed.\n\n `CPUAdam` requires CUDA extensions which can be built during installation or runtime.\n\n This version of CPU Adam accelerates parameters updating on CPU with SIMD.\n Support of AVX2 or AVX512 is required.\n\n The GPU part is implemented in an naive way.\n\n CPU Adam also supports the hybrid precision calculation, eg. fp32 parameters and fp16 gradients.\n\n :class:`colossalai.nn.optimizer.CPUAdam` may be used as a drop-in replacement for ``torch.optim.AdamW``,\n or ``torch.optim.Adam`` with ``adamw_mode=False``\n\n Adam was been proposed in `Adam: A Method for Stochastic Optimization`_.\n\n Arguments:\n model_params (iterable): iterable of parameters of dicts defining\n parameter groups.\n lr (float, optional): learning rate. (default: 1e-3)\n betas (Tuple[float, float], optional): coefficients used for computing\n running averages of gradient and its square. (default: (0.9, 0.999))\n eps (float, optional): term added to the denominator to improve\n numerical stability. (default: 1e-8)\n weight_decay (float, optional): weight decay (L2 penalty) (default: 0)\n amsgrad (boolean, optional): whether to use the AMSGrad variant of this\n algorithm from the paper `On the Convergence of Adam and Beyond`_\n (default: False) NOT SUPPORTED yet in CPUAdam!\n adamw_mode (boolean, optional): Apply L2 regularization or weight decay\n True for decoupled weight decay(also known as AdamW) (default: True)\n simd_log (boolean, optional): whether to show if you are using SIMD to\n accelerate. (default: False)\n nvme_offload_fraction (float, optional): Fraction of optimizer states to be offloaded to NVMe. Defaults to 0.0.\n nvme_offload_dir (Optional[str], optional): Directory to save NVMe offload files.\n If it's ``None``, a random temporary directory will be used. Defaults to None.\n\n .. _Adam\\: A Method for Stochastic Optimization:\n https://arxiv.org/abs/1412.6980\n .. 
_On the Convergence of Adam and Beyond:\n https://openreview.net/forum?id=ryQu7f-RZ\n \"\"\"\n\n # Number of fp32 shards for per parameter\n # Param weight, grad, momentum and variance\n num_fp32_shards_per_param = 4\n\n def __init__(\n self,\n model_params,\n lr=1e-3,\n bias_correction=True,\n betas=(0.9, 0.999),\n eps=1e-8,\n weight_decay=0,\n adamw_mode=True,\n nvme_offload_fraction: float = 0.0,\n nvme_offload_dir: Optional[str] = None,\n ):\n default_args = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, bias_correction=bias_correction)\n super(CPUAdam, self).__init__(model_params, default_args, nvme_offload_fraction, nvme_offload_dir)\n self.adamw_mode = adamw_mode\n cpu_adam = CPUAdamBuilder().load()\n # if you find yourself stuck here, make sure that you install colossalai with CUDA_EXT=1 specification\n self.cpu_adam_op = cpu_adam.CPUAdamOptimizer(lr, betas[0], betas[1], eps, weight_decay, adamw_mode)\n\n def torch_adam_update(\n self,\n data,\n grad,\n exp_avg,\n exp_avg_sq,\n lr,\n beta1,\n beta2,\n eps,\n weight_decay,\n bias_correction1,\n bias_correction2,\n use_adamw=False,\n ):\n grad = grad.to(data.dtype)\n\n if weight_decay != 0:\n if use_adamw:\n data.mul_(1 - lr * weight_decay)\n else:\n grad = grad.add(data, alpha=weight_decay)\n\n # Decay the first and second moment running average coefficient\n exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)\n exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)\n\n # TODO(jiaruifang) dose not support amsgrad\n denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)\n\n step_size = lr / bias_correction1\n\n data.addcdiv_(exp_avg, denom, value=-step_size)\n\n @torch.no_grad()\n def step(self, closure=None, div_scale: float = -1):\n loss = None\n if closure is not None:\n with torch.enable_grad():\n loss = closure()\n\n self._pre_step(\"exp_avg\", \"exp_avg_sq\")\n for _, group in enumerate(self.param_groups):\n for _, p in enumerate(group[\"params\"]):\n if p.grad is None:\n continue\n\n state = self.state[p]\n\n target_device = p.device\n if len(state) == 0:\n state[\"step\"] = 0\n # gradient momentums\n state[\"exp_avg\"] = torch.zeros_like(p, device=target_device)\n # gradient variances\n state[\"exp_avg_sq\"] = torch.zeros_like(p, device=target_device)\n self._post_state_init(p)\n\n state[\"step\"] += 1\n beta1, beta2 = group[\"betas\"]\n\n if target_device.type == \"cpu\":\n assert p.data.numel() == p.grad.data.numel(), \"parameter and gradient should have the same size\"\n assert state[\"exp_avg\"].device.type == \"cpu\", \"exp_avg should stay on cpu\"\n assert state[\"exp_avg_sq\"].device.type == \"cpu\", \"exp_avg should stay on cpu\"\n self._pre_update(p, \"exp_avg\", \"exp_avg_sq\")\n if p.grad.dtype is torch.bfloat16:\n # cpu adam kernel does not support bf16 now\n bias_correction1 = 1 - beta1 ** state[\"step\"]\n bias_correction2 = 1 - beta2 ** state[\"step\"]\n self.torch_adam_update(\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n bias_correction1,\n bias_correction2,\n self.adamw_mode,\n )\n else:\n self.cpu_adam_op.step(\n state[\"step\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n group[\"bias_correction\"],\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n div_scale,\n )\n self._post_update(p, \"exp_avg\", \"exp_avg_sq\")\n elif target_device.type == \"cuda\":\n assert div_scale == -1, \"div_scale should remain 
default\"\n assert state[\"exp_avg\"].device.type == \"cuda\", \"exp_avg should stay on cuda\"\n assert state[\"exp_avg_sq\"].device.type == \"cuda\", \"exp_avg should stay on cuda\"\n\n bias_correction1 = 1 - beta1 ** state[\"step\"]\n bias_correction2 = 1 - beta2 ** state[\"step\"]\n\n # adam on cuda\n self.torch_adam_update(\n p.data,\n p.grad.data,\n state[\"exp_avg\"],\n state[\"exp_avg_sq\"],\n group[\"lr\"],\n beta1,\n beta2,\n group[\"eps\"],\n group[\"weight_decay\"],\n bias_correction1,\n bias_correction2,\n self.adamw_mode,\n )\n else:\n raise RuntimeError\n self._post_step()\n return loss\n", "path": "colossalai/nn/optimizer/cpu_adam.py"}]}
| 2,940 | 126 |
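An illustrative aside on the CPUAdam record above: a plain NumPy restatement of the bias-corrected update that `torch_adam_update` performs, with weight decay omitted and the parameter and gradient values invented for the example.

```
# Illustrative only; mirrors the moment updates and bias corrections shown above.
import numpy as np

def adam_step(param, grad, exp_avg, exp_avg_sq, step,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # First and second moment running averages.
    exp_avg = beta1 * exp_avg + (1 - beta1) * grad
    exp_avg_sq = beta2 * exp_avg_sq + (1 - beta2) * grad * grad
    # Bias corrections for the zero-initialised moments.
    bias_correction1 = 1 - beta1 ** step
    bias_correction2 = 1 - beta2 ** step
    denom = np.sqrt(exp_avg_sq) / np.sqrt(bias_correction2) + eps
    param = param - (lr / bias_correction1) * exp_avg / denom
    return param, exp_avg, exp_avg_sq

p = np.array([1.0, -2.0])
g = np.array([0.1, -0.3])
m = np.zeros_like(p)
v = np.zeros_like(p)
p, m, v = adam_step(p, g, m, v, step=1)
print(p)  # parameters nudged against the gradient direction
```

The divisions by `bias_correction1` and `bias_correction2` compensate for the zero-initialised moment estimates during the first few steps, which is why the kernel receives the step counter.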
gh_patches_debug_1730
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-209
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Secure Django's language cookie
The following can be configured in [`settings.py`](https://github.com/cal-itp/benefits/blob/dev/benefits/settings.py)
* [x] [`LANGUAGE_COOKIE_HTTPONLY`](https://docs.djangoproject.com/en/3.2/ref/settings/#language-cookie-httponly) = True (same as `SESSION_COOKIE_HTTPONLY`)
* [x] [`LANGUAGE_COOKIE_SAMESITE`](https://docs.djangoproject.com/en/3.2/ref/settings/#language-cookie-samesite) = "Strict" (same as `SESSION_COOKIE_SAMESITE`)
* [x] [`LANGUAGE_COOKIE_SECURE`](https://docs.djangoproject.com/en/3.2/ref/settings/#language-cookie-secure) = True (same as `SESSION_COOKIE_SECURE`)
</issue>
<code>
[start of benefits/settings.py]
1 """
2 Django settings for benefits project.
3 """
4 import os
5
6 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
7 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
8
9 # SECURITY WARNING: keep the secret key used in production secret!
10 SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
11
12 # SECURITY WARNING: don't run with debug turned on in production!
13 DEBUG = os.environ.get("DJANGO_DEBUG", "False").lower() == "true"
14
15 ADMIN = os.environ.get("DJANGO_ADMIN", "False").lower() == "true"
16
17 ALLOWED_HOSTS = []
18
19 if DEBUG:
20 ALLOWED_HOSTS.extend(["*"])
21 else:
22 hosts = os.environ["DJANGO_ALLOWED_HOSTS"].split(",")
23 ALLOWED_HOSTS.extend(hosts)
24
25 # Application definition
26
27 INSTALLED_APPS = [
28 "django.contrib.sessions",
29 "django.contrib.staticfiles",
30 "benefits.core",
31 "benefits.enrollment",
32 "benefits.eligibility",
33 ]
34
35 if ADMIN:
36 INSTALLED_APPS.extend(
37 [
38 "django.contrib.admin",
39 "django.contrib.auth",
40 "django.contrib.contenttypes",
41 "django.contrib.messages",
42 ]
43 )
44
45 MIDDLEWARE = [
46 "django.middleware.security.SecurityMiddleware",
47 "django.contrib.sessions.middleware.SessionMiddleware",
48 "django.middleware.locale.LocaleMiddleware",
49 "benefits.core.middleware.Healthcheck",
50 "django.middleware.common.CommonMiddleware",
51 "django.middleware.csrf.CsrfViewMiddleware",
52 "django.middleware.clickjacking.XFrameOptionsMiddleware",
53 "benefits.core.middleware.DebugSession",
54 "benefits.core.middleware.ChangedLanguageEvent",
55 ]
56
57 if ADMIN:
58 MIDDLEWARE.extend(
59 [
60 "django.contrib.auth.middleware.AuthenticationMiddleware",
61 "django.contrib.messages.middleware.MessageMiddleware",
62 ]
63 )
64
65 CSRF_COOKIE_HTTPONLY = True
66
67 SESSION_COOKIE_AGE = 3600
68 SESSION_COOKIE_SAMESITE = "Strict"
69 SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
70
71 if not DEBUG:
72 CSRF_COOKIE_SECURE = True
73 CSRF_FAILURE_VIEW = "benefits.core.views.csrf_failure"
74 SESSION_COOKIE_SECURE = True
75
76 ROOT_URLCONF = "benefits.urls"
77
78 template_ctx_processors = [
79 "django.template.context_processors.request",
80 "benefits.core.context_processors.analytics",
81 ]
82
83 if DEBUG:
84 template_ctx_processors.extend(
85 [
86 "django.template.context_processors.debug",
87 "benefits.core.context_processors.debug",
88 ]
89 )
90
91 if ADMIN:
92 template_ctx_processors.extend(
93 [
94 "django.contrib.auth.context_processors.auth",
95 "django.contrib.messages.context_processors.messages",
96 ]
97 )
98
99 TEMPLATES = [
100 {
101 "BACKEND": "django.template.backends.django.DjangoTemplates",
102 "DIRS": [os.path.join(BASE_DIR, "benefits", "templates")],
103 "APP_DIRS": True,
104 "OPTIONS": {
105 "context_processors": template_ctx_processors,
106 },
107 },
108 ]
109
110 WSGI_APPLICATION = "benefits.wsgi.application"
111
112 DATABASES = {
113 "default": {
114 "ENGINE": "django.db.backends.sqlite3",
115 "NAME": os.environ.get("DJANGO_DB", "django") + ".db",
116 }
117 }
118
119 # Password validation
120
121 AUTH_PASSWORD_VALIDATORS = []
122
123 if ADMIN:
124 AUTH_PASSWORD_VALIDATORS.extend(
125 [
126 {
127 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
128 },
129 {
130 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
131 },
132 {
133 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
134 },
135 {
136 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
137 },
138 ]
139 )
140
141 # Internationalization
142
143 LANGUAGE_CODE = "en"
144
145 LANGUAGES = [("en", "English"), ("es", "Español")]
146
147 LOCALE_PATHS = [os.path.join(BASE_DIR, "benefits", "locale")]
148
149 USE_I18N = True
150 USE_L10N = True
151
152 TIME_ZONE = "UTC"
153 USE_TZ = True
154
155 # Static files (CSS, JavaScript, Images)
156
157 STATIC_URL = "/static/"
158 STATICFILES_DIRS = [os.path.join(BASE_DIR, "benefits", "static")]
159 STATIC_ROOT = os.path.join(BASE_DIR, "static")
160
161 # Logging configuration
162
163 LOG_LEVEL = os.environ.get("DJANGO_LOG_LEVEL", "DEBUG" if DEBUG else "WARNING")
164 LOGGING = {
165 "version": 1,
166 "disable_existing_loggers": False,
167 "formatters": {
168 "default": {
169 "format": "[{asctime}] {levelname} {name}:{lineno} {message}",
170 "datefmt": "%d/%b/%Y %H:%M:%S",
171 "style": "{",
172 },
173 },
174 "handlers": {
175 "default": {"class": "logging.StreamHandler", "formatter": "default"},
176 },
177 "root": {
178 "handlers": ["default"],
179 "level": LOG_LEVEL,
180 },
181 "loggers": {"django": {"handlers": ["default"], "propagate": False}},
182 }
183
184 # Analytics configuration
185
186 ANALYTICS_KEY = os.environ.get("ANALYTICS_KEY")
187
[end of benefits/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/benefits/settings.py b/benefits/settings.py
--- a/benefits/settings.py
+++ b/benefits/settings.py
@@ -142,6 +142,10 @@
LANGUAGE_CODE = "en"
+LANGUAGE_COOKIE_HTTPONLY = True
+LANGUAGE_COOKIE_SAMESITE = "Strict"
+LANGUAGE_COOKIE_SECURE = True
+
LANGUAGES = [("en", "English"), ("es", "Español")]
LOCALE_PATHS = [os.path.join(BASE_DIR, "benefits", "locale")]
|
{"golden_diff": "diff --git a/benefits/settings.py b/benefits/settings.py\n--- a/benefits/settings.py\n+++ b/benefits/settings.py\n@@ -142,6 +142,10 @@\n \n LANGUAGE_CODE = \"en\"\n \n+LANGUAGE_COOKIE_HTTPONLY = True\n+LANGUAGE_COOKIE_SAMESITE = \"Strict\"\n+LANGUAGE_COOKIE_SECURE = True\n+\n LANGUAGES = [(\"en\", \"English\"), (\"es\", \"Espa\u00f1ol\")]\n \n LOCALE_PATHS = [os.path.join(BASE_DIR, \"benefits\", \"locale\")]\n", "issue": "Secure Django's language cookie\nThe following can be configured in [`settings.py`](https://github.com/cal-itp/benefits/blob/dev/benefits/settings.py)\r\n\r\n* [x] [`LANGUAGE_COOKIE_HTTPONLY`](https://docs.djangoproject.com/en/3.2/ref/settings/#language-cookie-httponly) = True (same as `SESSION_COOKIE_HTTPONLY`)\r\n* [x] [`LANGUAGE_COOKIE_SAMESITE`](https://docs.djangoproject.com/en/3.2/ref/settings/#language-cookie-samesite) = \"Strict\" (same as `SESSION_COOKIE_SAMESITE`)\r\n* [x] [`LANGUAGE_COOKIE_SECURE`](https://docs.djangoproject.com/en/3.2/ref/settings/#language-cookie-secure) = True (same as `SESSION_COOKIE_SECURE`)\r\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for benefits project.\n\"\"\"\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ[\"DJANGO_SECRET_KEY\"]\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = os.environ.get(\"DJANGO_DEBUG\", \"False\").lower() == \"true\"\n\nADMIN = os.environ.get(\"DJANGO_ADMIN\", \"False\").lower() == \"true\"\n\nALLOWED_HOSTS = []\n\nif DEBUG:\n ALLOWED_HOSTS.extend([\"*\"])\nelse:\n hosts = os.environ[\"DJANGO_ALLOWED_HOSTS\"].split(\",\")\n ALLOWED_HOSTS.extend(hosts)\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.sessions\",\n \"django.contrib.staticfiles\",\n \"benefits.core\",\n \"benefits.enrollment\",\n \"benefits.eligibility\",\n]\n\nif ADMIN:\n INSTALLED_APPS.extend(\n [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.messages\",\n ]\n )\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"benefits.core.middleware.Healthcheck\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"benefits.core.middleware.DebugSession\",\n \"benefits.core.middleware.ChangedLanguageEvent\",\n]\n\nif ADMIN:\n MIDDLEWARE.extend(\n [\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n ]\n )\n\nCSRF_COOKIE_HTTPONLY = True\n\nSESSION_COOKIE_AGE = 3600\nSESSION_COOKIE_SAMESITE = \"Strict\"\nSESSION_ENGINE = \"django.contrib.sessions.backends.signed_cookies\"\n\nif not DEBUG:\n CSRF_COOKIE_SECURE = True\n CSRF_FAILURE_VIEW = \"benefits.core.views.csrf_failure\"\n SESSION_COOKIE_SECURE = True\n\nROOT_URLCONF = \"benefits.urls\"\n\ntemplate_ctx_processors = [\n \"django.template.context_processors.request\",\n \"benefits.core.context_processors.analytics\",\n]\n\nif DEBUG:\n template_ctx_processors.extend(\n [\n \"django.template.context_processors.debug\",\n \"benefits.core.context_processors.debug\",\n ]\n )\n\nif ADMIN:\n template_ctx_processors.extend(\n [\n 
\"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ]\n )\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"benefits\", \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": template_ctx_processors,\n },\n },\n]\n\nWSGI_APPLICATION = \"benefits.wsgi.application\"\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.environ.get(\"DJANGO_DB\", \"django\") + \".db\",\n }\n}\n\n# Password validation\n\nAUTH_PASSWORD_VALIDATORS = []\n\nif ADMIN:\n AUTH_PASSWORD_VALIDATORS.extend(\n [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n ]\n )\n\n# Internationalization\n\nLANGUAGE_CODE = \"en\"\n\nLANGUAGES = [(\"en\", \"English\"), (\"es\", \"Espa\u00f1ol\")]\n\nLOCALE_PATHS = [os.path.join(BASE_DIR, \"benefits\", \"locale\")]\n\nUSE_I18N = True\nUSE_L10N = True\n\nTIME_ZONE = \"UTC\"\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n\nSTATIC_URL = \"/static/\"\nSTATICFILES_DIRS = [os.path.join(BASE_DIR, \"benefits\", \"static\")]\nSTATIC_ROOT = os.path.join(BASE_DIR, \"static\")\n\n# Logging configuration\n\nLOG_LEVEL = os.environ.get(\"DJANGO_LOG_LEVEL\", \"DEBUG\" if DEBUG else \"WARNING\")\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"default\": {\n \"format\": \"[{asctime}] {levelname} {name}:{lineno} {message}\",\n \"datefmt\": \"%d/%b/%Y %H:%M:%S\",\n \"style\": \"{\",\n },\n },\n \"handlers\": {\n \"default\": {\"class\": \"logging.StreamHandler\", \"formatter\": \"default\"},\n },\n \"root\": {\n \"handlers\": [\"default\"],\n \"level\": LOG_LEVEL,\n },\n \"loggers\": {\"django\": {\"handlers\": [\"default\"], \"propagate\": False}},\n}\n\n# Analytics configuration\n\nANALYTICS_KEY = os.environ.get(\"ANALYTICS_KEY\")\n", "path": "benefits/settings.py"}]}
| 2,250 | 119 |
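An illustrative aside on the record above: the three cookie settings the issue asks for, written as a minimal `settings.py` fragment. It assumes Django 3.2, the version the issue's documentation links point at, and mirrors the golden diff rather than adding anything new.

```
# Illustrative settings.py fragment only; matches the golden diff above.
LANGUAGE_COOKIE_HTTPONLY = True      # cookie not readable from JavaScript
LANGUAGE_COOKIE_SAMESITE = "Strict"  # cookie not sent on cross-site requests
LANGUAGE_COOKIE_SECURE = True        # cookie only sent over HTTPS
```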
gh_patches_debug_36828
|
rasdani/github-patches
|
git_diff
|
crytic__slither-577
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FileNotFoundError: [Errno 2] No such file or directory: 'solc'
I launch program like this:
slither Contract.sol
and receive an error:
"FileNotFoundError: [Errno 2] No such file or directory: 'solc'"
I have solc installed.
$ solcjs --version
0.4.25+commit.59dbf8f1.Emscripten.clang
But executable is called **solcjs**, not **solc**. Or it is something different?
Reasoning for "Trusted" versions of Solidity
Re: https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity
I am wondering why `0.4.25 or 0.5.11` are chosen as trusted versions.
Would 0.4.26 or 0.5.17 be acceptable? (current highest patches of those minor versions)
Why are none of the 0.6.x versions included as a "trusted" version?
What are the criteria for adding / bumping the version of "trusted"? Is it based on a simple metric like time since release? Or were those two specific versions manually audited by your team members?
Sorry for all the questions. I have a project which is fairly portable between `>=0.5.0 <=0.7.0` and am wondering which version to target... I would like to use `immutable` from `>=0.6.5`, but not at the expense of some possible security issue. And after using this tool I wondered what the criteria was for "trusted versions" of the compiler.
Thanks.
</issue>
<code>
[start of slither/detectors/attributes/incorrect_solc.py]
1 """
2 Check if an incorrect version of solc is used
3 """
4
5 import re
6 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
7 from slither.formatters.attributes.incorrect_solc import format
8
9 # group:
10 # 0: ^ > >= < <= (optional)
11 # 1: ' ' (optional)
12 # 2: version number
13 # 3: version number
14 # 4: version number
15
16 PATTERN = re.compile('(\^|>|>=|<|<=)?([ ]+)?(\d+)\.(\d+)\.(\d+)')
17
18
19 class IncorrectSolc(AbstractDetector):
20 """
21 Check if an old version of solc is used
22 """
23
24 ARGUMENT = 'solc-version'
25 HELP = 'Incorrect Solidity version'
26 IMPACT = DetectorClassification.INFORMATIONAL
27 CONFIDENCE = DetectorClassification.HIGH
28
29 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity'
30
31 WIKI_TITLE = 'Incorrect versions of Solidity'
32 WIKI_DESCRIPTION = '''
33 `solc` frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.
34 We also recommend avoiding complex `pragma` statement.'''
35 WIKI_RECOMMENDATION = '''
36 Use Solidity 0.4.25 or 0.5.11. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''
37
38 COMPLEX_PRAGMA_TXT = "is too complex"
39 OLD_VERSION_TXT = "allows old versions"
40 LESS_THAN_TXT = "uses lesser than"
41
42 TOO_RECENT_VERSION_TXT = "necessitates versions too recent to be trusted. Consider deploying with 0.5.11"
43 BUGGY_VERSION_TXT = "is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)"
44
45 # Indicates the allowed versions. Must be formatted in increasing order.
46 ALLOWED_VERSIONS = ["0.4.25", "0.4.26", "0.5.11"]
47
48 # Indicates the versions that should not be used.
49 BUGGY_VERSIONS = ["0.4.22", "^0.4.22",
50 "0.5.5", "^0.5.5",
51 "0.5.6", "^0.5.6",
52 "0.5.14", "^0.5.14"]
53
54 def _check_version(self, version):
55 op = version[0]
56 if op and op not in ['>', '>=', '^']:
57 return self.LESS_THAN_TXT
58 version_number = '.'.join(version[2:])
59 if version_number not in self.ALLOWED_VERSIONS:
60 if list(map(int, version[2:])) > list(map(int, self.ALLOWED_VERSIONS[-1].split('.'))):
61 return self.TOO_RECENT_VERSION_TXT
62 return self.OLD_VERSION_TXT
63 return None
64
65 def _check_pragma(self, version):
66 if version in self.BUGGY_VERSIONS:
67 return self.BUGGY_VERSION_TXT
68 versions = PATTERN.findall(version)
69 if len(versions) == 1:
70 version = versions[0]
71 return self._check_version(version)
72 elif len(versions) == 2:
73 version_left = versions[0]
74 version_right = versions[1]
75 # Only allow two elements if the second one is
76 # <0.5.0 or <0.6.0
77 if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0'), ('<', '', '0', '7', '0')]:
78 return self.COMPLEX_PRAGMA_TXT
79 return self._check_version(version_left)
80 else:
81 return self.COMPLEX_PRAGMA_TXT
82
83 def _detect(self):
84 """
85 Detects pragma statements that allow for outdated solc versions.
86 :return: Returns the relevant JSON data for the findings.
87 """
88 # Detect all version related pragmas and check if they are disallowed.
89 results = []
90 pragma = self.slither.pragma_directives
91 disallowed_pragmas = []
92 detected_version = False
93 for p in pragma:
94 # Skip any pragma directives which do not refer to version
95 if len(p.directive) < 1 or p.directive[0] != "solidity":
96 continue
97
98 # This is version, so we test if this is disallowed.
99 detected_version = True
100 reason = self._check_pragma(p.version)
101 if reason:
102 disallowed_pragmas.append((reason, p))
103
104 # If we found any disallowed pragmas, we output our findings.
105 if disallowed_pragmas:
106 for (reason, p) in disallowed_pragmas:
107 info = ["Pragma version", p, f" {reason}\n"]
108
109 json = self.generate_result(info)
110
111 results.append(json)
112
113 return results
114
115 @staticmethod
116 def _format(slither, result):
117 format(slither, result)
118
[end of slither/detectors/attributes/incorrect_solc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py
--- a/slither/detectors/attributes/incorrect_solc.py
+++ b/slither/detectors/attributes/incorrect_solc.py
@@ -33,23 +33,30 @@
`solc` frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.
We also recommend avoiding complex `pragma` statement.'''
WIKI_RECOMMENDATION = '''
-Use Solidity 0.4.25 or 0.5.11. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''
+Deploy with any of the following Solidity versions:
+- 0.5.11 - 0.5.13,
+- 0.5.15 - 0.5.17,
+- 0.6.8,
+- 0.6.10 - 0.6.11.
+Use a simple pragma version that allows any of these versions.
+Consider using the latest version of Solidity for testing.'''
COMPLEX_PRAGMA_TXT = "is too complex"
OLD_VERSION_TXT = "allows old versions"
LESS_THAN_TXT = "uses lesser than"
- TOO_RECENT_VERSION_TXT = "necessitates versions too recent to be trusted. Consider deploying with 0.5.11"
- BUGGY_VERSION_TXT = "is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)"
+ TOO_RECENT_VERSION_TXT = "necessitates a version too recent to be trusted. Consider deploying with 0.6.11"
+ BUGGY_VERSION_TXT = "is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)"
# Indicates the allowed versions. Must be formatted in increasing order.
- ALLOWED_VERSIONS = ["0.4.25", "0.4.26", "0.5.11"]
+ ALLOWED_VERSIONS = ["0.5.11", "0.5.12", "0.5.13", "0.5.15", "0.5.16", "0.5.17", "0.6.8", "0.6.10", "0.6.11"]
# Indicates the versions that should not be used.
BUGGY_VERSIONS = ["0.4.22", "^0.4.22",
"0.5.5", "^0.5.5",
"0.5.6", "^0.5.6",
- "0.5.14", "^0.5.14"]
+ "0.5.14", "^0.5.14",
+ "0.6.9", "^0.6.9"]
def _check_version(self, version):
op = version[0]
@@ -110,6 +117,17 @@
results.append(json)
+ if self.slither.crytic_compile:
+ if self.slither.crytic_compile.compiler_version:
+ if self.slither.crytic_compile.compiler_version.version not in self.ALLOWED_VERSIONS:
+ info = ["solc-",
+ self.slither.crytic_compile.compiler_version.version,
+ " is not recommended for deployement\n"]
+
+ json = self.generate_result(info)
+
+ results.append(json)
+
return results
@staticmethod
|
{"golden_diff": "diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py\n--- a/slither/detectors/attributes/incorrect_solc.py\n+++ b/slither/detectors/attributes/incorrect_solc.py\n@@ -33,23 +33,30 @@\n `solc` frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\n We also recommend avoiding complex `pragma` statement.'''\n WIKI_RECOMMENDATION = '''\n-Use Solidity 0.4.25 or 0.5.11. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''\n+Deploy with any of the following Solidity versions:\n+- 0.5.11 - 0.5.13,\n+- 0.5.15 - 0.5.17,\n+- 0.6.8,\n+- 0.6.10 - 0.6.11.\n+Use a simple pragma version that allows any of these versions.\n+Consider using the latest version of Solidity for testing.'''\n \n COMPLEX_PRAGMA_TXT = \"is too complex\"\n OLD_VERSION_TXT = \"allows old versions\"\n LESS_THAN_TXT = \"uses lesser than\"\n \n- TOO_RECENT_VERSION_TXT = \"necessitates versions too recent to be trusted. Consider deploying with 0.5.11\"\n- BUGGY_VERSION_TXT = \"is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)\"\n+ TOO_RECENT_VERSION_TXT = \"necessitates a version too recent to be trusted. Consider deploying with 0.6.11\"\n+ BUGGY_VERSION_TXT = \"is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)\"\n \n # Indicates the allowed versions. Must be formatted in increasing order.\n- ALLOWED_VERSIONS = [\"0.4.25\", \"0.4.26\", \"0.5.11\"]\n+ ALLOWED_VERSIONS = [\"0.5.11\", \"0.5.12\", \"0.5.13\", \"0.5.15\", \"0.5.16\", \"0.5.17\", \"0.6.8\", \"0.6.10\", \"0.6.11\"]\n \n # Indicates the versions that should not be used.\n BUGGY_VERSIONS = [\"0.4.22\", \"^0.4.22\",\n \"0.5.5\", \"^0.5.5\",\n \"0.5.6\", \"^0.5.6\",\n- \"0.5.14\", \"^0.5.14\"]\n+ \"0.5.14\", \"^0.5.14\",\n+ \"0.6.9\", \"^0.6.9\"]\n \n def _check_version(self, version):\n op = version[0]\n@@ -110,6 +117,17 @@\n \n results.append(json)\n \n+ if self.slither.crytic_compile:\n+ if self.slither.crytic_compile.compiler_version:\n+ if self.slither.crytic_compile.compiler_version.version not in self.ALLOWED_VERSIONS:\n+ info = [\"solc-\",\n+ self.slither.crytic_compile.compiler_version.version,\n+ \" is not recommended for deployement\\n\"]\n+\n+ json = self.generate_result(info)\n+\n+ results.append(json)\n+\n return results\n \n @staticmethod\n", "issue": "FileNotFoundError: [Errno 2] No such file or directory: 'solc'\nI launch program like this:\r\n\r\n slither Contract.sol \r\n\r\nand receive an error:\r\n\"FileNotFoundError: [Errno 2] No such file or directory: 'solc'\"\r\n\r\nI have solc installed.\r\n\r\n $ solcjs --version\r\n 0.4.25+commit.59dbf8f1.Emscripten.clang\r\n\r\nBut executable is called **solcjs**, not **solc**. Or it is something different?\r\n\nReasoning for \"Trusted\" versions of Solidity\nRe: https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity\r\n\r\nI am wondering why `0.4.25 or 0.5.11` are chosen as trusted versions.\r\n\r\nWould 0.4.26 or 0.5.17 be acceptable? (current highest patches of those minor versions)\r\n\r\nWhy are none of the 0.6.x versions included as a \"trusted\" version?\r\n\r\nWhat are the criteria for adding / bumping the version of \"trusted\"? Is it based on a simple metric like time since release? Or were those two specific versions manually audited by your team members?\r\n\r\nSorry for all the questions. 
I have a project which is fairly portable between `>=0.5.0 <=0.7.0` and am wondering which version to target... I would like to use `immutable` from `>=0.6.5`, but not at the expense of some possible security issue. And after using this tool I wondered what the criteria was for \"trusted versions\" of the compiler.\r\n\r\nThanks.\n", "before_files": [{"content": "\"\"\"\n Check if an incorrect version of solc is used\n\"\"\"\n\nimport re\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nfrom slither.formatters.attributes.incorrect_solc import format\n\n# group:\n# 0: ^ > >= < <= (optional)\n# 1: ' ' (optional)\n# 2: version number\n# 3: version number\n# 4: version number\n\nPATTERN = re.compile('(\\^|>|>=|<|<=)?([ ]+)?(\\d+)\\.(\\d+)\\.(\\d+)')\n\n\nclass IncorrectSolc(AbstractDetector):\n \"\"\"\n Check if an old version of solc is used\n \"\"\"\n\n ARGUMENT = 'solc-version'\n HELP = 'Incorrect Solidity version'\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity'\n\n WIKI_TITLE = 'Incorrect versions of Solidity'\n WIKI_DESCRIPTION = '''\n`solc` frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\nWe also recommend avoiding complex `pragma` statement.'''\n WIKI_RECOMMENDATION = '''\nUse Solidity 0.4.25 or 0.5.11. Consider using the latest version of Solidity for testing the compilation, and a trusted version for deploying.'''\n\n COMPLEX_PRAGMA_TXT = \"is too complex\"\n OLD_VERSION_TXT = \"allows old versions\"\n LESS_THAN_TXT = \"uses lesser than\"\n\n TOO_RECENT_VERSION_TXT = \"necessitates versions too recent to be trusted. Consider deploying with 0.5.11\"\n BUGGY_VERSION_TXT = \"is known to contain severe issue (https://solidity.readthedocs.io/en/v0.5.8/bugs.html)\"\n\n # Indicates the allowed versions. 
Must be formatted in increasing order.\n ALLOWED_VERSIONS = [\"0.4.25\", \"0.4.26\", \"0.5.11\"]\n\n # Indicates the versions that should not be used.\n BUGGY_VERSIONS = [\"0.4.22\", \"^0.4.22\",\n \"0.5.5\", \"^0.5.5\",\n \"0.5.6\", \"^0.5.6\",\n \"0.5.14\", \"^0.5.14\"]\n\n def _check_version(self, version):\n op = version[0]\n if op and op not in ['>', '>=', '^']:\n return self.LESS_THAN_TXT\n version_number = '.'.join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n if list(map(int, version[2:])) > list(map(int, self.ALLOWED_VERSIONS[-1].split('.'))):\n return self.TOO_RECENT_VERSION_TXT\n return self.OLD_VERSION_TXT\n return None\n\n def _check_pragma(self, version):\n if version in self.BUGGY_VERSIONS:\n return self.BUGGY_VERSION_TXT\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n return self._check_version(version)\n elif len(versions) == 2:\n version_left = versions[0]\n version_right = versions[1]\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [('<', '', '0', '5', '0'), ('<', '', '0', '6', '0'), ('<', '', '0', '7', '0')]:\n return self.COMPLEX_PRAGMA_TXT\n return self._check_version(version_left)\n else:\n return self.COMPLEX_PRAGMA_TXT\n\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n :return: Returns the relevant JSON data for the findings.\n \"\"\"\n # Detect all version related pragmas and check if they are disallowed.\n results = []\n pragma = self.slither.pragma_directives\n disallowed_pragmas = []\n detected_version = False\n for p in pragma:\n # Skip any pragma directives which do not refer to version\n if len(p.directive) < 1 or p.directive[0] != \"solidity\":\n continue\n\n # This is version, so we test if this is disallowed.\n detected_version = True\n reason = self._check_pragma(p.version)\n if reason:\n disallowed_pragmas.append((reason, p))\n\n # If we found any disallowed pragmas, we output our findings.\n if disallowed_pragmas:\n for (reason, p) in disallowed_pragmas:\n info = [\"Pragma version\", p, f\" {reason}\\n\"]\n\n json = self.generate_result(info)\n\n results.append(json)\n\n return results\n\n @staticmethod\n def _format(slither, result):\n format(slither, result)\n", "path": "slither/detectors/attributes/incorrect_solc.py"}]}
| 2,276 | 794 |
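An illustrative aside on the solc-version record above: the detector's `PATTERN` regex applied to a few hypothetical pragma strings, to show the operator and version tuples that `_check_pragma` then inspects.

```
# Illustrative only; the regex is copied from the detector, the pragmas are made up.
import re

PATTERN = re.compile(r'(\^|>|>=|<|<=)?([ ]+)?(\d+)\.(\d+)\.(\d+)')

for pragma in ['^0.6.11', '>=0.5.0 <0.7.0', '0.4.22']:
    # Each tuple is (operator, spacing, major, minor, patch).
    print(pragma, '->', PATTERN.findall(pragma))
```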
gh_patches_debug_2854
|
rasdani/github-patches
|
git_diff
|
wger-project__wger-170
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BMI And Calorie Calculator Not Working
Using this software in Linux Mint 13.
When I enter my data into either the BMI calculator or the calorie estimator nothing happens.
I have entered my height in cm and my weight in kgs.
The BMI calculator says my BMI = 0.
I'd be happy with 10.
</issue>
<code>
[start of wger/nutrition/forms.py]
1 # -*- coding: utf-8 -*-
2
3 # This file is part of wger Workout Manager.
4 #
5 # wger Workout Manager is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU Affero General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # wger Workout Manager is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU Affero General Public License
16
17 import logging
18
19 from django import forms
20 from django.utils.translation import ugettext as _
21 from wger.core.models import UserProfile
22
23 from wger.nutrition.models import IngredientWeightUnit, Ingredient, MealItem
24 from wger.utils.widgets import Html5NumberInput
25
26
27 logger = logging.getLogger(__name__)
28
29
30 class UnitChooserForm(forms.Form):
31 '''
32 A small form to select an amount and a unit for an ingredient
33 '''
34 amount = forms.DecimalField(decimal_places=2,
35 max_digits=5,
36 localize=True)
37 unit = forms.ModelChoiceField(queryset=IngredientWeightUnit.objects.none(),
38 empty_label="g",
39 required=False)
40
41 def __init__(self, *args, **kwargs):
42 super(UnitChooserForm, self).__init__(*args, **kwargs)
43
44 if len(args) and args[0].get('ingredient'):
45 ingredient_id = args[0]['ingredient']
46
47 elif kwargs.get('data'):
48 ingredient_id = kwargs['data']['ingredient_id']
49
50 else:
51 ingredient_id = -1
52
53 self.fields['unit'].queryset = IngredientWeightUnit.objects.filter(
54 ingredient_id=ingredient_id).select_related()
55
56
57 class BmiForm(forms.ModelForm):
58 weight = forms.DecimalField(widget=Html5NumberInput(),
59 max_value=999)
60
61 class Meta:
62 model = UserProfile
63 fields = ('height', )
64
65
66 class BmrForm(forms.ModelForm):
67 '''
68 Form for the basal metabolic rate
69 '''
70 weight = forms.DecimalField(widget=Html5NumberInput())
71
72 class Meta:
73 model = UserProfile
74 fields = ('age', 'height', 'gender')
75
76
77 class PhysicalActivitiesForm(forms.ModelForm):
78 '''
79 Form for the additional physical activities
80 '''
81 class Meta:
82 model = UserProfile
83 fields = ('sleep_hours',
84 'work_hours',
85 'work_intensity',
86 'sport_hours',
87 'sport_intensity',
88 'freetime_hours',
89 'freetime_intensity')
90
91
92 class DailyCaloriesForm(forms.ModelForm):
93 '''
94 Form for the total daily calories needed
95 '''
96
97 base_calories = forms.IntegerField(label=_('Basic caloric intake'),
98 help_text=_('Your basic caloric intake as calculated for '
99 'your data'),
100 required=False,
101 widget=Html5NumberInput())
102 additional_calories = forms.IntegerField(label=_('Additional calories'),
103 help_text=_('Additional calories to add to the base '
104 'rate (to substract, enter a negative '
105 'number)'),
106 initial=0,
107 required=False,
108 widget=Html5NumberInput())
109
110 class Meta:
111 model = UserProfile
112 fields = ('calories',)
113
114
115 class MealItemForm(forms.ModelForm):
116 weight_unit = forms.ModelChoiceField(queryset=IngredientWeightUnit.objects.none(),
117 empty_label="g",
118 required=False)
119 ingredient = forms.ModelChoiceField(queryset=Ingredient.objects.all(),
120 widget=forms.HiddenInput)
121
122 class Meta:
123 model = MealItem
124 fields = '__all__'
125
126 def __init__(self, *args, **kwargs):
127 super(MealItemForm, self).__init__(*args, **kwargs)
128
129 # Get the ingredient_id
130 ingredient_id = None
131
132 if kwargs.get('instance'):
133 ingredient_id = kwargs['instance'].ingredient_id
134
135 if kwargs.get('data'):
136 ingredient_id = kwargs['data']['ingredient']
137
138 # Filter the available ingredients
139 if ingredient_id:
140 self.fields['weight_unit'].queryset = \
141 IngredientWeightUnit.objects.filter(ingredient_id=ingredient_id)
142
[end of wger/nutrition/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wger/nutrition/forms.py b/wger/nutrition/forms.py
--- a/wger/nutrition/forms.py
+++ b/wger/nutrition/forms.py
@@ -55,6 +55,9 @@
class BmiForm(forms.ModelForm):
+ height = forms.DecimalField(widget=Html5NumberInput(),
+ max_value=999,
+ label=_('Height (cm)'))
weight = forms.DecimalField(widget=Html5NumberInput(),
max_value=999)
|
{"golden_diff": "diff --git a/wger/nutrition/forms.py b/wger/nutrition/forms.py\n--- a/wger/nutrition/forms.py\n+++ b/wger/nutrition/forms.py\n@@ -55,6 +55,9 @@\n \n \n class BmiForm(forms.ModelForm):\n+ height = forms.DecimalField(widget=Html5NumberInput(),\n+ max_value=999,\n+ label=_('Height (cm)'))\n weight = forms.DecimalField(widget=Html5NumberInput(),\n max_value=999)\n", "issue": "BMI And Calorie Calculator Not Working\nUsing this software in Linux Mint 13.\nWhen I enter my data into either the BMI calculator or the calorie estimator nothing happens.\nI have entered my height in cm and my weight in kgs.\nThe BMI calculator says my BMI = 0.\nI'd be happy with 10.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This file is part of wger Workout Manager.\n#\n# wger Workout Manager is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# wger Workout Manager is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n\nimport logging\n\nfrom django import forms\nfrom django.utils.translation import ugettext as _\nfrom wger.core.models import UserProfile\n\nfrom wger.nutrition.models import IngredientWeightUnit, Ingredient, MealItem\nfrom wger.utils.widgets import Html5NumberInput\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass UnitChooserForm(forms.Form):\n '''\n A small form to select an amount and a unit for an ingredient\n '''\n amount = forms.DecimalField(decimal_places=2,\n max_digits=5,\n localize=True)\n unit = forms.ModelChoiceField(queryset=IngredientWeightUnit.objects.none(),\n empty_label=\"g\",\n required=False)\n\n def __init__(self, *args, **kwargs):\n super(UnitChooserForm, self).__init__(*args, **kwargs)\n\n if len(args) and args[0].get('ingredient'):\n ingredient_id = args[0]['ingredient']\n\n elif kwargs.get('data'):\n ingredient_id = kwargs['data']['ingredient_id']\n\n else:\n ingredient_id = -1\n\n self.fields['unit'].queryset = IngredientWeightUnit.objects.filter(\n ingredient_id=ingredient_id).select_related()\n\n\nclass BmiForm(forms.ModelForm):\n weight = forms.DecimalField(widget=Html5NumberInput(),\n max_value=999)\n\n class Meta:\n model = UserProfile\n fields = ('height', )\n\n\nclass BmrForm(forms.ModelForm):\n '''\n Form for the basal metabolic rate\n '''\n weight = forms.DecimalField(widget=Html5NumberInput())\n\n class Meta:\n model = UserProfile\n fields = ('age', 'height', 'gender')\n\n\nclass PhysicalActivitiesForm(forms.ModelForm):\n '''\n Form for the additional physical activities\n '''\n class Meta:\n model = UserProfile\n fields = ('sleep_hours',\n 'work_hours',\n 'work_intensity',\n 'sport_hours',\n 'sport_intensity',\n 'freetime_hours',\n 'freetime_intensity')\n\n\nclass DailyCaloriesForm(forms.ModelForm):\n '''\n Form for the total daily calories needed\n '''\n\n base_calories = forms.IntegerField(label=_('Basic caloric intake'),\n help_text=_('Your basic caloric intake as calculated for '\n 'your data'),\n required=False,\n widget=Html5NumberInput())\n additional_calories = forms.IntegerField(label=_('Additional calories'),\n help_text=_('Additional calories to add to the base '\n 'rate 
(to substract, enter a negative '\n 'number)'),\n initial=0,\n required=False,\n widget=Html5NumberInput())\n\n class Meta:\n model = UserProfile\n fields = ('calories',)\n\n\nclass MealItemForm(forms.ModelForm):\n weight_unit = forms.ModelChoiceField(queryset=IngredientWeightUnit.objects.none(),\n empty_label=\"g\",\n required=False)\n ingredient = forms.ModelChoiceField(queryset=Ingredient.objects.all(),\n widget=forms.HiddenInput)\n\n class Meta:\n model = MealItem\n fields = '__all__'\n\n def __init__(self, *args, **kwargs):\n super(MealItemForm, self).__init__(*args, **kwargs)\n\n # Get the ingredient_id\n ingredient_id = None\n\n if kwargs.get('instance'):\n ingredient_id = kwargs['instance'].ingredient_id\n\n if kwargs.get('data'):\n ingredient_id = kwargs['data']['ingredient']\n\n # Filter the available ingredients\n if ingredient_id:\n self.fields['weight_unit'].queryset = \\\n IngredientWeightUnit.objects.filter(ingredient_id=ingredient_id)\n", "path": "wger/nutrition/forms.py"}]}
| 1,828 | 111 |
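An illustrative aside on the BMI record above: the standard BMI arithmetic such a form ultimately feeds, assuming weight in kilograms and height in centimetres as the added 'Height (cm)' label suggests. wger's own calculation code is not shown in the record, so this is only the textbook formula.

```
# Illustrative only; not taken from the wger code base.
def bmi(weight_kg: float, height_cm: float) -> float:
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

print(round(bmi(weight_kg=80, height_cm=180), 1))  # 24.7
```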
gh_patches_debug_98
|
rasdani/github-patches
|
git_diff
|
spack__spack-6618
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
xrootd needs openssl
xrootd needs openssl headers to compile for 4.6.0
spack find : always prompt 0 installed packages
On a clean `develop` checkout :
```
$ git clone https://github.com/LLNL/spack.git
Cloning into 'spack'...
remote: Counting objects: 25613, done.
remote: Compressing objects: 100% (42/42), done.
remote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557
Receiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done.
Resolving deltas: 100% (13031/13031), done.
Checking connectivity... done.
$ cd spack
$ . share/spack/setup-env.sh
$ spack compilers
==> Available compilers
-- gcc ----------------------------------------------------------
[email protected]
$ spack install zlib
==> Installing zlib
==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz
######################################################################## 100,0%
==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz
==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
==> No patches needed for zlib
==> Building zlib
==> Successfully installed zlib
Fetch: 0.01s. Build: 3.69s. Total: 3.70s.
[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
$ spack find
==> 0 installed packages.
$ spack install szip
==> Installing szip
==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz
######################################################################## 100,0%
==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz
==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
==> No patches needed for szip
==> Building szip
==> Successfully installed szip
Fetch: 0.01s. Build: 8.09s. Total: 8.10s.
[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
$ spack find
==> 0 installed packages.
```
The db seems to be written correctly :
```
database:
installs:
d6pdl6xvnvap6ihrqcqtgvweghbszmix:
explicit: true
installed: true
path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
ref_count: 0
spec:
zlib:
arch: linux-x86_64
compiler:
name: gcc
version: '4.8'
dependencies: {}
namespace: builtin
parameters:
cflags: []
cppflags: []
cxxflags: []
fflags: []
ldflags: []
ldlibs: []
version: 1.2.8
esfmhl54wbdb7nnnip6y6jbxlbmxs2jq:
explicit: true
installed: true
path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
ref_count: 0
spec:
szip:
arch: linux-x86_64
compiler:
name: gcc
version: '4.8'
dependencies: {}
namespace: builtin
parameters:
cflags: []
cppflags: []
cxxflags: []
fflags: []
ldflags: []
ldlibs: []
version: '2.1'
version: 0.9.1
```
xrootd requires zlib to be installed on system
CMake can't find zlib when installing xrootd. zlib is not listed as a dependency for xrootd, so CMake looks for it on the system.
</issue>
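For orientation, here is a hedged, abridged sketch of the dependency directives in the Spack recipe in question; the `depends_on('openssl')` line is the assumed addition (the patch further down in this record takes the same approach), while the versions and docstring are trimmed and the snippet only runs inside a Spack checkout:

```python
# Abridged excerpt of var/spack/repos/builtin/packages/xrootd/package.py;
# only the dependency directives are shown.
from spack import *


class Xrootd(CMakePackage):
    """XRootD: scalable, fault-tolerant access to data repositories."""

    homepage = "http://xrootd.org"
    url = "http://xrootd.org/download/v4.6.0/xrootd-4.6.0.tar.gz"

    depends_on('[email protected]:', type='build')
    depends_on('zlib')      # already declared; lets CMake locate zlib headers in the build env
    depends_on('openssl')   # assumed addition: provides the OpenSSL headers xrootd 4.6.0 needs
```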
<code>
[start of var/spack/repos/builtin/packages/xrootd/package.py]
1 ##############################################################################
2 # Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/spack/spack
10 # Please also see the NOTICE and LICENSE files for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25
26 from spack import *
27
28
29 class Xrootd(CMakePackage):
30 """The XROOTD project aims at giving high performance, scalable fault
31 tolerant access to data repositories of many kinds."""
32 homepage = "http://xrootd.org"
33 url = "http://xrootd.org/download/v4.6.0/xrootd-4.6.0.tar.gz"
34
35 version('4.6.0', '5d60aade2d995b68fe0c46896bc4a5d1')
36 version('4.5.0', 'd485df3d4a991e1c35efa4bf9ef663d7')
37 version('4.4.1', '72b0842f802ccc94dede4ac5ab2a589e')
38 version('4.4.0', '58f55e56801d3661d753ff5fd33dbcc9')
39 version('4.3.0', '39c2fab9f632f35e12ff607ccaf9e16c')
40
41 depends_on('[email protected]:', type='build')
42 depends_on('zlib')
43
[end of var/spack/repos/builtin/packages/xrootd/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/xrootd/package.py b/var/spack/repos/builtin/packages/xrootd/package.py
--- a/var/spack/repos/builtin/packages/xrootd/package.py
+++ b/var/spack/repos/builtin/packages/xrootd/package.py
@@ -40,3 +40,4 @@
depends_on('[email protected]:', type='build')
depends_on('zlib')
+ depends_on('openssl')
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/xrootd/package.py b/var/spack/repos/builtin/packages/xrootd/package.py\n--- a/var/spack/repos/builtin/packages/xrootd/package.py\n+++ b/var/spack/repos/builtin/packages/xrootd/package.py\n@@ -40,3 +40,4 @@\n \n depends_on('[email protected]:', type='build')\n depends_on('zlib')\n+ depends_on('openssl')\n", "issue": "xrootd needs openssl\nxrootd needs openssl headers to compile for 4.6.0\nspack find : always prompt 0 installed packages\nOn a clean `develop` checkout : \n\n```\n$ git clone https://github.com/LLNL/spack.git\nCloning into 'spack'...\nremote: Counting objects: 25613, done.\nremote: Compressing objects: 100% (42/42), done.\nremote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557\nReceiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done.\nResolving deltas: 100% (13031/13031), done.\nChecking connectivity... done.\n\n$ cd spack\n$ . share/spack/setup-env.sh \n$ spack compilers\n==> Available compilers\n-- gcc ----------------------------------------------------------\[email protected]\n\n$ spack install zlib\n==> Installing zlib\n==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz\n######################################################################## 100,0%\n==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz\n==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n==> No patches needed for zlib\n==> Building zlib\n==> Successfully installed zlib\n Fetch: 0.01s. Build: 3.69s. Total: 3.70s.\n[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n\n$ spack find\n==> 0 installed packages.\n\n$ spack install szip\n==> Installing szip\n==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz\n######################################################################## 100,0%\n==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz\n==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n==> No patches needed for szip\n==> Building szip\n==> Successfully installed szip\n Fetch: 0.01s. Build: 8.09s. 
Total: 8.10s.\n[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n\n$ spack find \n==> 0 installed packages.\n```\n\nThe db seems to be written correctly : \n\n```\ndatabase:\n installs:\n d6pdl6xvnvap6ihrqcqtgvweghbszmix:\n explicit: true\n installed: true\n path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n ref_count: 0\n spec:\n zlib:\n arch: linux-x86_64\n compiler:\n name: gcc\n version: '4.8'\n dependencies: {}\n namespace: builtin\n parameters:\n cflags: []\n cppflags: []\n cxxflags: []\n fflags: []\n ldflags: []\n ldlibs: []\n version: 1.2.8\n esfmhl54wbdb7nnnip6y6jbxlbmxs2jq:\n explicit: true\n installed: true\n path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n ref_count: 0\n spec:\n szip:\n arch: linux-x86_64\n compiler:\n name: gcc\n version: '4.8'\n dependencies: {}\n namespace: builtin\n parameters:\n cflags: []\n cppflags: []\n cxxflags: []\n fflags: []\n ldflags: []\n ldlibs: []\n version: '2.1'\n version: 0.9.1\n```\n\nxrootd requires zlib to be installed on system\nCMake can't find zlib when installing xrootd. zlib is not listed as a dependency fro xrootd, so CMake looks for it on the system.\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/spack/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\n\nfrom spack import *\n\n\nclass Xrootd(CMakePackage):\n \"\"\"The XROOTD project aims at giving high performance, scalable fault\n tolerant access to data repositories of many kinds.\"\"\"\n homepage = \"http://xrootd.org\"\n url = \"http://xrootd.org/download/v4.6.0/xrootd-4.6.0.tar.gz\"\n\n version('4.6.0', '5d60aade2d995b68fe0c46896bc4a5d1')\n version('4.5.0', 'd485df3d4a991e1c35efa4bf9ef663d7')\n version('4.4.1', '72b0842f802ccc94dede4ac5ab2a589e')\n version('4.4.0', '58f55e56801d3661d753ff5fd33dbcc9')\n version('4.3.0', '39c2fab9f632f35e12ff607ccaf9e16c')\n\n depends_on('[email protected]:', type='build')\n depends_on('zlib')\n", "path": "var/spack/repos/builtin/packages/xrootd/package.py"}]}
| 2,398 | 102 |
gh_patches_debug_39725
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-5787
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Multi-qubit conditions in DeferredMeasurementsTransformer
Use https://github.com/quantumlib/Cirq/pull/5755 to allow control keys from multi-qubit measurements in the deferred measurements transformer. That new feature lets us express "anything but all zeros", which is exactly the condition required here.
https://github.com/quantumlib/Cirq/blob/77b61f500af3726ad4b34b06c72e21eac48d3db6/cirq-core/cirq/transformers/measurement_transformers.py#L121-L124
</issue>
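For context, a hedged sketch (not the project's implementation) of how the required condition — any outcome except all zeros across the qubits of one measurement key — can be enumerated and packed into the `SumOfProducts` control-value container introduced by the referenced PR; the qubit names below are illustrative and `controlled_by` accepting a `SumOfProducts` assumes that PR's integration:

```python
# Sketch: express "anything but all zeros" as explicit control values
# for the ancilla qubits that stand in for a multi-qubit measurement.
import itertools

import cirq


def anything_but_all_zeros(qubits):
    """Return a SumOfProducts accepting every outcome except (0, ..., 0)."""
    # Enumerate all value tuples in lexicographic order, then drop the
    # first one, which is the all-zeros tuple.
    all_values = itertools.product(*[range(q.dimension) for q in qubits])
    non_zero = tuple(itertools.islice(all_values, 1, None))
    return cirq.SumOfProducts(non_zero)


# Illustrative use: control an X gate on "anything but all zeros" of two qubits.
a, b, t = cirq.LineQubit.range(3)
op = cirq.X(t).controlled_by(a, b, control_values=anything_but_all_zeros([a, b]))
print(op)
```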
<code>
[start of cirq-core/cirq/transformers/measurement_transformers.py]
1 # Copyright 2022 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Any, Dict, List, Optional, TYPE_CHECKING, Union
16
17 from cirq import ops, protocols, value
18 from cirq.transformers import transformer_api, transformer_primitives
19 from cirq.transformers.synchronize_terminal_measurements import find_terminal_measurements
20
21 if TYPE_CHECKING:
22 import cirq
23
24
25 class _MeasurementQid(ops.Qid):
26 """A qubit that substitutes in for a deferred measurement.
27
28 Exactly one qubit will be created per qubit in the measurement gate.
29 """
30
31 def __init__(self, key: Union[str, 'cirq.MeasurementKey'], qid: 'cirq.Qid'):
32 """Initializes the qubit.
33
34 Args:
35 key: The key of the measurement gate being deferred.
36 qid: One qubit that is being measured. Each deferred measurement
37 should create one new _MeasurementQid per qubit being measured
38 by that gate.
39 """
40 self._key = value.MeasurementKey.parse_serialized(key) if isinstance(key, str) else key
41 self._qid = qid
42
43 @property
44 def dimension(self) -> int:
45 return self._qid.dimension
46
47 def _comparison_key(self) -> Any:
48 return (str(self._key), self._qid._comparison_key())
49
50 def __str__(self) -> str:
51 return f"M('{self._key}', q={self._qid})"
52
53 def __repr__(self) -> str:
54 return f'_MeasurementQid({self._key!r}, {self._qid!r})'
55
56
57 @transformer_api.transformer
58 def defer_measurements(
59 circuit: 'cirq.AbstractCircuit', *, context: Optional['cirq.TransformerContext'] = None
60 ) -> 'cirq.Circuit':
61 """Implements the Deferred Measurement Principle.
62
63 Uses the Deferred Measurement Principle to move all measurements to the
64 end of the circuit. All non-terminal measurements are changed to
65 conditional quantum gates onto ancilla qubits, and classically controlled
66 operations are transformed to quantum controls from those ancilla qubits.
67 Finally, measurements of all ancilla qubits are appended to the end of the
68 circuit.
69
70 Optimizing deferred measurements is an area of active research, and future
71 iterations may contain optimizations that reduce the number of ancilla
72 qubits, so one should not depend on the exact shape of the output from this
73 function. Only the logical equivalence is guaranteed to remain unchanged.
74 Moment and subcircuit structure is not preserved.
75
76 Args:
77 circuit: The circuit to transform. It will not be modified.
78 context: `cirq.TransformerContext` storing common configurable options
79 for transformers.
80 Returns:
81 A circuit with equivalent logic, but all measurements at the end of the
82 circuit.
83 Raises:
84 ValueError: If sympy-based classical conditions are used, or if
85 conditions based on multi-qubit measurements exist. (The latter of
86 these is planned to be implemented soon).
87 NotImplementedError: When attempting to defer a measurement with a
88 confusion map. (https://github.com/quantumlib/Cirq/issues/5482)
89 """
90
91 circuit = transformer_primitives.unroll_circuit_op(circuit, deep=True, tags_to_check=None)
92 terminal_measurements = {op for _, op in find_terminal_measurements(circuit)}
93 measurement_qubits: Dict['cirq.MeasurementKey', List['_MeasurementQid']] = {}
94
95 def defer(op: 'cirq.Operation', _) -> 'cirq.OP_TREE':
96 if op in terminal_measurements:
97 return op
98 gate = op.gate
99 if isinstance(gate, ops.MeasurementGate):
100 if gate.confusion_map:
101 raise NotImplementedError(
102 "Deferring confused measurement is not implemented, but found "
103 f"measurement with key={gate.key} and non-empty confusion map."
104 )
105 key = value.MeasurementKey.parse_serialized(gate.key)
106 targets = [_MeasurementQid(key, q) for q in op.qubits]
107 measurement_qubits[key] = targets
108 cxs = [ops.CX(q, target) for q, target in zip(op.qubits, targets)]
109 xs = [ops.X(targets[i]) for i, b in enumerate(gate.full_invert_mask()) if b]
110 return cxs + xs
111 elif protocols.is_measurement(op):
112 return [defer(op, None) for op in protocols.decompose_once(op)]
113 elif op.classical_controls:
114 controls = []
115 for c in op.classical_controls:
116 if isinstance(c, value.KeyCondition):
117 if c.key not in measurement_qubits:
118 raise ValueError(f'Deferred measurement for key={c.key} not found.')
119 qubits = measurement_qubits[c.key]
120 if len(qubits) != 1:
121 # TODO: Multi-qubit conditions require
122 # https://github.com/quantumlib/Cirq/issues/4512
123 # Remember to update docstring above once this works.
124 raise ValueError('Only single qubit conditions are allowed.')
125 controls.extend(qubits)
126 else:
127 raise ValueError('Only KeyConditions are allowed.')
128 return op.without_classical_controls().controlled_by(
129 *controls, control_values=[tuple(range(1, q.dimension)) for q in controls]
130 )
131 return op
132
133 circuit = transformer_primitives.map_operations_and_unroll(
134 circuit=circuit,
135 map_func=defer,
136 tags_to_ignore=context.tags_to_ignore if context else (),
137 raise_if_add_qubits=False,
138 ).unfreeze()
139 for k, qubits in measurement_qubits.items():
140 circuit.append(ops.measure(*qubits, key=k))
141 return circuit
142
143
144 @transformer_api.transformer
145 def dephase_measurements(
146 circuit: 'cirq.AbstractCircuit',
147 *,
148 context: Optional['cirq.TransformerContext'] = transformer_api.TransformerContext(deep=True),
149 ) -> 'cirq.Circuit':
150 """Changes all measurements to a dephase operation.
151
152 This transformer is useful when using a density matrix simulator, when
153 wishing to calculate the final density matrix of a circuit and not simulate
154 the measurements themselves.
155
156 Args:
157 circuit: The circuit to transform. It will not be modified.
158 context: `cirq.TransformerContext` storing common configurable options
159 for transformers. The default has `deep=True` to ensure
160 measurements at all levels are dephased.
161 Returns:
162 A copy of the circuit, with dephase operations in place of all
163 measurements.
164 Raises:
165 ValueError: If the circuit contains classical controls. In this case,
166 it is required to change these to quantum controls via
167 `cirq.defer_measurements` first. Since deferral adds ancilla qubits
168 to the circuit, this is not done automatically, to prevent
169 surprises.
170 """
171
172 def dephase(op: 'cirq.Operation', _) -> 'cirq.OP_TREE':
173 gate = op.gate
174 if isinstance(gate, ops.MeasurementGate):
175 key = value.MeasurementKey.parse_serialized(gate.key)
176 return ops.KrausChannel.from_channel(ops.phase_damp(1), key=key).on_each(op.qubits)
177 elif isinstance(op, ops.ClassicallyControlledOperation):
178 raise ValueError('Use cirq.defer_measurements first to remove classical controls.')
179 return op
180
181 ignored = () if context is None else context.tags_to_ignore
182 return transformer_primitives.map_operations(
183 circuit, dephase, deep=context.deep if context else True, tags_to_ignore=ignored
184 ).unfreeze()
185
186
187 @transformer_api.transformer
188 def drop_terminal_measurements(
189 circuit: 'cirq.AbstractCircuit',
190 *,
191 context: Optional['cirq.TransformerContext'] = transformer_api.TransformerContext(deep=True),
192 ) -> 'cirq.Circuit':
193 """Removes terminal measurements from a circuit.
194
195 This transformer is helpful when trying to capture the final state vector
196 of a circuit with many terminal measurements, as simulating the circuit
197 with those measurements in place would otherwise collapse the final state.
198
199 Args:
200 circuit: The circuit to transform. It will not be modified.
201 context: `cirq.TransformerContext` storing common configurable options
202 for transformers. The default has `deep=True`, as "terminal
203 measurements" is ill-defined without inspecting subcircuits;
204 passing a context with `deep=False` will return an error.
205 Returns:
206 A copy of the circuit, with identity or X gates in place of terminal
207 measurements.
208 Raises:
209 ValueError: if the circuit contains non-terminal measurements, or if
210 the provided context has`deep=False`.
211 """
212
213 if context is None or not context.deep:
214 raise ValueError(
215 'Context has `deep=False`, but `deep=True` is required to drop terminal measurements.'
216 )
217
218 if not circuit.are_all_measurements_terminal():
219 raise ValueError('Circuit contains a non-terminal measurement.')
220
221 def flip_inversion(op: 'cirq.Operation', _) -> 'cirq.OP_TREE':
222 if isinstance(op.gate, ops.MeasurementGate):
223 return [
224 ops.X(q) if b else ops.I(q) for q, b in zip(op.qubits, op.gate.full_invert_mask())
225 ]
226 return op
227
228 ignored = () if context is None else context.tags_to_ignore
229 return transformer_primitives.map_operations(
230 circuit, flip_inversion, deep=context.deep if context else True, tags_to_ignore=ignored
231 ).unfreeze()
232
[end of cirq-core/cirq/transformers/measurement_transformers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cirq-core/cirq/transformers/measurement_transformers.py b/cirq-core/cirq/transformers/measurement_transformers.py
--- a/cirq-core/cirq/transformers/measurement_transformers.py
+++ b/cirq-core/cirq/transformers/measurement_transformers.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import itertools
from typing import Any, Dict, List, Optional, TYPE_CHECKING, Union
from cirq import ops, protocols, value
@@ -81,9 +82,7 @@
A circuit with equivalent logic, but all measurements at the end of the
circuit.
Raises:
- ValueError: If sympy-based classical conditions are used, or if
- conditions based on multi-qubit measurements exist. (The latter of
- these is planned to be implemented soon).
+ ValueError: If sympy-based classical conditions are used.
NotImplementedError: When attempting to defer a measurement with a
confusion map. (https://github.com/quantumlib/Cirq/issues/5482)
"""
@@ -111,23 +110,22 @@
elif protocols.is_measurement(op):
return [defer(op, None) for op in protocols.decompose_once(op)]
elif op.classical_controls:
- controls = []
+ new_op = op.without_classical_controls()
for c in op.classical_controls:
if isinstance(c, value.KeyCondition):
if c.key not in measurement_qubits:
raise ValueError(f'Deferred measurement for key={c.key} not found.')
- qubits = measurement_qubits[c.key]
- if len(qubits) != 1:
- # TODO: Multi-qubit conditions require
- # https://github.com/quantumlib/Cirq/issues/4512
- # Remember to update docstring above once this works.
- raise ValueError('Only single qubit conditions are allowed.')
- controls.extend(qubits)
+ qs = measurement_qubits[c.key]
+ if len(qs) == 1:
+ control_values: Any = range(1, qs[0].dimension)
+ else:
+ all_values = itertools.product(*[range(q.dimension) for q in qs])
+ anything_but_all_zeros = tuple(itertools.islice(all_values, 1, None))
+ control_values = ops.SumOfProducts(anything_but_all_zeros)
+ new_op = new_op.controlled_by(*qs, control_values=control_values)
else:
raise ValueError('Only KeyConditions are allowed.')
- return op.without_classical_controls().controlled_by(
- *controls, control_values=[tuple(range(1, q.dimension)) for q in controls]
- )
+ return new_op
return op
circuit = transformer_primitives.map_operations_and_unroll(
|
{"golden_diff": "diff --git a/cirq-core/cirq/transformers/measurement_transformers.py b/cirq-core/cirq/transformers/measurement_transformers.py\n--- a/cirq-core/cirq/transformers/measurement_transformers.py\n+++ b/cirq-core/cirq/transformers/measurement_transformers.py\n@@ -12,6 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import itertools\n from typing import Any, Dict, List, Optional, TYPE_CHECKING, Union\n \n from cirq import ops, protocols, value\n@@ -81,9 +82,7 @@\n A circuit with equivalent logic, but all measurements at the end of the\n circuit.\n Raises:\n- ValueError: If sympy-based classical conditions are used, or if\n- conditions based on multi-qubit measurements exist. (The latter of\n- these is planned to be implemented soon).\n+ ValueError: If sympy-based classical conditions are used.\n NotImplementedError: When attempting to defer a measurement with a\n confusion map. (https://github.com/quantumlib/Cirq/issues/5482)\n \"\"\"\n@@ -111,23 +110,22 @@\n elif protocols.is_measurement(op):\n return [defer(op, None) for op in protocols.decompose_once(op)]\n elif op.classical_controls:\n- controls = []\n+ new_op = op.without_classical_controls()\n for c in op.classical_controls:\n if isinstance(c, value.KeyCondition):\n if c.key not in measurement_qubits:\n raise ValueError(f'Deferred measurement for key={c.key} not found.')\n- qubits = measurement_qubits[c.key]\n- if len(qubits) != 1:\n- # TODO: Multi-qubit conditions require\n- # https://github.com/quantumlib/Cirq/issues/4512\n- # Remember to update docstring above once this works.\n- raise ValueError('Only single qubit conditions are allowed.')\n- controls.extend(qubits)\n+ qs = measurement_qubits[c.key]\n+ if len(qs) == 1:\n+ control_values: Any = range(1, qs[0].dimension)\n+ else:\n+ all_values = itertools.product(*[range(q.dimension) for q in qs])\n+ anything_but_all_zeros = tuple(itertools.islice(all_values, 1, None))\n+ control_values = ops.SumOfProducts(anything_but_all_zeros)\n+ new_op = new_op.controlled_by(*qs, control_values=control_values)\n else:\n raise ValueError('Only KeyConditions are allowed.')\n- return op.without_classical_controls().controlled_by(\n- *controls, control_values=[tuple(range(1, q.dimension)) for q in controls]\n- )\n+ return new_op\n return op\n \n circuit = transformer_primitives.map_operations_and_unroll(\n", "issue": "Multi-qubit conditions in DeferredMeasurementsTransformer \nUse https://github.com/quantumlib/Cirq/pull/5755 to allow control keys from multi-qubit measurements in deferred measurements transformers. 
That new feature allows us to express \"anything but all zeros\", which is the condition required.\r\n\r\nhttps://github.com/quantumlib/Cirq/blob/77b61f500af3726ad4b34b06c72e21eac48d3db6/cirq-core/cirq/transformers/measurement_transformers.py#L121-L124\n", "before_files": [{"content": "# Copyright 2022 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Dict, List, Optional, TYPE_CHECKING, Union\n\nfrom cirq import ops, protocols, value\nfrom cirq.transformers import transformer_api, transformer_primitives\nfrom cirq.transformers.synchronize_terminal_measurements import find_terminal_measurements\n\nif TYPE_CHECKING:\n import cirq\n\n\nclass _MeasurementQid(ops.Qid):\n \"\"\"A qubit that substitutes in for a deferred measurement.\n\n Exactly one qubit will be created per qubit in the measurement gate.\n \"\"\"\n\n def __init__(self, key: Union[str, 'cirq.MeasurementKey'], qid: 'cirq.Qid'):\n \"\"\"Initializes the qubit.\n\n Args:\n key: The key of the measurement gate being deferred.\n qid: One qubit that is being measured. Each deferred measurement\n should create one new _MeasurementQid per qubit being measured\n by that gate.\n \"\"\"\n self._key = value.MeasurementKey.parse_serialized(key) if isinstance(key, str) else key\n self._qid = qid\n\n @property\n def dimension(self) -> int:\n return self._qid.dimension\n\n def _comparison_key(self) -> Any:\n return (str(self._key), self._qid._comparison_key())\n\n def __str__(self) -> str:\n return f\"M('{self._key}', q={self._qid})\"\n\n def __repr__(self) -> str:\n return f'_MeasurementQid({self._key!r}, {self._qid!r})'\n\n\n@transformer_api.transformer\ndef defer_measurements(\n circuit: 'cirq.AbstractCircuit', *, context: Optional['cirq.TransformerContext'] = None\n) -> 'cirq.Circuit':\n \"\"\"Implements the Deferred Measurement Principle.\n\n Uses the Deferred Measurement Principle to move all measurements to the\n end of the circuit. All non-terminal measurements are changed to\n conditional quantum gates onto ancilla qubits, and classically controlled\n operations are transformed to quantum controls from those ancilla qubits.\n Finally, measurements of all ancilla qubits are appended to the end of the\n circuit.\n\n Optimizing deferred measurements is an area of active research, and future\n iterations may contain optimizations that reduce the number of ancilla\n qubits, so one should not depend on the exact shape of the output from this\n function. Only the logical equivalence is guaranteed to remain unchanged.\n Moment and subcircuit structure is not preserved.\n\n Args:\n circuit: The circuit to transform. It will not be modified.\n context: `cirq.TransformerContext` storing common configurable options\n for transformers.\n Returns:\n A circuit with equivalent logic, but all measurements at the end of the\n circuit.\n Raises:\n ValueError: If sympy-based classical conditions are used, or if\n conditions based on multi-qubit measurements exist. 
(The latter of\n these is planned to be implemented soon).\n NotImplementedError: When attempting to defer a measurement with a\n confusion map. (https://github.com/quantumlib/Cirq/issues/5482)\n \"\"\"\n\n circuit = transformer_primitives.unroll_circuit_op(circuit, deep=True, tags_to_check=None)\n terminal_measurements = {op for _, op in find_terminal_measurements(circuit)}\n measurement_qubits: Dict['cirq.MeasurementKey', List['_MeasurementQid']] = {}\n\n def defer(op: 'cirq.Operation', _) -> 'cirq.OP_TREE':\n if op in terminal_measurements:\n return op\n gate = op.gate\n if isinstance(gate, ops.MeasurementGate):\n if gate.confusion_map:\n raise NotImplementedError(\n \"Deferring confused measurement is not implemented, but found \"\n f\"measurement with key={gate.key} and non-empty confusion map.\"\n )\n key = value.MeasurementKey.parse_serialized(gate.key)\n targets = [_MeasurementQid(key, q) for q in op.qubits]\n measurement_qubits[key] = targets\n cxs = [ops.CX(q, target) for q, target in zip(op.qubits, targets)]\n xs = [ops.X(targets[i]) for i, b in enumerate(gate.full_invert_mask()) if b]\n return cxs + xs\n elif protocols.is_measurement(op):\n return [defer(op, None) for op in protocols.decompose_once(op)]\n elif op.classical_controls:\n controls = []\n for c in op.classical_controls:\n if isinstance(c, value.KeyCondition):\n if c.key not in measurement_qubits:\n raise ValueError(f'Deferred measurement for key={c.key} not found.')\n qubits = measurement_qubits[c.key]\n if len(qubits) != 1:\n # TODO: Multi-qubit conditions require\n # https://github.com/quantumlib/Cirq/issues/4512\n # Remember to update docstring above once this works.\n raise ValueError('Only single qubit conditions are allowed.')\n controls.extend(qubits)\n else:\n raise ValueError('Only KeyConditions are allowed.')\n return op.without_classical_controls().controlled_by(\n *controls, control_values=[tuple(range(1, q.dimension)) for q in controls]\n )\n return op\n\n circuit = transformer_primitives.map_operations_and_unroll(\n circuit=circuit,\n map_func=defer,\n tags_to_ignore=context.tags_to_ignore if context else (),\n raise_if_add_qubits=False,\n ).unfreeze()\n for k, qubits in measurement_qubits.items():\n circuit.append(ops.measure(*qubits, key=k))\n return circuit\n\n\n@transformer_api.transformer\ndef dephase_measurements(\n circuit: 'cirq.AbstractCircuit',\n *,\n context: Optional['cirq.TransformerContext'] = transformer_api.TransformerContext(deep=True),\n) -> 'cirq.Circuit':\n \"\"\"Changes all measurements to a dephase operation.\n\n This transformer is useful when using a density matrix simulator, when\n wishing to calculate the final density matrix of a circuit and not simulate\n the measurements themselves.\n\n Args:\n circuit: The circuit to transform. It will not be modified.\n context: `cirq.TransformerContext` storing common configurable options\n for transformers. The default has `deep=True` to ensure\n measurements at all levels are dephased.\n Returns:\n A copy of the circuit, with dephase operations in place of all\n measurements.\n Raises:\n ValueError: If the circuit contains classical controls. In this case,\n it is required to change these to quantum controls via\n `cirq.defer_measurements` first. 
Since deferral adds ancilla qubits\n to the circuit, this is not done automatically, to prevent\n surprises.\n \"\"\"\n\n def dephase(op: 'cirq.Operation', _) -> 'cirq.OP_TREE':\n gate = op.gate\n if isinstance(gate, ops.MeasurementGate):\n key = value.MeasurementKey.parse_serialized(gate.key)\n return ops.KrausChannel.from_channel(ops.phase_damp(1), key=key).on_each(op.qubits)\n elif isinstance(op, ops.ClassicallyControlledOperation):\n raise ValueError('Use cirq.defer_measurements first to remove classical controls.')\n return op\n\n ignored = () if context is None else context.tags_to_ignore\n return transformer_primitives.map_operations(\n circuit, dephase, deep=context.deep if context else True, tags_to_ignore=ignored\n ).unfreeze()\n\n\n@transformer_api.transformer\ndef drop_terminal_measurements(\n circuit: 'cirq.AbstractCircuit',\n *,\n context: Optional['cirq.TransformerContext'] = transformer_api.TransformerContext(deep=True),\n) -> 'cirq.Circuit':\n \"\"\"Removes terminal measurements from a circuit.\n\n This transformer is helpful when trying to capture the final state vector\n of a circuit with many terminal measurements, as simulating the circuit\n with those measurements in place would otherwise collapse the final state.\n\n Args:\n circuit: The circuit to transform. It will not be modified.\n context: `cirq.TransformerContext` storing common configurable options\n for transformers. The default has `deep=True`, as \"terminal\n measurements\" is ill-defined without inspecting subcircuits;\n passing a context with `deep=False` will return an error.\n Returns:\n A copy of the circuit, with identity or X gates in place of terminal\n measurements.\n Raises:\n ValueError: if the circuit contains non-terminal measurements, or if\n the provided context has`deep=False`.\n \"\"\"\n\n if context is None or not context.deep:\n raise ValueError(\n 'Context has `deep=False`, but `deep=True` is required to drop terminal measurements.'\n )\n\n if not circuit.are_all_measurements_terminal():\n raise ValueError('Circuit contains a non-terminal measurement.')\n\n def flip_inversion(op: 'cirq.Operation', _) -> 'cirq.OP_TREE':\n if isinstance(op.gate, ops.MeasurementGate):\n return [\n ops.X(q) if b else ops.I(q) for q, b in zip(op.qubits, op.gate.full_invert_mask())\n ]\n return op\n\n ignored = () if context is None else context.tags_to_ignore\n return transformer_primitives.map_operations(\n circuit, flip_inversion, deep=context.deep if context else True, tags_to_ignore=ignored\n ).unfreeze()\n", "path": "cirq-core/cirq/transformers/measurement_transformers.py"}]}
| 3,445 | 632 |
gh_patches_debug_14786
|
rasdani/github-patches
|
git_diff
|
celery__celery-3392
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test_retry_kwargs_can_be_empty fails on pypy3
From https://travis-ci.org/celery/celery/jobs/151613800:
```
======================================================================
ERROR: test_retry_kwargs_can_be_empty (celery.tests.tasks.test_tasks.test_task_retries)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/build/celery/celery/celery/tests/tasks/test_tasks.py", line 178, in test_retry_kwargs_can_be_empty
self.retry_task_mockapply.retry(args=[4, 4], kwargs=None)
File "/home/travis/build/celery/celery/celery/app/task.py", line 611, in retry
raise_with_context(exc or Retry('Task can be retried', None))
File "/home/travis/build/celery/celery/celery/utils/serialization.py", line 255, in raise_with_context
_raise_with_context(exc, exc_info[1])
File "<string>", line 1, in _raise_with_context
TypeError: exception causes must derive from BaseException, not NoneType
```
https://github.com/celery/celery/blob/5031d6f27862001d3e3bc5a2dacf1185c933f2c9/celery/tests/tasks/test_tasks.py#L169
</issue>
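To make the failure mode concrete: outside an `except` block, `sys.exc_info()` is `(None, None, None)`, so `raise_with_context` ends up executing `raise exc from None`, which Python 3.2 (the level PyPy3 targeted at the time) rejects with exactly the `TypeError` above, while Python 3.3+ accepts it (PEP 409). Below is a hedged sketch of a version-gated variant; the pre-3.3 branch is simplified here and is not the project's exact fallback:

```python
# Sketch: only use "raise exc from ctx" on interpreters implementing
# Python >= 3.3, where an explicit None cause is allowed.
import sys

PY33 = sys.version_info >= (3, 3)

if PY33:
    # Compiled via exec() so the 'raise ... from ...' syntax never has to
    # parse on Python 2 interpreters.
    exec("def _raise_with_context(exc, ctx): raise exc from ctx")
else:
    def _raise_with_context(exc, ctx):
        # Simplified fallback for < 3.3: re-raise without explicit chaining
        # (the real module also preserves the traceback).
        raise exc


def raise_with_context(exc):
    exc_info = sys.exc_info()          # (None, None, None) outside 'except'
    if exc_info[1] is exc:
        raise                          # exc is already being handled
    _raise_with_context(exc, exc_info[1])


class Retry(Exception):
    pass


try:
    raise_with_context(Retry('Task can be retried'))  # mimics Task.retry() outside 'except'
except Retry as err:
    print('raised cleanly:', err)
```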
<code>
[start of celery/utils/serialization.py]
1 # -*- coding: utf-8 -*-
2 """Utilities for safely pickling exceptions."""
3 from __future__ import absolute_import, unicode_literals
4
5 import datetime
6 import numbers
7 import sys
8
9 from base64 import b64encode as base64encode, b64decode as base64decode
10 from functools import partial
11 from inspect import getmro
12 from itertools import takewhile
13
14 from kombu.utils.encoding import bytes_to_str, str_to_bytes
15
16 from celery.five import (
17 bytes_if_py2, python_2_unicode_compatible, items, reraise, string_t,
18 )
19
20 from .encoding import safe_repr
21
22 try:
23 import cPickle as pickle
24 except ImportError:
25 import pickle # noqa
26
27 PY3 = sys.version_info[0] >= 3
28
29 __all__ = [
30 'UnpickleableExceptionWrapper', 'subclass_exception',
31 'find_pickleable_exception', 'create_exception_cls',
32 'get_pickleable_exception', 'get_pickleable_etype',
33 'get_pickled_exception', 'strtobool',
34 ]
35
36 #: List of base classes we probably don't want to reduce to.
37 try:
38 unwanted_base_classes = (StandardError, Exception, BaseException, object)
39 except NameError: # pragma: no cover
40 unwanted_base_classes = (Exception, BaseException, object) # py3k
41
42
43 def subclass_exception(name, parent, module): # noqa
44 return type(bytes_if_py2(name), (parent,), {'__module__': module})
45
46
47 def find_pickleable_exception(exc, loads=pickle.loads,
48 dumps=pickle.dumps):
49 """With an exception instance, iterate over its super classes (by MRO)
50 and find the first super exception that's pickleable. It does
51 not go below :exc:`Exception` (i.e., it skips :exc:`Exception`,
52 :class:`BaseException` and :class:`object`). If that happens
53 you should use :exc:`UnpickleableException` instead.
54
55 Arguments:
56 exc (BaseException): An exception instance.
57
58 Returns:
59 Exception: Nearest pickleable parent exception class
60 (except :exc:`Exception` and parents), or if the exception is
61 pickleable it will return :const:`None`.
62 """
63 exc_args = getattr(exc, 'args', [])
64 for supercls in itermro(exc.__class__, unwanted_base_classes):
65 try:
66 superexc = supercls(*exc_args)
67 loads(dumps(superexc))
68 except:
69 pass
70 else:
71 return superexc
72
73
74 def itermro(cls, stop):
75 return takewhile(lambda sup: sup not in stop, getmro(cls))
76
77
78 def create_exception_cls(name, module, parent=None):
79 """Dynamically create an exception class."""
80 if not parent:
81 parent = Exception
82 return subclass_exception(name, parent, module)
83
84
85 @python_2_unicode_compatible
86 class UnpickleableExceptionWrapper(Exception):
87 """Wraps unpickleable exceptions.
88
89 Arguments:
90 exc_module (str): See :attr:`exc_module`.
91 exc_cls_name (str): See :attr:`exc_cls_name`.
92 exc_args (Tuple[Any, ...]): See :attr:`exc_args`.
93
94 Example:
95 >>> def pickle_it(raising_function):
96 ... try:
97 ... raising_function()
98 ... except Exception as e:
99 ... exc = UnpickleableExceptionWrapper(
100 ... e.__class__.__module__,
101 ... e.__class__.__name__,
102 ... e.args,
103 ... )
104 ... pickle.dumps(exc) # Works fine.
105 """
106
107 #: The module of the original exception.
108 exc_module = None
109
110 #: The name of the original exception class.
111 exc_cls_name = None
112
113 #: The arguments for the original exception.
114 exc_args = None
115
116 def __init__(self, exc_module, exc_cls_name, exc_args, text=None):
117 safe_exc_args = []
118 for arg in exc_args:
119 try:
120 pickle.dumps(arg)
121 safe_exc_args.append(arg)
122 except Exception:
123 safe_exc_args.append(safe_repr(arg))
124 self.exc_module = exc_module
125 self.exc_cls_name = exc_cls_name
126 self.exc_args = safe_exc_args
127 self.text = text
128 Exception.__init__(self, exc_module, exc_cls_name, safe_exc_args, text)
129
130 def restore(self):
131 return create_exception_cls(self.exc_cls_name,
132 self.exc_module)(*self.exc_args)
133
134 def __str__(self):
135 return self.text
136
137 @classmethod
138 def from_exception(cls, exc):
139 return cls(exc.__class__.__module__,
140 exc.__class__.__name__,
141 getattr(exc, 'args', []),
142 safe_repr(exc))
143
144
145 def get_pickleable_exception(exc):
146 """Make sure exception is pickleable."""
147 try:
148 pickle.loads(pickle.dumps(exc))
149 except Exception:
150 pass
151 else:
152 return exc
153 nearest = find_pickleable_exception(exc)
154 if nearest:
155 return nearest
156 return UnpickleableExceptionWrapper.from_exception(exc)
157
158
159 def get_pickleable_etype(cls, loads=pickle.loads, dumps=pickle.dumps):
160 try:
161 loads(dumps(cls))
162 except:
163 return Exception
164 else:
165 return cls
166
167
168 def get_pickled_exception(exc):
169 """Get original exception from exception pickled using
170 :meth:`get_pickleable_exception`."""
171 if isinstance(exc, UnpickleableExceptionWrapper):
172 return exc.restore()
173 return exc
174
175
176 def b64encode(s):
177 return bytes_to_str(base64encode(str_to_bytes(s)))
178
179
180 def b64decode(s):
181 return base64decode(str_to_bytes(s))
182
183
184 def strtobool(term, table={'false': False, 'no': False, '0': False,
185 'true': True, 'yes': True, '1': True,
186 'on': True, 'off': False}):
187 """Convert common terms for true/false to bool
188 (true/false/yes/no/on/off/1/0)."""
189 if isinstance(term, string_t):
190 try:
191 return table[term.lower()]
192 except KeyError:
193 raise TypeError('Cannot coerce {0!r} to type bool'.format(term))
194 return term
195
196
197 def jsonify(obj,
198 builtin_types=(numbers.Real, string_t), key=None,
199 keyfilter=None,
200 unknown_type_filter=None):
201 """Transforms object making it suitable for json serialization"""
202 from kombu.abstract import Object as KombuDictType
203 _jsonify = partial(jsonify, builtin_types=builtin_types, key=key,
204 keyfilter=keyfilter,
205 unknown_type_filter=unknown_type_filter)
206
207 if isinstance(obj, KombuDictType):
208 obj = obj.as_dict(recurse=True)
209
210 if obj is None or isinstance(obj, builtin_types):
211 return obj
212 elif isinstance(obj, (tuple, list)):
213 return [_jsonify(v) for v in obj]
214 elif isinstance(obj, dict):
215 return {
216 k: _jsonify(v, key=k) for k, v in items(obj)
217 if (keyfilter(k) if keyfilter else 1)
218 }
219 elif isinstance(obj, datetime.datetime):
220 # See "Date Time String Format" in the ECMA-262 specification.
221 r = obj.isoformat()
222 if obj.microsecond:
223 r = r[:23] + r[26:]
224 if r.endswith('+00:00'):
225 r = r[:-6] + 'Z'
226 return r
227 elif isinstance(obj, datetime.date):
228 return obj.isoformat()
229 elif isinstance(obj, datetime.time):
230 r = obj.isoformat()
231 if obj.microsecond:
232 r = r[:12]
233 return r
234 elif isinstance(obj, datetime.timedelta):
235 return str(obj)
236 else:
237 if unknown_type_filter is None:
238 raise ValueError(
239 'Unsupported type: {0!r} {1!r} (parent: {2})'.format(
240 type(obj), obj, key))
241 return unknown_type_filter(obj)
242
243
244 if PY3:
245 from vine.five import exec_
246 _raise_with_context = None # for flake8
247 exec_("""def _raise_with_context(exc, ctx): raise exc from ctx""")
248
249 def raise_with_context(exc):
250 exc_info = sys.exc_info()
251 if not exc_info:
252 raise exc
253 elif exc_info[1] is exc:
254 raise
255 _raise_with_context(exc, exc_info[1])
256 else:
257 def raise_with_context(exc):
258 exc_info = sys.exc_info()
259 if not exc_info:
260 raise exc
261 if exc_info[1] is exc:
262 raise
263 elif exc_info[2]:
264 reraise(type(exc), exc, exc_info[2])
265 raise exc
266
[end of celery/utils/serialization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/celery/utils/serialization.py b/celery/utils/serialization.py
--- a/celery/utils/serialization.py
+++ b/celery/utils/serialization.py
@@ -24,7 +24,7 @@
except ImportError:
import pickle # noqa
-PY3 = sys.version_info[0] >= 3
+PY33 = sys.version_info >= (3, 3)
__all__ = [
'UnpickleableExceptionWrapper', 'subclass_exception',
@@ -241,7 +241,9 @@
return unknown_type_filter(obj)
-if PY3:
+# Since PyPy 3 targets Python 3.2, 'raise exc from None' will
+# raise a TypeError so we need to look for Python 3.3 or newer
+if PY33:
from vine.five import exec_
_raise_with_context = None # for flake8
exec_("""def _raise_with_context(exc, ctx): raise exc from ctx""")
|
{"golden_diff": "diff --git a/celery/utils/serialization.py b/celery/utils/serialization.py\n--- a/celery/utils/serialization.py\n+++ b/celery/utils/serialization.py\n@@ -24,7 +24,7 @@\n except ImportError:\n import pickle # noqa\n \n-PY3 = sys.version_info[0] >= 3\n+PY33 = sys.version_info >= (3, 3)\n \n __all__ = [\n 'UnpickleableExceptionWrapper', 'subclass_exception',\n@@ -241,7 +241,9 @@\n return unknown_type_filter(obj)\n \n \n-if PY3:\n+# Since PyPy 3 targets Python 3.2, 'raise exc from None' will\n+# raise a TypeError so we need to look for Python 3.3 or newer\n+if PY33:\n from vine.five import exec_\n _raise_with_context = None # for flake8\n exec_(\"\"\"def _raise_with_context(exc, ctx): raise exc from ctx\"\"\")\n", "issue": "test_retry_kwargs_can_be_empty fails on pypy3\nFrom https://travis-ci.org/celery/celery/jobs/151613800:\n\n```\n======================================================================\nERROR: test_retry_kwargs_can_be_empty (celery.tests.tasks.test_tasks.test_task_retries)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"/home/travis/build/celery/celery/celery/tests/tasks/test_tasks.py\", line 178, in test_retry_kwargs_can_be_empty\n self.retry_task_mockapply.retry(args=[4, 4], kwargs=None)\n File \"/home/travis/build/celery/celery/celery/app/task.py\", line 611, in retry\n raise_with_context(exc or Retry('Task can be retried', None))\n File \"/home/travis/build/celery/celery/celery/utils/serialization.py\", line 255, in raise_with_context\n _raise_with_context(exc, exc_info[1])\n File \"<string>\", line 1, in _raise_with_context\nTypeError: exception causes must derive from BaseException, not NoneType\n```\n\nhttps://github.com/celery/celery/blob/5031d6f27862001d3e3bc5a2dacf1185c933f2c9/celery/tests/tasks/test_tasks.py#L169\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Utilities for safely pickling exceptions.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport datetime\nimport numbers\nimport sys\n\nfrom base64 import b64encode as base64encode, b64decode as base64decode\nfrom functools import partial\nfrom inspect import getmro\nfrom itertools import takewhile\n\nfrom kombu.utils.encoding import bytes_to_str, str_to_bytes\n\nfrom celery.five import (\n bytes_if_py2, python_2_unicode_compatible, items, reraise, string_t,\n)\n\nfrom .encoding import safe_repr\n\ntry:\n import cPickle as pickle\nexcept ImportError:\n import pickle # noqa\n\nPY3 = sys.version_info[0] >= 3\n\n__all__ = [\n 'UnpickleableExceptionWrapper', 'subclass_exception',\n 'find_pickleable_exception', 'create_exception_cls',\n 'get_pickleable_exception', 'get_pickleable_etype',\n 'get_pickled_exception', 'strtobool',\n]\n\n#: List of base classes we probably don't want to reduce to.\ntry:\n unwanted_base_classes = (StandardError, Exception, BaseException, object)\nexcept NameError: # pragma: no cover\n unwanted_base_classes = (Exception, BaseException, object) # py3k\n\n\ndef subclass_exception(name, parent, module): # noqa\n return type(bytes_if_py2(name), (parent,), {'__module__': module})\n\n\ndef find_pickleable_exception(exc, loads=pickle.loads,\n dumps=pickle.dumps):\n \"\"\"With an exception instance, iterate over its super classes (by MRO)\n and find the first super exception that's pickleable. It does\n not go below :exc:`Exception` (i.e., it skips :exc:`Exception`,\n :class:`BaseException` and :class:`object`). 
If that happens\n you should use :exc:`UnpickleableException` instead.\n\n Arguments:\n exc (BaseException): An exception instance.\n\n Returns:\n Exception: Nearest pickleable parent exception class\n (except :exc:`Exception` and parents), or if the exception is\n pickleable it will return :const:`None`.\n \"\"\"\n exc_args = getattr(exc, 'args', [])\n for supercls in itermro(exc.__class__, unwanted_base_classes):\n try:\n superexc = supercls(*exc_args)\n loads(dumps(superexc))\n except:\n pass\n else:\n return superexc\n\n\ndef itermro(cls, stop):\n return takewhile(lambda sup: sup not in stop, getmro(cls))\n\n\ndef create_exception_cls(name, module, parent=None):\n \"\"\"Dynamically create an exception class.\"\"\"\n if not parent:\n parent = Exception\n return subclass_exception(name, parent, module)\n\n\n@python_2_unicode_compatible\nclass UnpickleableExceptionWrapper(Exception):\n \"\"\"Wraps unpickleable exceptions.\n\n Arguments:\n exc_module (str): See :attr:`exc_module`.\n exc_cls_name (str): See :attr:`exc_cls_name`.\n exc_args (Tuple[Any, ...]): See :attr:`exc_args`.\n\n Example:\n >>> def pickle_it(raising_function):\n ... try:\n ... raising_function()\n ... except Exception as e:\n ... exc = UnpickleableExceptionWrapper(\n ... e.__class__.__module__,\n ... e.__class__.__name__,\n ... e.args,\n ... )\n ... pickle.dumps(exc) # Works fine.\n \"\"\"\n\n #: The module of the original exception.\n exc_module = None\n\n #: The name of the original exception class.\n exc_cls_name = None\n\n #: The arguments for the original exception.\n exc_args = None\n\n def __init__(self, exc_module, exc_cls_name, exc_args, text=None):\n safe_exc_args = []\n for arg in exc_args:\n try:\n pickle.dumps(arg)\n safe_exc_args.append(arg)\n except Exception:\n safe_exc_args.append(safe_repr(arg))\n self.exc_module = exc_module\n self.exc_cls_name = exc_cls_name\n self.exc_args = safe_exc_args\n self.text = text\n Exception.__init__(self, exc_module, exc_cls_name, safe_exc_args, text)\n\n def restore(self):\n return create_exception_cls(self.exc_cls_name,\n self.exc_module)(*self.exc_args)\n\n def __str__(self):\n return self.text\n\n @classmethod\n def from_exception(cls, exc):\n return cls(exc.__class__.__module__,\n exc.__class__.__name__,\n getattr(exc, 'args', []),\n safe_repr(exc))\n\n\ndef get_pickleable_exception(exc):\n \"\"\"Make sure exception is pickleable.\"\"\"\n try:\n pickle.loads(pickle.dumps(exc))\n except Exception:\n pass\n else:\n return exc\n nearest = find_pickleable_exception(exc)\n if nearest:\n return nearest\n return UnpickleableExceptionWrapper.from_exception(exc)\n\n\ndef get_pickleable_etype(cls, loads=pickle.loads, dumps=pickle.dumps):\n try:\n loads(dumps(cls))\n except:\n return Exception\n else:\n return cls\n\n\ndef get_pickled_exception(exc):\n \"\"\"Get original exception from exception pickled using\n :meth:`get_pickleable_exception`.\"\"\"\n if isinstance(exc, UnpickleableExceptionWrapper):\n return exc.restore()\n return exc\n\n\ndef b64encode(s):\n return bytes_to_str(base64encode(str_to_bytes(s)))\n\n\ndef b64decode(s):\n return base64decode(str_to_bytes(s))\n\n\ndef strtobool(term, table={'false': False, 'no': False, '0': False,\n 'true': True, 'yes': True, '1': True,\n 'on': True, 'off': False}):\n \"\"\"Convert common terms for true/false to bool\n (true/false/yes/no/on/off/1/0).\"\"\"\n if isinstance(term, string_t):\n try:\n return table[term.lower()]\n except KeyError:\n raise TypeError('Cannot coerce {0!r} to type bool'.format(term))\n return term\n\n\ndef 
jsonify(obj,\n builtin_types=(numbers.Real, string_t), key=None,\n keyfilter=None,\n unknown_type_filter=None):\n \"\"\"Transforms object making it suitable for json serialization\"\"\"\n from kombu.abstract import Object as KombuDictType\n _jsonify = partial(jsonify, builtin_types=builtin_types, key=key,\n keyfilter=keyfilter,\n unknown_type_filter=unknown_type_filter)\n\n if isinstance(obj, KombuDictType):\n obj = obj.as_dict(recurse=True)\n\n if obj is None or isinstance(obj, builtin_types):\n return obj\n elif isinstance(obj, (tuple, list)):\n return [_jsonify(v) for v in obj]\n elif isinstance(obj, dict):\n return {\n k: _jsonify(v, key=k) for k, v in items(obj)\n if (keyfilter(k) if keyfilter else 1)\n }\n elif isinstance(obj, datetime.datetime):\n # See \"Date Time String Format\" in the ECMA-262 specification.\n r = obj.isoformat()\n if obj.microsecond:\n r = r[:23] + r[26:]\n if r.endswith('+00:00'):\n r = r[:-6] + 'Z'\n return r\n elif isinstance(obj, datetime.date):\n return obj.isoformat()\n elif isinstance(obj, datetime.time):\n r = obj.isoformat()\n if obj.microsecond:\n r = r[:12]\n return r\n elif isinstance(obj, datetime.timedelta):\n return str(obj)\n else:\n if unknown_type_filter is None:\n raise ValueError(\n 'Unsupported type: {0!r} {1!r} (parent: {2})'.format(\n type(obj), obj, key))\n return unknown_type_filter(obj)\n\n\nif PY3:\n from vine.five import exec_\n _raise_with_context = None # for flake8\n exec_(\"\"\"def _raise_with_context(exc, ctx): raise exc from ctx\"\"\")\n\n def raise_with_context(exc):\n exc_info = sys.exc_info()\n if not exc_info:\n raise exc\n elif exc_info[1] is exc:\n raise\n _raise_with_context(exc, exc_info[1])\nelse:\n def raise_with_context(exc):\n exc_info = sys.exc_info()\n if not exc_info:\n raise exc\n if exc_info[1] is exc:\n raise\n elif exc_info[2]:\n reraise(type(exc), exc, exc_info[2])\n raise exc\n", "path": "celery/utils/serialization.py"}]}
| 3,441 | 220 |
gh_patches_debug_13384
|
rasdani/github-patches
|
git_diff
|
great-expectations__great_expectations-5460
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
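As an editorial aside, here is a tiny runnable sketch of the contrast the issue describes; it is not taken from the great_expectations code base, and the variable names are made up.

```python
# Illustrative sketch only (not from the repository being patched).
# A __future__ import must be the first statement in the module, after any docstring.
from __future__ import division, print_function

x, y = 7, 2

preferred = x / y        # 3.5 on both Python 2 and Python 3, thanks to the future import
workaround = 1. * x / y  # also 3.5, but the float coercion must be repeated at every call site

print(preferred, workaround)
```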
<code>
[start of great_expectations/expectations/metrics/query_metrics/query_table.py]
1 from typing import Any, Dict, List, Optional, Union
2
3 from great_expectations.core.metric_domain_types import MetricDomainTypes
4 from great_expectations.execution_engine import (
5 SparkDFExecutionEngine,
6 SqlAlchemyExecutionEngine,
7 )
8 from great_expectations.expectations.metrics.import_manager import (
9 pyspark_sql_DataFrame,
10 pyspark_sql_Row,
11 pyspark_sql_SparkSession,
12 sa,
13 sqlalchemy_engine_Engine,
14 sqlalchemy_engine_Row,
15 )
16 from great_expectations.expectations.metrics.metric_provider import metric_value
17 from great_expectations.expectations.metrics.query_metric_provider import (
18 QueryMetricProvider,
19 )
20
21
22 class QueryTable(QueryMetricProvider):
23 metric_name = "query.table"
24 value_keys = ("query",)
25
26 @metric_value(engine=SqlAlchemyExecutionEngine)
27 def _sqlalchemy(
28 cls,
29 execution_engine: SqlAlchemyExecutionEngine,
30 metric_domain_kwargs: dict,
31 metric_value_kwargs: dict,
32 metrics: Dict[str, Any],
33 runtime_configuration: dict,
34 ) -> List[sqlalchemy_engine_Row]:
35 query: Optional[str] = metric_value_kwargs.get(
36 "query"
37 ) or cls.default_kwarg_values.get("query")
38
39 selectable: Union[sa.sql.Selectable, str]
40 selectable, _, _ = execution_engine.get_compute_domain(
41 metric_domain_kwargs, domain_type=MetricDomainTypes.TABLE
42 )
43
44 if isinstance(selectable, sa.Table):
45 query = query.format(active_batch=selectable)
46 elif isinstance(
47 selectable, sa.sql.Subquery
48 ): # Specifying a runtime query in a RuntimeBatchRequest returns the active bacth as a Subquery; sectioning the active batch off w/ parentheses ensures flow of operations doesn't break
49 query = query.format(active_batch=f"({selectable})")
50 elif isinstance(
51 selectable, sa.sql.Select
52 ): # Specifying a row_condition returns the active batch as a Select object, requiring compilation & aliasing when formatting the parameterized query
53 query = query.format(
54 active_batch=f'({selectable.compile(compile_kwargs={"literal_binds": True})}) AS subselect',
55 )
56 else:
57 query = query.format(active_batch=f"({selectable})")
58
59 engine: sqlalchemy_engine_Engine = execution_engine.engine
60 result: List[sqlalchemy_engine_Row] = engine.execute(sa.text(query)).fetchall()
61
62 return result
63
64 @metric_value(engine=SparkDFExecutionEngine)
65 def _spark(
66 cls,
67 execution_engine: SparkDFExecutionEngine,
68 metric_domain_kwargs: dict,
69 metric_value_kwargs: dict,
70 metrics: Dict[str, Any],
71 runtime_configuration: dict,
72 ) -> List[pyspark_sql_Row]:
73 query: Optional[str] = metric_value_kwargs.get(
74 "query"
75 ) or cls.default_kwarg_values.get("query")
76
77 df: pyspark_sql_DataFrame
78 df, _, _ = execution_engine.get_compute_domain(
79 metric_domain_kwargs, domain_type=MetricDomainTypes.TABLE
80 )
81
82 df.createOrReplaceTempView("tmp_view")
83 query = query.format(active_batch="tmp_view")
84
85 engine: pyspark_sql_SparkSession = execution_engine.spark
86 result: List[pyspark_sql_Row] = engine.sql(query).collect()
87
88 return result
89
[end of great_expectations/expectations/metrics/query_metrics/query_table.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/great_expectations/expectations/metrics/query_metrics/query_table.py b/great_expectations/expectations/metrics/query_metrics/query_table.py
--- a/great_expectations/expectations/metrics/query_metrics/query_table.py
+++ b/great_expectations/expectations/metrics/query_metrics/query_table.py
@@ -23,6 +23,7 @@
metric_name = "query.table"
value_keys = ("query",)
+ # <snippet>
@metric_value(engine=SqlAlchemyExecutionEngine)
def _sqlalchemy(
cls,
@@ -60,6 +61,7 @@
result: List[sqlalchemy_engine_Row] = engine.execute(sa.text(query)).fetchall()
return result
+ # </snippet>
@metric_value(engine=SparkDFExecutionEngine)
def _spark(
|
{"golden_diff": "diff --git a/great_expectations/expectations/metrics/query_metrics/query_table.py b/great_expectations/expectations/metrics/query_metrics/query_table.py\n--- a/great_expectations/expectations/metrics/query_metrics/query_table.py\n+++ b/great_expectations/expectations/metrics/query_metrics/query_table.py\n@@ -23,6 +23,7 @@\n metric_name = \"query.table\"\n value_keys = (\"query\",)\n \n+ # <snippet>\n @metric_value(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(\n cls,\n@@ -60,6 +61,7 @@\n result: List[sqlalchemy_engine_Row] = engine.execute(sa.text(query)).fetchall()\n \n return result\n+ # </snippet>\n \n @metric_value(engine=SparkDFExecutionEngine)\n def _spark(\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "from typing import Any, Dict, List, Optional, Union\n\nfrom great_expectations.core.metric_domain_types import MetricDomainTypes\nfrom great_expectations.execution_engine import (\n SparkDFExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import (\n pyspark_sql_DataFrame,\n pyspark_sql_Row,\n pyspark_sql_SparkSession,\n sa,\n sqlalchemy_engine_Engine,\n sqlalchemy_engine_Row,\n)\nfrom great_expectations.expectations.metrics.metric_provider import metric_value\nfrom great_expectations.expectations.metrics.query_metric_provider import (\n QueryMetricProvider,\n)\n\n\nclass QueryTable(QueryMetricProvider):\n metric_name = \"query.table\"\n value_keys = (\"query\",)\n\n @metric_value(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(\n cls,\n execution_engine: SqlAlchemyExecutionEngine,\n metric_domain_kwargs: dict,\n metric_value_kwargs: dict,\n metrics: Dict[str, Any],\n runtime_configuration: dict,\n ) -> List[sqlalchemy_engine_Row]:\n query: Optional[str] = metric_value_kwargs.get(\n \"query\"\n ) or cls.default_kwarg_values.get(\"query\")\n\n selectable: Union[sa.sql.Selectable, str]\n selectable, _, _ = execution_engine.get_compute_domain(\n metric_domain_kwargs, domain_type=MetricDomainTypes.TABLE\n )\n\n if isinstance(selectable, sa.Table):\n query = query.format(active_batch=selectable)\n elif isinstance(\n selectable, sa.sql.Subquery\n ): # Specifying a runtime query in a RuntimeBatchRequest returns the active bacth as a Subquery; sectioning the active batch off w/ parentheses ensures flow of operations doesn't break\n query = query.format(active_batch=f\"({selectable})\")\n elif isinstance(\n selectable, sa.sql.Select\n ): # Specifying a row_condition returns the active batch as a Select object, requiring compilation & aliasing when formatting the parameterized query\n query = query.format(\n active_batch=f'({selectable.compile(compile_kwargs={\"literal_binds\": True})}) AS subselect',\n )\n else:\n query = query.format(active_batch=f\"({selectable})\")\n\n engine: sqlalchemy_engine_Engine = execution_engine.engine\n result: List[sqlalchemy_engine_Row] = engine.execute(sa.text(query)).fetchall()\n\n return result\n\n @metric_value(engine=SparkDFExecutionEngine)\n def _spark(\n cls,\n execution_engine: SparkDFExecutionEngine,\n metric_domain_kwargs: dict,\n metric_value_kwargs: dict,\n metrics: Dict[str, Any],\n runtime_configuration: dict,\n ) -> List[pyspark_sql_Row]:\n query: Optional[str] = metric_value_kwargs.get(\n \"query\"\n ) or cls.default_kwarg_values.get(\"query\")\n\n df: pyspark_sql_DataFrame\n df, _, _ = execution_engine.get_compute_domain(\n metric_domain_kwargs, 
domain_type=MetricDomainTypes.TABLE\n )\n\n df.createOrReplaceTempView(\"tmp_view\")\n query = query.format(active_batch=\"tmp_view\")\n\n engine: pyspark_sql_SparkSession = execution_engine.spark\n result: List[pyspark_sql_Row] = engine.sql(query).collect()\n\n return result\n", "path": "great_expectations/expectations/metrics/query_metrics/query_table.py"}]}
| 1,447 | 183 |
gh_patches_debug_31612
|
rasdani/github-patches
|
git_diff
|
tough-dev-school__education-backend-855
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Send only one answer from each user into the lottery drum
Right now, if a user has submitted three answers to a homework assignment, cross-checking sends all three answers to different students. That is bad: we should either send only the first answer, or collect all of a user's answers into one bundle and send them to a single reviewer.
</issue>
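For context, a minimal sketch of the "first answer per author" query that the golden diff further down in this record relies on. It assumes a PostgreSQL backend, since Django's `.distinct(*fields)` translates to `DISTINCT ON`, and it uses the `Answer` model's `created` field as that diff does.

```python
# Sketch only: select each author's earliest answer with the Django ORM (PostgreSQL required).
from homework.models import Answer

first_answer_per_author = (
    Answer.objects
    .order_by('author_id', 'created')  # earliest answer first within each author
    .distinct('author_id')             # keep only that first row per author
)
```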
<code>
[start of src/homework/services/answer_crosscheck_dispatcher.py]
1 from typing import Optional
2
3 from django.db import transaction
4 from django.db.models import Count, Q, QuerySet
5
6 from homework.models import Answer, AnswerCrossCheck
7 from users.models import User
8
9
10 class AnswerCrossCheckDispatcher:
11 """Given a bunch of answers and users, create a cross-check record
12 for each of them, making sure each answer has a user to check
13 and number of answers if equal for each user
14 """
15 def __init__(self, answers: QuerySet[Answer], answers_per_user: int = 3):
16 self.answers = Answer.objects.filter(pk__in=[answer.pk for answer in answers])
17 self.users = User.objects.filter(pk__in=[answer.author_id for answer in answers]).order_by('?')
18 self.answers_per_user = answers_per_user
19
20 @transaction.atomic
21 def __call__(self) -> list[AnswerCrossCheck]:
22 crosschecks = list()
23 for user in self.users.iterator():
24 for _ in range(self.answers_per_user):
25 answer = self.get_answer_to_check(user)
26 if answer is not None:
27 crosschecks.append(
28 self.give_answer_to_user(answer, user),
29 )
30
31 return crosschecks
32
33 def get_answer_to_check(self, user: User) -> Optional[Answer]:
34 return self.get_answers_with_crosscheck_count() \
35 .annotate(already_checking=Count('answercrosscheck', filter=Q(answercrosscheck__checker_id=user.id))) \
36 .exclude(already_checking__gte=1) \
37 .exclude(author=user) \
38 .exclude(do_not_crosscheck=True) \
39 .order_by('crosscheck_count').first()
40
41 def give_answer_to_user(self, answer: Answer, user: User) -> AnswerCrossCheck:
42 return AnswerCrossCheck.objects.create(answer=answer, checker=user)
43
44 def get_answers_with_crosscheck_count(self) -> QuerySet[Answer]:
45 return self.answers.annotate(
46 crosscheck_count=Count('answercrosscheck', filter=Q(answercrosscheck__checker__in=self.users)),
47 )
48
[end of src/homework/services/answer_crosscheck_dispatcher.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/homework/services/answer_crosscheck_dispatcher.py b/src/homework/services/answer_crosscheck_dispatcher.py
--- a/src/homework/services/answer_crosscheck_dispatcher.py
+++ b/src/homework/services/answer_crosscheck_dispatcher.py
@@ -9,11 +9,12 @@
class AnswerCrossCheckDispatcher:
"""Given a bunch of answers and users, create a cross-check record
- for each of them, making sure each answer has a user to check
- and number of answers if equal for each user
+ for each of them, making sure the first answer of each user has a user to
+ check and number of answers if equal for each user
"""
def __init__(self, answers: QuerySet[Answer], answers_per_user: int = 3):
self.answers = Answer.objects.filter(pk__in=[answer.pk for answer in answers])
+ self.unique_author_answers = self.answers.order_by('author_id', 'created').distinct('author_id')
self.users = User.objects.filter(pk__in=[answer.author_id for answer in answers]).order_by('?')
self.answers_per_user = answers_per_user
@@ -27,11 +28,11 @@
crosschecks.append(
self.give_answer_to_user(answer, user),
)
-
return crosschecks
def get_answer_to_check(self, user: User) -> Optional[Answer]:
return self.get_answers_with_crosscheck_count() \
+ .filter(id__in=self.unique_author_answers) \
.annotate(already_checking=Count('answercrosscheck', filter=Q(answercrosscheck__checker_id=user.id))) \
.exclude(already_checking__gte=1) \
.exclude(author=user) \
|
{"golden_diff": "diff --git a/src/homework/services/answer_crosscheck_dispatcher.py b/src/homework/services/answer_crosscheck_dispatcher.py\n--- a/src/homework/services/answer_crosscheck_dispatcher.py\n+++ b/src/homework/services/answer_crosscheck_dispatcher.py\n@@ -9,11 +9,12 @@\n \n class AnswerCrossCheckDispatcher:\n \"\"\"Given a bunch of answers and users, create a cross-check record\n- for each of them, making sure each answer has a user to check\n- and number of answers if equal for each user\n+ for each of them, making sure the first answer of each user has a user to\n+ check and number of answers if equal for each user\n \"\"\"\n def __init__(self, answers: QuerySet[Answer], answers_per_user: int = 3):\n self.answers = Answer.objects.filter(pk__in=[answer.pk for answer in answers])\n+ self.unique_author_answers = self.answers.order_by('author_id', 'created').distinct('author_id')\n self.users = User.objects.filter(pk__in=[answer.author_id for answer in answers]).order_by('?')\n self.answers_per_user = answers_per_user\n \n@@ -27,11 +28,11 @@\n crosschecks.append(\n self.give_answer_to_user(answer, user),\n )\n-\n return crosschecks\n \n def get_answer_to_check(self, user: User) -> Optional[Answer]:\n return self.get_answers_with_crosscheck_count() \\\n+ .filter(id__in=self.unique_author_answers) \\\n .annotate(already_checking=Count('answercrosscheck', filter=Q(answercrosscheck__checker_id=user.id))) \\\n .exclude(already_checking__gte=1) \\\n .exclude(author=user) \\\n", "issue": "\u0421\u043b\u0430\u0442\u044c \u0432 \u043b\u043e\u0442\u043e\u0442\u0440\u043e\u043d \u0442\u043e\u043b\u044c\u043a\u043e \u043e\u0434\u0438\u043d \u043e\u0442\u0432\u0435\u0442 \u043e\u0442 \u043e\u0434\u043d\u043e\u0433\u043e \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044f\n\u0421\u0435\u0439\u0447\u0430\u0441, \u0435\u0441\u043b\u0438 \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044c \u0434\u0430\u043b \u0442\u0440\u0438 \u043e\u0442\u0432\u0435\u0442\u0430 \u043d\u0430 \u0434\u043e\u043c\u0430\u0448\u043a\u0443, \u0442\u043e \u043f\u0440\u0438 \u043a\u0440\u043e\u0441\u0441-\u043f\u0440\u043e\u0432\u0435\u0440\u043a\u0435 \u0432\u0441\u0435 \u0442\u0440\u0438 \u043e\u0442\u0432\u0435\u0442\u0430 \u0443\u0439\u0434\u0443\u0442 \u0440\u0430\u0437\u043d\u044b\u043c \u0441\u0442\u0443\u0434\u0435\u043d\u0442\u0430\u043c. 
\u042d\u0442\u043e \u2014 \u043f\u043b\u043e\u0445\u043e, \u043d\u0443\u0436\u043d\u043e \u043b\u0438\u0431\u043e \u0441\u043b\u0430\u0442\u044c \u0442\u043e\u043b\u044c\u043a\u043e \u043f\u0435\u0440\u0432\u044b\u0439 \u043e\u0442\u0432\u0435\u0442, \u043b\u0438\u0431\u043e \u0441\u043e\u0431\u0438\u0440\u0430\u0442\u044c \u0432\u0441\u0435 \u043e\u0442\u0432\u0435\u0442\u044b \u0432 \u043f\u0430\u0447\u043a\u0443 \u0438 \u0441\u043b\u0430\u0442\u044c \u0438\u0445 \u043e\u0434\u043d\u043e\u043c\u0443 \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044e.\n", "before_files": [{"content": "from typing import Optional\n\nfrom django.db import transaction\nfrom django.db.models import Count, Q, QuerySet\n\nfrom homework.models import Answer, AnswerCrossCheck\nfrom users.models import User\n\n\nclass AnswerCrossCheckDispatcher:\n \"\"\"Given a bunch of answers and users, create a cross-check record\n for each of them, making sure each answer has a user to check\n and number of answers if equal for each user\n \"\"\"\n def __init__(self, answers: QuerySet[Answer], answers_per_user: int = 3):\n self.answers = Answer.objects.filter(pk__in=[answer.pk for answer in answers])\n self.users = User.objects.filter(pk__in=[answer.author_id for answer in answers]).order_by('?')\n self.answers_per_user = answers_per_user\n\n @transaction.atomic\n def __call__(self) -> list[AnswerCrossCheck]:\n crosschecks = list()\n for user in self.users.iterator():\n for _ in range(self.answers_per_user):\n answer = self.get_answer_to_check(user)\n if answer is not None:\n crosschecks.append(\n self.give_answer_to_user(answer, user),\n )\n\n return crosschecks\n\n def get_answer_to_check(self, user: User) -> Optional[Answer]:\n return self.get_answers_with_crosscheck_count() \\\n .annotate(already_checking=Count('answercrosscheck', filter=Q(answercrosscheck__checker_id=user.id))) \\\n .exclude(already_checking__gte=1) \\\n .exclude(author=user) \\\n .exclude(do_not_crosscheck=True) \\\n .order_by('crosscheck_count').first()\n\n def give_answer_to_user(self, answer: Answer, user: User) -> AnswerCrossCheck:\n return AnswerCrossCheck.objects.create(answer=answer, checker=user)\n\n def get_answers_with_crosscheck_count(self) -> QuerySet[Answer]:\n return self.answers.annotate(\n crosscheck_count=Count('answercrosscheck', filter=Q(answercrosscheck__checker__in=self.users)),\n )\n", "path": "src/homework/services/answer_crosscheck_dispatcher.py"}]}
| 1,162 | 382 |
gh_patches_debug_9548
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-3676
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
c7n-mailer-replay : Python3 - TypeError: write() argument must be str, not bytes
Using c7n-mailer-replay under Python3 gives the following trace:
```
Traceback (most recent call last):
File "HOME/.pyenv/versions/cloud-custodian-3.6/bin/c7n-mailer-replay", line 11, in <module>
load_entry_point('c7n-mailer', 'console_scripts', 'c7n-mailer-replay')()
File "HOME/CLOUD_CUSTODIAN/SRC/tools/c7n_mailer/c7n_mailer/replay.py", line 134, in main
json_dump_file=options.json_dump_file
File "HOME/CLOUD_CUSTODIAN/SRC/tools/c7n_mailer/c7n_mailer/replay.py", line 46, in __init__
fh.write(raw)
TypeError: write() argument must be str, not bytes
```
I had success with the following change:
```diff
diff --git a/tools/c7n_mailer/c7n_mailer/replay.py b/tools/c7n_mailer/c7n_mailer/replay.py
index b3f5456be..72f63332f 100644
--- a/tools/c7n_mailer/c7n_mailer/replay.py
+++ b/tools/c7n_mailer/c7n_mailer/replay.py
@@ -42,7 +42,7 @@ class MailerTester(object):
logger.debug('base64-decoding and zlib decompressing message')
raw = zlib.decompress(base64.b64decode(raw))
if json_dump_file is not None:
- with open(json_dump_file, 'w') as fh:
+ with open(json_dump_file, 'wb') as fh:
fh.write(raw)
self.data = json.loads(raw)
logger.debug('Loaded message JSON')
```
I believe it could be compatible with Python2 also, but it needs some testing.
</issue>
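A standalone reproduction of the failure mode, with a made-up file name: on Python 3, `zlib.decompress()` returns `bytes`, so the dump file must be opened in binary mode.

```python
# Minimal sketch, independent of c7n-mailer: decoded SQS payloads are bytes on Python 3.
import base64
import zlib

encoded = base64.b64encode(zlib.compress(b'{"resources": []}'))  # stand-in for MESSAGE_FILE content
raw = zlib.decompress(base64.b64decode(encoded))                 # -> bytes, not str

with open("message-dump.json", "wb") as fh:  # mode 'w' would raise: write() argument must be str
    fh.write(raw)
```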
<code>
[start of tools/c7n_mailer/c7n_mailer/replay.py]
1 """
2 Allow local testing of mailer and templates by replaying an SQS message.
3
4 MAILER_FILE input is a file containing the exact base64-encoded, gzipped
5 data that's enqueued to SQS via :py:meth:`c7n.actions.Notify.send_sqs`.
6
7 Alternatively, with -p|--plain specified, the file will be assumed to be
8 JSON data that can be loaded directly.
9 """
10 from __future__ import absolute_import, division, print_function, unicode_literals
11
12 import argparse
13 import boto3
14 import os
15 import logging
16 import zlib
17 import base64
18 import json
19
20 import jsonschema
21 from ruamel import yaml
22
23 from c7n_mailer.utils import setup_defaults
24 from c7n_mailer.cli import CONFIG_SCHEMA
25 from .email_delivery import EmailDelivery
26
27 logger = logging.getLogger(__name__)
28
29
30 class MailerTester(object):
31
32 def __init__(self, msg_file, config, msg_plain=False, json_dump_file=None):
33 if not os.path.exists(msg_file):
34 raise RuntimeError("File does not exist: %s" % msg_file)
35 logger.debug('Reading message from: %s', msg_file)
36 with open(msg_file, 'r') as fh:
37 raw = fh.read()
38 logger.debug('Read %d byte message', len(raw))
39 if msg_plain:
40 raw = raw.strip()
41 else:
42 logger.debug('base64-decoding and zlib decompressing message')
43 raw = zlib.decompress(base64.b64decode(raw))
44 if json_dump_file is not None:
45 with open(json_dump_file, 'w') as fh:
46 fh.write(raw)
47 self.data = json.loads(raw)
48 logger.debug('Loaded message JSON')
49 self.config = config
50 self.session = boto3.Session()
51
52 def run(self, dry_run=False, print_only=False):
53 emd = EmailDelivery(self.config, self.session, logger)
54 addrs_to_msgs = emd.get_to_addrs_email_messages_map(self.data)
55 logger.info('Would send email to: %s', addrs_to_msgs.keys())
56 if print_only:
57 mime = emd.get_mimetext_message(
58 self.data, self.data['resources'], ['[email protected]']
59 )
60 logger.info('Send mail with subject: "%s"', mime['Subject'])
61 print(mime.get_payload(None, True))
62 return
63 if dry_run:
64 for to_addrs, mimetext_msg in addrs_to_msgs.items():
65 print('-> SEND MESSAGE TO: %s' % '; '.join(to_addrs))
66 print(mimetext_msg.get_payload(None, True))
67 return
68 # else actually send the message...
69 for to_addrs, mimetext_msg in addrs_to_msgs.items():
70 logger.info('Actually sending mail to: %s', to_addrs)
71 emd.send_c7n_email(self.data, list(to_addrs), mimetext_msg)
72
73
74 def setup_parser():
75 parser = argparse.ArgumentParser('Test c7n-mailer templates and mail')
76 parser.add_argument('-c', '--config', required=True)
77 parser.add_argument('-d', '--dry-run', dest='dry_run', action='store_true',
78 default=False,
79 help='Log messages that would be sent, but do not send')
80 parser.add_argument('-T', '--template-print', dest='print_only',
81 action='store_true', default=False,
82 help='Just print rendered templates')
83 parser.add_argument('-t', '--templates', default=None, type=str,
84 help='message templates folder location')
85 parser.add_argument('-p', '--plain', dest='plain', action='store_true',
86 default=False,
87 help='Expect MESSAGE_FILE to be a plain string, '
88 'rather than the base64-encoded, gzipped SQS '
89 'message format')
90 parser.add_argument('-j', '--json-dump-file', dest='json_dump_file',
91 type=str, action='store', default=None,
92 help='If dump JSON of MESSAGE_FILE to this path; '
93 'useful to base64-decode and gunzip a message')
94 parser.add_argument('MESSAGE_FILE', type=str,
95 help='Path to SQS message dump/content file')
96 return parser
97
98
99 def session_factory(config):
100 return boto3.Session(
101 region_name=config['region'],
102 profile_name=config.get('profile'))
103
104
105 def main():
106 parser = setup_parser()
107 options = parser.parse_args()
108
109 module_dir = os.path.dirname(os.path.abspath(__file__))
110 default_templates = [
111 os.path.abspath(os.path.join(module_dir, 'msg-templates')),
112 os.path.abspath(os.path.join(module_dir, '..', 'msg-templates')),
113 os.path.abspath('.')
114 ]
115 templates = options.templates
116 if templates:
117 default_templates.append(
118 os.path.abspath(os.path.expanduser(os.path.expandvars(templates)))
119 )
120
121 log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
122 logging.basicConfig(level=logging.DEBUG, format=log_format)
123 logging.getLogger('botocore').setLevel(logging.WARNING)
124
125 with open(options.config) as fh:
126 config = yaml.load(fh.read(), Loader=yaml.SafeLoader)
127
128 jsonschema.validate(config, CONFIG_SCHEMA)
129 setup_defaults(config)
130 config['templates_folders'] = default_templates
131
132 tester = MailerTester(
133 options.MESSAGE_FILE, config, msg_plain=options.plain,
134 json_dump_file=options.json_dump_file
135 )
136 tester.run(options.dry_run, options.print_only)
137
138
139 if __name__ == '__main__':
140 main()
141
[end of tools/c7n_mailer/c7n_mailer/replay.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/c7n_mailer/c7n_mailer/replay.py b/tools/c7n_mailer/c7n_mailer/replay.py
--- a/tools/c7n_mailer/c7n_mailer/replay.py
+++ b/tools/c7n_mailer/c7n_mailer/replay.py
@@ -42,7 +42,7 @@
logger.debug('base64-decoding and zlib decompressing message')
raw = zlib.decompress(base64.b64decode(raw))
if json_dump_file is not None:
- with open(json_dump_file, 'w') as fh:
+ with open(json_dump_file, 'wb') as fh: # pragma: no cover
fh.write(raw)
self.data = json.loads(raw)
logger.debug('Loaded message JSON')
|
{"golden_diff": "diff --git a/tools/c7n_mailer/c7n_mailer/replay.py b/tools/c7n_mailer/c7n_mailer/replay.py\n--- a/tools/c7n_mailer/c7n_mailer/replay.py\n+++ b/tools/c7n_mailer/c7n_mailer/replay.py\n@@ -42,7 +42,7 @@\n logger.debug('base64-decoding and zlib decompressing message')\n raw = zlib.decompress(base64.b64decode(raw))\n if json_dump_file is not None:\n- with open(json_dump_file, 'w') as fh:\n+ with open(json_dump_file, 'wb') as fh: # pragma: no cover\n fh.write(raw)\n self.data = json.loads(raw)\n logger.debug('Loaded message JSON')\n", "issue": "c7n-mailer-replay : Python3 - TypeError: write() argument must be str, not bytes\nUsing c7n-mailer-replay under Python3 gives the following trace:\r\n```\r\nTraceback (most recent call last):\r\n File \"HOME/.pyenv/versions/cloud-custodian-3.6/bin/c7n-mailer-replay\", line 11, in <module>\r\n load_entry_point('c7n-mailer', 'console_scripts', 'c7n-mailer-replay')()\r\n File \"HOME/CLOUD_CUSTODIAN/SRC/tools/c7n_mailer/c7n_mailer/replay.py\", line 134, in main\r\n json_dump_file=options.json_dump_file\r\n File \"HOME/CLOUD_CUSTODIAN/SRC/tools/c7n_mailer/c7n_mailer/replay.py\", line 46, in __init__\r\n fh.write(raw)\r\nTypeError: write() argument must be str, not bytes\r\n```\r\n\r\nI had success with the following change:\r\n```diff\r\ndiff --git a/tools/c7n_mailer/c7n_mailer/replay.py b/tools/c7n_mailer/c7n_mailer/replay.py\r\nindex b3f5456be..72f63332f 100644\r\n--- a/tools/c7n_mailer/c7n_mailer/replay.py\r\n+++ b/tools/c7n_mailer/c7n_mailer/replay.py\r\n@@ -42,7 +42,7 @@ class MailerTester(object):\r\n logger.debug('base64-decoding and zlib decompressing message')\r\n raw = zlib.decompress(base64.b64decode(raw))\r\n if json_dump_file is not None:\r\n- with open(json_dump_file, 'w') as fh:\r\n+ with open(json_dump_file, 'wb') as fh:\r\n fh.write(raw)\r\n self.data = json.loads(raw)\r\n logger.debug('Loaded message JSON')\r\n```\r\n\r\nI believe it could be compatible with Python2 also, but it needs some testing.\n", "before_files": [{"content": "\"\"\"\nAllow local testing of mailer and templates by replaying an SQS message.\n\nMAILER_FILE input is a file containing the exact base64-encoded, gzipped\ndata that's enqueued to SQS via :py:meth:`c7n.actions.Notify.send_sqs`.\n\nAlternatively, with -p|--plain specified, the file will be assumed to be\nJSON data that can be loaded directly.\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport argparse\nimport boto3\nimport os\nimport logging\nimport zlib\nimport base64\nimport json\n\nimport jsonschema\nfrom ruamel import yaml\n\nfrom c7n_mailer.utils import setup_defaults\nfrom c7n_mailer.cli import CONFIG_SCHEMA\nfrom .email_delivery import EmailDelivery\n\nlogger = logging.getLogger(__name__)\n\n\nclass MailerTester(object):\n\n def __init__(self, msg_file, config, msg_plain=False, json_dump_file=None):\n if not os.path.exists(msg_file):\n raise RuntimeError(\"File does not exist: %s\" % msg_file)\n logger.debug('Reading message from: %s', msg_file)\n with open(msg_file, 'r') as fh:\n raw = fh.read()\n logger.debug('Read %d byte message', len(raw))\n if msg_plain:\n raw = raw.strip()\n else:\n logger.debug('base64-decoding and zlib decompressing message')\n raw = zlib.decompress(base64.b64decode(raw))\n if json_dump_file is not None:\n with open(json_dump_file, 'w') as fh:\n fh.write(raw)\n self.data = json.loads(raw)\n logger.debug('Loaded message JSON')\n self.config = config\n self.session = boto3.Session()\n\n def run(self, dry_run=False, 
print_only=False):\n emd = EmailDelivery(self.config, self.session, logger)\n addrs_to_msgs = emd.get_to_addrs_email_messages_map(self.data)\n logger.info('Would send email to: %s', addrs_to_msgs.keys())\n if print_only:\n mime = emd.get_mimetext_message(\n self.data, self.data['resources'], ['[email protected]']\n )\n logger.info('Send mail with subject: \"%s\"', mime['Subject'])\n print(mime.get_payload(None, True))\n return\n if dry_run:\n for to_addrs, mimetext_msg in addrs_to_msgs.items():\n print('-> SEND MESSAGE TO: %s' % '; '.join(to_addrs))\n print(mimetext_msg.get_payload(None, True))\n return\n # else actually send the message...\n for to_addrs, mimetext_msg in addrs_to_msgs.items():\n logger.info('Actually sending mail to: %s', to_addrs)\n emd.send_c7n_email(self.data, list(to_addrs), mimetext_msg)\n\n\ndef setup_parser():\n parser = argparse.ArgumentParser('Test c7n-mailer templates and mail')\n parser.add_argument('-c', '--config', required=True)\n parser.add_argument('-d', '--dry-run', dest='dry_run', action='store_true',\n default=False,\n help='Log messages that would be sent, but do not send')\n parser.add_argument('-T', '--template-print', dest='print_only',\n action='store_true', default=False,\n help='Just print rendered templates')\n parser.add_argument('-t', '--templates', default=None, type=str,\n help='message templates folder location')\n parser.add_argument('-p', '--plain', dest='plain', action='store_true',\n default=False,\n help='Expect MESSAGE_FILE to be a plain string, '\n 'rather than the base64-encoded, gzipped SQS '\n 'message format')\n parser.add_argument('-j', '--json-dump-file', dest='json_dump_file',\n type=str, action='store', default=None,\n help='If dump JSON of MESSAGE_FILE to this path; '\n 'useful to base64-decode and gunzip a message')\n parser.add_argument('MESSAGE_FILE', type=str,\n help='Path to SQS message dump/content file')\n return parser\n\n\ndef session_factory(config):\n return boto3.Session(\n region_name=config['region'],\n profile_name=config.get('profile'))\n\n\ndef main():\n parser = setup_parser()\n options = parser.parse_args()\n\n module_dir = os.path.dirname(os.path.abspath(__file__))\n default_templates = [\n os.path.abspath(os.path.join(module_dir, 'msg-templates')),\n os.path.abspath(os.path.join(module_dir, '..', 'msg-templates')),\n os.path.abspath('.')\n ]\n templates = options.templates\n if templates:\n default_templates.append(\n os.path.abspath(os.path.expanduser(os.path.expandvars(templates)))\n )\n\n log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'\n logging.basicConfig(level=logging.DEBUG, format=log_format)\n logging.getLogger('botocore').setLevel(logging.WARNING)\n\n with open(options.config) as fh:\n config = yaml.load(fh.read(), Loader=yaml.SafeLoader)\n\n jsonschema.validate(config, CONFIG_SCHEMA)\n setup_defaults(config)\n config['templates_folders'] = default_templates\n\n tester = MailerTester(\n options.MESSAGE_FILE, config, msg_plain=options.plain,\n json_dump_file=options.json_dump_file\n )\n tester.run(options.dry_run, options.print_only)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/c7n_mailer/c7n_mailer/replay.py"}]}
| 2,493 | 176 |
gh_patches_debug_17240
|
rasdani/github-patches
|
git_diff
|
napari__napari-6139
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Removing comments from PR does not work
## 🐛 Bug
After merging, it looks like the action for removing comments does not work.
I will be happy to fast-merge a potential bugfix without the standard 24 hours, as it needs to be merged in order to test it.
</issue>
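The golden diff further down in this record points at the likely cause: the guard in `main` exits early when the repository *is* `napari/napari`, which is exactly the repository the workflow should run on. Condensed from that diff (not a complete script):

```python
# Condensed from the golden diff below; not a complete script.
import sys
from os import environ

REPO = 'napari/napari'

repository_name = environ.get("GITHUB_REPOSITORY")
if repository_name != REPO:  # the buggy version used `==`, so it bailed out on the main repo itself
    print('Not on main repo, aborting with success')
    sys.exit(0)
```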
<code>
[start of tools/remove_html_comments_from_pr.py]
1 """
2 Edit pull request description to remove HTML comments
3
4 We might want to remove section with markdown task lists that are completely empty
5 """
6
7 import re
8 import sys
9 from os import environ
10
11 import requests
12
13
14 def remove_html_comments(text):
15 # Regular expression to remove HTML comments
16 # [^\S\r\n] is whitespace but not new line
17 html_comment_pattern = r"[^\S\r\n]*<!--(.*?)-->[^\S\r\n]*\n?"
18 return re.sub(html_comment_pattern, "", text, flags=re.DOTALL)
19
20
21 def edit_pull_request_description(repo, pull_request_number, access_token):
22 # GitHub API base URL
23 base_url = "https://api.github.com"
24
25 # Prepare the headers with the access token
26 headers = {"Authorization": f"token {access_token}"}
27
28 # Get the current pull request description
29 pr_url = f"{base_url}/repos/{repo}/pulls/{pull_request_number}"
30 response = requests.get(pr_url, headers=headers)
31 response.raise_for_status()
32 response_json = response.json()
33 current_description = response_json["body"]
34
35 # Remove HTML comments from the description
36 edited_description = remove_html_comments(current_description)
37 if edited_description == current_description:
38 print("No HTML comments found in the pull request description")
39 return
40
41 # Update the pull request description
42 update_pr_url = f"{base_url}/repos/{repo}/pulls/{pull_request_number}"
43 payload = {"body": edited_description}
44 response = requests.patch(update_pr_url, json=payload, headers=headers)
45 response.raise_for_status()
46
47 if response.status_code == 200:
48 print(
49 f"Pull request #{pull_request_number} description has been updated successfully!"
50 )
51 else:
52 print(
53 f"Failed to update pull request description. Status code: {response.status_code}"
54 )
55
56
57 if __name__ == "__main__":
58 # Replace with your repository and pull request number
59 # get cuurrent repository name from github actions
60 repository_name = environ.get("GITHUB_REPOSITORY")
61 if repository_name == "napari/napari":
62 sys.exit(0)
63
64 # get current PR number from github actions
65 github_ref = environ.get("GITHUB_REF")
66 refs, pull, number, merge = github_ref.split('/')
67 assert refs == 'refs'
68 assert pull == 'pull'
69 assert merge == 'merge'
70
71 # Replace with your GitHub access token
72 access_token = environ.get("GITHUB_TOKEN")
73
74 edit_pull_request_description(repository_name, number, access_token)
75
[end of tools/remove_html_comments_from_pr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/remove_html_comments_from_pr.py b/tools/remove_html_comments_from_pr.py
--- a/tools/remove_html_comments_from_pr.py
+++ b/tools/remove_html_comments_from_pr.py
@@ -10,6 +10,8 @@
import requests
+REPO = 'napari/napari'
+
def remove_html_comments(text):
# Regular expression to remove HTML comments
@@ -55,10 +57,12 @@
if __name__ == "__main__":
+ print('Will inspect PR description to remove html comments.')
# Replace with your repository and pull request number
# get cuurrent repository name from github actions
repository_name = environ.get("GITHUB_REPOSITORY")
- if repository_name == "napari/napari":
+ if repository_name != REPO:
+ print('Not on main repo, aborting with success')
sys.exit(0)
# get current PR number from github actions
|
{"golden_diff": "diff --git a/tools/remove_html_comments_from_pr.py b/tools/remove_html_comments_from_pr.py\n--- a/tools/remove_html_comments_from_pr.py\n+++ b/tools/remove_html_comments_from_pr.py\n@@ -10,6 +10,8 @@\n \n import requests\n \n+REPO = 'napari/napari'\n+\n \n def remove_html_comments(text):\n # Regular expression to remove HTML comments\n@@ -55,10 +57,12 @@\n \n \n if __name__ == \"__main__\":\n+ print('Will inspect PR description to remove html comments.')\n # Replace with your repository and pull request number\n # get cuurrent repository name from github actions\n repository_name = environ.get(\"GITHUB_REPOSITORY\")\n- if repository_name == \"napari/napari\":\n+ if repository_name != REPO:\n+ print('Not on main repo, aborting with success')\n sys.exit(0)\n \n # get current PR number from github actions\n", "issue": "Removing comments from PR does not work\n## \ud83d\udc1b Bug\r\nAfter merging it looks like the action for removing comments does not work. \r\n\r\nI will be happy to fast merge potential bugfix without the standard 24 hours as it needs to be merged to test. \r\n\n", "before_files": [{"content": "\"\"\"\nEdit pull request description to remove HTML comments\n\nWe might want to remove section with markdown task lists that are completely empty\n\"\"\"\n\nimport re\nimport sys\nfrom os import environ\n\nimport requests\n\n\ndef remove_html_comments(text):\n # Regular expression to remove HTML comments\n # [^\\S\\r\\n] is whitespace but not new line\n html_comment_pattern = r\"[^\\S\\r\\n]*<!--(.*?)-->[^\\S\\r\\n]*\\n?\"\n return re.sub(html_comment_pattern, \"\", text, flags=re.DOTALL)\n\n\ndef edit_pull_request_description(repo, pull_request_number, access_token):\n # GitHub API base URL\n base_url = \"https://api.github.com\"\n\n # Prepare the headers with the access token\n headers = {\"Authorization\": f\"token {access_token}\"}\n\n # Get the current pull request description\n pr_url = f\"{base_url}/repos/{repo}/pulls/{pull_request_number}\"\n response = requests.get(pr_url, headers=headers)\n response.raise_for_status()\n response_json = response.json()\n current_description = response_json[\"body\"]\n\n # Remove HTML comments from the description\n edited_description = remove_html_comments(current_description)\n if edited_description == current_description:\n print(\"No HTML comments found in the pull request description\")\n return\n\n # Update the pull request description\n update_pr_url = f\"{base_url}/repos/{repo}/pulls/{pull_request_number}\"\n payload = {\"body\": edited_description}\n response = requests.patch(update_pr_url, json=payload, headers=headers)\n response.raise_for_status()\n\n if response.status_code == 200:\n print(\n f\"Pull request #{pull_request_number} description has been updated successfully!\"\n )\n else:\n print(\n f\"Failed to update pull request description. 
Status code: {response.status_code}\"\n )\n\n\nif __name__ == \"__main__\":\n # Replace with your repository and pull request number\n # get cuurrent repository name from github actions\n repository_name = environ.get(\"GITHUB_REPOSITORY\")\n if repository_name == \"napari/napari\":\n sys.exit(0)\n\n # get current PR number from github actions\n github_ref = environ.get(\"GITHUB_REF\")\n refs, pull, number, merge = github_ref.split('/')\n assert refs == 'refs'\n assert pull == 'pull'\n assert merge == 'merge'\n\n # Replace with your GitHub access token\n access_token = environ.get(\"GITHUB_TOKEN\")\n\n edit_pull_request_description(repository_name, number, access_token)\n", "path": "tools/remove_html_comments_from_pr.py"}]}
| 1,294 | 207 |
gh_patches_debug_18479
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-1744
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
How could I reduce the number of workers?
Could I reduce the number_of_workers?
---------
I run doccano on my machine using this code:
```
doccano init
doccano create user ***
doccano web server --port ***
```
And then I got this log:
```
Booting worker with pid: 19
Booting worker with pid: 20
...
Booting worker with pid: 157
```
It runs lots of workers and takes up a lot of memory. So, can I change the number_of_workers variable? I saw the default is number_of_workers = ``` multiprocessing.cpu_count()*2+1 ```. How could I change it?
Your Environment
---------
* Operating System: Linux
* Python Version Used: Python38
* When you install doccano: 2021-11-30
* How did you install doccano (Heroku button etc): pip install doccano
</issue>
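For reference, a short sketch of the default the issue quotes and of the knob the golden diff later in this record adds; the example invocation is hypothetical and assumes the console script is installed as `doccano`, as the issue's own commands suggest.

```python
# Sketch of the worker heuristic hard-coded in backend/cli.py before the patch.
import multiprocessing

def number_of_workers():
    return (multiprocessing.cpu_count() * 2) + 1

print(number_of_workers())  # e.g. 17 on an 8-core machine

# After the patch, the count can be passed on the command line instead, e.g.:
#   doccano webserver --port 8000 --workers 2
```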
<code>
[start of backend/cli.py]
1 import argparse
2 import multiprocessing
3 import os
4 import platform
5 import sys
6 from pathlib import Path
7
8 import django
9 from django.core import management
10
11 from .config.celery import app
12
13 DOCCANO_HOME = os.path.expanduser(os.environ.get("DOCCANO_HOME", "~/doccano"))
14 Path(DOCCANO_HOME).mkdir(parents=True, exist_ok=True)
15 os.environ["STANDALONE"] = "True"
16 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")
17 os.environ.setdefault("DATABASE_URL", os.path.join(f"sqlite:///{DOCCANO_HOME}", "db.sqlite3"))
18 os.environ.setdefault("MEDIA_ROOT", os.path.join(DOCCANO_HOME, "media"))
19 base = os.path.abspath(os.path.dirname(__file__))
20 sys.path.append(base)
21 django.setup()
22 parser = argparse.ArgumentParser(description="doccano, text annotation for machine learning practitioners.")
23
24
25 def number_of_workers():
26 return (multiprocessing.cpu_count() * 2) + 1
27
28
29 def is_windows():
30 return platform.system() == "Windows"
31
32
33 def run_on_nix(args):
34 import gunicorn.app.base
35 import gunicorn.util
36
37 class StandaloneApplication(gunicorn.app.base.BaseApplication):
38 def __init__(self, options=None):
39 self.options = options or {}
40 super().__init__()
41
42 def load_config(self):
43 config = {
44 key: value for key, value in self.options.items() if key in self.cfg.settings and value is not None
45 }
46 for key, value in config.items():
47 self.cfg.set(key.lower(), value)
48
49 def load(self):
50 return gunicorn.util.import_app("config.wsgi")
51
52 options = {
53 "bind": "%s:%s" % ("0.0.0.0", args.port),
54 "workers": number_of_workers(),
55 "chdir": base,
56 "capture_output": True,
57 "loglevel": "debug",
58 }
59 StandaloneApplication(options).run()
60
61
62 def run_on_windows(args):
63 from waitress import serve
64
65 from config.wsgi import application
66
67 serve(application, port=args.port)
68
69
70 def command_db_init(args):
71 print("Setup Database.")
72 management.call_command("wait_for_db")
73 management.call_command("migrate")
74 management.call_command("create_roles")
75
76
77 def command_user_create(args):
78 print("Create admin user.")
79 management.call_command(
80 "create_admin", "--noinput", username=args.username, password=args.password, email=args.email
81 )
82
83
84 def command_migrate(args):
85 print("Start migration.")
86 management.call_command("migrate")
87
88
89 def command_run_webserver(args):
90 print(f"Starting server with port {args.port}.")
91 if is_windows():
92 run_on_windows(args)
93 else:
94 run_on_nix(args)
95
96
97 def command_run_task_queue(args):
98 print("Starting task queue.")
99 argv = [
100 "--app=config",
101 "--workdir={}".format(base),
102 "worker",
103 "--loglevel=info",
104 "--concurrency={}".format(args.concurrency),
105 ]
106 if is_windows():
107 argv.append("--pool=solo")
108 app.worker_main(argv=argv)
109
110
111 def command_help(args):
112 print(parser.parse_args([args.command, "--help"]))
113
114
115 def main():
116 # Create a command line parser.
117 subparsers = parser.add_subparsers()
118
119 # Create a parser for db initialization.
120 parser_init = subparsers.add_parser("init", help="see `init -h`")
121 parser_init.set_defaults(handler=command_db_init)
122
123 # Create a parser for migration.
124 parser_migration = subparsers.add_parser("migrate", help="Updates database schema.")
125 parser_migration.set_defaults(handler=command_migrate)
126
127 # Create a parser for user creation.
128 parser_create_user = subparsers.add_parser("createuser", help="see `createuser -h`")
129 parser_create_user.add_argument("--username", type=str, default="admin", help="admin username")
130 parser_create_user.add_argument("--password", type=str, default="password", help="admin password")
131 parser_create_user.add_argument("--email", type=str, default="[email protected]", help="admin email")
132 parser_create_user.set_defaults(handler=command_user_create)
133
134 # Create a parser for web server.
135 parser_server = subparsers.add_parser("webserver", help="see `webserver -h`")
136 parser_server.add_argument("--port", type=int, default=8000, help="port number")
137 parser_server.set_defaults(handler=command_run_webserver)
138
139 # Create a parser for task queue.
140 parser_queue = subparsers.add_parser("task", help="see `task -h`")
141 parser_queue.add_argument("--concurrency", type=int, default=2, help="concurrency")
142 parser_queue.set_defaults(handler=command_run_task_queue)
143
144 # Create a parser for help.
145 parser_help = subparsers.add_parser("help", help="see `help -h`")
146 parser_help.add_argument("command", help="command name which help is shown")
147 parser_help.set_defaults(handler=command_help)
148
149 # Dispatch handler.
150 args = parser.parse_args()
151 if hasattr(args, "handler"):
152 args.handler(args)
153 else:
154 # If specified unknown command, show help.
155 parser.print_help()
156
157
158 if __name__ == "__main__":
159 main()
160
[end of backend/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/cli.py b/backend/cli.py
--- a/backend/cli.py
+++ b/backend/cli.py
@@ -51,7 +51,7 @@
options = {
"bind": "%s:%s" % ("0.0.0.0", args.port),
- "workers": number_of_workers(),
+ "workers": args.workers,
"chdir": base,
"capture_output": True,
"loglevel": "debug",
@@ -134,6 +134,7 @@
# Create a parser for web server.
parser_server = subparsers.add_parser("webserver", help="see `webserver -h`")
parser_server.add_argument("--port", type=int, default=8000, help="port number")
+ parser_server.add_argument("--workers", type=int, default=number_of_workers(), help="the number of workers")
parser_server.set_defaults(handler=command_run_webserver)
# Create a parser for task queue.
|
{"golden_diff": "diff --git a/backend/cli.py b/backend/cli.py\n--- a/backend/cli.py\n+++ b/backend/cli.py\n@@ -51,7 +51,7 @@\n \n options = {\n \"bind\": \"%s:%s\" % (\"0.0.0.0\", args.port),\n- \"workers\": number_of_workers(),\n+ \"workers\": args.workers,\n \"chdir\": base,\n \"capture_output\": True,\n \"loglevel\": \"debug\",\n@@ -134,6 +134,7 @@\n # Create a parser for web server.\n parser_server = subparsers.add_parser(\"webserver\", help=\"see `webserver -h`\")\n parser_server.add_argument(\"--port\", type=int, default=8000, help=\"port number\")\n+ parser_server.add_argument(\"--workers\", type=int, default=number_of_workers(), help=\"the number of workers\")\n parser_server.set_defaults(handler=command_run_webserver)\n \n # Create a parser for task queue.\n", "issue": "How could I reduce number of workers?\nCould I reduce the number_of_workers?\r\n---------\r\nI run the doccano in my machine use this code.\r\n```\r\ndoccano init\r\ndoccano create user ***\r\ndoccano web server --port ***\r\n```\r\nAnd then I got this log: \r\n```\r\nBooting worker with pid: 19\r\nBooting worker with pid: 20\r\n...\r\nBooting worker with pid: 157\r\n```\r\nIt run lots of worker and it took up a lot of memory. So, can I change the number_of_worker varlible. I saw the default number_of_worker= ``` multiprocessing.cpu_count()*2+1 ```. How could I change it?\r\n\r\n\r\nYour Environment\r\n---------\r\n* Operating System: Linux\r\n* Python Version Used: Python38\r\n* When you install doccano: 2021-11-30\r\n* How did you install doccano (Heroku button etc): pip install doccano\r\n\n", "before_files": [{"content": "import argparse\nimport multiprocessing\nimport os\nimport platform\nimport sys\nfrom pathlib import Path\n\nimport django\nfrom django.core import management\n\nfrom .config.celery import app\n\nDOCCANO_HOME = os.path.expanduser(os.environ.get(\"DOCCANO_HOME\", \"~/doccano\"))\nPath(DOCCANO_HOME).mkdir(parents=True, exist_ok=True)\nos.environ[\"STANDALONE\"] = \"True\"\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"config.settings.production\")\nos.environ.setdefault(\"DATABASE_URL\", os.path.join(f\"sqlite:///{DOCCANO_HOME}\", \"db.sqlite3\"))\nos.environ.setdefault(\"MEDIA_ROOT\", os.path.join(DOCCANO_HOME, \"media\"))\nbase = os.path.abspath(os.path.dirname(__file__))\nsys.path.append(base)\ndjango.setup()\nparser = argparse.ArgumentParser(description=\"doccano, text annotation for machine learning practitioners.\")\n\n\ndef number_of_workers():\n return (multiprocessing.cpu_count() * 2) + 1\n\n\ndef is_windows():\n return platform.system() == \"Windows\"\n\n\ndef run_on_nix(args):\n import gunicorn.app.base\n import gunicorn.util\n\n class StandaloneApplication(gunicorn.app.base.BaseApplication):\n def __init__(self, options=None):\n self.options = options or {}\n super().__init__()\n\n def load_config(self):\n config = {\n key: value for key, value in self.options.items() if key in self.cfg.settings and value is not None\n }\n for key, value in config.items():\n self.cfg.set(key.lower(), value)\n\n def load(self):\n return gunicorn.util.import_app(\"config.wsgi\")\n\n options = {\n \"bind\": \"%s:%s\" % (\"0.0.0.0\", args.port),\n \"workers\": number_of_workers(),\n \"chdir\": base,\n \"capture_output\": True,\n \"loglevel\": \"debug\",\n }\n StandaloneApplication(options).run()\n\n\ndef run_on_windows(args):\n from waitress import serve\n\n from config.wsgi import application\n\n serve(application, port=args.port)\n\n\ndef command_db_init(args):\n print(\"Setup Database.\")\n 
management.call_command(\"wait_for_db\")\n management.call_command(\"migrate\")\n management.call_command(\"create_roles\")\n\n\ndef command_user_create(args):\n print(\"Create admin user.\")\n management.call_command(\n \"create_admin\", \"--noinput\", username=args.username, password=args.password, email=args.email\n )\n\n\ndef command_migrate(args):\n print(\"Start migration.\")\n management.call_command(\"migrate\")\n\n\ndef command_run_webserver(args):\n print(f\"Starting server with port {args.port}.\")\n if is_windows():\n run_on_windows(args)\n else:\n run_on_nix(args)\n\n\ndef command_run_task_queue(args):\n print(\"Starting task queue.\")\n argv = [\n \"--app=config\",\n \"--workdir={}\".format(base),\n \"worker\",\n \"--loglevel=info\",\n \"--concurrency={}\".format(args.concurrency),\n ]\n if is_windows():\n argv.append(\"--pool=solo\")\n app.worker_main(argv=argv)\n\n\ndef command_help(args):\n print(parser.parse_args([args.command, \"--help\"]))\n\n\ndef main():\n # Create a command line parser.\n subparsers = parser.add_subparsers()\n\n # Create a parser for db initialization.\n parser_init = subparsers.add_parser(\"init\", help=\"see `init -h`\")\n parser_init.set_defaults(handler=command_db_init)\n\n # Create a parser for migration.\n parser_migration = subparsers.add_parser(\"migrate\", help=\"Updates database schema.\")\n parser_migration.set_defaults(handler=command_migrate)\n\n # Create a parser for user creation.\n parser_create_user = subparsers.add_parser(\"createuser\", help=\"see `createuser -h`\")\n parser_create_user.add_argument(\"--username\", type=str, default=\"admin\", help=\"admin username\")\n parser_create_user.add_argument(\"--password\", type=str, default=\"password\", help=\"admin password\")\n parser_create_user.add_argument(\"--email\", type=str, default=\"[email protected]\", help=\"admin email\")\n parser_create_user.set_defaults(handler=command_user_create)\n\n # Create a parser for web server.\n parser_server = subparsers.add_parser(\"webserver\", help=\"see `webserver -h`\")\n parser_server.add_argument(\"--port\", type=int, default=8000, help=\"port number\")\n parser_server.set_defaults(handler=command_run_webserver)\n\n # Create a parser for task queue.\n parser_queue = subparsers.add_parser(\"task\", help=\"see `task -h`\")\n parser_queue.add_argument(\"--concurrency\", type=int, default=2, help=\"concurrency\")\n parser_queue.set_defaults(handler=command_run_task_queue)\n\n # Create a parser for help.\n parser_help = subparsers.add_parser(\"help\", help=\"see `help -h`\")\n parser_help.add_argument(\"command\", help=\"command name which help is shown\")\n parser_help.set_defaults(handler=command_help)\n\n # Dispatch handler.\n args = parser.parse_args()\n if hasattr(args, \"handler\"):\n args.handler(args)\n else:\n # If specified unknown command, show help.\n parser.print_help()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "backend/cli.py"}]}
| 2,264 | 218 |
gh_patches_debug_14081
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-439
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Double check conditional HTTP implementation
The conditional HTTP implementation doesn't check the status code of the response at all. Determine if it should, and if it should update it to do the right thing.
</issue>
<code>
[start of warehouse/cache/http.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections.abc
14 import functools
15
16 from pyramid.tweens import EXCVIEW
17
18
19 def _add_vary_callback(*varies):
20 def inner(request, response):
21 vary = set(response.vary if response.vary is not None else [])
22 vary |= set(varies)
23 response.vary = vary
24 return inner
25
26
27 def add_vary(*varies):
28 def inner(view):
29 @functools.wraps(view)
30 def wrapped(context, request):
31 request.add_response_callback(_add_vary_callback(*varies))
32 return view(context, request)
33 return wrapped
34 return inner
35
36
37 def cache_control(seconds, public=True):
38 def inner(view):
39 @functools.wraps(view)
40 def wrapped(context, request):
41 response = view(context, request)
42
43 if not request.registry.settings.get(
44 "pyramid.prevent_http_cache", False):
45 if seconds:
46 if public:
47 response.cache_control.public = True
48 else:
49 response.cache_control.private = True
50
51 response.cache_control.max_age = seconds
52 else:
53 response.cache_control.no_cache = True
54 response.cache_control.no_store = True
55 response.cache_control.must_revalidate = True
56
57 return response
58 return wrapped
59 return inner
60
61
62 def conditional_http_tween_factory(handler, registry):
63 def conditional_http_tween(request):
64 response = handler(request)
65
66 # If the Last-Modified header has been set, we want to enable the
67 # conditional response processing.
68 if response.last_modified is not None:
69 response.conditional_response = True
70
71 # We want to only enable the conditional machinery if either we
72 # were given an explicit ETag header by the view or we have a
73 # buffered response and can generate the ETag header ourself.
74 if response.etag is not None:
75 response.conditional_response = True
76 elif (isinstance(response.app_iter, collections.abc.Sequence) and
77 len(response.app_iter) == 1):
78 response.conditional_response = True
79 response.md5_etag()
80
81 return response
82 return conditional_http_tween
83
84
85 def includeme(config):
86 config.add_tween(
87 "warehouse.cache.http.conditional_http_tween_factory",
88 under=EXCVIEW,
89 )
90
[end of warehouse/cache/http.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/cache/http.py b/warehouse/cache/http.py
--- a/warehouse/cache/http.py
+++ b/warehouse/cache/http.py
@@ -75,8 +75,13 @@
response.conditional_response = True
elif (isinstance(response.app_iter, collections.abc.Sequence) and
len(response.app_iter) == 1):
- response.conditional_response = True
- response.md5_etag()
+ # We can only reasonably implement automatic ETags on 200 responses
+ # to GET or HEAD requests. The subtles of doing it in other cases
+ # are too hard to get right.
+ if (request.method in {"GET", "HEAD"} and
+ response.status_code == 200):
+ response.conditional_response = True
+ response.md5_etag()
return response
return conditional_http_tween
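A rough illustrative check of the patched behaviour, not a test from the warehouse repository: `make_handler` and `run` are invented helpers, and the sketch assumes pyramid/webob semantics where a buffered `Response` exposes `app_iter`, `status_code` and `md5_etag()`. With the patch, the tween should only auto-generate an ETag for a 200 response to a GET or HEAD request.

```python
from pyramid.response import Response
from pyramid.testing import DummyRequest

from warehouse.cache.http import conditional_http_tween_factory


def make_handler(status_code):
    def handler(request):
        response = Response(body=b"payload")  # single-item app_iter -> eligible for md5_etag
        response.status_code = status_code
        return response
    return handler


def run(status_code, method="GET"):
    tween = conditional_http_tween_factory(make_handler(status_code), registry=None)
    request = DummyRequest()
    request.method = method
    return tween(request)


assert run(200).etag is not None              # 200 GET gets an automatic ETag
assert run(404).etag is None                  # error responses are left alone
assert run(200, method="POST").etag is None   # non-GET/HEAD requests are left alone
```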
|
{"golden_diff": "diff --git a/warehouse/cache/http.py b/warehouse/cache/http.py\n--- a/warehouse/cache/http.py\n+++ b/warehouse/cache/http.py\n@@ -75,8 +75,13 @@\n response.conditional_response = True\n elif (isinstance(response.app_iter, collections.abc.Sequence) and\n len(response.app_iter) == 1):\n- response.conditional_response = True\n- response.md5_etag()\n+ # We can only reasonably implement automatic ETags on 200 responses\n+ # to GET or HEAD requests. The subtles of doing it in other cases\n+ # are too hard to get right.\n+ if (request.method in {\"GET\", \"HEAD\"} and\n+ response.status_code == 200):\n+ response.conditional_response = True\n+ response.md5_etag()\n \n return response\n return conditional_http_tween\n", "issue": "Double check conditional HTTP implementation\nThe conditional HTTP implementation doesn't check the status code of the response at all. Determine if it should, and if it should update it to do the right thing.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections.abc\nimport functools\n\nfrom pyramid.tweens import EXCVIEW\n\n\ndef _add_vary_callback(*varies):\n def inner(request, response):\n vary = set(response.vary if response.vary is not None else [])\n vary |= set(varies)\n response.vary = vary\n return inner\n\n\ndef add_vary(*varies):\n def inner(view):\n @functools.wraps(view)\n def wrapped(context, request):\n request.add_response_callback(_add_vary_callback(*varies))\n return view(context, request)\n return wrapped\n return inner\n\n\ndef cache_control(seconds, public=True):\n def inner(view):\n @functools.wraps(view)\n def wrapped(context, request):\n response = view(context, request)\n\n if not request.registry.settings.get(\n \"pyramid.prevent_http_cache\", False):\n if seconds:\n if public:\n response.cache_control.public = True\n else:\n response.cache_control.private = True\n\n response.cache_control.max_age = seconds\n else:\n response.cache_control.no_cache = True\n response.cache_control.no_store = True\n response.cache_control.must_revalidate = True\n\n return response\n return wrapped\n return inner\n\n\ndef conditional_http_tween_factory(handler, registry):\n def conditional_http_tween(request):\n response = handler(request)\n\n # If the Last-Modified header has been set, we want to enable the\n # conditional response processing.\n if response.last_modified is not None:\n response.conditional_response = True\n\n # We want to only enable the conditional machinery if either we\n # were given an explicit ETag header by the view or we have a\n # buffered response and can generate the ETag header ourself.\n if response.etag is not None:\n response.conditional_response = True\n elif (isinstance(response.app_iter, collections.abc.Sequence) and\n len(response.app_iter) == 1):\n response.conditional_response = True\n response.md5_etag()\n\n return response\n return conditional_http_tween\n\n\ndef includeme(config):\n config.add_tween(\n \"warehouse.cache.http.conditional_http_tween_factory\",\n 
under=EXCVIEW,\n )\n", "path": "warehouse/cache/http.py"}]}
| 1,351 | 198 |
| gh_patches_debug_17467 | rasdani/github-patches | git_diff | inventree__InvenTree-1662 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add purchase price to Part Stock table
Add column `purchase_price` to "Part Stock" table and make it sortable.
</issue>
<code>
[start of InvenTree/stock/serializers.py]
1 """
2 JSON serializers for Stock app
3 """
4
5 from rest_framework import serializers
6
7 from .models import StockItem, StockLocation
8 from .models import StockItemTracking
9 from .models import StockItemAttachment
10 from .models import StockItemTestResult
11
12 from django.db.models.functions import Coalesce
13
14 from django.db.models import Case, When, Value
15 from django.db.models import BooleanField
16 from django.db.models import Q
17
18 from sql_util.utils import SubquerySum, SubqueryCount
19
20 from decimal import Decimal
21
22 from datetime import datetime, timedelta
23
24 import common.models
25 from company.serializers import SupplierPartSerializer
26 from part.serializers import PartBriefSerializer
27 from InvenTree.serializers import UserSerializerBrief, InvenTreeModelSerializer
28 from InvenTree.serializers import InvenTreeAttachmentSerializerField
29
30
31 class LocationBriefSerializer(InvenTreeModelSerializer):
32 """
33 Provides a brief serializer for a StockLocation object
34 """
35
36 class Meta:
37 model = StockLocation
38 fields = [
39 'pk',
40 'name',
41 'pathstring',
42 ]
43
44
45 class StockItemSerializerBrief(InvenTreeModelSerializer):
46 """ Brief serializers for a StockItem """
47
48 location_name = serializers.CharField(source='location', read_only=True)
49 part_name = serializers.CharField(source='part.full_name', read_only=True)
50 quantity = serializers.FloatField()
51
52 class Meta:
53 model = StockItem
54 fields = [
55 'pk',
56 'uid',
57 'part',
58 'part_name',
59 'supplier_part',
60 'location',
61 'location_name',
62 'quantity',
63 ]
64
65
66 class StockItemSerializer(InvenTreeModelSerializer):
67 """ Serializer for a StockItem:
68
69 - Includes serialization for the linked part
70 - Includes serialization for the item location
71 """
72
73 @staticmethod
74 def prefetch_queryset(queryset):
75 """
76 Prefetch related database tables,
77 to reduce database hits.
78 """
79
80 return queryset.prefetch_related(
81 'belongs_to',
82 'build',
83 'customer',
84 'sales_order',
85 'supplier_part',
86 'supplier_part__supplier',
87 'supplier_part__manufacturer_part__manufacturer',
88 'allocations',
89 'sales_order_allocations',
90 'location',
91 'part',
92 'tracking_info',
93 )
94
95 @staticmethod
96 def annotate_queryset(queryset):
97 """
98 Add some extra annotations to the queryset,
99 performing database queries as efficiently as possible.
100 """
101
102 # Annotate the queryset with the total allocated to sales orders
103 queryset = queryset.annotate(
104 allocated=Coalesce(
105 SubquerySum('sales_order_allocations__quantity'), Decimal(0)
106 ) + Coalesce(
107 SubquerySum('allocations__quantity'), Decimal(0)
108 )
109 )
110
111 # Annotate the queryset with the number of tracking items
112 queryset = queryset.annotate(
113 tracking_items=SubqueryCount('tracking_info')
114 )
115
116 # Add flag to indicate if the StockItem has expired
117 queryset = queryset.annotate(
118 expired=Case(
119 When(
120 StockItem.EXPIRED_FILTER, then=Value(True, output_field=BooleanField()),
121 ),
122 default=Value(False, output_field=BooleanField())
123 )
124 )
125
126 # Add flag to indicate if the StockItem is stale
127 stale_days = common.models.InvenTreeSetting.get_setting('STOCK_STALE_DAYS')
128 stale_date = datetime.now().date() + timedelta(days=stale_days)
129 stale_filter = StockItem.IN_STOCK_FILTER & ~Q(expiry_date=None) & Q(expiry_date__lt=stale_date)
130
131 queryset = queryset.annotate(
132 stale=Case(
133 When(
134 stale_filter, then=Value(True, output_field=BooleanField()),
135 ),
136 default=Value(False, output_field=BooleanField()),
137 )
138 )
139
140 return queryset
141
142 status_text = serializers.CharField(source='get_status_display', read_only=True)
143
144 supplier_part_detail = SupplierPartSerializer(source='supplier_part', many=False, read_only=True)
145
146 part_detail = PartBriefSerializer(source='part', many=False, read_only=True)
147
148 location_detail = LocationBriefSerializer(source='location', many=False, read_only=True)
149
150 tracking_items = serializers.IntegerField(source='tracking_info_count', read_only=True, required=False)
151
152 quantity = serializers.FloatField()
153
154 allocated = serializers.FloatField(source='allocation_count', required=False)
155
156 expired = serializers.BooleanField(required=False, read_only=True)
157
158 stale = serializers.BooleanField(required=False, read_only=True)
159
160 serial = serializers.CharField(required=False)
161
162 required_tests = serializers.IntegerField(source='required_test_count', read_only=True, required=False)
163
164 def __init__(self, *args, **kwargs):
165
166 part_detail = kwargs.pop('part_detail', False)
167 location_detail = kwargs.pop('location_detail', False)
168 supplier_part_detail = kwargs.pop('supplier_part_detail', False)
169 test_detail = kwargs.pop('test_detail', False)
170
171 super(StockItemSerializer, self).__init__(*args, **kwargs)
172
173 if part_detail is not True:
174 self.fields.pop('part_detail')
175
176 if location_detail is not True:
177 self.fields.pop('location_detail')
178
179 if supplier_part_detail is not True:
180 self.fields.pop('supplier_part_detail')
181
182 if test_detail is not True:
183 self.fields.pop('required_tests')
184
185 class Meta:
186 model = StockItem
187 fields = [
188 'allocated',
189 'batch',
190 'belongs_to',
191 'build',
192 'customer',
193 'expired',
194 'expiry_date',
195 'in_stock',
196 'is_building',
197 'link',
198 'location',
199 'location_detail',
200 'notes',
201 'packaging',
202 'part',
203 'part_detail',
204 'pk',
205 'quantity',
206 'required_tests',
207 'sales_order',
208 'serial',
209 'stale',
210 'status',
211 'status_text',
212 'stocktake_date',
213 'supplier_part',
214 'supplier_part_detail',
215 'tracking_items',
216 'uid',
217 'updated',
218 ]
219
220 """ These fields are read-only in this context.
221 They can be updated by accessing the appropriate API endpoints
222 """
223 read_only_fields = [
224 'allocated',
225 'stocktake_date',
226 'stocktake_user',
227 'updated',
228 'in_stock'
229 ]
230
231
232 class StockQuantitySerializer(InvenTreeModelSerializer):
233
234 class Meta:
235 model = StockItem
236 fields = ('quantity',)
237
238
239 class LocationSerializer(InvenTreeModelSerializer):
240 """ Detailed information about a stock location
241 """
242
243 url = serializers.CharField(source='get_absolute_url', read_only=True)
244
245 items = serializers.IntegerField(source='item_count', read_only=True)
246
247 class Meta:
248 model = StockLocation
249 fields = [
250 'pk',
251 'url',
252 'name',
253 'description',
254 'parent',
255 'pathstring',
256 'items',
257 ]
258
259
260 class StockItemAttachmentSerializer(InvenTreeModelSerializer):
261 """ Serializer for StockItemAttachment model """
262
263 def __init__(self, *args, **kwargs):
264 user_detail = kwargs.pop('user_detail', False)
265
266 super().__init__(*args, **kwargs)
267
268 if user_detail is not True:
269 self.fields.pop('user_detail')
270
271 user_detail = UserSerializerBrief(source='user', read_only=True)
272
273 attachment = InvenTreeAttachmentSerializerField(required=True)
274
275 class Meta:
276 model = StockItemAttachment
277
278 fields = [
279 'pk',
280 'stock_item',
281 'attachment',
282 'comment',
283 'upload_date',
284 'user',
285 'user_detail',
286 ]
287
288 read_only_fields = [
289 'upload_date',
290 'user',
291 'user_detail'
292 ]
293
294
295 class StockItemTestResultSerializer(InvenTreeModelSerializer):
296 """ Serializer for the StockItemTestResult model """
297
298 user_detail = UserSerializerBrief(source='user', read_only=True)
299
300 key = serializers.CharField(read_only=True)
301
302 attachment = InvenTreeAttachmentSerializerField(required=False)
303
304 def __init__(self, *args, **kwargs):
305 user_detail = kwargs.pop('user_detail', False)
306
307 super().__init__(*args, **kwargs)
308
309 if user_detail is not True:
310 self.fields.pop('user_detail')
311
312 class Meta:
313 model = StockItemTestResult
314
315 fields = [
316 'pk',
317 'stock_item',
318 'key',
319 'test',
320 'result',
321 'value',
322 'attachment',
323 'notes',
324 'user',
325 'user_detail',
326 'date'
327 ]
328
329 read_only_fields = [
330 'pk',
331 'user',
332 'date',
333 ]
334
335
336 class StockTrackingSerializer(InvenTreeModelSerializer):
337 """ Serializer for StockItemTracking model """
338
339 def __init__(self, *args, **kwargs):
340
341 item_detail = kwargs.pop('item_detail', False)
342 user_detail = kwargs.pop('user_detail', False)
343
344 super().__init__(*args, **kwargs)
345
346 if item_detail is not True:
347 self.fields.pop('item_detail')
348
349 if user_detail is not True:
350 self.fields.pop('user_detail')
351
352 label = serializers.CharField(read_only=True)
353
354 item_detail = StockItemSerializerBrief(source='item', many=False, read_only=True)
355
356 user_detail = UserSerializerBrief(source='user', many=False, read_only=True)
357
358 deltas = serializers.JSONField(read_only=True)
359
360 class Meta:
361 model = StockItemTracking
362 fields = [
363 'pk',
364 'item',
365 'item_detail',
366 'date',
367 'deltas',
368 'label',
369 'notes',
370 'tracking_type',
371 'user',
372 'user_detail',
373 ]
374
375 read_only_fields = [
376 'date',
377 'user',
378 'label',
379 'tracking_type',
380 ]
381
[end of InvenTree/stock/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/InvenTree/stock/serializers.py b/InvenTree/stock/serializers.py
--- a/InvenTree/stock/serializers.py
+++ b/InvenTree/stock/serializers.py
@@ -161,6 +161,13 @@
required_tests = serializers.IntegerField(source='required_test_count', read_only=True, required=False)
+ purchase_price = serializers.SerializerMethodField()
+
+ def get_purchase_price(self, obj):
+ """ Return purchase_price (Money field) as string (includes currency) """
+
+ return str(obj.purchase_price) if obj.purchase_price else '-'
+
def __init__(self, *args, **kwargs):
part_detail = kwargs.pop('part_detail', False)
@@ -215,6 +222,7 @@
'tracking_items',
'uid',
'updated',
+ 'purchase_price',
]
""" These fields are read-only in this context.
|
{"golden_diff": "diff --git a/InvenTree/stock/serializers.py b/InvenTree/stock/serializers.py\n--- a/InvenTree/stock/serializers.py\n+++ b/InvenTree/stock/serializers.py\n@@ -161,6 +161,13 @@\n \n required_tests = serializers.IntegerField(source='required_test_count', read_only=True, required=False)\n \n+ purchase_price = serializers.SerializerMethodField()\n+\n+ def get_purchase_price(self, obj):\n+ \"\"\" Return purchase_price (Money field) as string (includes currency) \"\"\"\n+\n+ return str(obj.purchase_price) if obj.purchase_price else '-'\n+\n def __init__(self, *args, **kwargs):\n \n part_detail = kwargs.pop('part_detail', False)\n@@ -215,6 +222,7 @@\n 'tracking_items',\n 'uid',\n 'updated',\n+ 'purchase_price',\n ]\n \n \"\"\" These fields are read-only in this context.\n", "issue": "Add purchase price to Part Stock table\nAdd column `purchase_price` to \"Part Stock\" table and make it sortable.\n", "before_files": [{"content": "\"\"\"\nJSON serializers for Stock app\n\"\"\"\n\nfrom rest_framework import serializers\n\nfrom .models import StockItem, StockLocation\nfrom .models import StockItemTracking\nfrom .models import StockItemAttachment\nfrom .models import StockItemTestResult\n\nfrom django.db.models.functions import Coalesce\n\nfrom django.db.models import Case, When, Value\nfrom django.db.models import BooleanField\nfrom django.db.models import Q\n\nfrom sql_util.utils import SubquerySum, SubqueryCount\n\nfrom decimal import Decimal\n\nfrom datetime import datetime, timedelta\n\nimport common.models\nfrom company.serializers import SupplierPartSerializer\nfrom part.serializers import PartBriefSerializer\nfrom InvenTree.serializers import UserSerializerBrief, InvenTreeModelSerializer\nfrom InvenTree.serializers import InvenTreeAttachmentSerializerField\n\n\nclass LocationBriefSerializer(InvenTreeModelSerializer):\n \"\"\"\n Provides a brief serializer for a StockLocation object\n \"\"\"\n\n class Meta:\n model = StockLocation\n fields = [\n 'pk',\n 'name',\n 'pathstring',\n ]\n\n\nclass StockItemSerializerBrief(InvenTreeModelSerializer):\n \"\"\" Brief serializers for a StockItem \"\"\"\n\n location_name = serializers.CharField(source='location', read_only=True)\n part_name = serializers.CharField(source='part.full_name', read_only=True)\n quantity = serializers.FloatField()\n\n class Meta:\n model = StockItem\n fields = [\n 'pk',\n 'uid',\n 'part',\n 'part_name',\n 'supplier_part',\n 'location',\n 'location_name',\n 'quantity',\n ]\n\n\nclass StockItemSerializer(InvenTreeModelSerializer):\n \"\"\" Serializer for a StockItem:\n\n - Includes serialization for the linked part\n - Includes serialization for the item location\n \"\"\"\n\n @staticmethod\n def prefetch_queryset(queryset):\n \"\"\"\n Prefetch related database tables,\n to reduce database hits.\n \"\"\"\n\n return queryset.prefetch_related(\n 'belongs_to',\n 'build',\n 'customer',\n 'sales_order',\n 'supplier_part',\n 'supplier_part__supplier',\n 'supplier_part__manufacturer_part__manufacturer',\n 'allocations',\n 'sales_order_allocations',\n 'location',\n 'part',\n 'tracking_info',\n )\n\n @staticmethod\n def annotate_queryset(queryset):\n \"\"\"\n Add some extra annotations to the queryset,\n performing database queries as efficiently as possible.\n \"\"\"\n\n # Annotate the queryset with the total allocated to sales orders\n queryset = queryset.annotate(\n allocated=Coalesce(\n SubquerySum('sales_order_allocations__quantity'), Decimal(0)\n ) + Coalesce(\n SubquerySum('allocations__quantity'), Decimal(0)\n )\n 
)\n\n # Annotate the queryset with the number of tracking items\n queryset = queryset.annotate(\n tracking_items=SubqueryCount('tracking_info')\n )\n\n # Add flag to indicate if the StockItem has expired\n queryset = queryset.annotate(\n expired=Case(\n When(\n StockItem.EXPIRED_FILTER, then=Value(True, output_field=BooleanField()),\n ),\n default=Value(False, output_field=BooleanField())\n )\n )\n\n # Add flag to indicate if the StockItem is stale\n stale_days = common.models.InvenTreeSetting.get_setting('STOCK_STALE_DAYS')\n stale_date = datetime.now().date() + timedelta(days=stale_days)\n stale_filter = StockItem.IN_STOCK_FILTER & ~Q(expiry_date=None) & Q(expiry_date__lt=stale_date)\n\n queryset = queryset.annotate(\n stale=Case(\n When(\n stale_filter, then=Value(True, output_field=BooleanField()),\n ),\n default=Value(False, output_field=BooleanField()),\n )\n )\n\n return queryset\n\n status_text = serializers.CharField(source='get_status_display', read_only=True)\n\n supplier_part_detail = SupplierPartSerializer(source='supplier_part', many=False, read_only=True)\n\n part_detail = PartBriefSerializer(source='part', many=False, read_only=True)\n\n location_detail = LocationBriefSerializer(source='location', many=False, read_only=True)\n\n tracking_items = serializers.IntegerField(source='tracking_info_count', read_only=True, required=False)\n\n quantity = serializers.FloatField()\n\n allocated = serializers.FloatField(source='allocation_count', required=False)\n\n expired = serializers.BooleanField(required=False, read_only=True)\n\n stale = serializers.BooleanField(required=False, read_only=True)\n\n serial = serializers.CharField(required=False)\n\n required_tests = serializers.IntegerField(source='required_test_count', read_only=True, required=False)\n\n def __init__(self, *args, **kwargs):\n\n part_detail = kwargs.pop('part_detail', False)\n location_detail = kwargs.pop('location_detail', False)\n supplier_part_detail = kwargs.pop('supplier_part_detail', False)\n test_detail = kwargs.pop('test_detail', False)\n\n super(StockItemSerializer, self).__init__(*args, **kwargs)\n\n if part_detail is not True:\n self.fields.pop('part_detail')\n\n if location_detail is not True:\n self.fields.pop('location_detail')\n\n if supplier_part_detail is not True:\n self.fields.pop('supplier_part_detail')\n\n if test_detail is not True:\n self.fields.pop('required_tests')\n\n class Meta:\n model = StockItem\n fields = [\n 'allocated',\n 'batch',\n 'belongs_to',\n 'build',\n 'customer',\n 'expired',\n 'expiry_date',\n 'in_stock',\n 'is_building',\n 'link',\n 'location',\n 'location_detail',\n 'notes',\n 'packaging',\n 'part',\n 'part_detail',\n 'pk',\n 'quantity',\n 'required_tests',\n 'sales_order',\n 'serial',\n 'stale',\n 'status',\n 'status_text',\n 'stocktake_date',\n 'supplier_part',\n 'supplier_part_detail',\n 'tracking_items',\n 'uid',\n 'updated',\n ]\n\n \"\"\" These fields are read-only in this context.\n They can be updated by accessing the appropriate API endpoints\n \"\"\"\n read_only_fields = [\n 'allocated',\n 'stocktake_date',\n 'stocktake_user',\n 'updated',\n 'in_stock'\n ]\n\n\nclass StockQuantitySerializer(InvenTreeModelSerializer):\n\n class Meta:\n model = StockItem\n fields = ('quantity',)\n\n\nclass LocationSerializer(InvenTreeModelSerializer):\n \"\"\" Detailed information about a stock location\n \"\"\"\n\n url = serializers.CharField(source='get_absolute_url', read_only=True)\n\n items = serializers.IntegerField(source='item_count', read_only=True)\n\n class Meta:\n 
model = StockLocation\n fields = [\n 'pk',\n 'url',\n 'name',\n 'description',\n 'parent',\n 'pathstring',\n 'items',\n ]\n\n\nclass StockItemAttachmentSerializer(InvenTreeModelSerializer):\n \"\"\" Serializer for StockItemAttachment model \"\"\"\n\n def __init__(self, *args, **kwargs):\n user_detail = kwargs.pop('user_detail', False)\n\n super().__init__(*args, **kwargs)\n\n if user_detail is not True:\n self.fields.pop('user_detail')\n\n user_detail = UserSerializerBrief(source='user', read_only=True)\n\n attachment = InvenTreeAttachmentSerializerField(required=True)\n\n class Meta:\n model = StockItemAttachment\n\n fields = [\n 'pk',\n 'stock_item',\n 'attachment',\n 'comment',\n 'upload_date',\n 'user',\n 'user_detail',\n ]\n\n read_only_fields = [\n 'upload_date',\n 'user',\n 'user_detail'\n ]\n\n\nclass StockItemTestResultSerializer(InvenTreeModelSerializer):\n \"\"\" Serializer for the StockItemTestResult model \"\"\"\n\n user_detail = UserSerializerBrief(source='user', read_only=True)\n\n key = serializers.CharField(read_only=True)\n\n attachment = InvenTreeAttachmentSerializerField(required=False)\n\n def __init__(self, *args, **kwargs):\n user_detail = kwargs.pop('user_detail', False)\n\n super().__init__(*args, **kwargs)\n\n if user_detail is not True:\n self.fields.pop('user_detail')\n\n class Meta:\n model = StockItemTestResult\n\n fields = [\n 'pk',\n 'stock_item',\n 'key',\n 'test',\n 'result',\n 'value',\n 'attachment',\n 'notes',\n 'user',\n 'user_detail',\n 'date'\n ]\n\n read_only_fields = [\n 'pk',\n 'user',\n 'date',\n ]\n\n\nclass StockTrackingSerializer(InvenTreeModelSerializer):\n \"\"\" Serializer for StockItemTracking model \"\"\"\n\n def __init__(self, *args, **kwargs):\n\n item_detail = kwargs.pop('item_detail', False)\n user_detail = kwargs.pop('user_detail', False)\n\n super().__init__(*args, **kwargs)\n\n if item_detail is not True:\n self.fields.pop('item_detail')\n\n if user_detail is not True:\n self.fields.pop('user_detail')\n\n label = serializers.CharField(read_only=True)\n\n item_detail = StockItemSerializerBrief(source='item', many=False, read_only=True)\n\n user_detail = UserSerializerBrief(source='user', many=False, read_only=True)\n\n deltas = serializers.JSONField(read_only=True)\n\n class Meta:\n model = StockItemTracking\n fields = [\n 'pk',\n 'item',\n 'item_detail',\n 'date',\n 'deltas',\n 'label',\n 'notes',\n 'tracking_type',\n 'user',\n 'user_detail',\n ]\n\n read_only_fields = [\n 'date',\n 'user',\n 'label',\n 'tracking_type',\n ]\n", "path": "InvenTree/stock/serializers.py"}]}
| 3,716 | 216 |
| gh_patches_debug_35947 | rasdani/github-patches | git_diff | opsdroid__opsdroid-1099 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invert constraints
I can imagine situations where it would be useful to be able to invert constraints so that they do the opposite of what they are designed to do.
```python
from opsdroid.skill import Skill
from opsdroid.matchers import match_regex
from opsdroid.constraints import constrain_users
class MySkill(Skill):
@match_regex(r'hi')
@constrain_users(['alice', 'bob'], invert=True)
async def hello(self, message):
"""Says 'Hey' to anyone EXCEPT 'alice' and 'bob'."""
await message.respond('Hey')
```
</issue>
<code>
[start of opsdroid/constraints.py]
1 """Decorator functions to use when creating skill modules.
2
3 These decorators are for specifying when a skill should not be called despite
4 having a matcher which matches the current message.
5 """
6
7 import logging
8
9 from opsdroid.helper import add_skill_attributes
10
11
12 _LOGGER = logging.getLogger(__name__)
13
14
15 def constrain_rooms(rooms):
16 """Return room constraint decorator."""
17
18 def constraint_decorator(func):
19 """Add room constraint to skill."""
20
21 def constraint_callback(message, rooms=rooms):
22 """Check if the room is correct."""
23 return message.target in rooms
24
25 func = add_skill_attributes(func)
26 func.constraints.append(constraint_callback)
27 return func
28
29 return constraint_decorator
30
31
32 def constrain_users(users):
33 """Return user constraint decorator."""
34
35 def constraint_decorator(func):
36 """Add user constraint to skill."""
37
38 def constraint_callback(message, users=users):
39 """Check if the user is correct."""
40 return message.user in users
41
42 func = add_skill_attributes(func)
43 func.constraints.append(constraint_callback)
44 return func
45
46 return constraint_decorator
47
48
49 def constrain_connectors(connectors):
50 """Return connector constraint decorator."""
51
52 def constraint_decorator(func):
53 """Add connectors constraint to skill."""
54
55 def constraint_callback(message, connectors=connectors):
56 """Check if the connectors is correct."""
57 return message.connector and (message.connector.name in connectors)
58
59 func = add_skill_attributes(func)
60 func.constraints.append(constraint_callback)
61 return func
62
63 return constraint_decorator
64
[end of opsdroid/constraints.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opsdroid/constraints.py b/opsdroid/constraints.py
--- a/opsdroid/constraints.py
+++ b/opsdroid/constraints.py
@@ -5,6 +5,7 @@
"""
import logging
+from functools import wraps
from opsdroid.helper import add_skill_attributes
@@ -12,7 +13,17 @@
_LOGGER = logging.getLogger(__name__)
-def constrain_rooms(rooms):
+def invert_wrapper(func):
+ """Inverts the result of a function."""
+
+ @wraps(func)
+ def inverted_func(*args, **kwargs):
+ return not func(*args, **kwargs)
+
+ return inverted_func
+
+
+def constrain_rooms(rooms, invert=False):
"""Return room constraint decorator."""
def constraint_decorator(func):
@@ -23,13 +34,15 @@
return message.target in rooms
func = add_skill_attributes(func)
+ if invert:
+ constraint_callback = invert_wrapper(constraint_callback)
func.constraints.append(constraint_callback)
return func
return constraint_decorator
-def constrain_users(users):
+def constrain_users(users, invert=False):
"""Return user constraint decorator."""
def constraint_decorator(func):
@@ -40,13 +53,15 @@
return message.user in users
func = add_skill_attributes(func)
+ if invert:
+ constraint_callback = invert_wrapper(constraint_callback)
func.constraints.append(constraint_callback)
return func
return constraint_decorator
-def constrain_connectors(connectors):
+def constrain_connectors(connectors, invert=False):
"""Return connector constraint decorator."""
def constraint_decorator(func):
@@ -57,6 +72,8 @@
return message.connector and (message.connector.name in connectors)
func = add_skill_attributes(func)
+ if invert:
+ constraint_callback = invert_wrapper(constraint_callback)
func.constraints.append(constraint_callback)
return func
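A standalone sketch of the inversion helper introduced in the diff above, outside opsdroid: `is_known_user` is an invented predicate used purely to show that `invert_wrapper` negates a constraint callback while `functools.wraps` preserves its metadata.

```python
from functools import wraps


def invert_wrapper(func):
    """Invert the boolean result of a constraint callback."""

    @wraps(func)
    def inverted_func(*args, **kwargs):
        return not func(*args, **kwargs)

    return inverted_func


def is_known_user(message_user, users=("alice", "bob")):
    """Plain constraint: allow only the listed users."""
    return message_user in users


blocked_instead = invert_wrapper(is_known_user)

print(is_known_user("alice"))    # True  -> skill would run for alice
print(blocked_instead("alice"))  # False -> with invert=True the skill skips alice
print(blocked_instead("carol"))  # True  -> everyone else triggers the skill
print(blocked_instead.__name__)  # 'is_known_user', kept by functools.wraps
```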
|
{"golden_diff": "diff --git a/opsdroid/constraints.py b/opsdroid/constraints.py\n--- a/opsdroid/constraints.py\n+++ b/opsdroid/constraints.py\n@@ -5,6 +5,7 @@\n \"\"\"\n \n import logging\n+from functools import wraps\n \n from opsdroid.helper import add_skill_attributes\n \n@@ -12,7 +13,17 @@\n _LOGGER = logging.getLogger(__name__)\n \n \n-def constrain_rooms(rooms):\n+def invert_wrapper(func):\n+ \"\"\"Inverts the result of a function.\"\"\"\n+\n+ @wraps(func)\n+ def inverted_func(*args, **kwargs):\n+ return not func(*args, **kwargs)\n+\n+ return inverted_func\n+\n+\n+def constrain_rooms(rooms, invert=False):\n \"\"\"Return room constraint decorator.\"\"\"\n \n def constraint_decorator(func):\n@@ -23,13 +34,15 @@\n return message.target in rooms\n \n func = add_skill_attributes(func)\n+ if invert:\n+ constraint_callback = invert_wrapper(constraint_callback)\n func.constraints.append(constraint_callback)\n return func\n \n return constraint_decorator\n \n \n-def constrain_users(users):\n+def constrain_users(users, invert=False):\n \"\"\"Return user constraint decorator.\"\"\"\n \n def constraint_decorator(func):\n@@ -40,13 +53,15 @@\n return message.user in users\n \n func = add_skill_attributes(func)\n+ if invert:\n+ constraint_callback = invert_wrapper(constraint_callback)\n func.constraints.append(constraint_callback)\n return func\n \n return constraint_decorator\n \n \n-def constrain_connectors(connectors):\n+def constrain_connectors(connectors, invert=False):\n \"\"\"Return connector constraint decorator.\"\"\"\n \n def constraint_decorator(func):\n@@ -57,6 +72,8 @@\n return message.connector and (message.connector.name in connectors)\n \n func = add_skill_attributes(func)\n+ if invert:\n+ constraint_callback = invert_wrapper(constraint_callback)\n func.constraints.append(constraint_callback)\n return func\n", "issue": "Invert constraints\nI can imagine situations where it would be useful to be able to invert constraints so that they do the opposite of what they are designed to do.\r\n\r\n```python\r\nfrom opsdroid.skill import Skill\r\nfrom opsdroid.matchers import match_regex\r\nfrom opsdroid.constraints import constrain_users\r\n\r\nclass MySkill(Skill):\r\n\r\n @match_regex(r'hi')\r\n @constrain_users(['alice', 'bob'], invert=True)\r\n async def hello(self, message):\r\n \"\"\"Says 'Hey' to anyone EXCEPT 'alice' and 'bob'.\"\"\"\r\n await message.respond('Hey')\r\n```\n", "before_files": [{"content": "\"\"\"Decorator functions to use when creating skill modules.\n\nThese decorators are for specifying when a skill should not be called despite\nhaving a matcher which matches the current message.\n\"\"\"\n\nimport logging\n\nfrom opsdroid.helper import add_skill_attributes\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\ndef constrain_rooms(rooms):\n \"\"\"Return room constraint decorator.\"\"\"\n\n def constraint_decorator(func):\n \"\"\"Add room constraint to skill.\"\"\"\n\n def constraint_callback(message, rooms=rooms):\n \"\"\"Check if the room is correct.\"\"\"\n return message.target in rooms\n\n func = add_skill_attributes(func)\n func.constraints.append(constraint_callback)\n return func\n\n return constraint_decorator\n\n\ndef constrain_users(users):\n \"\"\"Return user constraint decorator.\"\"\"\n\n def constraint_decorator(func):\n \"\"\"Add user constraint to skill.\"\"\"\n\n def constraint_callback(message, users=users):\n \"\"\"Check if the user is correct.\"\"\"\n return message.user in users\n\n func = add_skill_attributes(func)\n 
func.constraints.append(constraint_callback)\n return func\n\n return constraint_decorator\n\n\ndef constrain_connectors(connectors):\n \"\"\"Return connector constraint decorator.\"\"\"\n\n def constraint_decorator(func):\n \"\"\"Add connectors constraint to skill.\"\"\"\n\n def constraint_callback(message, connectors=connectors):\n \"\"\"Check if the connectors is correct.\"\"\"\n return message.connector and (message.connector.name in connectors)\n\n func = add_skill_attributes(func)\n func.constraints.append(constraint_callback)\n return func\n\n return constraint_decorator\n", "path": "opsdroid/constraints.py"}]}
| 1,102 | 426 |
| gh_patches_debug_13732 | rasdani/github-patches | git_diff | pytorch__pytorch-3113 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
print(torch.DoubleTensor(1)) # ZeroDivisionError: float division by zero
on print(torch.DoubleTensor(1)) I got ZeroDivisionError: float division by zero
but torch.DoubleTensor(0)) or torch.DoubleTensor(2)) work just fine.
I work in Jupyter notebook pytorch 0.1.12
</issue>
<code>
[start of torch/_tensor_str.py]
1 import math
2 import torch
3 from functools import reduce
4 from ._utils import _range
5
6
7 class __PrinterOptions(object):
8 precision = 4
9 threshold = 1000
10 edgeitems = 3
11 linewidth = 80
12
13
14 PRINT_OPTS = __PrinterOptions()
15 SCALE_FORMAT = '{:.5e} *\n'
16
17
18 # We could use **kwargs, but this will give better docs
19 def set_printoptions(
20 precision=None,
21 threshold=None,
22 edgeitems=None,
23 linewidth=None,
24 profile=None,
25 ):
26 """Set options for printing. Items shamelessly taken from Numpy
27
28 Args:
29 precision: Number of digits of precision for floating point output
30 (default 8).
31 threshold: Total number of array elements which trigger summarization
32 rather than full repr (default 1000).
33 edgeitems: Number of array items in summary at beginning and end of
34 each dimension (default 3).
35 linewidth: The number of characters per line for the purpose of
36 inserting line breaks (default 80). Thresholded matricies will
37 ignore this parameter.
38 profile: Sane defaults for pretty printing. Can override with any of
39 the above options. (default, short, full)
40 """
41 if profile is not None:
42 if profile == "default":
43 PRINT_OPTS.precision = 4
44 PRINT_OPTS.threshold = 1000
45 PRINT_OPTS.edgeitems = 3
46 PRINT_OPTS.linewidth = 80
47 elif profile == "short":
48 PRINT_OPTS.precision = 2
49 PRINT_OPTS.threshold = 1000
50 PRINT_OPTS.edgeitems = 2
51 PRINT_OPTS.linewidth = 80
52 elif profile == "full":
53 PRINT_OPTS.precision = 4
54 PRINT_OPTS.threshold = float('inf')
55 PRINT_OPTS.edgeitems = 3
56 PRINT_OPTS.linewidth = 80
57
58 if precision is not None:
59 PRINT_OPTS.precision = precision
60 if threshold is not None:
61 PRINT_OPTS.threshold = threshold
62 if edgeitems is not None:
63 PRINT_OPTS.edgeitems = edgeitems
64 if linewidth is not None:
65 PRINT_OPTS.linewidth = linewidth
66
67
68 def _number_format(tensor, min_sz=-1):
69 min_sz = max(min_sz, 2)
70 tensor = torch.DoubleTensor(tensor.size()).copy_(tensor).abs_().view(tensor.nelement())
71
72 pos_inf_mask = tensor.eq(float('inf'))
73 neg_inf_mask = tensor.eq(float('-inf'))
74 nan_mask = tensor.ne(tensor)
75 invalid_value_mask = pos_inf_mask + neg_inf_mask + nan_mask
76 if invalid_value_mask.all():
77 example_value = 0
78 else:
79 example_value = tensor[invalid_value_mask.eq(0)][0]
80 tensor[invalid_value_mask] = example_value
81 if invalid_value_mask.any():
82 min_sz = max(min_sz, 3)
83
84 int_mode = True
85 # TODO: use fmod?
86 for value in tensor:
87 if value != math.ceil(value):
88 int_mode = False
89 break
90
91 exp_min = tensor.min()
92 if exp_min != 0:
93 exp_min = math.floor(math.log10(exp_min)) + 1
94 else:
95 exp_min = 1
96 exp_max = tensor.max()
97 if exp_max != 0:
98 exp_max = math.floor(math.log10(exp_max)) + 1
99 else:
100 exp_max = 1
101
102 scale = 1
103 exp_max = int(exp_max)
104 prec = PRINT_OPTS.precision
105 if int_mode:
106 if exp_max > prec + 1:
107 format = '{{:11.{}e}}'.format(prec)
108 sz = max(min_sz, 7 + prec)
109 else:
110 sz = max(min_sz, exp_max + 1)
111 format = '{:' + str(sz) + '.0f}'
112 else:
113 if exp_max - exp_min > prec:
114 sz = 7 + prec
115 if abs(exp_max) > 99 or abs(exp_min) > 99:
116 sz = sz + 1
117 sz = max(min_sz, sz)
118 format = '{{:{}.{}e}}'.format(sz, prec)
119 else:
120 if exp_max > prec + 1 or exp_max < 0:
121 sz = max(min_sz, 7)
122 scale = math.pow(10, exp_max - 1)
123 else:
124 if exp_max == 0:
125 sz = 7
126 else:
127 sz = exp_max + 6
128 sz = max(min_sz, sz)
129 format = '{{:{}.{}f}}'.format(sz, prec)
130 return format, scale, sz
131
132
133 def _tensor_str(self):
134 n = PRINT_OPTS.edgeitems
135 has_hdots = self.size()[-1] > 2 * n
136 has_vdots = self.size()[-2] > 2 * n
137 print_full_mat = not has_hdots and not has_vdots
138 formatter = _number_format(self, min_sz=3 if not print_full_mat else 0)
139 print_dots = self.numel() >= PRINT_OPTS.threshold
140
141 dim_sz = max(2, max(len(str(x)) for x in self.size()))
142 dim_fmt = "{:^" + str(dim_sz) + "}"
143 dot_fmt = u"{:^" + str(dim_sz + 1) + "}"
144
145 counter_dim = self.ndimension() - 2
146 counter = torch.LongStorage(counter_dim).fill_(0)
147 counter[counter.size() - 1] = -1
148 finished = False
149 strt = ''
150 while True:
151 nrestarted = [False for i in counter]
152 nskipped = [False for i in counter]
153 for i in _range(counter_dim - 1, -1, -1):
154 counter[i] += 1
155 if print_dots and counter[i] == n and self.size(i) > 2 * n:
156 counter[i] = self.size(i) - n
157 nskipped[i] = True
158 if counter[i] == self.size(i):
159 if i == 0:
160 finished = True
161 counter[i] = 0
162 nrestarted[i] = True
163 else:
164 break
165 if finished:
166 break
167 elif print_dots:
168 if any(nskipped):
169 for hdot in nskipped:
170 strt += dot_fmt.format('...') if hdot \
171 else dot_fmt.format('')
172 strt += '\n'
173 if any(nrestarted):
174 strt += ' '
175 for vdot in nrestarted:
176 strt += dot_fmt.format(u'\u22EE' if vdot else '')
177 strt += '\n'
178 if strt != '':
179 strt += '\n'
180 strt += '({},.,.) = \n'.format(
181 ','.join(dim_fmt.format(i) for i in counter))
182 submatrix = reduce(lambda t, i: t.select(0, i), counter, self)
183 strt += _matrix_str(submatrix, ' ', formatter, print_dots)
184 return strt
185
186
187 def __repr_row(row, indent, fmt, scale, sz, truncate=None):
188 if truncate is not None:
189 dotfmt = " {:^5} "
190 return (indent +
191 ' '.join(fmt.format(val / scale) for val in row[:truncate]) +
192 dotfmt.format('...') +
193 ' '.join(fmt.format(val / scale) for val in row[-truncate:]) +
194 '\n')
195 else:
196 return indent + ' '.join(fmt.format(val / scale) for val in row) + '\n'
197
198
199 def _matrix_str(self, indent='', formatter=None, force_truncate=False):
200 n = PRINT_OPTS.edgeitems
201 has_hdots = self.size(1) > 2 * n
202 has_vdots = self.size(0) > 2 * n
203 print_full_mat = not has_hdots and not has_vdots
204
205 if formatter is None:
206 fmt, scale, sz = _number_format(self,
207 min_sz=5 if not print_full_mat else 0)
208 else:
209 fmt, scale, sz = formatter
210 nColumnPerLine = int(math.floor((PRINT_OPTS.linewidth - len(indent)) / (sz + 1)))
211 strt = ''
212 firstColumn = 0
213
214 if not force_truncate and \
215 (self.numel() < PRINT_OPTS.threshold or print_full_mat):
216 while firstColumn < self.size(1):
217 lastColumn = min(firstColumn + nColumnPerLine - 1, self.size(1) - 1)
218 if nColumnPerLine < self.size(1):
219 strt += '\n' if firstColumn != 1 else ''
220 strt += 'Columns {} to {} \n{}'.format(
221 firstColumn, lastColumn, indent)
222 if scale != 1:
223 strt += SCALE_FORMAT.format(scale)
224 for l in _range(self.size(0)):
225 strt += indent + (' ' if scale != 1 else '')
226 row_slice = self[l, firstColumn:lastColumn + 1]
227 strt += ' '.join(fmt.format(val / scale) for val in row_slice)
228 strt += '\n'
229 firstColumn = lastColumn + 1
230 else:
231 if scale != 1:
232 strt += SCALE_FORMAT.format(scale)
233 if has_vdots and has_hdots:
234 vdotfmt = "{:^" + str((sz + 1) * n - 1) + "}"
235 ddotfmt = u"{:^5}"
236 for row in self[:n]:
237 strt += __repr_row(row, indent, fmt, scale, sz, n)
238 strt += indent + ' '.join([vdotfmt.format('...'),
239 ddotfmt.format(u'\u22F1'),
240 vdotfmt.format('...')]) + "\n"
241 for row in self[-n:]:
242 strt += __repr_row(row, indent, fmt, scale, sz, n)
243 elif not has_vdots and has_hdots:
244 for row in self:
245 strt += __repr_row(row, indent, fmt, scale, sz, n)
246 elif has_vdots and not has_hdots:
247 vdotfmt = u"{:^" + \
248 str(len(__repr_row(self[0], '', fmt, scale, sz))) + \
249 "}\n"
250 for row in self[:n]:
251 strt += __repr_row(row, indent, fmt, scale, sz)
252 strt += vdotfmt.format(u'\u22EE')
253 for row in self[-n:]:
254 strt += __repr_row(row, indent, fmt, scale, sz)
255 else:
256 for row in self:
257 strt += __repr_row(row, indent, fmt, scale, sz)
258 return strt
259
260
261 def _vector_str(self):
262 fmt, scale, sz = _number_format(self)
263 strt = ''
264 ident = ''
265 n = PRINT_OPTS.edgeitems
266 dotfmt = u"{:^" + str(sz) + "}\n"
267 if scale != 1:
268 strt += SCALE_FORMAT.format(scale)
269 ident = ' '
270 if self.numel() < PRINT_OPTS.threshold:
271 return (strt +
272 '\n'.join(ident + fmt.format(val / scale) for val in self) +
273 '\n')
274 else:
275 return (strt +
276 '\n'.join(ident + fmt.format(val / scale) for val in self[:n]) +
277 '\n' + (ident + dotfmt.format(u"\u22EE")) +
278 '\n'.join(ident + fmt.format(val / scale) for val in self[-n:]) +
279 '\n')
280
281
282 def _str(self):
283 if self.ndimension() == 0:
284 return '[{} with no dimension]\n'.format(torch.typename(self))
285 elif self.ndimension() == 1:
286 strt = _vector_str(self)
287 elif self.ndimension() == 2:
288 strt = _matrix_str(self)
289 else:
290 strt = _tensor_str(self)
291
292 size_str = 'x'.join(str(size) for size in self.size())
293 device_str = '' if not self.is_cuda else \
294 ' (GPU {})'.format(self.get_device())
295 strt += '[{} of size {}{}]\n'.format(torch.typename(self),
296 size_str, device_str)
297 return '\n' + strt
298
[end of torch/_tensor_str.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torch/_tensor_str.py b/torch/_tensor_str.py
--- a/torch/_tensor_str.py
+++ b/torch/_tensor_str.py
@@ -2,6 +2,10 @@
import torch
from functools import reduce
from ._utils import _range
+from sys import float_info
+
+
+__MIN_LOG_SCALE = math.ceil(math.log(float_info.min * float_info.epsilon, 10))
class __PrinterOptions(object):
@@ -119,7 +123,7 @@
else:
if exp_max > prec + 1 or exp_max < 0:
sz = max(min_sz, 7)
- scale = math.pow(10, exp_max - 1)
+ scale = math.pow(10, max(exp_max - 1, __MIN_LOG_SCALE))
else:
if exp_max == 0:
sz = 7
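A pure-Python illustration of one plausible way the reported crash arises and why the clamp in the diff above helps; it mirrors the exponent arithmetic in `_number_format` rather than the tensor code itself. An uninitialized element of `torch.DoubleTensor(1)` can be a tiny denormal, whose base-10 exponent drives `scale` to exactly `0.0`, so the later `val / scale` raises `ZeroDivisionError`.

```python
import math
from sys import float_info

value = 5e-324                                   # smallest positive denormal double
exp_max = math.floor(math.log10(value)) + 1      # -323
unclamped_scale = math.pow(10, exp_max - 1)      # 10**-324 underflows to 0.0
print(unclamped_scale)                           # 0.0 -> value / unclamped_scale would raise ZeroDivisionError

min_log_scale = math.ceil(math.log(float_info.min * float_info.epsilon, 10))
clamped_scale = math.pow(10, max(exp_max - 1, min_log_scale))
print(clamped_scale > 0)                         # True -> division stays safe with the patch applied
```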
|
{"golden_diff": "diff --git a/torch/_tensor_str.py b/torch/_tensor_str.py\n--- a/torch/_tensor_str.py\n+++ b/torch/_tensor_str.py\n@@ -2,6 +2,10 @@\n import torch\n from functools import reduce\n from ._utils import _range\n+from sys import float_info\n+\n+\n+__MIN_LOG_SCALE = math.ceil(math.log(float_info.min * float_info.epsilon, 10))\n \n \n class __PrinterOptions(object):\n@@ -119,7 +123,7 @@\n else:\n if exp_max > prec + 1 or exp_max < 0:\n sz = max(min_sz, 7)\n- scale = math.pow(10, exp_max - 1)\n+ scale = math.pow(10, max(exp_max - 1, __MIN_LOG_SCALE))\n else:\n if exp_max == 0:\n sz = 7\n", "issue": "print(torch.DoubleTensor(1)) # ZeroDivisionError: float division by zero\non print(torch.DoubleTensor(1)) I got ZeroDivisionError: float division by zero\r\nbut torch.DoubleTensor(0)) or torch.DoubleTensor(2)) work just fine.\r\nI work in Jupyter notebook pytorch 0.1.12\n", "before_files": [{"content": "import math\nimport torch\nfrom functools import reduce\nfrom ._utils import _range\n\n\nclass __PrinterOptions(object):\n precision = 4\n threshold = 1000\n edgeitems = 3\n linewidth = 80\n\n\nPRINT_OPTS = __PrinterOptions()\nSCALE_FORMAT = '{:.5e} *\\n'\n\n\n# We could use **kwargs, but this will give better docs\ndef set_printoptions(\n precision=None,\n threshold=None,\n edgeitems=None,\n linewidth=None,\n profile=None,\n):\n \"\"\"Set options for printing. Items shamelessly taken from Numpy\n\n Args:\n precision: Number of digits of precision for floating point output\n (default 8).\n threshold: Total number of array elements which trigger summarization\n rather than full repr (default 1000).\n edgeitems: Number of array items in summary at beginning and end of\n each dimension (default 3).\n linewidth: The number of characters per line for the purpose of\n inserting line breaks (default 80). Thresholded matricies will\n ignore this parameter.\n profile: Sane defaults for pretty printing. Can override with any of\n the above options. 
(default, short, full)\n \"\"\"\n if profile is not None:\n if profile == \"default\":\n PRINT_OPTS.precision = 4\n PRINT_OPTS.threshold = 1000\n PRINT_OPTS.edgeitems = 3\n PRINT_OPTS.linewidth = 80\n elif profile == \"short\":\n PRINT_OPTS.precision = 2\n PRINT_OPTS.threshold = 1000\n PRINT_OPTS.edgeitems = 2\n PRINT_OPTS.linewidth = 80\n elif profile == \"full\":\n PRINT_OPTS.precision = 4\n PRINT_OPTS.threshold = float('inf')\n PRINT_OPTS.edgeitems = 3\n PRINT_OPTS.linewidth = 80\n\n if precision is not None:\n PRINT_OPTS.precision = precision\n if threshold is not None:\n PRINT_OPTS.threshold = threshold\n if edgeitems is not None:\n PRINT_OPTS.edgeitems = edgeitems\n if linewidth is not None:\n PRINT_OPTS.linewidth = linewidth\n\n\ndef _number_format(tensor, min_sz=-1):\n min_sz = max(min_sz, 2)\n tensor = torch.DoubleTensor(tensor.size()).copy_(tensor).abs_().view(tensor.nelement())\n\n pos_inf_mask = tensor.eq(float('inf'))\n neg_inf_mask = tensor.eq(float('-inf'))\n nan_mask = tensor.ne(tensor)\n invalid_value_mask = pos_inf_mask + neg_inf_mask + nan_mask\n if invalid_value_mask.all():\n example_value = 0\n else:\n example_value = tensor[invalid_value_mask.eq(0)][0]\n tensor[invalid_value_mask] = example_value\n if invalid_value_mask.any():\n min_sz = max(min_sz, 3)\n\n int_mode = True\n # TODO: use fmod?\n for value in tensor:\n if value != math.ceil(value):\n int_mode = False\n break\n\n exp_min = tensor.min()\n if exp_min != 0:\n exp_min = math.floor(math.log10(exp_min)) + 1\n else:\n exp_min = 1\n exp_max = tensor.max()\n if exp_max != 0:\n exp_max = math.floor(math.log10(exp_max)) + 1\n else:\n exp_max = 1\n\n scale = 1\n exp_max = int(exp_max)\n prec = PRINT_OPTS.precision\n if int_mode:\n if exp_max > prec + 1:\n format = '{{:11.{}e}}'.format(prec)\n sz = max(min_sz, 7 + prec)\n else:\n sz = max(min_sz, exp_max + 1)\n format = '{:' + str(sz) + '.0f}'\n else:\n if exp_max - exp_min > prec:\n sz = 7 + prec\n if abs(exp_max) > 99 or abs(exp_min) > 99:\n sz = sz + 1\n sz = max(min_sz, sz)\n format = '{{:{}.{}e}}'.format(sz, prec)\n else:\n if exp_max > prec + 1 or exp_max < 0:\n sz = max(min_sz, 7)\n scale = math.pow(10, exp_max - 1)\n else:\n if exp_max == 0:\n sz = 7\n else:\n sz = exp_max + 6\n sz = max(min_sz, sz)\n format = '{{:{}.{}f}}'.format(sz, prec)\n return format, scale, sz\n\n\ndef _tensor_str(self):\n n = PRINT_OPTS.edgeitems\n has_hdots = self.size()[-1] > 2 * n\n has_vdots = self.size()[-2] > 2 * n\n print_full_mat = not has_hdots and not has_vdots\n formatter = _number_format(self, min_sz=3 if not print_full_mat else 0)\n print_dots = self.numel() >= PRINT_OPTS.threshold\n\n dim_sz = max(2, max(len(str(x)) for x in self.size()))\n dim_fmt = \"{:^\" + str(dim_sz) + \"}\"\n dot_fmt = u\"{:^\" + str(dim_sz + 1) + \"}\"\n\n counter_dim = self.ndimension() - 2\n counter = torch.LongStorage(counter_dim).fill_(0)\n counter[counter.size() - 1] = -1\n finished = False\n strt = ''\n while True:\n nrestarted = [False for i in counter]\n nskipped = [False for i in counter]\n for i in _range(counter_dim - 1, -1, -1):\n counter[i] += 1\n if print_dots and counter[i] == n and self.size(i) > 2 * n:\n counter[i] = self.size(i) - n\n nskipped[i] = True\n if counter[i] == self.size(i):\n if i == 0:\n finished = True\n counter[i] = 0\n nrestarted[i] = True\n else:\n break\n if finished:\n break\n elif print_dots:\n if any(nskipped):\n for hdot in nskipped:\n strt += dot_fmt.format('...') if hdot \\\n else dot_fmt.format('')\n strt += '\\n'\n if any(nrestarted):\n strt += ' '\n for 
vdot in nrestarted:\n strt += dot_fmt.format(u'\\u22EE' if vdot else '')\n strt += '\\n'\n if strt != '':\n strt += '\\n'\n strt += '({},.,.) = \\n'.format(\n ','.join(dim_fmt.format(i) for i in counter))\n submatrix = reduce(lambda t, i: t.select(0, i), counter, self)\n strt += _matrix_str(submatrix, ' ', formatter, print_dots)\n return strt\n\n\ndef __repr_row(row, indent, fmt, scale, sz, truncate=None):\n if truncate is not None:\n dotfmt = \" {:^5} \"\n return (indent +\n ' '.join(fmt.format(val / scale) for val in row[:truncate]) +\n dotfmt.format('...') +\n ' '.join(fmt.format(val / scale) for val in row[-truncate:]) +\n '\\n')\n else:\n return indent + ' '.join(fmt.format(val / scale) for val in row) + '\\n'\n\n\ndef _matrix_str(self, indent='', formatter=None, force_truncate=False):\n n = PRINT_OPTS.edgeitems\n has_hdots = self.size(1) > 2 * n\n has_vdots = self.size(0) > 2 * n\n print_full_mat = not has_hdots and not has_vdots\n\n if formatter is None:\n fmt, scale, sz = _number_format(self,\n min_sz=5 if not print_full_mat else 0)\n else:\n fmt, scale, sz = formatter\n nColumnPerLine = int(math.floor((PRINT_OPTS.linewidth - len(indent)) / (sz + 1)))\n strt = ''\n firstColumn = 0\n\n if not force_truncate and \\\n (self.numel() < PRINT_OPTS.threshold or print_full_mat):\n while firstColumn < self.size(1):\n lastColumn = min(firstColumn + nColumnPerLine - 1, self.size(1) - 1)\n if nColumnPerLine < self.size(1):\n strt += '\\n' if firstColumn != 1 else ''\n strt += 'Columns {} to {} \\n{}'.format(\n firstColumn, lastColumn, indent)\n if scale != 1:\n strt += SCALE_FORMAT.format(scale)\n for l in _range(self.size(0)):\n strt += indent + (' ' if scale != 1 else '')\n row_slice = self[l, firstColumn:lastColumn + 1]\n strt += ' '.join(fmt.format(val / scale) for val in row_slice)\n strt += '\\n'\n firstColumn = lastColumn + 1\n else:\n if scale != 1:\n strt += SCALE_FORMAT.format(scale)\n if has_vdots and has_hdots:\n vdotfmt = \"{:^\" + str((sz + 1) * n - 1) + \"}\"\n ddotfmt = u\"{:^5}\"\n for row in self[:n]:\n strt += __repr_row(row, indent, fmt, scale, sz, n)\n strt += indent + ' '.join([vdotfmt.format('...'),\n ddotfmt.format(u'\\u22F1'),\n vdotfmt.format('...')]) + \"\\n\"\n for row in self[-n:]:\n strt += __repr_row(row, indent, fmt, scale, sz, n)\n elif not has_vdots and has_hdots:\n for row in self:\n strt += __repr_row(row, indent, fmt, scale, sz, n)\n elif has_vdots and not has_hdots:\n vdotfmt = u\"{:^\" + \\\n str(len(__repr_row(self[0], '', fmt, scale, sz))) + \\\n \"}\\n\"\n for row in self[:n]:\n strt += __repr_row(row, indent, fmt, scale, sz)\n strt += vdotfmt.format(u'\\u22EE')\n for row in self[-n:]:\n strt += __repr_row(row, indent, fmt, scale, sz)\n else:\n for row in self:\n strt += __repr_row(row, indent, fmt, scale, sz)\n return strt\n\n\ndef _vector_str(self):\n fmt, scale, sz = _number_format(self)\n strt = ''\n ident = ''\n n = PRINT_OPTS.edgeitems\n dotfmt = u\"{:^\" + str(sz) + \"}\\n\"\n if scale != 1:\n strt += SCALE_FORMAT.format(scale)\n ident = ' '\n if self.numel() < PRINT_OPTS.threshold:\n return (strt +\n '\\n'.join(ident + fmt.format(val / scale) for val in self) +\n '\\n')\n else:\n return (strt +\n '\\n'.join(ident + fmt.format(val / scale) for val in self[:n]) +\n '\\n' + (ident + dotfmt.format(u\"\\u22EE\")) +\n '\\n'.join(ident + fmt.format(val / scale) for val in self[-n:]) +\n '\\n')\n\n\ndef _str(self):\n if self.ndimension() == 0:\n return '[{} with no dimension]\\n'.format(torch.typename(self))\n elif self.ndimension() == 1:\n strt = 
_vector_str(self)\n elif self.ndimension() == 2:\n strt = _matrix_str(self)\n else:\n strt = _tensor_str(self)\n\n size_str = 'x'.join(str(size) for size in self.size())\n device_str = '' if not self.is_cuda else \\\n ' (GPU {})'.format(self.get_device())\n strt += '[{} of size {}{}]\\n'.format(torch.typename(self),\n size_str, device_str)\n return '\\n' + strt\n", "path": "torch/_tensor_str.py"}]}
| 4,065 | 198 |
gh_patches_debug_24278
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-783
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
if a person browsing the site has the URL, they can view user detail pages
while investigating #479, I discovered that the detail pages for users who are not indexers are visible to not-logged-in users if you know the URL (e.g. http://206.12.88.113/user/1327). User detail pages should only be accessible to not-logged-in visitors if the user's `is-indexer` field is set to `True` (e.g. http://206.12.88.113/user/613 is (correctly) visible to everyone).
This is not enormously urgent - I don't believe there are any links to the detail pages of non-indexer users displayed anywhere on the site, so you have to know the URL to visit these pages. But there's the potential of people's information (i.e. name and institution) being divulged even if they're not officially affiliated with CantusDB. And these pages should not be visible to anonymous users anyways.
</issue>
<code>
[start of django/cantusdb_project/main_app/views/user.py]
1 from django.urls import reverse
2 from django.db.models.aggregates import Count
3 from django.views.generic import DetailView
4 from django.contrib.auth import get_user_model, login as auth_login
5 from main_app.models import Source
6 from django.views.generic import ListView
7 from django.contrib.auth.mixins import LoginRequiredMixin
8 from django.db.models import Q
9 from django.core.paginator import Paginator
10 from django.contrib.auth.views import LogoutView, LoginView
11 from django.contrib import messages
12 from extra_views import SearchableListMixin
13 from django.http import HttpResponseRedirect
14
15
16 class UserDetailView(DetailView):
17 """Detail view for User model
18
19 Accessed by /users/<pk>
20 """
21
22 model = get_user_model()
23 context_object_name = "user"
24 template_name = "user_detail.html"
25
26 def get_context_data(self, **kwargs):
27 context = super().get_context_data(**kwargs)
28 user = self.get_object()
29 display_unpublished = self.request.user.is_authenticated
30 sort_by_siglum = lambda source: source.siglum
31 if display_unpublished:
32 context["inventoried_sources"] = sorted(
33 user.inventoried_sources.all(), key=sort_by_siglum
34 )
35 context["full_text_sources"] = sorted(
36 user.entered_full_text_for_sources.all(), key=sort_by_siglum
37 )
38 context["melody_sources"] = sorted(
39 user.entered_melody_for_sources.all(), key=sort_by_siglum
40 )
41 context["proofread_sources"] = sorted(
42 user.proofread_sources.all(), key=sort_by_siglum
43 )
44 context["edited_sources"] = sorted(
45 user.edited_sources.all(), key=sort_by_siglum
46 )
47 else:
48 context["inventoried_sources"] = sorted(
49 user.inventoried_sources.all().filter(published=True),
50 key=sort_by_siglum,
51 )
52 context["full_text_sources"] = sorted(
53 user.entered_full_text_for_sources.all().filter(published=True),
54 key=sort_by_siglum,
55 )
56 context["melody_sources"] = sorted(
57 user.entered_melody_for_sources.all().filter(published=True),
58 key=sort_by_siglum,
59 )
60 context["proofread_sources"] = sorted(
61 user.proofread_sources.all().filter(published=True), key=sort_by_siglum
62 )
63 context["edited_sources"] = sorted(
64 user.edited_sources.all().filter(published=True), key=sort_by_siglum
65 )
66
67 return context
68
69
70 class UserSourceListView(LoginRequiredMixin, ListView):
71 model = Source
72 context_object_name = "sources"
73 template_name = "user_source_list.html"
74 paginate_by = 100
75
76 def get_queryset(self):
77 return (
78 Source.objects.filter(
79 Q(current_editors=self.request.user)
80 | Q(created_by=self.request.user)
81 # | Q(inventoried_by=self.request.user)
82 # | Q(full_text_entered_by=self.request.user)
83 # | Q(melodies_entered_by=self.request.user)
84 # | Q(proofreaders=self.request.user)
85 # | Q(other_editors=self.request.user)
86 )
87 .order_by("-date_created")
88 .distinct()
89 )
90
91 def get_context_data(self, **kwargs):
92 context = super().get_context_data(**kwargs)
93
94 user_created_sources = (
95 Source.objects.filter(created_by=self.request.user)
96 .order_by("-date_created")
97 .distinct()
98 )
99 paginator = Paginator(user_created_sources, 10)
100 page_number = self.request.GET.get("page2")
101 page_obj = paginator.get_page(page_number)
102
103 context["user_created_sources_page_obj"] = page_obj
104 return context
105
106
107 class CustomLogoutView(LogoutView):
108 def get_next_page(self):
109 next_page = super().get_next_page()
110 messages.success(self.request, "You have successfully logged out!")
111 return next_page
112
113
114 class UserListView(LoginRequiredMixin, SearchableListMixin, ListView):
115 """A list of all User objects
116
117 This view is equivalent to the user list view on the old Cantus.
118 This includes all User objects on the old Cantus.
119 When passed a `?q=<query>` argument in the GET request, it will filter users
120 based on the fields defined in `search_fields` with the `icontains` lookup.
121
122 Accessed by /users/
123 """
124
125 model = get_user_model()
126 ordering = "full_name"
127 search_fields = ["full_name", "institution", "city", "country"]
128 paginate_by = 100
129 template_name = "user_list.html"
130 context_object_name = "users"
131
132
133 class IndexerListView(SearchableListMixin, ListView):
134 """A list of User objects shown to the public
135
136 This view replaces the indexer list view on the old Cantus.
137 The indexers are considered a subset of all User objects, the subset shown to the public.
138 This includes the User objects corresponding to Indexer objects on the old Cantus.
139 When passed a `?q=<query>` argument in the GET request, it will filter users
140 based on the fields defined in `search_fields` with the `icontains` lookup.
141
142 Accessed by /indexers/
143 """
144
145 model = get_user_model()
146 ordering = "full_name"
147 search_fields = ["full_name", "institution", "city", "country"]
148 paginate_by = 100
149 template_name = "indexer_list.html"
150 context_object_name = "indexers"
151
152 def get_queryset(self):
153 all_users = super().get_queryset()
154 indexers = all_users.filter(is_indexer=True)
155 display_unpublished = self.request.user.is_authenticated
156 if display_unpublished:
157 indexers = indexers.annotate(source_count=Count("inventoried_sources"))
158 # display those who have at least one source
159 return indexers.filter(source_count__gte=1)
160 else:
161 indexers = indexers.annotate(
162 source_count=Count(
163 "inventoried_sources", filter=Q(inventoried_sources__published=True)
164 )
165 )
166 # display those who have at least one published source
167 return indexers.filter(source_count__gte=1)
168
169
170 class CustomLoginView(LoginView):
171 def form_valid(self, form):
172 auth_login(self.request, form.get_user())
173 # if the user has not yet changed the initial password that was assigned to them,
174 # redirect them to the change-password page everytime they log in
175 # with warning messages prompting them to change their password
176 if form.get_user().changed_initial_password == False:
177 return HttpResponseRedirect(reverse("change-password"))
178 return HttpResponseRedirect(self.get_success_url())
179
[end of django/cantusdb_project/main_app/views/user.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/django/cantusdb_project/main_app/views/user.py b/django/cantusdb_project/main_app/views/user.py
--- a/django/cantusdb_project/main_app/views/user.py
+++ b/django/cantusdb_project/main_app/views/user.py
@@ -11,6 +11,7 @@
from django.contrib import messages
from extra_views import SearchableListMixin
from django.http import HttpResponseRedirect
+from django.core.exceptions import PermissionDenied
class UserDetailView(DetailView):
@@ -24,9 +25,16 @@
template_name = "user_detail.html"
def get_context_data(self, **kwargs):
- context = super().get_context_data(**kwargs)
user = self.get_object()
- display_unpublished = self.request.user.is_authenticated
+ # to begin, if the person viewing the site is not logged in,
+ # they should only be able to view the detail pages of indexers,
+ # and not the detail pages of run-of-the-mill users
+ viewing_user = self.request.user
+ if not (viewing_user.is_authenticated or user.is_indexer):
+ raise PermissionDenied()
+
+ context = super().get_context_data(**kwargs)
+ display_unpublished = viewing_user.is_authenticated
sort_by_siglum = lambda source: source.siglum
if display_unpublished:
context["inventoried_sources"] = sorted(
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/user.py b/django/cantusdb_project/main_app/views/user.py\n--- a/django/cantusdb_project/main_app/views/user.py\n+++ b/django/cantusdb_project/main_app/views/user.py\n@@ -11,6 +11,7 @@\n from django.contrib import messages\n from extra_views import SearchableListMixin\n from django.http import HttpResponseRedirect\n+from django.core.exceptions import PermissionDenied\n \n \n class UserDetailView(DetailView):\n@@ -24,9 +25,16 @@\n template_name = \"user_detail.html\"\n \n def get_context_data(self, **kwargs):\n- context = super().get_context_data(**kwargs)\n user = self.get_object()\n- display_unpublished = self.request.user.is_authenticated\n+ # to begin, if the person viewing the site is not logged in,\n+ # they should only be able to view the detail pages of indexers,\n+ # and not the detail pages of run-of-the-mill users\n+ viewing_user = self.request.user\n+ if not (viewing_user.is_authenticated or user.is_indexer):\n+ raise PermissionDenied()\n+\n+ context = super().get_context_data(**kwargs)\n+ display_unpublished = viewing_user.is_authenticated\n sort_by_siglum = lambda source: source.siglum\n if display_unpublished:\n context[\"inventoried_sources\"] = sorted(\n", "issue": "if person browsing the site has the url, they can view user detail pages\nwhile investigating #479, I discovered that the detail pages for users who are not indexers are visible to not-logged-in users if you know the URL (e.g. http://206.12.88.113/user/1327). User detail pages should only be accessible to not-logged-in visitor if the user's `is-indexer` field is set to `True` (e.g. http://206.12.88.113/user/613 is (correctly) visible to everyone).\r\n\r\nThis is not enormously urgent - I don't believe there are any links to the detail pages of non-indexer users displayed anywhere on the site, so you have to know the URL to visit these pages. But there's the potential of people's information (i.e. name and institution) being divulged even if they're not officially affiliated with CantusDB. 
And these pages should not be visible to anonymous users anyways.\n", "before_files": [{"content": "from django.urls import reverse\nfrom django.db.models.aggregates import Count\nfrom django.views.generic import DetailView\nfrom django.contrib.auth import get_user_model, login as auth_login\nfrom main_app.models import Source\nfrom django.views.generic import ListView\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.db.models import Q\nfrom django.core.paginator import Paginator\nfrom django.contrib.auth.views import LogoutView, LoginView\nfrom django.contrib import messages\nfrom extra_views import SearchableListMixin\nfrom django.http import HttpResponseRedirect\n\n\nclass UserDetailView(DetailView):\n \"\"\"Detail view for User model\n\n Accessed by /users/<pk>\n \"\"\"\n\n model = get_user_model()\n context_object_name = \"user\"\n template_name = \"user_detail.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n user = self.get_object()\n display_unpublished = self.request.user.is_authenticated\n sort_by_siglum = lambda source: source.siglum\n if display_unpublished:\n context[\"inventoried_sources\"] = sorted(\n user.inventoried_sources.all(), key=sort_by_siglum\n )\n context[\"full_text_sources\"] = sorted(\n user.entered_full_text_for_sources.all(), key=sort_by_siglum\n )\n context[\"melody_sources\"] = sorted(\n user.entered_melody_for_sources.all(), key=sort_by_siglum\n )\n context[\"proofread_sources\"] = sorted(\n user.proofread_sources.all(), key=sort_by_siglum\n )\n context[\"edited_sources\"] = sorted(\n user.edited_sources.all(), key=sort_by_siglum\n )\n else:\n context[\"inventoried_sources\"] = sorted(\n user.inventoried_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"full_text_sources\"] = sorted(\n user.entered_full_text_for_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"melody_sources\"] = sorted(\n user.entered_melody_for_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"proofread_sources\"] = sorted(\n user.proofread_sources.all().filter(published=True), key=sort_by_siglum\n )\n context[\"edited_sources\"] = sorted(\n user.edited_sources.all().filter(published=True), key=sort_by_siglum\n )\n\n return context\n\n\nclass UserSourceListView(LoginRequiredMixin, ListView):\n model = Source\n context_object_name = \"sources\"\n template_name = \"user_source_list.html\"\n paginate_by = 100\n\n def get_queryset(self):\n return (\n Source.objects.filter(\n Q(current_editors=self.request.user)\n | Q(created_by=self.request.user)\n # | Q(inventoried_by=self.request.user)\n # | Q(full_text_entered_by=self.request.user)\n # | Q(melodies_entered_by=self.request.user)\n # | Q(proofreaders=self.request.user)\n # | Q(other_editors=self.request.user)\n )\n .order_by(\"-date_created\")\n .distinct()\n )\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n user_created_sources = (\n Source.objects.filter(created_by=self.request.user)\n .order_by(\"-date_created\")\n .distinct()\n )\n paginator = Paginator(user_created_sources, 10)\n page_number = self.request.GET.get(\"page2\")\n page_obj = paginator.get_page(page_number)\n\n context[\"user_created_sources_page_obj\"] = page_obj\n return context\n\n\nclass CustomLogoutView(LogoutView):\n def get_next_page(self):\n next_page = super().get_next_page()\n messages.success(self.request, \"You have successfully logged out!\")\n return 
next_page\n\n\nclass UserListView(LoginRequiredMixin, SearchableListMixin, ListView):\n \"\"\"A list of all User objects\n\n This view is equivalent to the user list view on the old Cantus.\n This includes all User objects on the old Cantus.\n When passed a `?q=<query>` argument in the GET request, it will filter users\n based on the fields defined in `search_fields` with the `icontains` lookup.\n\n Accessed by /users/\n \"\"\"\n\n model = get_user_model()\n ordering = \"full_name\"\n search_fields = [\"full_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"user_list.html\"\n context_object_name = \"users\"\n\n\nclass IndexerListView(SearchableListMixin, ListView):\n \"\"\"A list of User objects shown to the public\n\n This view replaces the indexer list view on the old Cantus.\n The indexers are considered a subset of all User objects, the subset shown to the public.\n This includes the User objects corresponding to Indexer objects on the old Cantus.\n When passed a `?q=<query>` argument in the GET request, it will filter users\n based on the fields defined in `search_fields` with the `icontains` lookup.\n\n Accessed by /indexers/\n \"\"\"\n\n model = get_user_model()\n ordering = \"full_name\"\n search_fields = [\"full_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"indexer_list.html\"\n context_object_name = \"indexers\"\n\n def get_queryset(self):\n all_users = super().get_queryset()\n indexers = all_users.filter(is_indexer=True)\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n indexers = indexers.annotate(source_count=Count(\"inventoried_sources\"))\n # display those who have at least one source\n return indexers.filter(source_count__gte=1)\n else:\n indexers = indexers.annotate(\n source_count=Count(\n \"inventoried_sources\", filter=Q(inventoried_sources__published=True)\n )\n )\n # display those who have at least one published source\n return indexers.filter(source_count__gte=1)\n\n\nclass CustomLoginView(LoginView):\n def form_valid(self, form):\n auth_login(self.request, form.get_user())\n # if the user has not yet changed the initial password that was assigned to them,\n # redirect them to the change-password page everytime they log in\n # with warning messages prompting them to change their password\n if form.get_user().changed_initial_password == False:\n return HttpResponseRedirect(reverse(\"change-password\"))\n return HttpResponseRedirect(self.get_success_url())\n", "path": "django/cantusdb_project/main_app/views/user.py"}]}
| 2,650 | 312 |
gh_patches_debug_1158
|
rasdani/github-patches
|
git_diff
|
Netflix__lemur-455
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
A custom cert name with spaces causes AWS Upload failures
Creating a cert with a custom name that has spaces, such as `My Certificate`, will not properly get uploaded to AWS.
-- Potential Fixes:
1. Prevent spaces in custom names
2. Allow custom cert names to be editable
3. If spaces are allowed, the AWS uploader plugin needs to upload it in a way that can work properly.
</issue>
<code>
[start of lemur/certificates/models.py]
1 """
2 .. module: lemur.certificates.models
3 :platform: Unix
4 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
5 :license: Apache, see LICENSE for more details.
6 .. moduleauthor:: Kevin Glisson <[email protected]>
7 """
8 import datetime
9
10 import lemur.common.utils
11 from flask import current_app
12
13 from sqlalchemy.orm import relationship
14 from sqlalchemy.sql.expression import case
15 from sqlalchemy.ext.hybrid import hybrid_property
16 from sqlalchemy import event, Integer, ForeignKey, String, DateTime, PassiveDefault, func, Column, Text, Boolean
17
18 from lemur.database import db
19 from lemur.models import certificate_associations, certificate_source_associations, \
20 certificate_destination_associations, certificate_notification_associations, \
21 certificate_replacement_associations, roles_certificates
22 from lemur.plugins.base import plugins
23 from lemur.utils import Vault
24
25 from lemur.common import defaults
26 from lemur.domains.models import Domain
27
28
29 def get_or_increase_name(name):
30 count = Certificate.query.filter(Certificate.name.ilike('{0}%'.format(name))).count()
31
32 if count >= 1:
33 return name + '-' + str(count)
34
35 return name
36
37
38 class Certificate(db.Model):
39 __tablename__ = 'certificates'
40 id = Column(Integer, primary_key=True)
41 owner = Column(String(128), nullable=False)
42 name = Column(String(128), unique=True)
43 description = Column(String(1024))
44 notify = Column(Boolean, default=True)
45
46 body = Column(Text(), nullable=False)
47 chain = Column(Text())
48 private_key = Column(Vault)
49
50 issuer = Column(String(128))
51 serial = Column(String(128))
52 cn = Column(String(128))
53 deleted = Column(Boolean, index=True)
54
55 not_before = Column(DateTime)
56 not_after = Column(DateTime)
57 date_created = Column(DateTime, PassiveDefault(func.now()), nullable=False)
58
59 signing_algorithm = Column(String(128))
60 status = Column(String(128))
61 bits = Column(Integer())
62 san = Column(String(1024)) # TODO this should be migrated to boolean
63
64 user_id = Column(Integer, ForeignKey('users.id'))
65 authority_id = Column(Integer, ForeignKey('authorities.id', ondelete="CASCADE"))
66 root_authority_id = Column(Integer, ForeignKey('authorities.id', ondelete="CASCADE"))
67
68 notifications = relationship("Notification", secondary=certificate_notification_associations, backref='certificate')
69 destinations = relationship("Destination", secondary=certificate_destination_associations, backref='certificate')
70 sources = relationship("Source", secondary=certificate_source_associations, backref='certificate')
71 domains = relationship("Domain", secondary=certificate_associations, backref="certificate")
72 roles = relationship("Role", secondary=roles_certificates, backref="certificate")
73 replaces = relationship("Certificate",
74 secondary=certificate_replacement_associations,
75 primaryjoin=id == certificate_replacement_associations.c.certificate_id, # noqa
76 secondaryjoin=id == certificate_replacement_associations.c.replaced_certificate_id, # noqa
77 backref='replaced')
78
79 endpoints = relationship("Endpoint", backref='certificate')
80
81 def __init__(self, **kwargs):
82 cert = lemur.common.utils.parse_certificate(kwargs['body'])
83
84 self.issuer = defaults.issuer(cert)
85 self.cn = defaults.common_name(cert)
86 self.san = defaults.san(cert)
87 self.not_before = defaults.not_before(cert)
88 self.not_after = defaults.not_after(cert)
89
90 # when destinations are appended they require a valid name.
91 if kwargs.get('name'):
92 self.name = get_or_increase_name(kwargs['name'])
93 else:
94 self.name = get_or_increase_name(defaults.certificate_name(self.cn, self.issuer, self.not_before, self.not_after, self.san))
95
96 self.owner = kwargs['owner']
97 self.body = kwargs['body'].strip()
98
99 if kwargs.get('private_key'):
100 self.private_key = kwargs['private_key'].strip()
101
102 if kwargs.get('chain'):
103 self.chain = kwargs['chain'].strip()
104
105 self.destinations = kwargs.get('destinations', [])
106 self.notifications = kwargs.get('notifications', [])
107 self.description = kwargs.get('description')
108 self.roles = list(set(kwargs.get('roles', [])))
109 self.replaces = kwargs.get('replacements', [])
110 self.signing_algorithm = defaults.signing_algorithm(cert)
111 self.bits = defaults.bitstrength(cert)
112 self.serial = defaults.serial(cert)
113
114 for domain in defaults.domains(cert):
115 self.domains.append(Domain(name=domain))
116
117 @property
118 def active(self):
119 if self.endpoints:
120 return True
121
122 @hybrid_property
123 def expired(self):
124 if self.not_after <= datetime.datetime.now():
125 return True
126
127 @expired.expression
128 def expired(cls):
129 return case(
130 [
131 (cls.now_after <= datetime.datetime.now(), True)
132 ],
133 else_=False
134 )
135
136 @hybrid_property
137 def revoked(self):
138 if 'revoked' == self.status:
139 return True
140
141 @revoked.expression
142 def revoked(cls):
143 return case(
144 [
145 (cls.status == 'revoked', True)
146 ],
147 else_=False
148 )
149
150 def get_arn(self, account_number):
151 """
152 Generate a valid AWS IAM arn
153
154 :rtype : str
155 :param account_number:
156 :return:
157 """
158 return "arn:aws:iam::{}:server-certificate/{}".format(account_number, self.name)
159
160
161 @event.listens_for(Certificate.destinations, 'append')
162 def update_destinations(target, value, initiator):
163 """
164 Attempt to upload the new certificate to the new destination
165
166 :param target:
167 :param value:
168 :param initiator:
169 :return:
170 """
171 destination_plugin = plugins.get(value.plugin_name)
172
173 try:
174 destination_plugin.upload(target.name, target.body, target.private_key, target.chain, value.options)
175 except Exception as e:
176 current_app.logger.exception(e)
177
178
179 @event.listens_for(Certificate.replaces, 'append')
180 def update_replacement(target, value, initiator):
181 """
182 When a certificate is marked as 'replaced' it is then marked as in-active
183
184 :param target:
185 :param value:
186 :param initiator:
187 :return:
188 """
189 value.active = False
190
191
192 @event.listens_for(Certificate, 'before_update')
193 def protect_active(mapper, connection, target):
194 """
195 When a certificate has a replacement do not allow it to be marked as 'active'
196
197 :param connection:
198 :param mapper:
199 :param target:
200 :return:
201 """
202 if target.active:
203 if not target.notify:
204 raise Exception(
205 "Cannot silence notification for a certificate Lemur has been found to be currently deployed onto endpoints"
206 )
207
[end of lemur/certificates/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lemur/certificates/models.py b/lemur/certificates/models.py
--- a/lemur/certificates/models.py
+++ b/lemur/certificates/models.py
@@ -27,6 +27,7 @@
def get_or_increase_name(name):
+ name = '-'.join(name.strip().split(' '))
count = Certificate.query.filter(Certificate.name.ilike('{0}%'.format(name))).count()
if count >= 1:
|
{"golden_diff": "diff --git a/lemur/certificates/models.py b/lemur/certificates/models.py\n--- a/lemur/certificates/models.py\n+++ b/lemur/certificates/models.py\n@@ -27,6 +27,7 @@\n \n \n def get_or_increase_name(name):\n+ name = '-'.join(name.strip().split(' '))\n count = Certificate.query.filter(Certificate.name.ilike('{0}%'.format(name))).count()\n \n if count >= 1:\n", "issue": "A custom cert name with spaces causes AWS Upload failures\nCreating a cert with a custom name that has spaces, such as: `My Certificate` will not properly get uploaded to AWS.\n\n-- Potential Fixes:\n1. Prevent spaces in custom names\n2. Allow custom cert names to be editable\n3. If spaces are allowed, the AWS uploader plugin needs to upload it in a way that can work properly.\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.certificates.models\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nimport datetime\n\nimport lemur.common.utils\nfrom flask import current_app\n\nfrom sqlalchemy.orm import relationship\nfrom sqlalchemy.sql.expression import case\nfrom sqlalchemy.ext.hybrid import hybrid_property\nfrom sqlalchemy import event, Integer, ForeignKey, String, DateTime, PassiveDefault, func, Column, Text, Boolean\n\nfrom lemur.database import db\nfrom lemur.models import certificate_associations, certificate_source_associations, \\\n certificate_destination_associations, certificate_notification_associations, \\\n certificate_replacement_associations, roles_certificates\nfrom lemur.plugins.base import plugins\nfrom lemur.utils import Vault\n\nfrom lemur.common import defaults\nfrom lemur.domains.models import Domain\n\n\ndef get_or_increase_name(name):\n count = Certificate.query.filter(Certificate.name.ilike('{0}%'.format(name))).count()\n\n if count >= 1:\n return name + '-' + str(count)\n\n return name\n\n\nclass Certificate(db.Model):\n __tablename__ = 'certificates'\n id = Column(Integer, primary_key=True)\n owner = Column(String(128), nullable=False)\n name = Column(String(128), unique=True)\n description = Column(String(1024))\n notify = Column(Boolean, default=True)\n\n body = Column(Text(), nullable=False)\n chain = Column(Text())\n private_key = Column(Vault)\n\n issuer = Column(String(128))\n serial = Column(String(128))\n cn = Column(String(128))\n deleted = Column(Boolean, index=True)\n\n not_before = Column(DateTime)\n not_after = Column(DateTime)\n date_created = Column(DateTime, PassiveDefault(func.now()), nullable=False)\n\n signing_algorithm = Column(String(128))\n status = Column(String(128))\n bits = Column(Integer())\n san = Column(String(1024)) # TODO this should be migrated to boolean\n\n user_id = Column(Integer, ForeignKey('users.id'))\n authority_id = Column(Integer, ForeignKey('authorities.id', ondelete=\"CASCADE\"))\n root_authority_id = Column(Integer, ForeignKey('authorities.id', ondelete=\"CASCADE\"))\n\n notifications = relationship(\"Notification\", secondary=certificate_notification_associations, backref='certificate')\n destinations = relationship(\"Destination\", secondary=certificate_destination_associations, backref='certificate')\n sources = relationship(\"Source\", secondary=certificate_source_associations, backref='certificate')\n domains = relationship(\"Domain\", secondary=certificate_associations, backref=\"certificate\")\n roles = relationship(\"Role\", secondary=roles_certificates, backref=\"certificate\")\n 
replaces = relationship(\"Certificate\",\n secondary=certificate_replacement_associations,\n primaryjoin=id == certificate_replacement_associations.c.certificate_id, # noqa\n secondaryjoin=id == certificate_replacement_associations.c.replaced_certificate_id, # noqa\n backref='replaced')\n\n endpoints = relationship(\"Endpoint\", backref='certificate')\n\n def __init__(self, **kwargs):\n cert = lemur.common.utils.parse_certificate(kwargs['body'])\n\n self.issuer = defaults.issuer(cert)\n self.cn = defaults.common_name(cert)\n self.san = defaults.san(cert)\n self.not_before = defaults.not_before(cert)\n self.not_after = defaults.not_after(cert)\n\n # when destinations are appended they require a valid name.\n if kwargs.get('name'):\n self.name = get_or_increase_name(kwargs['name'])\n else:\n self.name = get_or_increase_name(defaults.certificate_name(self.cn, self.issuer, self.not_before, self.not_after, self.san))\n\n self.owner = kwargs['owner']\n self.body = kwargs['body'].strip()\n\n if kwargs.get('private_key'):\n self.private_key = kwargs['private_key'].strip()\n\n if kwargs.get('chain'):\n self.chain = kwargs['chain'].strip()\n\n self.destinations = kwargs.get('destinations', [])\n self.notifications = kwargs.get('notifications', [])\n self.description = kwargs.get('description')\n self.roles = list(set(kwargs.get('roles', [])))\n self.replaces = kwargs.get('replacements', [])\n self.signing_algorithm = defaults.signing_algorithm(cert)\n self.bits = defaults.bitstrength(cert)\n self.serial = defaults.serial(cert)\n\n for domain in defaults.domains(cert):\n self.domains.append(Domain(name=domain))\n\n @property\n def active(self):\n if self.endpoints:\n return True\n\n @hybrid_property\n def expired(self):\n if self.not_after <= datetime.datetime.now():\n return True\n\n @expired.expression\n def expired(cls):\n return case(\n [\n (cls.now_after <= datetime.datetime.now(), True)\n ],\n else_=False\n )\n\n @hybrid_property\n def revoked(self):\n if 'revoked' == self.status:\n return True\n\n @revoked.expression\n def revoked(cls):\n return case(\n [\n (cls.status == 'revoked', True)\n ],\n else_=False\n )\n\n def get_arn(self, account_number):\n \"\"\"\n Generate a valid AWS IAM arn\n\n :rtype : str\n :param account_number:\n :return:\n \"\"\"\n return \"arn:aws:iam::{}:server-certificate/{}\".format(account_number, self.name)\n\n\[email protected]_for(Certificate.destinations, 'append')\ndef update_destinations(target, value, initiator):\n \"\"\"\n Attempt to upload the new certificate to the new destination\n\n :param target:\n :param value:\n :param initiator:\n :return:\n \"\"\"\n destination_plugin = plugins.get(value.plugin_name)\n\n try:\n destination_plugin.upload(target.name, target.body, target.private_key, target.chain, value.options)\n except Exception as e:\n current_app.logger.exception(e)\n\n\[email protected]_for(Certificate.replaces, 'append')\ndef update_replacement(target, value, initiator):\n \"\"\"\n When a certificate is marked as 'replaced' it is then marked as in-active\n\n :param target:\n :param value:\n :param initiator:\n :return:\n \"\"\"\n value.active = False\n\n\[email protected]_for(Certificate, 'before_update')\ndef protect_active(mapper, connection, target):\n \"\"\"\n When a certificate has a replacement do not allow it to be marked as 'active'\n\n :param connection:\n :param mapper:\n :param target:\n :return:\n \"\"\"\n if target.active:\n if not target.notify:\n raise Exception(\n \"Cannot silence notification for a certificate Lemur has been found 
to be currently deployed onto endpoints\"\n )\n", "path": "lemur/certificates/models.py"}]}
| 2,630 | 105 |
gh_patches_debug_8106
|
rasdani/github-patches
|
git_diff
|
aws__aws-sam-cli-815
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Region from Env Vars or profile are not respected for ALL commands but package and deploy
The region option in SAM CLI was changed between 0.7.0 and 0.8.0 to add the default explicitly on the [command line option](https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/cli/options.py#L44). This causes the region to always be set and prevents boto3 from doing its own resolution of credentials and regions, which is what sets the correct values inside the Docker container.
Current workaround is to explicitly set the region when invoking a function or interacting with commands that interact with AWS Services.
Fix is in #811
</issue>
<code>
[start of samcli/cli/options.py]
1 """
2 This file contains common CLI options common to all commands. As we add more commands, this will
3 become a repository of options that other commands could use when needed.
4 """
5
6 import click
7
8 from .context import Context
9
10
11 def debug_option(f):
12 """
13 Configures --debug option for CLI
14
15 :param f: Callback Function to be passed to Click
16 """
17 def callback(ctx, param, value):
18 state = ctx.ensure_object(Context)
19 state.debug = value
20 return value
21
22 return click.option('--debug',
23 expose_value=False,
24 is_flag=True,
25 envvar="SAM_DEBUG",
26 help='Turn on debug logging to print debug message generated by SAM CLI.',
27 callback=callback)(f)
28
29
30 def region_option(f):
31 """
32 Configures --region option for CLI
33
34 :param f: Callback Function to be passed to Click
35 """
36 def callback(ctx, param, value):
37 state = ctx.ensure_object(Context)
38 state.region = value
39 return value
40
41 return click.option('--region',
42 expose_value=False,
43 help='Set the AWS Region of the service (e.g. us-east-1).',
44 default='us-east-1',
45 callback=callback)(f)
46
47
48 def profile_option(f):
49 """
50 Configures --profile option for CLI
51
52 :param f: Callback Function to be passed to Click
53 """
54 def callback(ctx, param, value):
55 state = ctx.ensure_object(Context)
56 state.profile = value
57 return value
58
59 return click.option('--profile',
60 expose_value=False,
61 help='Select a specific profile from your credential file to get AWS credentials.',
62 callback=callback)(f)
63
[end of samcli/cli/options.py]
[start of samcli/__init__.py]
1 """
2 SAM CLI version
3 """
4
5 __version__ = '0.8.0'
6
[end of samcli/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/samcli/__init__.py b/samcli/__init__.py
--- a/samcli/__init__.py
+++ b/samcli/__init__.py
@@ -2,4 +2,4 @@
SAM CLI version
"""
-__version__ = '0.8.0'
+__version__ = '0.8.1'
diff --git a/samcli/cli/options.py b/samcli/cli/options.py
--- a/samcli/cli/options.py
+++ b/samcli/cli/options.py
@@ -41,7 +41,6 @@
return click.option('--region',
expose_value=False,
help='Set the AWS Region of the service (e.g. us-east-1).',
- default='us-east-1',
callback=callback)(f)
|
{"golden_diff": "diff --git a/samcli/__init__.py b/samcli/__init__.py\n--- a/samcli/__init__.py\n+++ b/samcli/__init__.py\n@@ -2,4 +2,4 @@\n SAM CLI version\n \"\"\"\n \n-__version__ = '0.8.0'\n+__version__ = '0.8.1'\ndiff --git a/samcli/cli/options.py b/samcli/cli/options.py\n--- a/samcli/cli/options.py\n+++ b/samcli/cli/options.py\n@@ -41,7 +41,6 @@\n return click.option('--region',\n expose_value=False,\n help='Set the AWS Region of the service (e.g. us-east-1).',\n- default='us-east-1',\n callback=callback)(f)\n", "issue": "Region from Env Vars or profile are not respected for ALL commands but package and deploy\nThe region option in SAM CLI was changed between 0.7.0 and 0.8.0 to add the default explicitly on the [command line option](https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/cli/options.py#L44). This causes the region to always be set and not allow boto3 to do its resolving of credentials and regions, which is used to set the correct values into the docker container.\r\n\r\nCurrent workaround is to explicitly set the region when invoking a function or interacting with commands that interact with AWS Services.\r\n\r\nFix is in #811\n", "before_files": [{"content": "\"\"\"\nThis file contains common CLI options common to all commands. As we add more commands, this will\nbecome a repository of options that other commands could use when needed.\n\"\"\"\n\nimport click\n\nfrom .context import Context\n\n\ndef debug_option(f):\n \"\"\"\n Configures --debug option for CLI\n\n :param f: Callback Function to be passed to Click\n \"\"\"\n def callback(ctx, param, value):\n state = ctx.ensure_object(Context)\n state.debug = value\n return value\n\n return click.option('--debug',\n expose_value=False,\n is_flag=True,\n envvar=\"SAM_DEBUG\",\n help='Turn on debug logging to print debug message generated by SAM CLI.',\n callback=callback)(f)\n\n\ndef region_option(f):\n \"\"\"\n Configures --region option for CLI\n\n :param f: Callback Function to be passed to Click\n \"\"\"\n def callback(ctx, param, value):\n state = ctx.ensure_object(Context)\n state.region = value\n return value\n\n return click.option('--region',\n expose_value=False,\n help='Set the AWS Region of the service (e.g. us-east-1).',\n default='us-east-1',\n callback=callback)(f)\n\n\ndef profile_option(f):\n \"\"\"\n Configures --profile option for CLI\n\n :param f: Callback Function to be passed to Click\n \"\"\"\n def callback(ctx, param, value):\n state = ctx.ensure_object(Context)\n state.profile = value\n return value\n\n return click.option('--profile',\n expose_value=False,\n help='Select a specific profile from your credential file to get AWS credentials.',\n callback=callback)(f)\n", "path": "samcli/cli/options.py"}, {"content": "\"\"\"\nSAM CLI version\n\"\"\"\n\n__version__ = '0.8.0'\n", "path": "samcli/__init__.py"}]}
| 1,199 | 176 |
gh_patches_debug_29338
|
rasdani/github-patches
|
git_diff
|
enthought__chaco-498
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bar_plot_stacked example unfinished?
**Problem Description**
The example in https://github.com/enthought/chaco/blob/master/examples/demo/basic/bar_plot_stacked.py
doesn't do any stacking.
**Expected behavior:**
If the bars were really stacked, I would expect the sum of all bars to reach (10+5+2) * array([1,2,3,4,5]) (the sum of all values) respectively. Instead, I am getting the following:

Looking at the code, it doesn't use the bar plot's `starting_value` as expected, so the demo doesn't even seem to try to do the right thing.
</issue>
<code>
[start of examples/demo/basic/bar_plot_stacked.py]
1 """
2 Simple example of a stacked bar chart
3 """
4
5 # Major library imports
6 import numpy
7
8 # Enthought library imports
9 from enable.api import ComponentEditor
10 from traits.api import HasTraits, Instance
11 from traitsui.api import UItem, View
12
13 # Chaco imports
14 from chaco.api import LabelAxis, Plot, ArrayPlotData
15
16 class PlotExample(HasTraits):
17 plot = Instance(Plot)
18 traits_view = View(UItem('plot', editor=ComponentEditor()),
19 width=400, height=400, resizable=True,
20 )
21
22 def __init__(self, index, series_a, series_b, series_c, **kw):
23 super(PlotExample, self).__init__(**kw)
24
25 plot_data = ArrayPlotData(index=index)
26 plot_data.set_data('series_a', series_a)
27 plot_data.set_data('series_b', series_b)
28 plot_data.set_data('series_c', series_c)
29 self.plot = Plot(plot_data)
30 self.plot.plot(('index', 'series_a'), type='bar', bar_width=0.8, color='auto')
31 self.plot.plot(('index', 'series_b'), type='bar', bar_width=0.8, color='auto')
32 self.plot.plot(('index', 'series_c'), type='bar', bar_width=0.8, color='auto')
33
34 # set the plot's value range to 0, otherwise it may pad too much
35 self.plot.value_range.low = 0
36
37 # replace the index values with some nicer labels
38 label_axis = LabelAxis(self.plot, orientation='bottom',
39 title='Months',
40 positions = list(range(1, 10)),
41 labels = ['jan', 'feb', 'march', 'april', 'may'],
42 small_haxis_style=True)
43
44 self.plot.underlays.remove(self.plot.index_axis)
45 self.plot.index_axis = label_axis
46 self.plot.underlays.append(label_axis)
47
48
49 index = numpy.array([1,2,3,4,5])
50 demo = PlotExample(index, index*10, index*5, index*2)
51
52 if __name__ == "__main__":
53 demo.configure_traits()
54
[end of examples/demo/basic/bar_plot_stacked.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/demo/basic/bar_plot_stacked.py b/examples/demo/basic/bar_plot_stacked.py
--- a/examples/demo/basic/bar_plot_stacked.py
+++ b/examples/demo/basic/bar_plot_stacked.py
@@ -11,7 +11,7 @@
from traitsui.api import UItem, View
# Chaco imports
-from chaco.api import LabelAxis, Plot, ArrayPlotData
+from chaco.api import LabelAxis, Plot, ArrayPlotData, ArrayDataSource
class PlotExample(HasTraits):
plot = Instance(Plot)
@@ -22,14 +22,18 @@
def __init__(self, index, series_a, series_b, series_c, **kw):
super(PlotExample, self).__init__(**kw)
+ # Stack them up
+ series_c = series_c + series_b + series_a
+ series_b = series_b + series_a
+
plot_data = ArrayPlotData(index=index)
plot_data.set_data('series_a', series_a)
plot_data.set_data('series_b', series_b)
plot_data.set_data('series_c', series_c)
self.plot = Plot(plot_data)
self.plot.plot(('index', 'series_a'), type='bar', bar_width=0.8, color='auto')
- self.plot.plot(('index', 'series_b'), type='bar', bar_width=0.8, color='auto')
- self.plot.plot(('index', 'series_c'), type='bar', bar_width=0.8, color='auto')
+ self.plot.plot(('index', 'series_b'), type='bar', bar_width=0.8, color='auto', starting_value=ArrayDataSource(series_a))
+ self.plot.plot(('index', 'series_c'), type='bar', bar_width=0.8, color='auto', starting_value=ArrayDataSource(series_b))
# set the plot's value range to 0, otherwise it may pad too much
self.plot.value_range.low = 0
|
{"golden_diff": "diff --git a/examples/demo/basic/bar_plot_stacked.py b/examples/demo/basic/bar_plot_stacked.py\n--- a/examples/demo/basic/bar_plot_stacked.py\n+++ b/examples/demo/basic/bar_plot_stacked.py\n@@ -11,7 +11,7 @@\n from traitsui.api import UItem, View\n \n # Chaco imports\n-from chaco.api import LabelAxis, Plot, ArrayPlotData\n+from chaco.api import LabelAxis, Plot, ArrayPlotData, ArrayDataSource\n \n class PlotExample(HasTraits):\n plot = Instance(Plot)\n@@ -22,14 +22,18 @@\n def __init__(self, index, series_a, series_b, series_c, **kw):\n super(PlotExample, self).__init__(**kw)\n \n+ # Stack them up\n+ series_c = series_c + series_b + series_a\n+ series_b = series_b + series_a\n+\n plot_data = ArrayPlotData(index=index)\n plot_data.set_data('series_a', series_a)\n plot_data.set_data('series_b', series_b)\n plot_data.set_data('series_c', series_c)\n self.plot = Plot(plot_data)\n self.plot.plot(('index', 'series_a'), type='bar', bar_width=0.8, color='auto')\n- self.plot.plot(('index', 'series_b'), type='bar', bar_width=0.8, color='auto')\n- self.plot.plot(('index', 'series_c'), type='bar', bar_width=0.8, color='auto')\n+ self.plot.plot(('index', 'series_b'), type='bar', bar_width=0.8, color='auto', starting_value=ArrayDataSource(series_a))\n+ self.plot.plot(('index', 'series_c'), type='bar', bar_width=0.8, color='auto', starting_value=ArrayDataSource(series_b))\n \n # set the plot's value range to 0, otherwise it may pad too much\n self.plot.value_range.low = 0\n", "issue": "bar_plot_stacked example unfinished?\n**Problem Description**\r\nThe example in https://github.com/enthought/chaco/blob/master/examples/demo/basic/bar_plot_stacked.py\r\ndoesn't do any stacking.\r\n\r\n**Expected behavior:**\r\nI the bars were really stacked, I would expect the sum of all bars to reach (10+5+2) * array([1,2,3,4,5]) (the sum of all values) respectively. 
Instead, I am getting the following:\r\n\r\n\r\nLooking at the code, it doesn't use the bar plot's `starting_value` as expected, so the demo doesn't even seem to try to do the right thing.\r\n\n", "before_files": [{"content": "\"\"\"\nSimple example of a stacked bar chart\n\"\"\"\n\n# Major library imports\nimport numpy\n\n# Enthought library imports\nfrom enable.api import ComponentEditor\nfrom traits.api import HasTraits, Instance\nfrom traitsui.api import UItem, View\n\n# Chaco imports\nfrom chaco.api import LabelAxis, Plot, ArrayPlotData\n\nclass PlotExample(HasTraits):\n plot = Instance(Plot)\n traits_view = View(UItem('plot', editor=ComponentEditor()),\n width=400, height=400, resizable=True, \n )\n\n def __init__(self, index, series_a, series_b, series_c, **kw):\n super(PlotExample, self).__init__(**kw)\n\n plot_data = ArrayPlotData(index=index)\n plot_data.set_data('series_a', series_a)\n plot_data.set_data('series_b', series_b)\n plot_data.set_data('series_c', series_c)\n self.plot = Plot(plot_data)\n self.plot.plot(('index', 'series_a'), type='bar', bar_width=0.8, color='auto')\n self.plot.plot(('index', 'series_b'), type='bar', bar_width=0.8, color='auto')\n self.plot.plot(('index', 'series_c'), type='bar', bar_width=0.8, color='auto')\n\n # set the plot's value range to 0, otherwise it may pad too much\n self.plot.value_range.low = 0\n\n # replace the index values with some nicer labels\n label_axis = LabelAxis(self.plot, orientation='bottom',\n title='Months',\n positions = list(range(1, 10)),\n labels = ['jan', 'feb', 'march', 'april', 'may'],\n small_haxis_style=True)\n\n self.plot.underlays.remove(self.plot.index_axis)\n self.plot.index_axis = label_axis\n self.plot.underlays.append(label_axis)\n\n\nindex = numpy.array([1,2,3,4,5])\ndemo = PlotExample(index, index*10, index*5, index*2)\n\nif __name__ == \"__main__\":\n demo.configure_traits()\n", "path": "examples/demo/basic/bar_plot_stacked.py"}]}
| 1,334 | 433 |
gh_patches_debug_26646
|
rasdani/github-patches
|
git_diff
|
tobymao__sqlglot-1705
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DATE_SUB from BigQuery produces DuckDB code giving number of intervals, rather than subtracted date
```
>>> sqlglot.transpile("SELECT DATE_SUB(date_in_table, INTERVAL 5 DAY) AS five_days_ago from table;", read="bigquery", write="duckdb")
['SELECT DATE_SUB(date_in_table, 5, DAY) AS five_days_ago FROM table']
>>> sqlglot.transpile("SELECT DATE_ADD(date_in_table, INTERVAL 5 DAY) AS five_days_from_now from table;", read="bigquery", write="duckdb")
['SELECT date_in_table + INTERVAL 5 DAY AS five_days_from_now FROM table']
```
BigQuery uses DATE_SUB with INTERVAL to subtract dates, whereas DuckDb uses the `-` operator with INTERVAL.
As you can see, conversion from BigQuery DATE_ADD works as expected.
**Official Documentation**
* https://duckdb.org/docs/sql/functions/date.html
* https://cloud.google.com/bigquery/docs/reference/standard-sql/date_functions#date_sub
</issue>
<code>
[start of sqlglot/dialects/duckdb.py]
1 from __future__ import annotations
2
3 import typing as t
4
5 from sqlglot import exp, generator, parser, tokens
6 from sqlglot.dialects.dialect import (
7 Dialect,
8 approx_count_distinct_sql,
9 arrow_json_extract_scalar_sql,
10 arrow_json_extract_sql,
11 datestrtodate_sql,
12 format_time_lambda,
13 no_comment_column_constraint_sql,
14 no_properties_sql,
15 no_safe_divide_sql,
16 pivot_column_names,
17 rename_func,
18 str_position_sql,
19 str_to_time_sql,
20 timestamptrunc_sql,
21 timestrtotime_sql,
22 ts_or_ds_to_date_sql,
23 )
24 from sqlglot.helper import seq_get
25 from sqlglot.tokens import TokenType
26
27
28 def _ts_or_ds_add_sql(self: generator.Generator, expression: exp.TsOrDsAdd) -> str:
29 this = self.sql(expression, "this")
30 unit = self.sql(expression, "unit").strip("'") or "DAY"
31 return f"CAST({this} AS DATE) + {self.sql(exp.Interval(this=expression.expression, unit=unit))}"
32
33
34 def _date_add_sql(self: generator.Generator, expression: exp.DateAdd) -> str:
35 this = self.sql(expression, "this")
36 unit = self.sql(expression, "unit").strip("'") or "DAY"
37 return f"{this} + {self.sql(exp.Interval(this=expression.expression, unit=unit))}"
38
39
40 def _array_sort_sql(self: generator.Generator, expression: exp.ArraySort) -> str:
41 if expression.expression:
42 self.unsupported("DUCKDB ARRAY_SORT does not support a comparator")
43 return f"ARRAY_SORT({self.sql(expression, 'this')})"
44
45
46 def _sort_array_sql(self: generator.Generator, expression: exp.SortArray) -> str:
47 this = self.sql(expression, "this")
48 if expression.args.get("asc") == exp.false():
49 return f"ARRAY_REVERSE_SORT({this})"
50 return f"ARRAY_SORT({this})"
51
52
53 def _sort_array_reverse(args: t.List) -> exp.Expression:
54 return exp.SortArray(this=seq_get(args, 0), asc=exp.false())
55
56
57 def _parse_date_diff(args: t.List) -> exp.Expression:
58 return exp.DateDiff(
59 this=seq_get(args, 2),
60 expression=seq_get(args, 1),
61 unit=seq_get(args, 0),
62 )
63
64
65 def _struct_sql(self: generator.Generator, expression: exp.Struct) -> str:
66 args = [
67 f"'{e.name or e.this.name}': {self.sql(e, 'expression')}" for e in expression.expressions
68 ]
69 return f"{{{', '.join(args)}}}"
70
71
72 def _datatype_sql(self: generator.Generator, expression: exp.DataType) -> str:
73 if expression.this == exp.DataType.Type.ARRAY:
74 return f"{self.expressions(expression, flat=True)}[]"
75 return self.datatype_sql(expression)
76
77
78 def _regexp_extract_sql(self: generator.Generator, expression: exp.RegexpExtract) -> str:
79 bad_args = list(filter(expression.args.get, ("position", "occurrence")))
80 if bad_args:
81 self.unsupported(f"REGEXP_EXTRACT does not support arg(s) {bad_args}")
82
83 return self.func(
84 "REGEXP_EXTRACT",
85 expression.args.get("this"),
86 expression.args.get("expression"),
87 expression.args.get("group"),
88 )
89
90
91 class DuckDB(Dialect):
92 null_ordering = "nulls_are_last"
93
94 class Tokenizer(tokens.Tokenizer):
95 KEYWORDS = {
96 **tokens.Tokenizer.KEYWORDS,
97 "~": TokenType.RLIKE,
98 ":=": TokenType.EQ,
99 "//": TokenType.DIV,
100 "ATTACH": TokenType.COMMAND,
101 "BINARY": TokenType.VARBINARY,
102 "BPCHAR": TokenType.TEXT,
103 "BITSTRING": TokenType.BIT,
104 "CHAR": TokenType.TEXT,
105 "CHARACTER VARYING": TokenType.TEXT,
106 "EXCLUDE": TokenType.EXCEPT,
107 "INT1": TokenType.TINYINT,
108 "LOGICAL": TokenType.BOOLEAN,
109 "NUMERIC": TokenType.DOUBLE,
110 "SIGNED": TokenType.INT,
111 "STRING": TokenType.VARCHAR,
112 "UBIGINT": TokenType.UBIGINT,
113 "UINTEGER": TokenType.UINT,
114 "USMALLINT": TokenType.USMALLINT,
115 "UTINYINT": TokenType.UTINYINT,
116 }
117
118 class Parser(parser.Parser):
119 FUNCTIONS = {
120 **parser.Parser.FUNCTIONS,
121 "ARRAY_LENGTH": exp.ArraySize.from_arg_list,
122 "ARRAY_SORT": exp.SortArray.from_arg_list,
123 "ARRAY_REVERSE_SORT": _sort_array_reverse,
124 "DATEDIFF": _parse_date_diff,
125 "DATE_DIFF": _parse_date_diff,
126 "EPOCH": exp.TimeToUnix.from_arg_list,
127 "EPOCH_MS": lambda args: exp.UnixToTime(
128 this=exp.Div(
129 this=seq_get(args, 0),
130 expression=exp.Literal.number(1000),
131 )
132 ),
133 "LIST_REVERSE_SORT": _sort_array_reverse,
134 "LIST_SORT": exp.SortArray.from_arg_list,
135 "LIST_VALUE": exp.Array.from_arg_list,
136 "REGEXP_MATCHES": exp.RegexpLike.from_arg_list,
137 "STRFTIME": format_time_lambda(exp.TimeToStr, "duckdb"),
138 "STRING_SPLIT": exp.Split.from_arg_list,
139 "STRING_SPLIT_REGEX": exp.RegexpSplit.from_arg_list,
140 "STRING_TO_ARRAY": exp.Split.from_arg_list,
141 "STRPTIME": format_time_lambda(exp.StrToTime, "duckdb"),
142 "STRUCT_PACK": exp.Struct.from_arg_list,
143 "STR_SPLIT": exp.Split.from_arg_list,
144 "STR_SPLIT_REGEX": exp.RegexpSplit.from_arg_list,
145 "TO_TIMESTAMP": exp.UnixToTime.from_arg_list,
146 "UNNEST": exp.Explode.from_arg_list,
147 }
148
149 TYPE_TOKENS = {
150 *parser.Parser.TYPE_TOKENS,
151 TokenType.UBIGINT,
152 TokenType.UINT,
153 TokenType.USMALLINT,
154 TokenType.UTINYINT,
155 }
156
157 def _pivot_column_names(self, aggregations: t.List[exp.Expression]) -> t.List[str]:
158 if len(aggregations) == 1:
159 return super()._pivot_column_names(aggregations)
160 return pivot_column_names(aggregations, dialect="duckdb")
161
162 class Generator(generator.Generator):
163 JOIN_HINTS = False
164 TABLE_HINTS = False
165 LIMIT_FETCH = "LIMIT"
166 STRUCT_DELIMITER = ("(", ")")
167 RENAME_TABLE_WITH_DB = False
168
169 TRANSFORMS = {
170 **generator.Generator.TRANSFORMS,
171 exp.ApproxDistinct: approx_count_distinct_sql,
172 exp.Array: lambda self, e: self.func("ARRAY", e.expressions[0])
173 if isinstance(seq_get(e.expressions, 0), exp.Select)
174 else rename_func("LIST_VALUE")(self, e),
175 exp.ArraySize: rename_func("ARRAY_LENGTH"),
176 exp.ArraySort: _array_sort_sql,
177 exp.ArraySum: rename_func("LIST_SUM"),
178 exp.CommentColumnConstraint: no_comment_column_constraint_sql,
179 exp.CurrentDate: lambda self, e: "CURRENT_DATE",
180 exp.CurrentTime: lambda self, e: "CURRENT_TIME",
181 exp.CurrentTimestamp: lambda self, e: "CURRENT_TIMESTAMP",
182 exp.DayOfMonth: rename_func("DAYOFMONTH"),
183 exp.DayOfWeek: rename_func("DAYOFWEEK"),
184 exp.DayOfYear: rename_func("DAYOFYEAR"),
185 exp.DataType: _datatype_sql,
186 exp.DateAdd: _date_add_sql,
187 exp.DateDiff: lambda self, e: self.func(
188 "DATE_DIFF", f"'{e.args.get('unit', 'day')}'", e.expression, e.this
189 ),
190 exp.DateStrToDate: datestrtodate_sql,
191 exp.DateToDi: lambda self, e: f"CAST(STRFTIME({self.sql(e, 'this')}, {DuckDB.dateint_format}) AS INT)",
192 exp.DiToDate: lambda self, e: f"CAST(STRPTIME(CAST({self.sql(e, 'this')} AS TEXT), {DuckDB.dateint_format}) AS DATE)",
193 exp.Explode: rename_func("UNNEST"),
194 exp.IntDiv: lambda self, e: self.binary(e, "//"),
195 exp.JSONExtract: arrow_json_extract_sql,
196 exp.JSONExtractScalar: arrow_json_extract_scalar_sql,
197 exp.JSONBExtract: arrow_json_extract_sql,
198 exp.JSONBExtractScalar: arrow_json_extract_scalar_sql,
199 exp.LogicalOr: rename_func("BOOL_OR"),
200 exp.LogicalAnd: rename_func("BOOL_AND"),
201 exp.Properties: no_properties_sql,
202 exp.RegexpExtract: _regexp_extract_sql,
203 exp.RegexpLike: rename_func("REGEXP_MATCHES"),
204 exp.RegexpSplit: rename_func("STR_SPLIT_REGEX"),
205 exp.SafeDivide: no_safe_divide_sql,
206 exp.Split: rename_func("STR_SPLIT"),
207 exp.SortArray: _sort_array_sql,
208 exp.StrPosition: str_position_sql,
209 exp.StrToDate: lambda self, e: f"CAST({str_to_time_sql(self, e)} AS DATE)",
210 exp.StrToTime: str_to_time_sql,
211 exp.StrToUnix: lambda self, e: f"EPOCH(STRPTIME({self.sql(e, 'this')}, {self.format_time(e)}))",
212 exp.Struct: _struct_sql,
213 exp.TimestampTrunc: timestamptrunc_sql,
214 exp.TimeStrToDate: lambda self, e: f"CAST({self.sql(e, 'this')} AS DATE)",
215 exp.TimeStrToTime: timestrtotime_sql,
216 exp.TimeStrToUnix: lambda self, e: f"EPOCH(CAST({self.sql(e, 'this')} AS TIMESTAMP))",
217 exp.TimeToStr: lambda self, e: f"STRFTIME({self.sql(e, 'this')}, {self.format_time(e)})",
218 exp.TimeToUnix: rename_func("EPOCH"),
219 exp.TsOrDiToDi: lambda self, e: f"CAST(SUBSTR(REPLACE(CAST({self.sql(e, 'this')} AS TEXT), '-', ''), 1, 8) AS INT)",
220 exp.TsOrDsAdd: _ts_or_ds_add_sql,
221 exp.TsOrDsToDate: ts_or_ds_to_date_sql("duckdb"),
222 exp.UnixToStr: lambda self, e: f"STRFTIME(TO_TIMESTAMP({self.sql(e, 'this')}), {self.format_time(e)})",
223 exp.UnixToTime: rename_func("TO_TIMESTAMP"),
224 exp.UnixToTimeStr: lambda self, e: f"CAST(TO_TIMESTAMP({self.sql(e, 'this')}) AS TEXT)",
225 exp.WeekOfYear: rename_func("WEEKOFYEAR"),
226 }
227
228 TYPE_MAPPING = {
229 **generator.Generator.TYPE_MAPPING,
230 exp.DataType.Type.BINARY: "BLOB",
231 exp.DataType.Type.CHAR: "TEXT",
232 exp.DataType.Type.FLOAT: "REAL",
233 exp.DataType.Type.NCHAR: "TEXT",
234 exp.DataType.Type.NVARCHAR: "TEXT",
235 exp.DataType.Type.UINT: "UINTEGER",
236 exp.DataType.Type.VARBINARY: "BLOB",
237 exp.DataType.Type.VARCHAR: "TEXT",
238 }
239
240 STAR_MAPPING = {**generator.Generator.STAR_MAPPING, "except": "EXCLUDE"}
241
242 PROPERTIES_LOCATION = {
243 **generator.Generator.PROPERTIES_LOCATION,
244 exp.VolatileProperty: exp.Properties.Location.UNSUPPORTED,
245 }
246
247 def tablesample_sql(
248 self, expression: exp.TableSample, seed_prefix: str = "SEED", sep: str = " AS "
249 ) -> str:
250 return super().tablesample_sql(expression, seed_prefix="REPEATABLE", sep=sep)
251
[end of sqlglot/dialects/duckdb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py
--- a/sqlglot/dialects/duckdb.py
+++ b/sqlglot/dialects/duckdb.py
@@ -31,10 +31,11 @@
return f"CAST({this} AS DATE) + {self.sql(exp.Interval(this=expression.expression, unit=unit))}"
-def _date_add_sql(self: generator.Generator, expression: exp.DateAdd) -> str:
+def _date_delta_sql(self: generator.Generator, expression: exp.DateAdd | exp.DateSub) -> str:
this = self.sql(expression, "this")
unit = self.sql(expression, "unit").strip("'") or "DAY"
- return f"{this} + {self.sql(exp.Interval(this=expression.expression, unit=unit))}"
+ op = "+" if isinstance(expression, exp.DateAdd) else "-"
+ return f"{this} {op} {self.sql(exp.Interval(this=expression.expression, unit=unit))}"
def _array_sort_sql(self: generator.Generator, expression: exp.ArraySort) -> str:
@@ -183,7 +184,8 @@
exp.DayOfWeek: rename_func("DAYOFWEEK"),
exp.DayOfYear: rename_func("DAYOFYEAR"),
exp.DataType: _datatype_sql,
- exp.DateAdd: _date_add_sql,
+ exp.DateAdd: _date_delta_sql,
+ exp.DateSub: _date_delta_sql,
exp.DateDiff: lambda self, e: self.func(
"DATE_DIFF", f"'{e.args.get('unit', 'day')}'", e.expression, e.this
),
|
{"golden_diff": "diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py\n--- a/sqlglot/dialects/duckdb.py\n+++ b/sqlglot/dialects/duckdb.py\n@@ -31,10 +31,11 @@\n return f\"CAST({this} AS DATE) + {self.sql(exp.Interval(this=expression.expression, unit=unit))}\"\n \n \n-def _date_add_sql(self: generator.Generator, expression: exp.DateAdd) -> str:\n+def _date_delta_sql(self: generator.Generator, expression: exp.DateAdd | exp.DateSub) -> str:\n this = self.sql(expression, \"this\")\n unit = self.sql(expression, \"unit\").strip(\"'\") or \"DAY\"\n- return f\"{this} + {self.sql(exp.Interval(this=expression.expression, unit=unit))}\"\n+ op = \"+\" if isinstance(expression, exp.DateAdd) else \"-\"\n+ return f\"{this} {op} {self.sql(exp.Interval(this=expression.expression, unit=unit))}\"\n \n \n def _array_sort_sql(self: generator.Generator, expression: exp.ArraySort) -> str:\n@@ -183,7 +184,8 @@\n exp.DayOfWeek: rename_func(\"DAYOFWEEK\"),\n exp.DayOfYear: rename_func(\"DAYOFYEAR\"),\n exp.DataType: _datatype_sql,\n- exp.DateAdd: _date_add_sql,\n+ exp.DateAdd: _date_delta_sql,\n+ exp.DateSub: _date_delta_sql,\n exp.DateDiff: lambda self, e: self.func(\n \"DATE_DIFF\", f\"'{e.args.get('unit', 'day')}'\", e.expression, e.this\n ),\n", "issue": "DATE_SUB from BigQuery produces DuckDB code giving number of intervals, rather than subtracted date\n```\r\n>>> sqlglot.transpile(\"SELECT DATE_SUB(date_in_table, INTERVAL 5 DAY) AS five_days_ago from table;\", read=\"bigquery\", write=\"duckdb\")\r\n['SELECT DATE_SUB(date_in_table, 5, DAY) AS five_days_ago FROM table']\r\n>>> sqlglot.transpile(\"SELECT DATE_ADD(date_in_table, INTERVAL 5 DAY) AS five_days_from_now from table;\", read=\"bigquery\", write=\"duckdb\")\r\n['SELECT date_in_table + INTERVAL 5 DAY AS five_days_from_now FROM table']\r\n```\r\n\r\nBigQuery uses DATE_SUB with INTERVAL to subtract dates, whereas DuckDb uses the `-` operator with INTERVAL.\r\nAs you can see conversion from BigQuery DATE_ADD works as expected.\r\n\r\n**Official Documentation**\r\n* https://duckdb.org/docs/sql/functions/date.html\r\n* https://cloud.google.com/bigquery/docs/reference/standard-sql/date_functions#date_sub\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport typing as t\n\nfrom sqlglot import exp, generator, parser, tokens\nfrom sqlglot.dialects.dialect import (\n Dialect,\n approx_count_distinct_sql,\n arrow_json_extract_scalar_sql,\n arrow_json_extract_sql,\n datestrtodate_sql,\n format_time_lambda,\n no_comment_column_constraint_sql,\n no_properties_sql,\n no_safe_divide_sql,\n pivot_column_names,\n rename_func,\n str_position_sql,\n str_to_time_sql,\n timestamptrunc_sql,\n timestrtotime_sql,\n ts_or_ds_to_date_sql,\n)\nfrom sqlglot.helper import seq_get\nfrom sqlglot.tokens import TokenType\n\n\ndef _ts_or_ds_add_sql(self: generator.Generator, expression: exp.TsOrDsAdd) -> str:\n this = self.sql(expression, \"this\")\n unit = self.sql(expression, \"unit\").strip(\"'\") or \"DAY\"\n return f\"CAST({this} AS DATE) + {self.sql(exp.Interval(this=expression.expression, unit=unit))}\"\n\n\ndef _date_add_sql(self: generator.Generator, expression: exp.DateAdd) -> str:\n this = self.sql(expression, \"this\")\n unit = self.sql(expression, \"unit\").strip(\"'\") or \"DAY\"\n return f\"{this} + {self.sql(exp.Interval(this=expression.expression, unit=unit))}\"\n\n\ndef _array_sort_sql(self: generator.Generator, expression: exp.ArraySort) -> str:\n if expression.expression:\n self.unsupported(\"DUCKDB ARRAY_SORT does not 
support a comparator\")\n return f\"ARRAY_SORT({self.sql(expression, 'this')})\"\n\n\ndef _sort_array_sql(self: generator.Generator, expression: exp.SortArray) -> str:\n this = self.sql(expression, \"this\")\n if expression.args.get(\"asc\") == exp.false():\n return f\"ARRAY_REVERSE_SORT({this})\"\n return f\"ARRAY_SORT({this})\"\n\n\ndef _sort_array_reverse(args: t.List) -> exp.Expression:\n return exp.SortArray(this=seq_get(args, 0), asc=exp.false())\n\n\ndef _parse_date_diff(args: t.List) -> exp.Expression:\n return exp.DateDiff(\n this=seq_get(args, 2),\n expression=seq_get(args, 1),\n unit=seq_get(args, 0),\n )\n\n\ndef _struct_sql(self: generator.Generator, expression: exp.Struct) -> str:\n args = [\n f\"'{e.name or e.this.name}': {self.sql(e, 'expression')}\" for e in expression.expressions\n ]\n return f\"{{{', '.join(args)}}}\"\n\n\ndef _datatype_sql(self: generator.Generator, expression: exp.DataType) -> str:\n if expression.this == exp.DataType.Type.ARRAY:\n return f\"{self.expressions(expression, flat=True)}[]\"\n return self.datatype_sql(expression)\n\n\ndef _regexp_extract_sql(self: generator.Generator, expression: exp.RegexpExtract) -> str:\n bad_args = list(filter(expression.args.get, (\"position\", \"occurrence\")))\n if bad_args:\n self.unsupported(f\"REGEXP_EXTRACT does not support arg(s) {bad_args}\")\n\n return self.func(\n \"REGEXP_EXTRACT\",\n expression.args.get(\"this\"),\n expression.args.get(\"expression\"),\n expression.args.get(\"group\"),\n )\n\n\nclass DuckDB(Dialect):\n null_ordering = \"nulls_are_last\"\n\n class Tokenizer(tokens.Tokenizer):\n KEYWORDS = {\n **tokens.Tokenizer.KEYWORDS,\n \"~\": TokenType.RLIKE,\n \":=\": TokenType.EQ,\n \"//\": TokenType.DIV,\n \"ATTACH\": TokenType.COMMAND,\n \"BINARY\": TokenType.VARBINARY,\n \"BPCHAR\": TokenType.TEXT,\n \"BITSTRING\": TokenType.BIT,\n \"CHAR\": TokenType.TEXT,\n \"CHARACTER VARYING\": TokenType.TEXT,\n \"EXCLUDE\": TokenType.EXCEPT,\n \"INT1\": TokenType.TINYINT,\n \"LOGICAL\": TokenType.BOOLEAN,\n \"NUMERIC\": TokenType.DOUBLE,\n \"SIGNED\": TokenType.INT,\n \"STRING\": TokenType.VARCHAR,\n \"UBIGINT\": TokenType.UBIGINT,\n \"UINTEGER\": TokenType.UINT,\n \"USMALLINT\": TokenType.USMALLINT,\n \"UTINYINT\": TokenType.UTINYINT,\n }\n\n class Parser(parser.Parser):\n FUNCTIONS = {\n **parser.Parser.FUNCTIONS,\n \"ARRAY_LENGTH\": exp.ArraySize.from_arg_list,\n \"ARRAY_SORT\": exp.SortArray.from_arg_list,\n \"ARRAY_REVERSE_SORT\": _sort_array_reverse,\n \"DATEDIFF\": _parse_date_diff,\n \"DATE_DIFF\": _parse_date_diff,\n \"EPOCH\": exp.TimeToUnix.from_arg_list,\n \"EPOCH_MS\": lambda args: exp.UnixToTime(\n this=exp.Div(\n this=seq_get(args, 0),\n expression=exp.Literal.number(1000),\n )\n ),\n \"LIST_REVERSE_SORT\": _sort_array_reverse,\n \"LIST_SORT\": exp.SortArray.from_arg_list,\n \"LIST_VALUE\": exp.Array.from_arg_list,\n \"REGEXP_MATCHES\": exp.RegexpLike.from_arg_list,\n \"STRFTIME\": format_time_lambda(exp.TimeToStr, \"duckdb\"),\n \"STRING_SPLIT\": exp.Split.from_arg_list,\n \"STRING_SPLIT_REGEX\": exp.RegexpSplit.from_arg_list,\n \"STRING_TO_ARRAY\": exp.Split.from_arg_list,\n \"STRPTIME\": format_time_lambda(exp.StrToTime, \"duckdb\"),\n \"STRUCT_PACK\": exp.Struct.from_arg_list,\n \"STR_SPLIT\": exp.Split.from_arg_list,\n \"STR_SPLIT_REGEX\": exp.RegexpSplit.from_arg_list,\n \"TO_TIMESTAMP\": exp.UnixToTime.from_arg_list,\n \"UNNEST\": exp.Explode.from_arg_list,\n }\n\n TYPE_TOKENS = {\n *parser.Parser.TYPE_TOKENS,\n TokenType.UBIGINT,\n TokenType.UINT,\n TokenType.USMALLINT,\n 
TokenType.UTINYINT,\n }\n\n def _pivot_column_names(self, aggregations: t.List[exp.Expression]) -> t.List[str]:\n if len(aggregations) == 1:\n return super()._pivot_column_names(aggregations)\n return pivot_column_names(aggregations, dialect=\"duckdb\")\n\n class Generator(generator.Generator):\n JOIN_HINTS = False\n TABLE_HINTS = False\n LIMIT_FETCH = \"LIMIT\"\n STRUCT_DELIMITER = (\"(\", \")\")\n RENAME_TABLE_WITH_DB = False\n\n TRANSFORMS = {\n **generator.Generator.TRANSFORMS,\n exp.ApproxDistinct: approx_count_distinct_sql,\n exp.Array: lambda self, e: self.func(\"ARRAY\", e.expressions[0])\n if isinstance(seq_get(e.expressions, 0), exp.Select)\n else rename_func(\"LIST_VALUE\")(self, e),\n exp.ArraySize: rename_func(\"ARRAY_LENGTH\"),\n exp.ArraySort: _array_sort_sql,\n exp.ArraySum: rename_func(\"LIST_SUM\"),\n exp.CommentColumnConstraint: no_comment_column_constraint_sql,\n exp.CurrentDate: lambda self, e: \"CURRENT_DATE\",\n exp.CurrentTime: lambda self, e: \"CURRENT_TIME\",\n exp.CurrentTimestamp: lambda self, e: \"CURRENT_TIMESTAMP\",\n exp.DayOfMonth: rename_func(\"DAYOFMONTH\"),\n exp.DayOfWeek: rename_func(\"DAYOFWEEK\"),\n exp.DayOfYear: rename_func(\"DAYOFYEAR\"),\n exp.DataType: _datatype_sql,\n exp.DateAdd: _date_add_sql,\n exp.DateDiff: lambda self, e: self.func(\n \"DATE_DIFF\", f\"'{e.args.get('unit', 'day')}'\", e.expression, e.this\n ),\n exp.DateStrToDate: datestrtodate_sql,\n exp.DateToDi: lambda self, e: f\"CAST(STRFTIME({self.sql(e, 'this')}, {DuckDB.dateint_format}) AS INT)\",\n exp.DiToDate: lambda self, e: f\"CAST(STRPTIME(CAST({self.sql(e, 'this')} AS TEXT), {DuckDB.dateint_format}) AS DATE)\",\n exp.Explode: rename_func(\"UNNEST\"),\n exp.IntDiv: lambda self, e: self.binary(e, \"//\"),\n exp.JSONExtract: arrow_json_extract_sql,\n exp.JSONExtractScalar: arrow_json_extract_scalar_sql,\n exp.JSONBExtract: arrow_json_extract_sql,\n exp.JSONBExtractScalar: arrow_json_extract_scalar_sql,\n exp.LogicalOr: rename_func(\"BOOL_OR\"),\n exp.LogicalAnd: rename_func(\"BOOL_AND\"),\n exp.Properties: no_properties_sql,\n exp.RegexpExtract: _regexp_extract_sql,\n exp.RegexpLike: rename_func(\"REGEXP_MATCHES\"),\n exp.RegexpSplit: rename_func(\"STR_SPLIT_REGEX\"),\n exp.SafeDivide: no_safe_divide_sql,\n exp.Split: rename_func(\"STR_SPLIT\"),\n exp.SortArray: _sort_array_sql,\n exp.StrPosition: str_position_sql,\n exp.StrToDate: lambda self, e: f\"CAST({str_to_time_sql(self, e)} AS DATE)\",\n exp.StrToTime: str_to_time_sql,\n exp.StrToUnix: lambda self, e: f\"EPOCH(STRPTIME({self.sql(e, 'this')}, {self.format_time(e)}))\",\n exp.Struct: _struct_sql,\n exp.TimestampTrunc: timestamptrunc_sql,\n exp.TimeStrToDate: lambda self, e: f\"CAST({self.sql(e, 'this')} AS DATE)\",\n exp.TimeStrToTime: timestrtotime_sql,\n exp.TimeStrToUnix: lambda self, e: f\"EPOCH(CAST({self.sql(e, 'this')} AS TIMESTAMP))\",\n exp.TimeToStr: lambda self, e: f\"STRFTIME({self.sql(e, 'this')}, {self.format_time(e)})\",\n exp.TimeToUnix: rename_func(\"EPOCH\"),\n exp.TsOrDiToDi: lambda self, e: f\"CAST(SUBSTR(REPLACE(CAST({self.sql(e, 'this')} AS TEXT), '-', ''), 1, 8) AS INT)\",\n exp.TsOrDsAdd: _ts_or_ds_add_sql,\n exp.TsOrDsToDate: ts_or_ds_to_date_sql(\"duckdb\"),\n exp.UnixToStr: lambda self, e: f\"STRFTIME(TO_TIMESTAMP({self.sql(e, 'this')}), {self.format_time(e)})\",\n exp.UnixToTime: rename_func(\"TO_TIMESTAMP\"),\n exp.UnixToTimeStr: lambda self, e: f\"CAST(TO_TIMESTAMP({self.sql(e, 'this')}) AS TEXT)\",\n exp.WeekOfYear: rename_func(\"WEEKOFYEAR\"),\n }\n\n TYPE_MAPPING = {\n 
**generator.Generator.TYPE_MAPPING,\n exp.DataType.Type.BINARY: \"BLOB\",\n exp.DataType.Type.CHAR: \"TEXT\",\n exp.DataType.Type.FLOAT: \"REAL\",\n exp.DataType.Type.NCHAR: \"TEXT\",\n exp.DataType.Type.NVARCHAR: \"TEXT\",\n exp.DataType.Type.UINT: \"UINTEGER\",\n exp.DataType.Type.VARBINARY: \"BLOB\",\n exp.DataType.Type.VARCHAR: \"TEXT\",\n }\n\n STAR_MAPPING = {**generator.Generator.STAR_MAPPING, \"except\": \"EXCLUDE\"}\n\n PROPERTIES_LOCATION = {\n **generator.Generator.PROPERTIES_LOCATION,\n exp.VolatileProperty: exp.Properties.Location.UNSUPPORTED,\n }\n\n def tablesample_sql(\n self, expression: exp.TableSample, seed_prefix: str = \"SEED\", sep: str = \" AS \"\n ) -> str:\n return super().tablesample_sql(expression, seed_prefix=\"REPEATABLE\", sep=sep)\n", "path": "sqlglot/dialects/duckdb.py"}]}
| 3,938 | 369 |
gh_patches_debug_44
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-6179
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove the last remnants of Travis
**Bug description**
I get the impression that a few crumbs of Travis are still lying around:
* https://github.com/zestedesavoir/zds-site/blob/dev/zds/settings/travis_fixture.py
* https://github.com/zestedesavoir/zds-site/blob/fe854d9b006e5ca500a911c48e3b25b11154d926/scripts/define_function.sh#L13-L66
**Expected behavior**
Since we no longer use Travis, all of this should go away.
</issue>
<code>
[start of zds/settings/travis_fixture.py]
1 from .ci_test import *
2
3 LOGGING["loggers"]["zds.utils.templatetags.emarkdown"] = {
4 "level": "INFO",
5 "handlers": ["console"],
6 }
7
[end of zds/settings/travis_fixture.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zds/settings/travis_fixture.py b/zds/settings/travis_fixture.py
deleted file mode 100644
--- a/zds/settings/travis_fixture.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .ci_test import *
-
-LOGGING["loggers"]["zds.utils.templatetags.emarkdown"] = {
- "level": "INFO",
- "handlers": ["console"],
-}
|
{"golden_diff": "diff --git a/zds/settings/travis_fixture.py b/zds/settings/travis_fixture.py\ndeleted file mode 100644\n--- a/zds/settings/travis_fixture.py\n+++ /dev/null\n@@ -1,6 +0,0 @@\n-from .ci_test import *\n-\n-LOGGING[\"loggers\"][\"zds.utils.templatetags.emarkdown\"] = {\n- \"level\": \"INFO\",\n- \"handlers\": [\"console\"],\n-}\n", "issue": "Retirer les dernier restes de Travis\n**Description du bug**\r\n\r\nJ'ai l'impression qu'il reste quelques miettes de Travis :\r\n\r\n* https://github.com/zestedesavoir/zds-site/blob/dev/zds/settings/travis_fixture.py\r\n* https://github.com/zestedesavoir/zds-site/blob/fe854d9b006e5ca500a911c48e3b25b11154d926/scripts/define_function.sh#L13-L66\r\n\r\n**Comportement attendu**\r\n\r\nA priori, on ne se sert plus de Travis, donc tout \u00e7a devrait dispara\u00eetre.\r\n\n", "before_files": [{"content": "from .ci_test import *\n\nLOGGING[\"loggers\"][\"zds.utils.templatetags.emarkdown\"] = {\n \"level\": \"INFO\",\n \"handlers\": [\"console\"],\n}\n", "path": "zds/settings/travis_fixture.py"}]}
| 741 | 102 |
gh_patches_debug_43370
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1484
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
get_standings should return more flexible data
get_standings is a little inflexible because the data is constrained.
</issue>
<code>
[start of CTFd/utils/scores/__init__.py]
1 from sqlalchemy.sql.expression import union_all
2
3 from CTFd.cache import cache
4 from CTFd.models import Awards, Challenges, Solves, Teams, Users, db
5 from CTFd.utils import get_config
6 from CTFd.utils.dates import unix_time_to_utc
7 from CTFd.utils.modes import get_model
8
9
10 @cache.memoize(timeout=60)
11 def get_standings(count=None, admin=False):
12 """
13 Get standings as a list of tuples containing account_id, name, and score e.g. [(account_id, team_name, score)].
14
15 Ties are broken by who reached a given score first based on the solve ID. Two users can have the same score but one
16 user will have a solve ID that is before the others. That user will be considered the tie-winner.
17
18 Challenges & Awards with a value of zero are filtered out of the calculations to avoid incorrect tie breaks.
19 """
20 Model = get_model()
21
22 scores = (
23 db.session.query(
24 Solves.account_id.label("account_id"),
25 db.func.sum(Challenges.value).label("score"),
26 db.func.max(Solves.id).label("id"),
27 db.func.max(Solves.date).label("date"),
28 )
29 .join(Challenges)
30 .filter(Challenges.value != 0)
31 .group_by(Solves.account_id)
32 )
33
34 awards = (
35 db.session.query(
36 Awards.account_id.label("account_id"),
37 db.func.sum(Awards.value).label("score"),
38 db.func.max(Awards.id).label("id"),
39 db.func.max(Awards.date).label("date"),
40 )
41 .filter(Awards.value != 0)
42 .group_by(Awards.account_id)
43 )
44
45 """
46 Filter out solves and awards that are before a specific time point.
47 """
48 freeze = get_config("freeze")
49 if not admin and freeze:
50 scores = scores.filter(Solves.date < unix_time_to_utc(freeze))
51 awards = awards.filter(Awards.date < unix_time_to_utc(freeze))
52
53 """
54 Combine awards and solves with a union. They should have the same amount of columns
55 """
56 results = union_all(scores, awards).alias("results")
57
58 """
59 Sum each of the results by the team id to get their score.
60 """
61 sumscores = (
62 db.session.query(
63 results.columns.account_id,
64 db.func.sum(results.columns.score).label("score"),
65 db.func.max(results.columns.id).label("id"),
66 db.func.max(results.columns.date).label("date"),
67 )
68 .group_by(results.columns.account_id)
69 .subquery()
70 )
71
72 """
73 Admins can see scores for all users but the public cannot see banned users.
74
75 Filters out banned users.
76 Properly resolves value ties by ID.
77
78 Different databases treat time precision differently so resolve by the row ID instead.
79 """
80 if admin:
81 standings_query = (
82 db.session.query(
83 Model.id.label("account_id"),
84 Model.oauth_id.label("oauth_id"),
85 Model.name.label("name"),
86 Model.hidden,
87 Model.banned,
88 sumscores.columns.score,
89 )
90 .join(sumscores, Model.id == sumscores.columns.account_id)
91 .order_by(sumscores.columns.score.desc(), sumscores.columns.id)
92 )
93 else:
94 standings_query = (
95 db.session.query(
96 Model.id.label("account_id"),
97 Model.oauth_id.label("oauth_id"),
98 Model.name.label("name"),
99 sumscores.columns.score,
100 )
101 .join(sumscores, Model.id == sumscores.columns.account_id)
102 .filter(Model.banned == False, Model.hidden == False)
103 .order_by(sumscores.columns.score.desc(), sumscores.columns.id)
104 )
105
106 """
107 Only select a certain amount of users if asked.
108 """
109 if count is None:
110 standings = standings_query.all()
111 else:
112 standings = standings_query.limit(count).all()
113
114 return standings
115
116
117 @cache.memoize(timeout=60)
118 def get_team_standings(count=None, admin=False):
119 scores = (
120 db.session.query(
121 Solves.team_id.label("team_id"),
122 db.func.sum(Challenges.value).label("score"),
123 db.func.max(Solves.id).label("id"),
124 db.func.max(Solves.date).label("date"),
125 )
126 .join(Challenges)
127 .filter(Challenges.value != 0)
128 .group_by(Solves.team_id)
129 )
130
131 awards = (
132 db.session.query(
133 Awards.team_id.label("team_id"),
134 db.func.sum(Awards.value).label("score"),
135 db.func.max(Awards.id).label("id"),
136 db.func.max(Awards.date).label("date"),
137 )
138 .filter(Awards.value != 0)
139 .group_by(Awards.team_id)
140 )
141
142 freeze = get_config("freeze")
143 if not admin and freeze:
144 scores = scores.filter(Solves.date < unix_time_to_utc(freeze))
145 awards = awards.filter(Awards.date < unix_time_to_utc(freeze))
146
147 results = union_all(scores, awards).alias("results")
148
149 sumscores = (
150 db.session.query(
151 results.columns.team_id,
152 db.func.sum(results.columns.score).label("score"),
153 db.func.max(results.columns.id).label("id"),
154 db.func.max(results.columns.date).label("date"),
155 )
156 .group_by(results.columns.team_id)
157 .subquery()
158 )
159
160 if admin:
161 standings_query = (
162 db.session.query(
163 Teams.id.label("team_id"),
164 Teams.oauth_id.label("oauth_id"),
165 Teams.name.label("name"),
166 Teams.hidden,
167 Teams.banned,
168 sumscores.columns.score,
169 )
170 .join(sumscores, Teams.id == sumscores.columns.team_id)
171 .order_by(sumscores.columns.score.desc(), sumscores.columns.id)
172 )
173 else:
174 standings_query = (
175 db.session.query(
176 Teams.id.label("team_id"),
177 Teams.oauth_id.label("oauth_id"),
178 Teams.name.label("name"),
179 sumscores.columns.score,
180 )
181 .join(sumscores, Teams.id == sumscores.columns.team_id)
182 .filter(Teams.banned == False)
183 .filter(Teams.hidden == False)
184 .order_by(sumscores.columns.score.desc(), sumscores.columns.id)
185 )
186
187 if count is None:
188 standings = standings_query.all()
189 else:
190 standings = standings_query.limit(count).all()
191
192 return standings
193
194
195 @cache.memoize(timeout=60)
196 def get_user_standings(count=None, admin=False):
197 scores = (
198 db.session.query(
199 Solves.user_id.label("user_id"),
200 db.func.sum(Challenges.value).label("score"),
201 db.func.max(Solves.id).label("id"),
202 db.func.max(Solves.date).label("date"),
203 )
204 .join(Challenges)
205 .filter(Challenges.value != 0)
206 .group_by(Solves.user_id)
207 )
208
209 awards = (
210 db.session.query(
211 Awards.user_id.label("user_id"),
212 db.func.sum(Awards.value).label("score"),
213 db.func.max(Awards.id).label("id"),
214 db.func.max(Awards.date).label("date"),
215 )
216 .filter(Awards.value != 0)
217 .group_by(Awards.user_id)
218 )
219
220 freeze = get_config("freeze")
221 if not admin and freeze:
222 scores = scores.filter(Solves.date < unix_time_to_utc(freeze))
223 awards = awards.filter(Awards.date < unix_time_to_utc(freeze))
224
225 results = union_all(scores, awards).alias("results")
226
227 sumscores = (
228 db.session.query(
229 results.columns.user_id,
230 db.func.sum(results.columns.score).label("score"),
231 db.func.max(results.columns.id).label("id"),
232 db.func.max(results.columns.date).label("date"),
233 )
234 .group_by(results.columns.user_id)
235 .subquery()
236 )
237
238 if admin:
239 standings_query = (
240 db.session.query(
241 Users.id.label("user_id"),
242 Users.oauth_id.label("oauth_id"),
243 Users.name.label("name"),
244 Users.hidden,
245 Users.banned,
246 sumscores.columns.score,
247 )
248 .join(sumscores, Users.id == sumscores.columns.user_id)
249 .order_by(sumscores.columns.score.desc(), sumscores.columns.id)
250 )
251 else:
252 standings_query = (
253 db.session.query(
254 Users.id.label("user_id"),
255 Users.oauth_id.label("oauth_id"),
256 Users.name.label("name"),
257 sumscores.columns.score,
258 )
259 .join(sumscores, Users.id == sumscores.columns.user_id)
260 .filter(Users.banned == False, Users.hidden == False)
261 .order_by(sumscores.columns.score.desc(), sumscores.columns.id)
262 )
263
264 if count is None:
265 standings = standings_query.all()
266 else:
267 standings = standings_query.limit(count).all()
268
269 return standings
270
[end of CTFd/utils/scores/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/CTFd/utils/scores/__init__.py b/CTFd/utils/scores/__init__.py
--- a/CTFd/utils/scores/__init__.py
+++ b/CTFd/utils/scores/__init__.py
@@ -8,7 +8,7 @@
@cache.memoize(timeout=60)
-def get_standings(count=None, admin=False):
+def get_standings(count=None, admin=False, fields=[]):
"""
Get standings as a list of tuples containing account_id, name, and score e.g. [(account_id, team_name, score)].
@@ -86,6 +86,7 @@
Model.hidden,
Model.banned,
sumscores.columns.score,
+ *fields,
)
.join(sumscores, Model.id == sumscores.columns.account_id)
.order_by(sumscores.columns.score.desc(), sumscores.columns.id)
@@ -97,6 +98,7 @@
Model.oauth_id.label("oauth_id"),
Model.name.label("name"),
sumscores.columns.score,
+ *fields,
)
.join(sumscores, Model.id == sumscores.columns.account_id)
.filter(Model.banned == False, Model.hidden == False)
@@ -115,7 +117,7 @@
@cache.memoize(timeout=60)
-def get_team_standings(count=None, admin=False):
+def get_team_standings(count=None, admin=False, fields=[]):
scores = (
db.session.query(
Solves.team_id.label("team_id"),
@@ -166,6 +168,7 @@
Teams.hidden,
Teams.banned,
sumscores.columns.score,
+ *fields,
)
.join(sumscores, Teams.id == sumscores.columns.team_id)
.order_by(sumscores.columns.score.desc(), sumscores.columns.id)
@@ -177,6 +180,7 @@
Teams.oauth_id.label("oauth_id"),
Teams.name.label("name"),
sumscores.columns.score,
+ *fields,
)
.join(sumscores, Teams.id == sumscores.columns.team_id)
.filter(Teams.banned == False)
@@ -193,7 +197,7 @@
@cache.memoize(timeout=60)
-def get_user_standings(count=None, admin=False):
+def get_user_standings(count=None, admin=False, fields=[]):
scores = (
db.session.query(
Solves.user_id.label("user_id"),
@@ -244,6 +248,7 @@
Users.hidden,
Users.banned,
sumscores.columns.score,
+ *fields,
)
.join(sumscores, Users.id == sumscores.columns.user_id)
.order_by(sumscores.columns.score.desc(), sumscores.columns.id)
@@ -255,6 +260,7 @@
Users.oauth_id.label("oauth_id"),
Users.name.label("name"),
sumscores.columns.score,
+ *fields,
)
.join(sumscores, Users.id == sumscores.columns.user_id)
.filter(Users.banned == False, Users.hidden == False)
|
{"golden_diff": "diff --git a/CTFd/utils/scores/__init__.py b/CTFd/utils/scores/__init__.py\n--- a/CTFd/utils/scores/__init__.py\n+++ b/CTFd/utils/scores/__init__.py\n@@ -8,7 +8,7 @@\n \n \n @cache.memoize(timeout=60)\n-def get_standings(count=None, admin=False):\n+def get_standings(count=None, admin=False, fields=[]):\n \"\"\"\n Get standings as a list of tuples containing account_id, name, and score e.g. [(account_id, team_name, score)].\n \n@@ -86,6 +86,7 @@\n Model.hidden,\n Model.banned,\n sumscores.columns.score,\n+ *fields,\n )\n .join(sumscores, Model.id == sumscores.columns.account_id)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n@@ -97,6 +98,7 @@\n Model.oauth_id.label(\"oauth_id\"),\n Model.name.label(\"name\"),\n sumscores.columns.score,\n+ *fields,\n )\n .join(sumscores, Model.id == sumscores.columns.account_id)\n .filter(Model.banned == False, Model.hidden == False)\n@@ -115,7 +117,7 @@\n \n \n @cache.memoize(timeout=60)\n-def get_team_standings(count=None, admin=False):\n+def get_team_standings(count=None, admin=False, fields=[]):\n scores = (\n db.session.query(\n Solves.team_id.label(\"team_id\"),\n@@ -166,6 +168,7 @@\n Teams.hidden,\n Teams.banned,\n sumscores.columns.score,\n+ *fields,\n )\n .join(sumscores, Teams.id == sumscores.columns.team_id)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n@@ -177,6 +180,7 @@\n Teams.oauth_id.label(\"oauth_id\"),\n Teams.name.label(\"name\"),\n sumscores.columns.score,\n+ *fields,\n )\n .join(sumscores, Teams.id == sumscores.columns.team_id)\n .filter(Teams.banned == False)\n@@ -193,7 +197,7 @@\n \n \n @cache.memoize(timeout=60)\n-def get_user_standings(count=None, admin=False):\n+def get_user_standings(count=None, admin=False, fields=[]):\n scores = (\n db.session.query(\n Solves.user_id.label(\"user_id\"),\n@@ -244,6 +248,7 @@\n Users.hidden,\n Users.banned,\n sumscores.columns.score,\n+ *fields,\n )\n .join(sumscores, Users.id == sumscores.columns.user_id)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n@@ -255,6 +260,7 @@\n Users.oauth_id.label(\"oauth_id\"),\n Users.name.label(\"name\"),\n sumscores.columns.score,\n+ *fields,\n )\n .join(sumscores, Users.id == sumscores.columns.user_id)\n .filter(Users.banned == False, Users.hidden == False)\n", "issue": "get_standings should return more flexible data\nget_standings is a little inflexible because the data is constrained. \n", "before_files": [{"content": "from sqlalchemy.sql.expression import union_all\n\nfrom CTFd.cache import cache\nfrom CTFd.models import Awards, Challenges, Solves, Teams, Users, db\nfrom CTFd.utils import get_config\nfrom CTFd.utils.dates import unix_time_to_utc\nfrom CTFd.utils.modes import get_model\n\n\[email protected](timeout=60)\ndef get_standings(count=None, admin=False):\n \"\"\"\n Get standings as a list of tuples containing account_id, name, and score e.g. [(account_id, team_name, score)].\n\n Ties are broken by who reached a given score first based on the solve ID. Two users can have the same score but one\n user will have a solve ID that is before the others. 
That user will be considered the tie-winner.\n\n Challenges & Awards with a value of zero are filtered out of the calculations to avoid incorrect tie breaks.\n \"\"\"\n Model = get_model()\n\n scores = (\n db.session.query(\n Solves.account_id.label(\"account_id\"),\n db.func.sum(Challenges.value).label(\"score\"),\n db.func.max(Solves.id).label(\"id\"),\n db.func.max(Solves.date).label(\"date\"),\n )\n .join(Challenges)\n .filter(Challenges.value != 0)\n .group_by(Solves.account_id)\n )\n\n awards = (\n db.session.query(\n Awards.account_id.label(\"account_id\"),\n db.func.sum(Awards.value).label(\"score\"),\n db.func.max(Awards.id).label(\"id\"),\n db.func.max(Awards.date).label(\"date\"),\n )\n .filter(Awards.value != 0)\n .group_by(Awards.account_id)\n )\n\n \"\"\"\n Filter out solves and awards that are before a specific time point.\n \"\"\"\n freeze = get_config(\"freeze\")\n if not admin and freeze:\n scores = scores.filter(Solves.date < unix_time_to_utc(freeze))\n awards = awards.filter(Awards.date < unix_time_to_utc(freeze))\n\n \"\"\"\n Combine awards and solves with a union. They should have the same amount of columns\n \"\"\"\n results = union_all(scores, awards).alias(\"results\")\n\n \"\"\"\n Sum each of the results by the team id to get their score.\n \"\"\"\n sumscores = (\n db.session.query(\n results.columns.account_id,\n db.func.sum(results.columns.score).label(\"score\"),\n db.func.max(results.columns.id).label(\"id\"),\n db.func.max(results.columns.date).label(\"date\"),\n )\n .group_by(results.columns.account_id)\n .subquery()\n )\n\n \"\"\"\n Admins can see scores for all users but the public cannot see banned users.\n\n Filters out banned users.\n Properly resolves value ties by ID.\n\n Different databases treat time precision differently so resolve by the row ID instead.\n \"\"\"\n if admin:\n standings_query = (\n db.session.query(\n Model.id.label(\"account_id\"),\n Model.oauth_id.label(\"oauth_id\"),\n Model.name.label(\"name\"),\n Model.hidden,\n Model.banned,\n sumscores.columns.score,\n )\n .join(sumscores, Model.id == sumscores.columns.account_id)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n )\n else:\n standings_query = (\n db.session.query(\n Model.id.label(\"account_id\"),\n Model.oauth_id.label(\"oauth_id\"),\n Model.name.label(\"name\"),\n sumscores.columns.score,\n )\n .join(sumscores, Model.id == sumscores.columns.account_id)\n .filter(Model.banned == False, Model.hidden == False)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n )\n\n \"\"\"\n Only select a certain amount of users if asked.\n \"\"\"\n if count is None:\n standings = standings_query.all()\n else:\n standings = standings_query.limit(count).all()\n\n return standings\n\n\[email protected](timeout=60)\ndef get_team_standings(count=None, admin=False):\n scores = (\n db.session.query(\n Solves.team_id.label(\"team_id\"),\n db.func.sum(Challenges.value).label(\"score\"),\n db.func.max(Solves.id).label(\"id\"),\n db.func.max(Solves.date).label(\"date\"),\n )\n .join(Challenges)\n .filter(Challenges.value != 0)\n .group_by(Solves.team_id)\n )\n\n awards = (\n db.session.query(\n Awards.team_id.label(\"team_id\"),\n db.func.sum(Awards.value).label(\"score\"),\n db.func.max(Awards.id).label(\"id\"),\n db.func.max(Awards.date).label(\"date\"),\n )\n .filter(Awards.value != 0)\n .group_by(Awards.team_id)\n )\n\n freeze = get_config(\"freeze\")\n if not admin and freeze:\n scores = scores.filter(Solves.date < unix_time_to_utc(freeze))\n awards = 
awards.filter(Awards.date < unix_time_to_utc(freeze))\n\n results = union_all(scores, awards).alias(\"results\")\n\n sumscores = (\n db.session.query(\n results.columns.team_id,\n db.func.sum(results.columns.score).label(\"score\"),\n db.func.max(results.columns.id).label(\"id\"),\n db.func.max(results.columns.date).label(\"date\"),\n )\n .group_by(results.columns.team_id)\n .subquery()\n )\n\n if admin:\n standings_query = (\n db.session.query(\n Teams.id.label(\"team_id\"),\n Teams.oauth_id.label(\"oauth_id\"),\n Teams.name.label(\"name\"),\n Teams.hidden,\n Teams.banned,\n sumscores.columns.score,\n )\n .join(sumscores, Teams.id == sumscores.columns.team_id)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n )\n else:\n standings_query = (\n db.session.query(\n Teams.id.label(\"team_id\"),\n Teams.oauth_id.label(\"oauth_id\"),\n Teams.name.label(\"name\"),\n sumscores.columns.score,\n )\n .join(sumscores, Teams.id == sumscores.columns.team_id)\n .filter(Teams.banned == False)\n .filter(Teams.hidden == False)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n )\n\n if count is None:\n standings = standings_query.all()\n else:\n standings = standings_query.limit(count).all()\n\n return standings\n\n\[email protected](timeout=60)\ndef get_user_standings(count=None, admin=False):\n scores = (\n db.session.query(\n Solves.user_id.label(\"user_id\"),\n db.func.sum(Challenges.value).label(\"score\"),\n db.func.max(Solves.id).label(\"id\"),\n db.func.max(Solves.date).label(\"date\"),\n )\n .join(Challenges)\n .filter(Challenges.value != 0)\n .group_by(Solves.user_id)\n )\n\n awards = (\n db.session.query(\n Awards.user_id.label(\"user_id\"),\n db.func.sum(Awards.value).label(\"score\"),\n db.func.max(Awards.id).label(\"id\"),\n db.func.max(Awards.date).label(\"date\"),\n )\n .filter(Awards.value != 0)\n .group_by(Awards.user_id)\n )\n\n freeze = get_config(\"freeze\")\n if not admin and freeze:\n scores = scores.filter(Solves.date < unix_time_to_utc(freeze))\n awards = awards.filter(Awards.date < unix_time_to_utc(freeze))\n\n results = union_all(scores, awards).alias(\"results\")\n\n sumscores = (\n db.session.query(\n results.columns.user_id,\n db.func.sum(results.columns.score).label(\"score\"),\n db.func.max(results.columns.id).label(\"id\"),\n db.func.max(results.columns.date).label(\"date\"),\n )\n .group_by(results.columns.user_id)\n .subquery()\n )\n\n if admin:\n standings_query = (\n db.session.query(\n Users.id.label(\"user_id\"),\n Users.oauth_id.label(\"oauth_id\"),\n Users.name.label(\"name\"),\n Users.hidden,\n Users.banned,\n sumscores.columns.score,\n )\n .join(sumscores, Users.id == sumscores.columns.user_id)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n )\n else:\n standings_query = (\n db.session.query(\n Users.id.label(\"user_id\"),\n Users.oauth_id.label(\"oauth_id\"),\n Users.name.label(\"name\"),\n sumscores.columns.score,\n )\n .join(sumscores, Users.id == sumscores.columns.user_id)\n .filter(Users.banned == False, Users.hidden == False)\n .order_by(sumscores.columns.score.desc(), sumscores.columns.id)\n )\n\n if count is None:\n standings = standings_query.all()\n else:\n standings = standings_query.limit(count).all()\n\n return standings\n", "path": "CTFd/utils/scores/__init__.py"}]}
| 3,193 | 683 |
gh_patches_debug_10578
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-1167
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
We should disable the remap_user_ids command for the time being
We have changes on Staging that need to make their way to Production soon.
The `remap_user_ids` command is not working properly (#1165).
We should disable the command for now so we can deploy recent changes to Production.
</issue>
<code>
[start of django/cantusdb_project/main_app/management/commands/remap_user_ids.py]
1 from main_app.models import Source, Chant
2 from django.contrib.auth import get_user_model
3 from django.core.management.base import BaseCommand
4 from sys import stdout
5 from django.db.models.query import QuerySet
6 from typing import Optional
7
8 User = get_user_model()
9
10 USER_ID_MAPPING = {
11 # Fake user accounts with sequential numbering were created on NewCantus
12 # for OldCantus Indexers. In the time since user accounts were
13 # programmatically synced, new user accounts were created on OldCantus,
14 # which duplicated these IDs. Then, we manually created new user accounts
15 # on NewCantus for these newer users, with new IDs that don't match those
16 # in OldCantus.
17 #
18 # In this dictionary:
19 # - Keys represent the IDs of users recently created on OldCantus, which collide
20 # with those of NewCantus Indexers
21 # - Values represent the IDs of manually-created users in NewCantus.
22 251610: 251660,
23 251611: 251661,
24 251612: 251662,
25 251613: 251663,
26 251614: 251664,
27 251616: 251665,
28 251617: 251666,
29 251618: 251667,
30 251619: 251668,
31 251620: 251669,
32 251621: 251670,
33 251622: 251671,
34 251623: 251672,
35 251624: 251673,
36 251625: 251674,
37 251626: 251657,
38 251627: 251675,
39 251630: 251676,
40 251632: 251678,
41 251633: 251679,
42 251638: 251656,
43 251639: 251680,
44 251640: 251681,
45 251641: 251682,
46 251642: 251683,
47 251643: 251684,
48 251645: 251685,
49 }
50
51
52 def reassign_sources() -> None:
53 CHUNK_SIZE = 1_000
54 sources: QuerySet[Source] = Source.objects.all()
55 sources_count: int = sources.count()
56 start_index: int = 0
57 while start_index <= sources_count:
58 stdout.write(f"processing chunk with {start_index=}\n")
59 chunk: QuerySet[Source] = sources[start_index : start_index + CHUNK_SIZE]
60 for source in chunk:
61 old_creator: Optional[User] = source.created_by
62
63 updated_id: Optional[int] = None
64 try:
65 updated_id: int = USER_ID_MAPPING[old_creator.id]
66 except (
67 KeyError, # old_creator.id not in USER_ID_MAPPING
68 AttributeError, # old_creator is None
69 ):
70 pass
71
72 if updated_id is None:
73 # user ID doesn't need to be remapped
74 continue
75
76 updated_creator: Optional[User] = None
77 try:
78 updated_creator = User.objects.get(id=updated_id)
79 except (
80 User.DoesNotExist,
81 AttributeError,
82 ):
83 pass
84
85 source.created_by = updated_creator
86 source.save()
87 start_index += CHUNK_SIZE
88
89
90 def reassign_chants() -> None:
91 CHUNK_SIZE = 1_000
92 chants: QuerySet[Chant] = Chant.objects.all()
93 chants_count: int = chants.count()
94 start_index: int = 0
95 while start_index <= chants_count:
96 stdout.write(f"processing chunk with {start_index=}\n")
97 chunk: QuerySet[Chant] = chants[start_index : start_index + CHUNK_SIZE]
98 for chant in chunk:
99 old_creator: Optional[User] = chant.created_by
100
101 updated_id: Optional[int] = None
102 try:
103 updated_id: int = USER_ID_MAPPING[old_creator.id]
104 except (
105 KeyError, # old_creator.id not in USER_ID_MAPPING
106 AttributeError, # old_creator is None
107 ):
108 pass
109
110 if updated_id is None:
111 # user ID doesn't need to be remapped
112 continue
113
114 updated_creator: Optional[User] = None
115 try:
116 updated_creator = User.objects.get(id=updated_id)
117 except User.DoesNotExist:
118 pass
119
120 chant.created_by = updated_creator
121 chant.save()
122 start_index += CHUNK_SIZE
123
124
125 class Command(BaseCommand):
126 def handle(self, *args, **kwargs) -> None:
127 stdout.write("\n\n==== Reassigning Sources ====\n")
128 reassign_sources()
129 stdout.write("\n== All sources successfully remapped! ==\n")
130 stdout.write("\n\n==== Reassigning Chants ====\n")
131 reassign_chants()
132 stdout.write("\n== All chants successfully remapped! ==\n")
133
[end of django/cantusdb_project/main_app/management/commands/remap_user_ids.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/django/cantusdb_project/main_app/management/commands/remap_user_ids.py b/django/cantusdb_project/main_app/management/commands/remap_user_ids.py
--- a/django/cantusdb_project/main_app/management/commands/remap_user_ids.py
+++ b/django/cantusdb_project/main_app/management/commands/remap_user_ids.py
@@ -124,6 +124,11 @@
class Command(BaseCommand):
def handle(self, *args, **kwargs) -> None:
+ error_message = (
+ "As of late November 2023, this command is not working. "
+ "It has been temporarily disabled until the bugs have been worked out."
+ )
+ raise NotImplementedError(error_message)
stdout.write("\n\n==== Reassigning Sources ====\n")
reassign_sources()
stdout.write("\n== All sources successfully remapped! ==\n")
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/management/commands/remap_user_ids.py b/django/cantusdb_project/main_app/management/commands/remap_user_ids.py\n--- a/django/cantusdb_project/main_app/management/commands/remap_user_ids.py\n+++ b/django/cantusdb_project/main_app/management/commands/remap_user_ids.py\n@@ -124,6 +124,11 @@\n \n class Command(BaseCommand):\n def handle(self, *args, **kwargs) -> None:\n+ error_message = (\n+ \"As of late November 2023, this command is not working. \"\n+ \"It has been temporarily disabled until the bugs have been worked out.\"\n+ )\n+ raise NotImplementedError(error_message)\n stdout.write(\"\\n\\n==== Reassigning Sources ====\\n\")\n reassign_sources()\n stdout.write(\"\\n== All sources successfully remapped! ==\\n\")\n", "issue": "We should disable the remap_user_ids command for the time being\nWe have changes on Staging that need to make their way to Production soon.\r\n\r\nThe `remap_user_ids` command is not working properly (#1165).\r\n\r\nWe should disable the command for now so we can deploy recent changes to Production.\n", "before_files": [{"content": "from main_app.models import Source, Chant\nfrom django.contrib.auth import get_user_model\nfrom django.core.management.base import BaseCommand\nfrom sys import stdout\nfrom django.db.models.query import QuerySet\nfrom typing import Optional\n\nUser = get_user_model()\n\nUSER_ID_MAPPING = {\n # Fake user accounts with sequential numbering were created on NewCantus\n # for OldCantus Indexers. In the time since user accounts were\n # programmatically synced, new user accounts were created on OldCantus,\n # which duplicated these IDs. Then, we manually created new user accounts\n # on NewCantus for these newer users, with new IDs that don't match those\n # in OldCantus.\n #\n # In this dictionary:\n # - Keys represent the IDs of users recently created on OldCantus, which collide\n # with those of NewCantus Indexers\n # - Values represent the IDs of manually-created users in NewCantus.\n 251610: 251660,\n 251611: 251661,\n 251612: 251662,\n 251613: 251663,\n 251614: 251664,\n 251616: 251665,\n 251617: 251666,\n 251618: 251667,\n 251619: 251668,\n 251620: 251669,\n 251621: 251670,\n 251622: 251671,\n 251623: 251672,\n 251624: 251673,\n 251625: 251674,\n 251626: 251657,\n 251627: 251675,\n 251630: 251676,\n 251632: 251678,\n 251633: 251679,\n 251638: 251656,\n 251639: 251680,\n 251640: 251681,\n 251641: 251682,\n 251642: 251683,\n 251643: 251684,\n 251645: 251685,\n}\n\n\ndef reassign_sources() -> None:\n CHUNK_SIZE = 1_000\n sources: QuerySet[Source] = Source.objects.all()\n sources_count: int = sources.count()\n start_index: int = 0\n while start_index <= sources_count:\n stdout.write(f\"processing chunk with {start_index=}\\n\")\n chunk: QuerySet[Source] = sources[start_index : start_index + CHUNK_SIZE]\n for source in chunk:\n old_creator: Optional[User] = source.created_by\n\n updated_id: Optional[int] = None\n try:\n updated_id: int = USER_ID_MAPPING[old_creator.id]\n except (\n KeyError, # old_creator.id not in USER_ID_MAPPING\n AttributeError, # old_creator is None\n ):\n pass\n\n if updated_id is None:\n # user ID doesn't need to be remapped\n continue\n\n updated_creator: Optional[User] = None\n try:\n updated_creator = User.objects.get(id=updated_id)\n except (\n User.DoesNotExist,\n AttributeError,\n ):\n pass\n\n source.created_by = updated_creator\n source.save()\n start_index += CHUNK_SIZE\n\n\ndef reassign_chants() -> None:\n CHUNK_SIZE = 1_000\n chants: 
QuerySet[Chant] = Chant.objects.all()\n chants_count: int = chants.count()\n start_index: int = 0\n while start_index <= chants_count:\n stdout.write(f\"processing chunk with {start_index=}\\n\")\n chunk: QuerySet[Chant] = chants[start_index : start_index + CHUNK_SIZE]\n for chant in chunk:\n old_creator: Optional[User] = chant.created_by\n\n updated_id: Optional[int] = None\n try:\n updated_id: int = USER_ID_MAPPING[old_creator.id]\n except (\n KeyError, # old_creator.id not in USER_ID_MAPPING\n AttributeError, # old_creator is None\n ):\n pass\n\n if updated_id is None:\n # user ID doesn't need to be remapped\n continue\n\n updated_creator: Optional[User] = None\n try:\n updated_creator = User.objects.get(id=updated_id)\n except User.DoesNotExist:\n pass\n\n chant.created_by = updated_creator\n chant.save()\n start_index += CHUNK_SIZE\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **kwargs) -> None:\n stdout.write(\"\\n\\n==== Reassigning Sources ====\\n\")\n reassign_sources()\n stdout.write(\"\\n== All sources successfully remapped! ==\\n\")\n stdout.write(\"\\n\\n==== Reassigning Chants ====\\n\")\n reassign_chants()\n stdout.write(\"\\n== All chants successfully remapped! ==\\n\")\n", "path": "django/cantusdb_project/main_app/management/commands/remap_user_ids.py"}]}
| 2,197 | 207 |
gh_patches_debug_43470
|
rasdani/github-patches
|
git_diff
|
opendatacube__datacube-core-386
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No index driver found for 'default'
Greetings!
Using the latest Datacube version and libraries, indexing and ingestion run well, except that when I try to access the database in Jupyter I get the "No index driver found for 'default': 0 available: " error.
> RuntimeError Traceback (most recent call last)
> <ipython-input-2-f074b535470c> in <module>()
> 1
> 2 import datacube
> ----> 3 dc = datacube.Datacube(config='E:\Datacubes\datacube-core-develop\datacube.conf')
> 4 dc
>
> E:\Datacubes\datacube-core-develop\datacube\api\core.py in __init__(self, index, config, app, env, validate_connection)
> 116 index = index_connect(normalise_config(config),
> 117 application_name=app,
> --> 118 validate_connection=validate_connection)
> 119
> 120 self.index = index
>
> E:\Datacubes\datacube-core-develop\datacube\index\_api.py in index_connect(local_config, application_name, validate_connection)
> 37 raise RuntimeError(
> 38 "No index driver found for %r. %s available: %s" % (
> ---> 39 driver_name, len(index_drivers()), ', '.join(index_drivers())
> 40 )
> 41 )
>
> RuntimeError: No index driver found for 'default'. 0 available:
I've reinstalled PostgreSQL and all libraries, to no avail.
All suggestions are welcome.
**edit:** While ingesting I get this warning:
> datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.io.write::s3aio
> datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.io.write::s3aio_test
where do I update the driver from? Is it because of this plugin?
</issue>
<code>
[start of datacube/drivers/driver_cache.py]
1 from __future__ import absolute_import, print_function
2
3 import logging
4
5 from pkg_resources import iter_entry_points
6
7 _LOG = logging.getLogger(__name__)
8
9
10 def load_drivers(group):
11 """
12 Load available drivers for a given group name.
13
14 Gracefully handles:
15
16 - Driver module not able to be imported
17 - Driver init function throwing an exception or returning None
18
19 By having driver entry_points pointing to a function, we defer loading the driver
20 module or running any code until required.
21
22 :param str group: Name of the entry point group e.g. "datacube.plugins.io.read"
23
24 :returns: Dictionary String -> Driver Object
25 """
26
27 def safe_load(ep):
28 # pylint: disable=bare-except
29 try:
30 driver_init = ep.resolve()
31 except:
32 _LOG.warning('Failed to resolve driver %s::%s', group, ep.name)
33 return None
34
35 try:
36 driver = driver_init()
37 except:
38 _LOG.warning('Exception during driver init, driver name: %s::%s', group, ep.name)
39 return None
40
41 if driver is None:
42 _LOG.warning('Driver init returned None, driver name: %s::%s', group, ep.name)
43
44 return driver
45
46 def resolve_all(group):
47 for ep in iter_entry_points(group=group, name=None):
48 driver = safe_load(ep)
49 if driver is not None:
50 yield (ep.name, driver)
51
52 return dict((name, driver) for name, driver in resolve_all(group))
53
[end of datacube/drivers/driver_cache.py]
[start of datacube/drivers/indexes.py]
1 from __future__ import absolute_import
2
3 from .driver_cache import load_drivers
4
5
6 class IndexDriverCache(object):
7 def __init__(self, group):
8 self._drivers = load_drivers(group)
9
10 for driver in list(self._drivers.values()):
11 if hasattr(driver, 'aliases'):
12 for alias in driver.aliases:
13 self._drivers[alias] = driver
14
15 def __call__(self, name):
16 """
17 :returns: None if driver with a given name is not found
18
19 :param str name: Driver name
20 :param str fmt: Dataset format
21 :return: Returns WriterDriver
22 """
23 return self._drivers.get(name, None)
24
25 def drivers(self):
26 """ Returns list of driver names
27 """
28 return list(self._drivers.keys())
29
30
31 def index_cache():
32 """ Singleton for WriterDriverCache
33 """
34 # pylint: disable=protected-access
35 if not hasattr(index_cache, '_instance'):
36 index_cache._instance = IndexDriverCache('datacube.plugins.index')
37 return index_cache._instance
38
39
40 def index_drivers():
41 """ Returns list driver names
42 """
43 return index_cache().drivers()
44
45
46 def index_driver_by_name(name):
47 """ Lookup writer driver by name
48
49 :returns: Initialised writer driver instance
50 :returns: None if driver with this name doesn't exist
51 """
52 return index_cache()(name)
53
[end of datacube/drivers/indexes.py]
[start of setup.py]
1 #!/usr/bin/env python
2
3 import versioneer
4 from setuptools import setup, find_packages
5
6 tests_require = [
7 'compliance-checker',
8 'hypothesis',
9 'mock',
10 'objgraph',
11 'pycodestyle',
12 'pylint',
13 'pytest',
14 'pytest-cov',
15 'pytest-timeout',
16 ]
17
18 extras_require = {
19 'performance': ['ciso8601', 'bottleneck'],
20 'interactive': ['matplotlib', 'fiona'],
21 'distributed': ['distributed', 'dask[distributed]'],
22 'analytics': ['scipy', 'pyparsing', 'numexpr'],
23 'doc': ['Sphinx', 'setuptools'],
24 'replicas': ['paramiko', 'sshtunnel', 'tqdm'],
25 'celery': ['celery>=4', 'redis'],
26 's3': ['boto3==1.4.3', 'SharedArray', 'pathos', 'zstandard'],
27 'test': tests_require,
28 }
29 # An 'all' option, following ipython naming conventions.
30 extras_require['all'] = sorted(set(sum(extras_require.values(), [])))
31
32 setup(
33 name='datacube',
34 version=versioneer.get_version(),
35 cmdclass=versioneer.get_cmdclass(),
36 python_requires='>=3.5.2',
37
38 url='https://github.com/opendatacube/datacube-core',
39 author='AGDC Collaboration',
40 maintainer='AGDC Collaboration',
41 maintainer_email='',
42 description='An analysis environment for satellite and other earth observation data',
43 long_description=open('README.rst').read(),
44 license='Apache License 2.0',
45 classifiers=[
46 "Development Status :: 4 - Beta",
47 "Intended Audience :: Developers",
48 "Intended Audience :: Science/Research",
49 "License :: OSI Approved :: Apache Software License",
50 "Natural Language :: English",
51 "Operating System :: MacOS :: MacOS X",
52 "Operating System :: POSIX",
53 "Operating System :: POSIX :: BSD",
54 "Operating System :: POSIX :: Linux",
55 "Operating System :: Microsoft :: Windows",
56 "Programming Language :: Python",
57 "Programming Language :: Python :: 3",
58 "Programming Language :: Python :: 3.5",
59 "Programming Language :: Python :: 3.6",
60 "Topic :: Scientific/Engineering :: GIS",
61 "Topic :: Scientific/Engineering :: Information Analysis",
62 ],
63
64 packages=find_packages(
65 exclude=('tests', 'tests.*',
66 'integration_tests', 'integration_tests.*')
67 ),
68 package_data={
69 '': ['*.yaml', '*/*.yaml'],
70 },
71 scripts=[
72 'datacube_apps/scripts/pbs_helpers.sh'
73 ],
74 setup_requires=[
75 'pytest-runner'
76 ],
77 install_requires=[
78 'affine',
79 'cachetools',
80 'click>=5.0',
81 'cloudpickle>=0.4',
82 'dask[array]',
83 'gdal>=1.9',
84 'jsonschema',
85 'netcdf4',
86 'numpy',
87 'psycopg2',
88 'pypeg2',
89 'python-dateutil',
90 'pyyaml',
91 'rasterio>=0.9a10', # required for zip reading, 0.9 gets around 1.0a ordering problems
92 'singledispatch',
93 'sqlalchemy',
94 'xarray>=0.9', # >0.9 fixes most problems with `crs` attributes being lost
95 ],
96 extras_require=extras_require,
97 tests_require=tests_require,
98
99 entry_points={
100 'console_scripts': [
101 'datacube = datacube.scripts.cli_app:cli',
102 'datacube-search = datacube.scripts.search_tool:cli',
103 'datacube-stacker = datacube_apps.stacker:main',
104 'datacube-worker = datacube.execution.worker:main',
105 'datacube-fixer = datacube_apps.stacker:fixer_main',
106 'datacube-ncml = datacube_apps.ncml:ncml_app',
107 'pixeldrill = datacube_apps.pixeldrill:main [interactive]',
108 'movie_generator = datacube_apps.movie_generator:main',
109 'datacube-simple-replica = datacube_apps.simple_replica:replicate [replicas]'
110 ],
111 'datacube.plugins.io.read': [
112 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',
113 's3aio = datacube.drivers.s3.driver:reader_driver_init',
114 's3aio_test = datacube.drivers.s3.driver:reader_test_driver_init'
115 ],
116 'datacube.plugins.io.write': [
117 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',
118 's3aio = datacube.drivers.s3.driver:writer_driver_init',
119 's3aio_test = datacube.drivers.s3.driver:writer_test_driver_init',
120 ],
121 'datacube.plugins.index': [
122 'default = datacube.index.index:index_driver_init',
123 's3aio_index = datacube.drivers.s3aio_index:index_driver_init',
124 ],
125 },
126 )
127
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/datacube/drivers/driver_cache.py b/datacube/drivers/driver_cache.py
--- a/datacube/drivers/driver_cache.py
+++ b/datacube/drivers/driver_cache.py
@@ -2,7 +2,7 @@
import logging
-from pkg_resources import iter_entry_points
+from pkg_resources import iter_entry_points, DistributionNotFound
_LOG = logging.getLogger(__name__)
@@ -27,7 +27,11 @@
def safe_load(ep):
# pylint: disable=bare-except
try:
- driver_init = ep.resolve()
+ driver_init = ep.load()
+ except DistributionNotFound:
+ # This happens when entry points were marked with extra features,
+ # but extra feature were not requested for installation
+ return None
except:
_LOG.warning('Failed to resolve driver %s::%s', group, ep.name)
return None
diff --git a/datacube/drivers/indexes.py b/datacube/drivers/indexes.py
--- a/datacube/drivers/indexes.py
+++ b/datacube/drivers/indexes.py
@@ -7,6 +7,10 @@
def __init__(self, group):
self._drivers = load_drivers(group)
+ if len(self._drivers) == 0:
+ from datacube.index.index import index_driver_init
+ self._drivers = dict(default=index_driver_init())
+
for driver in list(self._drivers.values()):
if hasattr(driver, 'aliases'):
for alias in driver.aliases:
@@ -17,8 +21,7 @@
:returns: None if driver with a given name is not found
:param str name: Driver name
- :param str fmt: Dataset format
- :return: Returns WriterDriver
+ :return: Returns IndexDriver
"""
return self._drivers.get(name, None)
@@ -29,7 +32,7 @@
def index_cache():
- """ Singleton for WriterDriverCache
+ """ Singleton for IndexDriverCache
"""
# pylint: disable=protected-access
if not hasattr(index_cache, '_instance'):
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -2,6 +2,7 @@
import versioneer
from setuptools import setup, find_packages
+import os
tests_require = [
'compliance-checker',
@@ -29,6 +30,22 @@
# An 'all' option, following ipython naming conventions.
extras_require['all'] = sorted(set(sum(extras_require.values(), [])))
+extra_plugins = dict(read=[], write=[], index=[])
+
+if os.name != 'nt':
+ extra_plugins['read'].extend([
+ 's3aio = datacube.drivers.s3.driver:reader_driver_init [s3]',
+ 's3aio_test = datacube.drivers.s3.driver:reader_test_driver_init [s3]',
+ ])
+ extra_plugins['write'].extend([
+ 's3aio = datacube.drivers.s3.driver:writer_driver_init [s3]',
+ 's3aio_test = datacube.drivers.s3.driver:writer_test_driver_init [s3]',
+ ])
+
+ extra_plugins['index'].extend([
+ 's3aio_index = datacube.drivers.s3aio_index:index_driver_init [s3]',
+ ])
+
setup(
name='datacube',
version=versioneer.get_version(),
@@ -110,17 +127,15 @@
],
'datacube.plugins.io.read': [
'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',
- 's3aio = datacube.drivers.s3.driver:reader_driver_init',
- 's3aio_test = datacube.drivers.s3.driver:reader_test_driver_init'
+ *extra_plugins['read'],
],
'datacube.plugins.io.write': [
'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',
- 's3aio = datacube.drivers.s3.driver:writer_driver_init',
- 's3aio_test = datacube.drivers.s3.driver:writer_test_driver_init',
+ *extra_plugins['write'],
],
'datacube.plugins.index': [
'default = datacube.index.index:index_driver_init',
- 's3aio_index = datacube.drivers.s3aio_index:index_driver_init',
+ *extra_plugins['index'],
],
},
)
|
{"golden_diff": "diff --git a/datacube/drivers/driver_cache.py b/datacube/drivers/driver_cache.py\n--- a/datacube/drivers/driver_cache.py\n+++ b/datacube/drivers/driver_cache.py\n@@ -2,7 +2,7 @@\n \n import logging\n \n-from pkg_resources import iter_entry_points\n+from pkg_resources import iter_entry_points, DistributionNotFound\n \n _LOG = logging.getLogger(__name__)\n \n@@ -27,7 +27,11 @@\n def safe_load(ep):\n # pylint: disable=bare-except\n try:\n- driver_init = ep.resolve()\n+ driver_init = ep.load()\n+ except DistributionNotFound:\n+ # This happens when entry points were marked with extra features,\n+ # but extra feature were not requested for installation\n+ return None\n except:\n _LOG.warning('Failed to resolve driver %s::%s', group, ep.name)\n return None\ndiff --git a/datacube/drivers/indexes.py b/datacube/drivers/indexes.py\n--- a/datacube/drivers/indexes.py\n+++ b/datacube/drivers/indexes.py\n@@ -7,6 +7,10 @@\n def __init__(self, group):\n self._drivers = load_drivers(group)\n \n+ if len(self._drivers) == 0:\n+ from datacube.index.index import index_driver_init\n+ self._drivers = dict(default=index_driver_init())\n+\n for driver in list(self._drivers.values()):\n if hasattr(driver, 'aliases'):\n for alias in driver.aliases:\n@@ -17,8 +21,7 @@\n :returns: None if driver with a given name is not found\n \n :param str name: Driver name\n- :param str fmt: Dataset format\n- :return: Returns WriterDriver\n+ :return: Returns IndexDriver\n \"\"\"\n return self._drivers.get(name, None)\n \n@@ -29,7 +32,7 @@\n \n \n def index_cache():\n- \"\"\" Singleton for WriterDriverCache\n+ \"\"\" Singleton for IndexDriverCache\n \"\"\"\n # pylint: disable=protected-access\n if not hasattr(index_cache, '_instance'):\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -2,6 +2,7 @@\n \n import versioneer\n from setuptools import setup, find_packages\n+import os\n \n tests_require = [\n 'compliance-checker',\n@@ -29,6 +30,22 @@\n # An 'all' option, following ipython naming conventions.\n extras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n \n+extra_plugins = dict(read=[], write=[], index=[])\n+\n+if os.name != 'nt':\n+ extra_plugins['read'].extend([\n+ 's3aio = datacube.drivers.s3.driver:reader_driver_init [s3]',\n+ 's3aio_test = datacube.drivers.s3.driver:reader_test_driver_init [s3]',\n+ ])\n+ extra_plugins['write'].extend([\n+ 's3aio = datacube.drivers.s3.driver:writer_driver_init [s3]',\n+ 's3aio_test = datacube.drivers.s3.driver:writer_test_driver_init [s3]',\n+ ])\n+\n+ extra_plugins['index'].extend([\n+ 's3aio_index = datacube.drivers.s3aio_index:index_driver_init [s3]',\n+ ])\n+\n setup(\n name='datacube',\n version=versioneer.get_version(),\n@@ -110,17 +127,15 @@\n ],\n 'datacube.plugins.io.read': [\n 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',\n- 's3aio = datacube.drivers.s3.driver:reader_driver_init',\n- 's3aio_test = datacube.drivers.s3.driver:reader_test_driver_init'\n+ *extra_plugins['read'],\n ],\n 'datacube.plugins.io.write': [\n 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',\n- 's3aio = datacube.drivers.s3.driver:writer_driver_init',\n- 's3aio_test = datacube.drivers.s3.driver:writer_test_driver_init',\n+ *extra_plugins['write'],\n ],\n 'datacube.plugins.index': [\n 'default = datacube.index.index:index_driver_init',\n- 's3aio_index = datacube.drivers.s3aio_index:index_driver_init',\n+ *extra_plugins['index'],\n ],\n },\n )\n", "issue": "No index driver found for 'default'\nGreetings!\r\n\r\nUsing the latest 
Datacube version and libraries, indexing and ingestion runs well except when I try to access the database on Jupyter I get the \"No index driver found for 'default': 0 available: \" error.\r\n\r\n> RuntimeError Traceback (most recent call last)\r\n> ipython-input-2-f074b535470c> in <module>()\r\n> 1 \r\n> 2 import datacube\r\n> ----> 3 dc = datacube.Datacube(config='E:\\Datacubes\\datacube-core-develop\\datacube.conf')\r\n> 4 dc\r\n> \r\n> E:\\Datacubes\\datacube-core-develop\\datacube\\api\\core.py in __init__(self, index, config, app, env, validate_connection)\r\n> 116 index = index_connect(normalise_config(config),\r\n> 117 application_name=app,\r\n> --> 118 validate_connection=validate_connection)\r\n> 119 \r\n> 120 self.index = index\r\n> \r\n> E:\\Datacubes\\datacube-core-develop\\datacube\\index\\_api.py in index_connect(local_config, application_name, validate_connection)\r\n> 37 raise RuntimeError(\r\n> 38 \"No index driver found for %r. %s available: %s\" % (\r\n> ---> 39 driver_name, len(index_drivers()), ', '.join(index_drivers())\r\n> 40 )\r\n> 41 )\r\n> \r\n> RuntimeError: No index driver found for 'default'. 0 available: \r\n\r\nI've reinstalled PostgreSQL and all libraries, to no avail.\r\nAll suggestions are welcome.\r\n\r\n**edit:** While ingestion I get this warning\r\n\r\n> datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.io.write::s3aio\r\n> datacube.drivers.driver_cache WARNING Failed to resolve driver datacube.plugins.io.write::s3aio_test\r\n\r\nwhere do I update the driver from? Is it because of this plugin?\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport logging\n\nfrom pkg_resources import iter_entry_points\n\n_LOG = logging.getLogger(__name__)\n\n\ndef load_drivers(group):\n \"\"\"\n Load available drivers for a given group name.\n\n Gracefully handles:\n\n - Driver module not able to be imported\n - Driver init function throwing an exception or returning None\n\n By having driver entry_points pointing to a function, we defer loading the driver\n module or running any code until required.\n\n :param str group: Name of the entry point group e.g. 
\"datacube.plugins.io.read\"\n\n :returns: Dictionary String -> Driver Object\n \"\"\"\n\n def safe_load(ep):\n # pylint: disable=bare-except\n try:\n driver_init = ep.resolve()\n except:\n _LOG.warning('Failed to resolve driver %s::%s', group, ep.name)\n return None\n\n try:\n driver = driver_init()\n except:\n _LOG.warning('Exception during driver init, driver name: %s::%s', group, ep.name)\n return None\n\n if driver is None:\n _LOG.warning('Driver init returned None, driver name: %s::%s', group, ep.name)\n\n return driver\n\n def resolve_all(group):\n for ep in iter_entry_points(group=group, name=None):\n driver = safe_load(ep)\n if driver is not None:\n yield (ep.name, driver)\n\n return dict((name, driver) for name, driver in resolve_all(group))\n", "path": "datacube/drivers/driver_cache.py"}, {"content": "from __future__ import absolute_import\n\nfrom .driver_cache import load_drivers\n\n\nclass IndexDriverCache(object):\n def __init__(self, group):\n self._drivers = load_drivers(group)\n\n for driver in list(self._drivers.values()):\n if hasattr(driver, 'aliases'):\n for alias in driver.aliases:\n self._drivers[alias] = driver\n\n def __call__(self, name):\n \"\"\"\n :returns: None if driver with a given name is not found\n\n :param str name: Driver name\n :param str fmt: Dataset format\n :return: Returns WriterDriver\n \"\"\"\n return self._drivers.get(name, None)\n\n def drivers(self):\n \"\"\" Returns list of driver names\n \"\"\"\n return list(self._drivers.keys())\n\n\ndef index_cache():\n \"\"\" Singleton for WriterDriverCache\n \"\"\"\n # pylint: disable=protected-access\n if not hasattr(index_cache, '_instance'):\n index_cache._instance = IndexDriverCache('datacube.plugins.index')\n return index_cache._instance\n\n\ndef index_drivers():\n \"\"\" Returns list driver names\n \"\"\"\n return index_cache().drivers()\n\n\ndef index_driver_by_name(name):\n \"\"\" Lookup writer driver by name\n\n :returns: Initialised writer driver instance\n :returns: None if driver with this name doesn't exist\n \"\"\"\n return index_cache()(name)\n", "path": "datacube/drivers/indexes.py"}, {"content": "#!/usr/bin/env python\n\nimport versioneer\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'compliance-checker',\n 'hypothesis',\n 'mock',\n 'objgraph',\n 'pycodestyle',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-timeout',\n]\n\nextras_require = {\n 'performance': ['ciso8601', 'bottleneck'],\n 'interactive': ['matplotlib', 'fiona'],\n 'distributed': ['distributed', 'dask[distributed]'],\n 'analytics': ['scipy', 'pyparsing', 'numexpr'],\n 'doc': ['Sphinx', 'setuptools'],\n 'replicas': ['paramiko', 'sshtunnel', 'tqdm'],\n 'celery': ['celery>=4', 'redis'],\n 's3': ['boto3==1.4.3', 'SharedArray', 'pathos', 'zstandard'],\n 'test': tests_require,\n}\n# An 'all' option, following ipython naming conventions.\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\nsetup(\n name='datacube',\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n python_requires='>=3.5.2',\n\n url='https://github.com/opendatacube/datacube-core',\n author='AGDC Collaboration',\n maintainer='AGDC Collaboration',\n maintainer_email='',\n description='An analysis environment for satellite and other earth observation data',\n long_description=open('README.rst').read(),\n license='Apache License 2.0',\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved 
:: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering :: GIS\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n ],\n\n packages=find_packages(\n exclude=('tests', 'tests.*',\n 'integration_tests', 'integration_tests.*')\n ),\n package_data={\n '': ['*.yaml', '*/*.yaml'],\n },\n scripts=[\n 'datacube_apps/scripts/pbs_helpers.sh'\n ],\n setup_requires=[\n 'pytest-runner'\n ],\n install_requires=[\n 'affine',\n 'cachetools',\n 'click>=5.0',\n 'cloudpickle>=0.4',\n 'dask[array]',\n 'gdal>=1.9',\n 'jsonschema',\n 'netcdf4',\n 'numpy',\n 'psycopg2',\n 'pypeg2',\n 'python-dateutil',\n 'pyyaml',\n 'rasterio>=0.9a10', # required for zip reading, 0.9 gets around 1.0a ordering problems\n 'singledispatch',\n 'sqlalchemy',\n 'xarray>=0.9', # >0.9 fixes most problems with `crs` attributes being lost\n ],\n extras_require=extras_require,\n tests_require=tests_require,\n\n entry_points={\n 'console_scripts': [\n 'datacube = datacube.scripts.cli_app:cli',\n 'datacube-search = datacube.scripts.search_tool:cli',\n 'datacube-stacker = datacube_apps.stacker:main',\n 'datacube-worker = datacube.execution.worker:main',\n 'datacube-fixer = datacube_apps.stacker:fixer_main',\n 'datacube-ncml = datacube_apps.ncml:ncml_app',\n 'pixeldrill = datacube_apps.pixeldrill:main [interactive]',\n 'movie_generator = datacube_apps.movie_generator:main',\n 'datacube-simple-replica = datacube_apps.simple_replica:replicate [replicas]'\n ],\n 'datacube.plugins.io.read': [\n 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',\n 's3aio = datacube.drivers.s3.driver:reader_driver_init',\n 's3aio_test = datacube.drivers.s3.driver:reader_test_driver_init'\n ],\n 'datacube.plugins.io.write': [\n 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',\n 's3aio = datacube.drivers.s3.driver:writer_driver_init',\n 's3aio_test = datacube.drivers.s3.driver:writer_test_driver_init',\n ],\n 'datacube.plugins.index': [\n 'default = datacube.index.index:index_driver_init',\n 's3aio_index = datacube.drivers.s3aio_index:index_driver_init',\n ],\n },\n)\n", "path": "setup.py"}]}
| 3,218 | 984 |
gh_patches_debug_5603
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-15611
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Empty search entries are being created for device asset tags
### Deployment Type
NetBox Cloud
### NetBox Version
v3.7.4
### Python Version
3.11
### Steps to Reproduce
1. Create a new device and note its database ID
2. In the NetBox shell, inspect all search entries associated with it:
```python
ct = ContentType.objects.get_for_model(Device)
device_id = 107
entries = CachedValue.objects.filter(object_type=ct, object_id=device_id)
for entry in entries:
print(f'{entry.field}: {entry.value}')
```
### Expected Behavior
Only fields which have a meaningful value set should have search entries created.
### Observed Behavior
After creating a device with a description, I see three entries for it:
```
asset_tag: None
name: device1
description: asdasdasd
```
The value of `asset_tag` is null.
</issue>
<code>
[start of netbox/netbox/search/__init__.py]
1 from collections import namedtuple
2
3 from django.db import models
4
5 from ipam.fields import IPAddressField, IPNetworkField
6 from netbox.registry import registry
7
8 ObjectFieldValue = namedtuple('ObjectFieldValue', ('name', 'type', 'weight', 'value'))
9
10
11 class FieldTypes:
12 FLOAT = 'float'
13 INTEGER = 'int'
14 STRING = 'str'
15 INET = 'inet'
16 CIDR = 'cidr'
17
18
19 class LookupTypes:
20 PARTIAL = 'icontains'
21 EXACT = 'iexact'
22 STARTSWITH = 'istartswith'
23 ENDSWITH = 'iendswith'
24 REGEX = 'iregex'
25
26
27 class SearchIndex:
28 """
29 Base class for building search indexes.
30
31 Attributes:
32 model: The model class for which this index is used.
33 category: The label of the group under which this indexer is categorized (for form field display). If none,
34 the name of the model's app will be used.
35 fields: An iterable of two-tuples defining the model fields to be indexed and the weight associated with each.
36 display_attrs: An iterable of additional object attributes to include when displaying search results.
37 """
38 model = None
39 category = None
40 fields = ()
41 display_attrs = ()
42
43 @staticmethod
44 def get_field_type(instance, field_name):
45 """
46 Return the data type of the specified model field.
47 """
48 field_cls = instance._meta.get_field(field_name).__class__
49 if issubclass(field_cls, (models.FloatField, models.DecimalField)):
50 return FieldTypes.FLOAT
51 if issubclass(field_cls, IPAddressField):
52 return FieldTypes.INET
53 if issubclass(field_cls, IPNetworkField):
54 return FieldTypes.CIDR
55 if issubclass(field_cls, models.IntegerField):
56 return FieldTypes.INTEGER
57 return FieldTypes.STRING
58
59 @staticmethod
60 def get_field_value(instance, field_name):
61 """
62 Return the value of the specified model field as a string.
63 """
64 return str(getattr(instance, field_name))
65
66 @classmethod
67 def get_category(cls):
68 return cls.category or cls.model._meta.app_config.verbose_name
69
70 @classmethod
71 def to_cache(cls, instance, custom_fields=None):
72 """
73 Return a list of ObjectFieldValue representing the instance fields to be cached.
74
75 Args:
76 instance: The instance being cached.
77 custom_fields: An iterable of CustomFields to include when caching the instance. If None, all custom fields
78 defined for the model will be included. (This can also be provided during bulk caching to avoid looking
79 up the available custom fields for each instance.)
80 """
81 values = []
82
83 # Capture built-in fields
84 for name, weight in cls.fields:
85 type_ = cls.get_field_type(instance, name)
86 value = cls.get_field_value(instance, name)
87 if type_ and value:
88 values.append(
89 ObjectFieldValue(name, type_, weight, value)
90 )
91
92 # Capture custom fields
93 if getattr(instance, 'custom_field_data', None):
94 if custom_fields is None:
95 custom_fields = instance.custom_fields
96 for cf in custom_fields:
97 type_ = cf.search_type
98 value = instance.custom_field_data.get(cf.name)
99 weight = cf.search_weight
100 if type_ and value and weight:
101 values.append(
102 ObjectFieldValue(f'cf_{cf.name}', type_, weight, value)
103 )
104
105 return values
106
107
108 def get_indexer(model):
109 """
110 Get the SearchIndex class for the given model.
111 """
112 label = f'{model._meta.app_label}.{model._meta.model_name}'
113
114 return registry['search'][label]
115
116
117 def register_search(cls):
118 """
119 Decorator for registering a SearchIndex class.
120 """
121 model = cls.model
122 label = f'{model._meta.app_label}.{model._meta.model_name}'
123 registry['search'][label] = cls
124
125 return cls
126
[end of netbox/netbox/search/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/netbox/netbox/search/__init__.py b/netbox/netbox/search/__init__.py
--- a/netbox/netbox/search/__init__.py
+++ b/netbox/netbox/search/__init__.py
@@ -59,9 +59,10 @@
@staticmethod
def get_field_value(instance, field_name):
"""
- Return the value of the specified model field as a string.
+ Return the value of the specified model field as a string (or None).
"""
- return str(getattr(instance, field_name))
+ if value := getattr(instance, field_name):
+ return str(value)
@classmethod
def get_category(cls):
|
{"golden_diff": "diff --git a/netbox/netbox/search/__init__.py b/netbox/netbox/search/__init__.py\n--- a/netbox/netbox/search/__init__.py\n+++ b/netbox/netbox/search/__init__.py\n@@ -59,9 +59,10 @@\n @staticmethod\n def get_field_value(instance, field_name):\n \"\"\"\n- Return the value of the specified model field as a string.\n+ Return the value of the specified model field as a string (or None).\n \"\"\"\n- return str(getattr(instance, field_name))\n+ if value := getattr(instance, field_name):\n+ return str(value)\n \n @classmethod\n def get_category(cls):\n", "issue": "Empty search entries are being created for device asset tags\n### Deployment Type\n\nNetBox Cloud\n\n### NetBox Version\n\nv3.7.4\n\n### Python Version\n\n3.11\n\n### Steps to Reproduce\n\n1. Create a new device and note its database ID\r\n2. In the NetBox shell, inspect all search entries associated with it:\r\n\r\n```python\r\nct = ContentType.objects.get_for_model(Device)\r\ndevice_id = 107\r\nentries = CachedValue.objects.filter(object_type=ct, object_id=device_id)\r\nfor entry in entries:\r\n print(f'{entry.field}: {entry.value}')\r\n```\n\n### Expected Behavior\n\nOnly fields which have a meaningful value set should have search entries created.\n\n### Observed Behavior\n\nAfter creating a device with a description, I see three entries for it:\r\n\r\n```\r\nasset_tag: None\r\nname: device1\r\ndescription: asdasdasd\r\n```\r\n\r\nThe value of `asset_tag` is null.\n", "before_files": [{"content": "from collections import namedtuple\n\nfrom django.db import models\n\nfrom ipam.fields import IPAddressField, IPNetworkField\nfrom netbox.registry import registry\n\nObjectFieldValue = namedtuple('ObjectFieldValue', ('name', 'type', 'weight', 'value'))\n\n\nclass FieldTypes:\n FLOAT = 'float'\n INTEGER = 'int'\n STRING = 'str'\n INET = 'inet'\n CIDR = 'cidr'\n\n\nclass LookupTypes:\n PARTIAL = 'icontains'\n EXACT = 'iexact'\n STARTSWITH = 'istartswith'\n ENDSWITH = 'iendswith'\n REGEX = 'iregex'\n\n\nclass SearchIndex:\n \"\"\"\n Base class for building search indexes.\n\n Attributes:\n model: The model class for which this index is used.\n category: The label of the group under which this indexer is categorized (for form field display). 
If none,\n the name of the model's app will be used.\n fields: An iterable of two-tuples defining the model fields to be indexed and the weight associated with each.\n display_attrs: An iterable of additional object attributes to include when displaying search results.\n \"\"\"\n model = None\n category = None\n fields = ()\n display_attrs = ()\n\n @staticmethod\n def get_field_type(instance, field_name):\n \"\"\"\n Return the data type of the specified model field.\n \"\"\"\n field_cls = instance._meta.get_field(field_name).__class__\n if issubclass(field_cls, (models.FloatField, models.DecimalField)):\n return FieldTypes.FLOAT\n if issubclass(field_cls, IPAddressField):\n return FieldTypes.INET\n if issubclass(field_cls, IPNetworkField):\n return FieldTypes.CIDR\n if issubclass(field_cls, models.IntegerField):\n return FieldTypes.INTEGER\n return FieldTypes.STRING\n\n @staticmethod\n def get_field_value(instance, field_name):\n \"\"\"\n Return the value of the specified model field as a string.\n \"\"\"\n return str(getattr(instance, field_name))\n\n @classmethod\n def get_category(cls):\n return cls.category or cls.model._meta.app_config.verbose_name\n\n @classmethod\n def to_cache(cls, instance, custom_fields=None):\n \"\"\"\n Return a list of ObjectFieldValue representing the instance fields to be cached.\n\n Args:\n instance: The instance being cached.\n custom_fields: An iterable of CustomFields to include when caching the instance. If None, all custom fields\n defined for the model will be included. (This can also be provided during bulk caching to avoid looking\n up the available custom fields for each instance.)\n \"\"\"\n values = []\n\n # Capture built-in fields\n for name, weight in cls.fields:\n type_ = cls.get_field_type(instance, name)\n value = cls.get_field_value(instance, name)\n if type_ and value:\n values.append(\n ObjectFieldValue(name, type_, weight, value)\n )\n\n # Capture custom fields\n if getattr(instance, 'custom_field_data', None):\n if custom_fields is None:\n custom_fields = instance.custom_fields\n for cf in custom_fields:\n type_ = cf.search_type\n value = instance.custom_field_data.get(cf.name)\n weight = cf.search_weight\n if type_ and value and weight:\n values.append(\n ObjectFieldValue(f'cf_{cf.name}', type_, weight, value)\n )\n\n return values\n\n\ndef get_indexer(model):\n \"\"\"\n Get the SearchIndex class for the given model.\n \"\"\"\n label = f'{model._meta.app_label}.{model._meta.model_name}'\n\n return registry['search'][label]\n\n\ndef register_search(cls):\n \"\"\"\n Decorator for registering a SearchIndex class.\n \"\"\"\n model = cls.model\n label = f'{model._meta.app_label}.{model._meta.model_name}'\n registry['search'][label] = cls\n\n return cls\n", "path": "netbox/netbox/search/__init__.py"}]}
| 1,853 | 149 |
gh_patches_debug_9209
|
rasdani/github-patches
|
git_diff
|
dynaconf__dynaconf-223
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dynaconf.contrib.flask_dynaconf.DynaconfConfig to flask.config.Config
Hello, is there a way to convert a dynaconf.contrib.flask_dynaconf.DynaconfConfig object into a flask.config.Config one?
Otherwise, is there a way to convert dynaconf.contrib.flask_dynaconf.DynaconfConfig into a dict?
I have been struggling trying to pass a dynaconf.contrib.flask_dynaconf.DynaconfConfig to a Flask Cache constructor as a config. With flask.config.Config it works but with the dynaconf class it doesn't :-/.
cache = Cache().init_app(app, app.config)
</issue>
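A minimal sketch, not from the original report, of the conversion being asked about: since the dynaconf-backed config is meant to behave like a mapping, its items can be copied into a plain dict or into a stock `flask.config.Config`. It assumes the `app` and `Cache` objects from the report and that the config object supports ordinary mapping iteration, which, as the report itself suggests, may not hold on every dynaconf version.

```python
from flask.config import Config

# `app` and `Cache` are assumed from the report above (Flask app + cache extension).
# Copy whatever the mapping-style config exposes into a plain dict.
plain = {key: app.config[key] for key in app.config.keys()}

# Rebuild a stock Flask Config object from that dict.
native = Config(app.root_path)
native.update(plain)

# Hand the converted config to the extension, as in the report.
cache = Cache()
cache.init_app(app, config=native)
```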
<code>
[start of dynaconf/default_settings.py]
1 import importlib
2 import os
3 import sys
4 import warnings
5
6 from dynaconf.utils import raw_logger
7 from dynaconf.utils import RENAMED_VARS
8 from dynaconf.utils import warn_deprecations
9 from dynaconf.utils.files import find_file
10 from dynaconf.utils.parse_conf import parse_conf_data
11
12 try:
13 from dotenv import load_dotenv
14 except ImportError: # pragma: no cover
15 load_dotenv = lambda *args, **kwargs: None # noqa
16
17
18 def try_renamed(key, value, older_key, current_key):
19 if value is None:
20 if key == current_key:
21 if older_key in os.environ:
22 warnings.warn(
23 "{0} is deprecated please use {1}".format(
24 older_key, current_key
25 ),
26 DeprecationWarning,
27 )
28 value = os.environ[older_key]
29 return value
30
31
32 def get(key, default=None):
33 value = os.environ.get(key.upper())
34
35 # compatibility with renamed variables
36 for old, new in RENAMED_VARS.items():
37 value = try_renamed(key, value, old, new)
38
39 return (
40 parse_conf_data(value, tomlfy=True) if value is not None else default
41 )
42
43
44 def start_dotenv(obj=None, root_path=None):
45 # load_from_dotenv_if_installed
46 obj = obj or {}
47 _find_file = getattr(obj, "find_file", find_file)
48 root_path = (
49 root_path
50 or getattr(obj, "_root_path", None)
51 or get("ROOT_PATH_FOR_DYNACONF")
52 )
53 raw_logger().debug(
54 "Starting Dynaconf Dotenv %s",
55 "for {0}".format(root_path) if root_path else "Base",
56 )
57
58 dotenv_path = (
59 obj.get("DOTENV_PATH_FOR_DYNACONF")
60 or get("DOTENV_PATH_FOR_DYNACONF")
61 or _find_file(".env", project_root=root_path)
62 )
63
64 load_dotenv(
65 dotenv_path,
66 verbose=obj.get("DOTENV_VERBOSE_FOR_DYNACONF", False),
67 override=obj.get("DOTENV_OVERRIDE_FOR_DYNACONF", False),
68 )
69
70 warn_deprecations(os.environ)
71
72
73 def reload(*args, **kwargs):
74 start_dotenv(*args, **kwargs)
75 importlib.reload(sys.modules[__name__])
76
77
78 # default proj root
79 # pragma: no cover
80 ROOT_PATH_FOR_DYNACONF = get("ROOT_PATH_FOR_DYNACONF", None)
81
82 # Default settings file
83 default_paths = (
84 "settings.py,.secrets.py,"
85 "settings.toml,settings.tml,.secrets.toml,.secrets.tml,"
86 "settings.yaml,settings.yml,.secrets.yaml,.secrets.yml,"
87 "settings.ini,settings.conf,settings.properties,"
88 ".secrets.ini,.secrets.conf,.secrets.properties,"
89 "settings.json,.secrets.json"
90 )
91 SETTINGS_FILE_FOR_DYNACONF = get("SETTINGS_FILE_FOR_DYNACONF", default_paths)
92
93 # # ENV SETTINGS
94 # # In dynaconf 1.0.0 `NAMESPACE` got renamed to `ENV`
95
96 # The environment variable to switch current env
97 ENV_SWITCHER_FOR_DYNACONF = get(
98 "ENV_SWITCHER_FOR_DYNACONF", "ENV_FOR_DYNACONF"
99 )
100
101 # The current env by default is DEVELOPMENT
102 # to switch is needed to `export ENV_FOR_DYNACONF=PRODUCTION`
103 # or put that value in .env file
104 # this value is used only when reading files like .toml|yaml|ini|json
105 ENV_FOR_DYNACONF = get(ENV_SWITCHER_FOR_DYNACONF, "DEVELOPMENT")
106
107 # Default values is taken from DEFAULT pseudo env
108 # this value is used only when reading files like .toml|yaml|ini|json
109 DEFAULT_ENV_FOR_DYNACONF = get("DEFAULT_ENV_FOR_DYNACONF", "DEFAULT")
110
111 # Global values are taken from DYNACONF env used for exported envvars
112 # Values here overwrites all other envs
113 # This namespace is used for files and also envvars
114 ENVVAR_PREFIX_FOR_DYNACONF = get("ENVVAR_PREFIX_FOR_DYNACONF", "DYNACONF")
115
116 # The default encoding to open settings files
117 ENCODING_FOR_DYNACONF = get("ENCODING_FOR_DYNACONF", "utf-8")
118
119 # Merge objects on load
120 MERGE_ENABLED_FOR_DYNACONF = get("MERGE_ENABLED_FOR_DYNACONF", False)
121
122 # BY default `__` is the separator for nested env vars
123 # export `DYNACONF__DATABASE__server=server.com`
124 # export `DYNACONF__DATABASE__PORT=6666`
125 # Should result in settings.DATABASE == {'server': 'server.com', 'PORT': 6666}
126 # To disable it one can set `NESTED_SEPARATOR_FOR_DYNACONF=false`
127 NESTED_SEPARATOR_FOR_DYNACONF = get("NESTED_SEPARATOR_FOR_DYNACONF", "__")
128
129 # The env var specifying settings module
130 ENVVAR_FOR_DYNACONF = get("ENVVAR_FOR_DYNACONF", "SETTINGS_FILE_FOR_DYNACONF")
131
132 # Default values for redis configs
133 default_redis = {
134 "host": get("REDIS_HOST_FOR_DYNACONF", "localhost"),
135 "port": int(get("REDIS_PORT_FOR_DYNACONF", 6379)),
136 "db": int(get("REDIS_DB_FOR_DYNACONF", 0)),
137 "decode_responses": get("REDIS_DECODE_FOR_DYNACONF", True),
138 }
139 REDIS_FOR_DYNACONF = get("REDIS_FOR_DYNACONF", default_redis)
140 REDIS_ENABLED_FOR_DYNACONF = get("REDIS_ENABLED_FOR_DYNACONF", False)
141
142 # Hashicorp Vault Project
143 vault_scheme = get("VAULT_SCHEME_FOR_DYNACONF", "http")
144 vault_host = get("VAULT_HOST_FOR_DYNACONF", "localhost")
145 vault_port = get("VAULT_PORT_FOR_DYNACONF", "8200")
146 default_vault = {
147 "url": get(
148 "VAULT_URL_FOR_DYNACONF",
149 "{}://{}:{}".format(vault_scheme, vault_host, vault_port),
150 ),
151 "token": get("VAULT_TOKEN_FOR_DYNACONF", None),
152 "cert": get("VAULT_CERT_FOR_DYNACONF", None),
153 "verify": get("VAULT_VERIFY_FOR_DYNACONF", None),
154 "timeout": get("VAULT_TIMEOUT_FOR_DYNACONF", None),
155 "proxies": get("VAULT_PROXIES_FOR_DYNACONF", None),
156 "allow_redirects": get("VAULT_ALLOW_REDIRECTS_FOR_DYNACONF", None),
157 }
158 VAULT_FOR_DYNACONF = get("VAULT_FOR_DYNACONF", default_vault)
159 VAULT_ENABLED_FOR_DYNACONF = get("VAULT_ENABLED_FOR_DYNACONF", False)
160 VAULT_PATH_FOR_DYNACONF = get("VAULT_PATH_FOR_DYNACONF", "dynaconf")
161 VAULT_ROLE_ID_FOR_DYNACONF = get("VAULT_ROLE_ID_FOR_DYNACONF", None)
162 VAULT_SECRET_ID_FOR_DYNACONF = get("VAULT_SECRET_ID_FOR_DYNACONF", None)
163
164 # Only core loaders defined on this list will be invoked
165 core_loaders = ["YAML", "TOML", "INI", "JSON", "PY"]
166 CORE_LOADERS_FOR_DYNACONF = get("CORE_LOADERS_FOR_DYNACONF", core_loaders)
167
168 # External Loaders to read vars from different data stores
169 default_loaders = [
170 "dynaconf.loaders.env_loader",
171 # 'dynaconf.loaders.redis_loader'
172 # 'dynaconf.loaders.vault_loader'
173 ]
174 LOADERS_FOR_DYNACONF = get("LOADERS_FOR_DYNACONF", default_loaders)
175
176 # Errors in loaders should be silenced?
177 SILENT_ERRORS_FOR_DYNACONF = get("SILENT_ERRORS_FOR_DYNACONF", True)
178
179 # always fresh variables
180 FRESH_VARS_FOR_DYNACONF = get("FRESH_VARS_FOR_DYNACONF", [])
181
182 # debug
183 DEBUG_LEVEL_FOR_DYNACONF = get("DEBUG_LEVEL_FOR_DYNACONF", "NOTSET")
184
185 YAML = get("YAML", None)
186 TOML = get("TOML", None)
187 JSON = get("JSON", None)
188 INI = get("INI", None)
189
190 DOTENV_PATH_FOR_DYNACONF = get("DOTENV_PATH_FOR_DYNACONF", None)
191 DOTENV_VERBOSE_FOR_DYNACONF = get("DOTENV_VERBOSE_FOR_DYNACONF", False)
192 DOTENV_OVERRIDE_FOR_DYNACONF = get("DOTENV_OVERRIDE_FOR_DYNACONF", False)
193
194 # Currently this is only used by cli. INSTANCE_FOR_DYNACONF specifies python
195 # dotted path to custom LazySettings instance. Last dotted path item should be
196 # instance of LazySettings.
197 INSTANCE_FOR_DYNACONF = get("INSTANCE_FOR_DYNACONF", None)
198
199 # https://msg.pyyaml.org/load
200 YAML_LOADER_FOR_DYNACONF = get("YAML_LOADER_FOR_DYNACONF", "full_load")
201
202 # Use commentjson? https://commentjson.readthedocs.io/en/latest/
203 COMMENTJSON_ENABLED_FOR_DYNACONF = get(
204 "COMMENTJSON_ENABLED_FOR_DYNACONF", False
205 )
206
207 # Extra file, or list of files where to look for secrets
208 # useful for CI environment like jenkins
209 # where you can export this variable pointing to a local
210 # absolute path of the secrets file.
211 SECRETS_FOR_DYNACONF = get("SECRETS_FOR_DYNACONF", None)
212
213 # To include extra paths based on envvar
214 INCLUDES_FOR_DYNACONF = get("INCLUDES_FOR_DYNACONF", [])
215
216 # Files to skip if found on search tree
217 SKIP_FILES_FOR_DYNACONF = get("SKIP_FILES_FOR_DYNACONF", [])
218
219
220 # Backwards compatibility with renamed variables
221 for old, new in RENAMED_VARS.items():
222 setattr(sys.modules[__name__], old, locals()[new])
223
[end of dynaconf/default_settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dynaconf/default_settings.py b/dynaconf/default_settings.py
--- a/dynaconf/default_settings.py
+++ b/dynaconf/default_settings.py
@@ -182,11 +182,6 @@
# debug
DEBUG_LEVEL_FOR_DYNACONF = get("DEBUG_LEVEL_FOR_DYNACONF", "NOTSET")
-YAML = get("YAML", None)
-TOML = get("TOML", None)
-JSON = get("JSON", None)
-INI = get("INI", None)
-
DOTENV_PATH_FOR_DYNACONF = get("DOTENV_PATH_FOR_DYNACONF", None)
DOTENV_VERBOSE_FOR_DYNACONF = get("DOTENV_VERBOSE_FOR_DYNACONF", False)
DOTENV_OVERRIDE_FOR_DYNACONF = get("DOTENV_OVERRIDE_FOR_DYNACONF", False)
|
{"golden_diff": "diff --git a/dynaconf/default_settings.py b/dynaconf/default_settings.py\n--- a/dynaconf/default_settings.py\n+++ b/dynaconf/default_settings.py\n@@ -182,11 +182,6 @@\n # debug\n DEBUG_LEVEL_FOR_DYNACONF = get(\"DEBUG_LEVEL_FOR_DYNACONF\", \"NOTSET\")\n \n-YAML = get(\"YAML\", None)\n-TOML = get(\"TOML\", None)\n-JSON = get(\"JSON\", None)\n-INI = get(\"INI\", None)\n-\n DOTENV_PATH_FOR_DYNACONF = get(\"DOTENV_PATH_FOR_DYNACONF\", None)\n DOTENV_VERBOSE_FOR_DYNACONF = get(\"DOTENV_VERBOSE_FOR_DYNACONF\", False)\n DOTENV_OVERRIDE_FOR_DYNACONF = get(\"DOTENV_OVERRIDE_FOR_DYNACONF\", False)\n", "issue": "dynaconf.contrib.flask_dynaconf.DynaconfConfig to flask.config.Config\nHello, is there a way to convert a dynaconf.contrib.flask_dynaconf.DynaconfConfig object into a flask.config.Config one?\r\nOtherwise, is there a way to convert dynaconf.contrib.flask_dynaconf.DynaconfConfig into a dict?\r\n\r\nI have been struggling trying to pass a dynaconf.contrib.flask_dynaconf.DynaconfConfig to a Flask Cache constructor as a config. With flask.config.Config it works but with the dynaconf class it doesn't :-/.\r\n \r\ncache = Cache().init_app(app, app.config)\r\n\n", "before_files": [{"content": "import importlib\nimport os\nimport sys\nimport warnings\n\nfrom dynaconf.utils import raw_logger\nfrom dynaconf.utils import RENAMED_VARS\nfrom dynaconf.utils import warn_deprecations\nfrom dynaconf.utils.files import find_file\nfrom dynaconf.utils.parse_conf import parse_conf_data\n\ntry:\n from dotenv import load_dotenv\nexcept ImportError: # pragma: no cover\n load_dotenv = lambda *args, **kwargs: None # noqa\n\n\ndef try_renamed(key, value, older_key, current_key):\n if value is None:\n if key == current_key:\n if older_key in os.environ:\n warnings.warn(\n \"{0} is deprecated please use {1}\".format(\n older_key, current_key\n ),\n DeprecationWarning,\n )\n value = os.environ[older_key]\n return value\n\n\ndef get(key, default=None):\n value = os.environ.get(key.upper())\n\n # compatibility with renamed variables\n for old, new in RENAMED_VARS.items():\n value = try_renamed(key, value, old, new)\n\n return (\n parse_conf_data(value, tomlfy=True) if value is not None else default\n )\n\n\ndef start_dotenv(obj=None, root_path=None):\n # load_from_dotenv_if_installed\n obj = obj or {}\n _find_file = getattr(obj, \"find_file\", find_file)\n root_path = (\n root_path\n or getattr(obj, \"_root_path\", None)\n or get(\"ROOT_PATH_FOR_DYNACONF\")\n )\n raw_logger().debug(\n \"Starting Dynaconf Dotenv %s\",\n \"for {0}\".format(root_path) if root_path else \"Base\",\n )\n\n dotenv_path = (\n obj.get(\"DOTENV_PATH_FOR_DYNACONF\")\n or get(\"DOTENV_PATH_FOR_DYNACONF\")\n or _find_file(\".env\", project_root=root_path)\n )\n\n load_dotenv(\n dotenv_path,\n verbose=obj.get(\"DOTENV_VERBOSE_FOR_DYNACONF\", False),\n override=obj.get(\"DOTENV_OVERRIDE_FOR_DYNACONF\", False),\n )\n\n warn_deprecations(os.environ)\n\n\ndef reload(*args, **kwargs):\n start_dotenv(*args, **kwargs)\n importlib.reload(sys.modules[__name__])\n\n\n# default proj root\n# pragma: no cover\nROOT_PATH_FOR_DYNACONF = get(\"ROOT_PATH_FOR_DYNACONF\", None)\n\n# Default settings file\ndefault_paths = (\n \"settings.py,.secrets.py,\"\n \"settings.toml,settings.tml,.secrets.toml,.secrets.tml,\"\n \"settings.yaml,settings.yml,.secrets.yaml,.secrets.yml,\"\n \"settings.ini,settings.conf,settings.properties,\"\n \".secrets.ini,.secrets.conf,.secrets.properties,\"\n \"settings.json,.secrets.json\"\n)\nSETTINGS_FILE_FOR_DYNACONF = 
get(\"SETTINGS_FILE_FOR_DYNACONF\", default_paths)\n\n# # ENV SETTINGS\n# # In dynaconf 1.0.0 `NAMESPACE` got renamed to `ENV`\n\n# The environment variable to switch current env\nENV_SWITCHER_FOR_DYNACONF = get(\n \"ENV_SWITCHER_FOR_DYNACONF\", \"ENV_FOR_DYNACONF\"\n)\n\n# The current env by default is DEVELOPMENT\n# to switch is needed to `export ENV_FOR_DYNACONF=PRODUCTION`\n# or put that value in .env file\n# this value is used only when reading files like .toml|yaml|ini|json\nENV_FOR_DYNACONF = get(ENV_SWITCHER_FOR_DYNACONF, \"DEVELOPMENT\")\n\n# Default values is taken from DEFAULT pseudo env\n# this value is used only when reading files like .toml|yaml|ini|json\nDEFAULT_ENV_FOR_DYNACONF = get(\"DEFAULT_ENV_FOR_DYNACONF\", \"DEFAULT\")\n\n# Global values are taken from DYNACONF env used for exported envvars\n# Values here overwrites all other envs\n# This namespace is used for files and also envvars\nENVVAR_PREFIX_FOR_DYNACONF = get(\"ENVVAR_PREFIX_FOR_DYNACONF\", \"DYNACONF\")\n\n# The default encoding to open settings files\nENCODING_FOR_DYNACONF = get(\"ENCODING_FOR_DYNACONF\", \"utf-8\")\n\n# Merge objects on load\nMERGE_ENABLED_FOR_DYNACONF = get(\"MERGE_ENABLED_FOR_DYNACONF\", False)\n\n# BY default `__` is the separator for nested env vars\n# export `DYNACONF__DATABASE__server=server.com`\n# export `DYNACONF__DATABASE__PORT=6666`\n# Should result in settings.DATABASE == {'server': 'server.com', 'PORT': 6666}\n# To disable it one can set `NESTED_SEPARATOR_FOR_DYNACONF=false`\nNESTED_SEPARATOR_FOR_DYNACONF = get(\"NESTED_SEPARATOR_FOR_DYNACONF\", \"__\")\n\n# The env var specifying settings module\nENVVAR_FOR_DYNACONF = get(\"ENVVAR_FOR_DYNACONF\", \"SETTINGS_FILE_FOR_DYNACONF\")\n\n# Default values for redis configs\ndefault_redis = {\n \"host\": get(\"REDIS_HOST_FOR_DYNACONF\", \"localhost\"),\n \"port\": int(get(\"REDIS_PORT_FOR_DYNACONF\", 6379)),\n \"db\": int(get(\"REDIS_DB_FOR_DYNACONF\", 0)),\n \"decode_responses\": get(\"REDIS_DECODE_FOR_DYNACONF\", True),\n}\nREDIS_FOR_DYNACONF = get(\"REDIS_FOR_DYNACONF\", default_redis)\nREDIS_ENABLED_FOR_DYNACONF = get(\"REDIS_ENABLED_FOR_DYNACONF\", False)\n\n# Hashicorp Vault Project\nvault_scheme = get(\"VAULT_SCHEME_FOR_DYNACONF\", \"http\")\nvault_host = get(\"VAULT_HOST_FOR_DYNACONF\", \"localhost\")\nvault_port = get(\"VAULT_PORT_FOR_DYNACONF\", \"8200\")\ndefault_vault = {\n \"url\": get(\n \"VAULT_URL_FOR_DYNACONF\",\n \"{}://{}:{}\".format(vault_scheme, vault_host, vault_port),\n ),\n \"token\": get(\"VAULT_TOKEN_FOR_DYNACONF\", None),\n \"cert\": get(\"VAULT_CERT_FOR_DYNACONF\", None),\n \"verify\": get(\"VAULT_VERIFY_FOR_DYNACONF\", None),\n \"timeout\": get(\"VAULT_TIMEOUT_FOR_DYNACONF\", None),\n \"proxies\": get(\"VAULT_PROXIES_FOR_DYNACONF\", None),\n \"allow_redirects\": get(\"VAULT_ALLOW_REDIRECTS_FOR_DYNACONF\", None),\n}\nVAULT_FOR_DYNACONF = get(\"VAULT_FOR_DYNACONF\", default_vault)\nVAULT_ENABLED_FOR_DYNACONF = get(\"VAULT_ENABLED_FOR_DYNACONF\", False)\nVAULT_PATH_FOR_DYNACONF = get(\"VAULT_PATH_FOR_DYNACONF\", \"dynaconf\")\nVAULT_ROLE_ID_FOR_DYNACONF = get(\"VAULT_ROLE_ID_FOR_DYNACONF\", None)\nVAULT_SECRET_ID_FOR_DYNACONF = get(\"VAULT_SECRET_ID_FOR_DYNACONF\", None)\n\n# Only core loaders defined on this list will be invoked\ncore_loaders = [\"YAML\", \"TOML\", \"INI\", \"JSON\", \"PY\"]\nCORE_LOADERS_FOR_DYNACONF = get(\"CORE_LOADERS_FOR_DYNACONF\", core_loaders)\n\n# External Loaders to read vars from different data stores\ndefault_loaders = [\n \"dynaconf.loaders.env_loader\",\n # 
'dynaconf.loaders.redis_loader'\n # 'dynaconf.loaders.vault_loader'\n]\nLOADERS_FOR_DYNACONF = get(\"LOADERS_FOR_DYNACONF\", default_loaders)\n\n# Errors in loaders should be silenced?\nSILENT_ERRORS_FOR_DYNACONF = get(\"SILENT_ERRORS_FOR_DYNACONF\", True)\n\n# always fresh variables\nFRESH_VARS_FOR_DYNACONF = get(\"FRESH_VARS_FOR_DYNACONF\", [])\n\n# debug\nDEBUG_LEVEL_FOR_DYNACONF = get(\"DEBUG_LEVEL_FOR_DYNACONF\", \"NOTSET\")\n\nYAML = get(\"YAML\", None)\nTOML = get(\"TOML\", None)\nJSON = get(\"JSON\", None)\nINI = get(\"INI\", None)\n\nDOTENV_PATH_FOR_DYNACONF = get(\"DOTENV_PATH_FOR_DYNACONF\", None)\nDOTENV_VERBOSE_FOR_DYNACONF = get(\"DOTENV_VERBOSE_FOR_DYNACONF\", False)\nDOTENV_OVERRIDE_FOR_DYNACONF = get(\"DOTENV_OVERRIDE_FOR_DYNACONF\", False)\n\n# Currently this is only used by cli. INSTANCE_FOR_DYNACONF specifies python\n# dotted path to custom LazySettings instance. Last dotted path item should be\n# instance of LazySettings.\nINSTANCE_FOR_DYNACONF = get(\"INSTANCE_FOR_DYNACONF\", None)\n\n# https://msg.pyyaml.org/load\nYAML_LOADER_FOR_DYNACONF = get(\"YAML_LOADER_FOR_DYNACONF\", \"full_load\")\n\n# Use commentjson? https://commentjson.readthedocs.io/en/latest/\nCOMMENTJSON_ENABLED_FOR_DYNACONF = get(\n \"COMMENTJSON_ENABLED_FOR_DYNACONF\", False\n)\n\n# Extra file, or list of files where to look for secrets\n# useful for CI environment like jenkins\n# where you can export this variable pointing to a local\n# absolute path of the secrets file.\nSECRETS_FOR_DYNACONF = get(\"SECRETS_FOR_DYNACONF\", None)\n\n# To include extra paths based on envvar\nINCLUDES_FOR_DYNACONF = get(\"INCLUDES_FOR_DYNACONF\", [])\n\n# Files to skip if found on search tree\nSKIP_FILES_FOR_DYNACONF = get(\"SKIP_FILES_FOR_DYNACONF\", [])\n\n\n# Backwards compatibility with renamed variables\nfor old, new in RENAMED_VARS.items():\n setattr(sys.modules[__name__], old, locals()[new])\n", "path": "dynaconf/default_settings.py"}]}
| 3,541 | 192 |
gh_patches_debug_11540
|
rasdani/github-patches
|
git_diff
|
plotly__dash-1493
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] dash doesn't get imported when a file named "org.py", "dash.py", or "test.py" with specific content is present in the current directory // "AttributeError: module 'dash' has no attribute 'Dash'"
**Describe your context**
```
dash (1.9.1)
dash-core-components (1.8.1)
dash-html-components (1.0.2)
dash-renderer (1.2.4)
dash-table (4.6.1)
```
**Describe the bug**
If a file named ``org.py`` is present in the current directory with the following content:
```
import dash_core_components as dcc
```
then dash doesn't import and I get the following message:
```
>>> import dash
Dash was not successfully imported. Make sure you don't have a file named
'dash.py' in your current directory.
```
**Expected behavior**
dash should import without any error.
**Additional info**
- The org.py is never imported
- If I rename the file to a different name dash get imported without any problem.
- The problem is shown also with ``import dash_html_components as html```
- The problem is shown either on Windows and in Linux
- Tested with python3.4, python3.6, python3.8
**Steps to replicate the problem on Linux**
```
$ mkdir mytest
$ cd mytest
$ echo "import dash_core_components as dcc" > org.py
$ python3 -m venv venv
$ . venv/bin/activate
(venv) $ pip install dash
(venv) $ python
Python 3.4.6 (default, Mar 01 2017, 16:52:22) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dash
Dash was not successfully imported. Make sure you don't have a file named
'dash.py' in your current directory.
(venv) $
```
if I rename the file the import works:
```
(venv) $ mv org.py othername.py
(venv) $ python
Python 3.4.6 (default, Mar 01 2017, 16:52:22) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dash
>>>
```
</issue>
<code>
[start of dash/__init__.py]
1 from .dash import Dash, no_update # noqa: F401
2 from . import dependencies # noqa: F401
3 from . import development # noqa: F401
4 from . import exceptions # noqa: F401
5 from . import resources # noqa: F401
6 from .version import __version__ # noqa: F401
7 from ._callback_context import callback_context # noqa: F401
8
[end of dash/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dash/__init__.py b/dash/__init__.py
--- a/dash/__init__.py
+++ b/dash/__init__.py
@@ -1,7 +1,11 @@
-from .dash import Dash, no_update # noqa: F401
-from . import dependencies # noqa: F401
-from . import development # noqa: F401
-from . import exceptions # noqa: F401
-from . import resources # noqa: F401
-from .version import __version__ # noqa: F401
-from ._callback_context import callback_context # noqa: F401
+# pylint: disable=C0413
+# __plotly_dash is for the "make sure you don't have a dash.py" check
+# must come before any other imports.
+__plotly_dash = True
+from .dash import Dash, no_update # noqa: F401,E402
+from . import dependencies # noqa: F401,E402
+from . import development # noqa: F401,E402
+from . import exceptions # noqa: F401,E402
+from . import resources # noqa: F401,E402
+from .version import __version__ # noqa: F401,E402
+from ._callback_context import callback_context # noqa: F401,E402
|
{"golden_diff": "diff --git a/dash/__init__.py b/dash/__init__.py\n--- a/dash/__init__.py\n+++ b/dash/__init__.py\n@@ -1,7 +1,11 @@\n-from .dash import Dash, no_update # noqa: F401\n-from . import dependencies # noqa: F401\n-from . import development # noqa: F401\n-from . import exceptions # noqa: F401\n-from . import resources # noqa: F401\n-from .version import __version__ # noqa: F401\n-from ._callback_context import callback_context # noqa: F401\n+# pylint: disable=C0413\n+# __plotly_dash is for the \"make sure you don't have a dash.py\" check\n+# must come before any other imports.\n+__plotly_dash = True\n+from .dash import Dash, no_update # noqa: F401,E402\n+from . import dependencies # noqa: F401,E402\n+from . import development # noqa: F401,E402\n+from . import exceptions # noqa: F401,E402\n+from . import resources # noqa: F401,E402\n+from .version import __version__ # noqa: F401,E402\n+from ._callback_context import callback_context # noqa: F401,E402\n", "issue": "[BUG] dash doesn't get imported when a file named \"org.py\", \"dash.py\", or \"test.py\" with specific content is present in the current directory // \"AttributeError: module 'dash' has no attribute 'Dash'\"\n**Describe your context**\r\n\r\n```\r\ndash (1.9.1)\r\ndash-core-components (1.8.1)\r\ndash-html-components (1.0.2)\r\ndash-renderer (1.2.4)\r\ndash-table (4.6.1)\r\n\r\n```\r\n\r\n**Describe the bug**\r\n\r\nIf a file named ``org.py`` is present in the current directory with the following content:\r\n\r\n```\r\nimport dash_core_components as dcc\r\n```\r\n\r\nthen dash doesn't import and I get the following message:\r\n```\r\n>>> import dash\r\nDash was not successfully imported. Make sure you don't have a file named\r\n'dash.py' in your current directory.\r\n```\r\n\r\n**Expected behavior**\r\ndash should import without any error.\r\n\r\n**Additional info**\r\n- The org.py is never imported\r\n- If I rename the file to a different name dash get imported without any problem.\r\n- The problem is shown also with ``import dash_html_components as html```\r\n- The problem is shown either on Windows and in Linux\r\n- Tested with python3.4, python3.6, python3.8\r\n\r\n**Steps to replicate the problem on Linux**\r\n```\r\n$ mkdir mytest\r\n$ cd mytest\r\n$ echo \"import dash_core_components as dcc\" > org.py\r\n$ python3 -m venv venv\r\n$ . venv/bin/activate\r\n(venv) $ pip install dash\r\n(venv) $ python\r\nPython 3.4.6 (default, Mar 01 2017, 16:52:22) [GCC] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import dash\r\nDash was not successfully imported. Make sure you don't have a file named\r\n'dash.py' in your current directory.\r\n(venv) $\r\n```\r\n\r\nif I rename the file the import works:\r\n```\r\n(venv) $ mv org.py othername.py\r\n(venv) $ python\r\nPython 3.4.6 (default, Mar 01 2017, 16:52:22) [GCC] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import dash\r\n>>>\r\n```\r\n\n", "before_files": [{"content": "from .dash import Dash, no_update # noqa: F401\nfrom . import dependencies # noqa: F401\nfrom . import development # noqa: F401\nfrom . import exceptions # noqa: F401\nfrom . import resources # noqa: F401\nfrom .version import __version__ # noqa: F401\nfrom ._callback_context import callback_context # noqa: F401\n", "path": "dash/__init__.py"}]}
| 1,169 | 331 |
gh_patches_debug_25606
|
rasdani/github-patches
|
git_diff
|
python-telegram-bot__python-telegram-bot-521
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: all_members_are_administrators fails
I don't know if the Telegram API changed, but the parameter that tells whether all members of a group are administrators is now named `all_members_are_administrators`. Chat objects fail to update with this parameter.
### Steps to reproduce
1. Create a group with "all members are administrators" enabled
2. Add a bot to it
3. send the bot a message
4. `assert update.message.chat.all_members_are_admins == True`
### Expected behaviour
It should pass the assert
### Actual behaviour
This fails the assert.
### Way to fix
Rename the attribute to `all_members_are_administrators`.
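For illustration, once the attribute mirrors the raw API field, a chat parsed from a Bot API payload should expose the flag directly. The payload values below are made up; `Chat.de_json` is the constructor already shown in the code, and the renamed attribute is assumed from the proposed fix:

```python
# Sketch assuming the renamed attribute `all_members_are_administrators`
# (the current code still calls it `all_members_are_admins`).
from telegram import Chat

payload = {
    "id": -1234,                             # made-up group id
    "type": "group",
    "title": "demo group",
    "all_members_are_administrators": True,  # field name as sent by the Bot API
}
chat = Chat.de_json(payload, bot=None)
assert chat.all_members_are_administrators is True
```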
</issue>
<code>
[start of telegram/chat.py]
1 #!/usr/bin/env python
2 # pylint: disable=C0103,W0622
3 #
4 # A library that provides a Python interface to the Telegram Bot API
5 # Copyright (C) 2015-2016
6 # Leandro Toledo de Souza <[email protected]>
7 #
8 # This program is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU Lesser Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # This program is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU Lesser Public License for more details.
17 #
18 # You should have received a copy of the GNU Lesser Public License
19 # along with this program. If not, see [http://www.gnu.org/licenses/].
20 """This module contains an object that represents a Telegram Chat."""
21
22 from telegram import TelegramObject
23
24
25 class Chat(TelegramObject):
26 """This object represents a Telegram Chat.
27
28 Attributes:
29 id (int):
30 type (str): Can be 'private', 'group', 'supergroup' or 'channel'
31 title (str): Title, for channels and group chats
32 username (str): Username, for private chats and channels if available
33 first_name (str): First name of the other party in a private chat
34 last_name (str): Last name of the other party in a private chat
35 all_members_are_admins (bool): True if a group has 'All Members Are Admins' enabled.
36
37 Args:
38 id (int):
39 type (str):
40 title (Optional[str]):
41 username(Optional[str]):
42 first_name(Optional[str]):
43 last_name(Optional[str]):
44 bot (Optional[Bot]): The Bot to use for instance methods
45 **kwargs (dict): Arbitrary keyword arguments.
46
47 """
48 PRIVATE = 'private'
49 GROUP = 'group'
50 SUPERGROUP = 'supergroup'
51 CHANNEL = 'channel'
52
53 def __init__(self,
54 id,
55 type,
56 title='',
57 username='',
58 first_name='',
59 last_name='',
60 all_members_are_admins=False,
61 bot=None,
62 **kwargs):
63 # Required
64 self.id = int(id)
65 self.type = type
66 # Optionals
67 self.title = title
68 self.username = username
69 self.first_name = first_name
70 self.last_name = last_name
71 self.all_members_are_admins = all_members_are_admins
72
73 self.bot = bot
74
75 @staticmethod
76 def de_json(data, bot):
77 """
78 Args:
79 data (dict):
80 bot (telegram.Bot):
81
82 Returns:
83 telegram.Chat:
84 """
85 if not data:
86 return None
87
88 return Chat(bot=bot, **data)
89
90 def send_action(self, *args, **kwargs):
91 """Shortcut for ``bot.sendChatAction(update.message.chat.id, *args, **kwargs)``"""
92 return self.bot.sendChatAction(self.id, *args, **kwargs)
93
94 def leave(self, *args, **kwargs):
95 """Shortcut for ``bot.leaveChat(update.message.chat.id, *args, **kwargs)``"""
96 return self.bot.leaveChat(self.id, *args, **kwargs)
97
98 def get_administrators(self, *args, **kwargs):
99 """Shortcut for ``bot.getChatAdministrators(update.message.chat.id, *args, **kwargs)``"""
100 return self.bot.getChatAdministrators(self.id, *args, **kwargs)
101
102 def get_members_count(self, *args, **kwargs):
103 """Shortcut for ``bot.getChatMembersCount(update.message.chat.id, *args, **kwargs)``"""
104 return self.bot.getChatMembersCount(self.id, *args, **kwargs)
105
106 def get_member(self, *args, **kwargs):
107 """Shortcut for ``bot.getChatMember(update.message.chat.id, *args, **kwargs)``"""
108 return self.bot.getChatMember(self.id, *args, **kwargs)
109
110 def kick_member(self, *args, **kwargs):
111 """Shortcut for ``bot.kickChatMember(update.message.chat.id, *args, **kwargs)``"""
112 return self.bot.kickChatMember(self.id, *args, **kwargs)
113
114 def unban_member(self, *args, **kwargs):
115 """Shortcut for ``bot.unbanChatMember(update.message.chat.id, *args, **kwargs)``"""
116 return self.bot.unbanChatMember(self.id, *args, **kwargs)
117
[end of telegram/chat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/telegram/chat.py b/telegram/chat.py
--- a/telegram/chat.py
+++ b/telegram/chat.py
@@ -32,7 +32,7 @@
username (str): Username, for private chats and channels if available
first_name (str): First name of the other party in a private chat
last_name (str): Last name of the other party in a private chat
- all_members_are_admins (bool): True if a group has 'All Members Are Admins' enabled.
+ all_members_are_administrators (bool): True if group has 'All Members Are Administrators'
Args:
id (int):
@@ -57,7 +57,7 @@
username='',
first_name='',
last_name='',
- all_members_are_admins=False,
+ all_members_are_administrators=False,
bot=None,
**kwargs):
# Required
@@ -68,7 +68,7 @@
self.username = username
self.first_name = first_name
self.last_name = last_name
- self.all_members_are_admins = all_members_are_admins
+ self.all_members_are_administrators = all_members_are_administrators
self.bot = bot
|
{"golden_diff": "diff --git a/telegram/chat.py b/telegram/chat.py\n--- a/telegram/chat.py\n+++ b/telegram/chat.py\n@@ -32,7 +32,7 @@\n username (str): Username, for private chats and channels if available\n first_name (str): First name of the other party in a private chat\n last_name (str): Last name of the other party in a private chat\n- all_members_are_admins (bool): True if a group has 'All Members Are Admins' enabled.\n+ all_members_are_administrators (bool): True if group has 'All Members Are Administrators'\n \n Args:\n id (int):\n@@ -57,7 +57,7 @@\n username='',\n first_name='',\n last_name='',\n- all_members_are_admins=False,\n+ all_members_are_administrators=False,\n bot=None,\n **kwargs):\n # Required\n@@ -68,7 +68,7 @@\n self.username = username\n self.first_name = first_name\n self.last_name = last_name\n- self.all_members_are_admins = all_members_are_admins\n+ self.all_members_are_administrators = all_members_are_administrators\n \n self.bot = bot\n", "issue": "BUG: all_members_are_administrators fails\nI don;t know if telegram api changed but the parameter to tell if all administrators in a group are admin has changed: to `all_members_are_administrators` Chat's fail to update with this parameter\r\n\r\n### Steps to reproduce\r\n1. Create a group with \"all members are administrators enabled\r\n2. Add a bot to it\r\n3. send the bot a message\r\n4. ` assert print(update.message.chat.all_members_are_admins)==True`\r\n\r\n### Expected behaviour\r\nIt should pass the assert\r\n\r\n### Actual behaviour\r\nThis failes the assert\r\n\r\n### Way to fix\r\nrename to `all_members_are_administrators`\n", "before_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=C0103,W0622\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2016\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. 
If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram Chat.\"\"\"\n\nfrom telegram import TelegramObject\n\n\nclass Chat(TelegramObject):\n \"\"\"This object represents a Telegram Chat.\n\n Attributes:\n id (int):\n type (str): Can be 'private', 'group', 'supergroup' or 'channel'\n title (str): Title, for channels and group chats\n username (str): Username, for private chats and channels if available\n first_name (str): First name of the other party in a private chat\n last_name (str): Last name of the other party in a private chat\n all_members_are_admins (bool): True if a group has 'All Members Are Admins' enabled.\n\n Args:\n id (int):\n type (str):\n title (Optional[str]):\n username(Optional[str]):\n first_name(Optional[str]):\n last_name(Optional[str]):\n bot (Optional[Bot]): The Bot to use for instance methods\n **kwargs (dict): Arbitrary keyword arguments.\n\n \"\"\"\n PRIVATE = 'private'\n GROUP = 'group'\n SUPERGROUP = 'supergroup'\n CHANNEL = 'channel'\n\n def __init__(self,\n id,\n type,\n title='',\n username='',\n first_name='',\n last_name='',\n all_members_are_admins=False,\n bot=None,\n **kwargs):\n # Required\n self.id = int(id)\n self.type = type\n # Optionals\n self.title = title\n self.username = username\n self.first_name = first_name\n self.last_name = last_name\n self.all_members_are_admins = all_members_are_admins\n\n self.bot = bot\n\n @staticmethod\n def de_json(data, bot):\n \"\"\"\n Args:\n data (dict):\n bot (telegram.Bot):\n\n Returns:\n telegram.Chat:\n \"\"\"\n if not data:\n return None\n\n return Chat(bot=bot, **data)\n\n def send_action(self, *args, **kwargs):\n \"\"\"Shortcut for ``bot.sendChatAction(update.message.chat.id, *args, **kwargs)``\"\"\"\n return self.bot.sendChatAction(self.id, *args, **kwargs)\n\n def leave(self, *args, **kwargs):\n \"\"\"Shortcut for ``bot.leaveChat(update.message.chat.id, *args, **kwargs)``\"\"\"\n return self.bot.leaveChat(self.id, *args, **kwargs)\n\n def get_administrators(self, *args, **kwargs):\n \"\"\"Shortcut for ``bot.getChatAdministrators(update.message.chat.id, *args, **kwargs)``\"\"\"\n return self.bot.getChatAdministrators(self.id, *args, **kwargs)\n\n def get_members_count(self, *args, **kwargs):\n \"\"\"Shortcut for ``bot.getChatMembersCount(update.message.chat.id, *args, **kwargs)``\"\"\"\n return self.bot.getChatMembersCount(self.id, *args, **kwargs)\n\n def get_member(self, *args, **kwargs):\n \"\"\"Shortcut for ``bot.getChatMember(update.message.chat.id, *args, **kwargs)``\"\"\"\n return self.bot.getChatMember(self.id, *args, **kwargs)\n\n def kick_member(self, *args, **kwargs):\n \"\"\"Shortcut for ``bot.kickChatMember(update.message.chat.id, *args, **kwargs)``\"\"\"\n return self.bot.kickChatMember(self.id, *args, **kwargs)\n\n def unban_member(self, *args, **kwargs):\n \"\"\"Shortcut for ``bot.unbanChatMember(update.message.chat.id, *args, **kwargs)``\"\"\"\n return self.bot.unbanChatMember(self.id, *args, **kwargs)\n", "path": "telegram/chat.py"}]}
| 1,909 | 270 |
gh_patches_debug_13165
|
rasdani/github-patches
|
git_diff
|
wright-group__WrightTools-1027
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Zero
>>> wt.units.convert(0, "wn", "nm")
ZeroDivisionError: division by zero
>>> wt.units.convert(0, "nm", "wn")
ZeroDivisionError: division by zero
Both calls should return `inf` instead of raising.
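A minimal sketch of the idea (not WrightTools' actual implementation) is to catch the division error around the pint conversion and fall back to infinity; the `safe_convert` name below is only for illustration:

```python
# Illustrative only: wrap a pint-based wavenumber/wavelength conversion and
# map the ZeroDivisionError raised for 0 onto float("inf").
import pint

ureg = pint.UnitRegistry()
ureg.define("wavenumber = 1 / cm = cm^{-1} = wn")
ureg.enable_contexts("spectroscopy")

def safe_convert(val, current_unit, destination_unit):
    try:
        return ureg.Quantity(val, current_unit).to(destination_unit).magnitude
    except ZeroDivisionError:
        return float("inf")

print(safe_convert(0, "wn", "nm"))  # inf rather than ZeroDivisionError
print(safe_convert(0, "nm", "wn"))  # inf
```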
</issue>
<code>
[start of WrightTools/units.py]
1 """Unit and label handling in WrightTools."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import warnings
8
9 import pint
10
11
12 # --- define --------------------------------------------------------------------------------------
13
14 # These "blessed" units are here primarily for backwards compatibility, in particular
15 # to enable the behavior of `data.convert` which will convert freely between the energy units
16 # but does not go to time (where delay will)
17 # Since both of these context can convert to [length] units, they are interconvertible, but we
18 # do not want them to automatically do so.
19 # This list is (at creation time) purely reflective of historical units supported pre pint
20 # There is nothing preventing other units from being used and converted to, only to enable
21 # expected behavior
22 # 2021-01-29 KFS
23 blessed_units = (
24 # angle
25 "rad",
26 "deg",
27 # delay
28 "fs",
29 "ps",
30 "ns",
31 "mm_delay",
32 # energy
33 "nm",
34 "wn",
35 "eV",
36 "meV",
37 "Hz",
38 "THz",
39 "GHz",
40 # optical density
41 "mOD",
42 # position
43 "nm_p",
44 "um",
45 "mm",
46 "cm",
47 "in",
48 # absolute temperature
49 "K",
50 "deg_C",
51 "deg_F",
52 "deg_R",
53 # time
54 "fs_t",
55 "ps_t",
56 "ns_t",
57 "us_t",
58 "ns_t",
59 "s_t",
60 "m_t",
61 "h_t",
62 "d_t",
63 )
64
65 ureg = pint.UnitRegistry()
66 ureg.define("[fluence] = [energy] / [area]")
67
68 ureg.define("OD = [] ")
69
70 ureg.define("wavenumber = 1 / cm = cm^{-1} = wn")
71
72
73 # Aliases for backwards compatibility
74 ureg.define("@alias s = s_t")
75 ureg.define("@alias min = m_t")
76 ureg.define("@alias hour = h_t")
77 ureg.define("@alias d = d_t")
78
79 ureg.define("@alias degC = deg_C")
80 ureg.define("@alias degF = deg_F")
81 ureg.define("@alias degR = deg_R")
82
83 ureg.define("@alias m = m_delay")
84
85 delay = pint.Context("delay", defaults={"n": 1, "num_pass": 2})
86 delay.add_transformation(
87 "[length]", "[time]", lambda ureg, x, n=1, num_pass=2: num_pass * x / ureg.speed_of_light * n
88 )
89 delay.add_transformation(
90 "[time]", "[length]", lambda ureg, x, n=1, num_pass=2: x / num_pass * ureg.speed_of_light / n
91 )
92 ureg.enable_contexts("spectroscopy", delay)
93
94 # --- functions -----------------------------------------------------------------------------------
95
96
97 def converter(val, current_unit, destination_unit):
98 """Convert from one unit to another.
99
100 Parameters
101 ----------
102 val : number
103 Number to convert.
104 current_unit : string
105 Current unit.
106 destination_unit : string
107 Destination unit.
108
109 Returns
110 -------
111 number
112 Converted value.
113 """
114 try:
115 val = ureg.Quantity(val, current_unit).to(destination_unit).magnitude
116 except (pint.errors.DimensionalityError, pint.errors.UndefinedUnitError, AttributeError):
117 warnings.warn(
118 "conversion {0} to {1} not valid: returning input".format(
119 current_unit, destination_unit
120 )
121 )
122 return val
123
124
125 convert = converter
126
127
128 def get_symbol(units) -> str:
129 """Get default symbol type.
130
131 Parameters
132 ----------
133 units_str : string
134 Units.
135
136 Returns
137 -------
138 string
139 LaTeX formatted symbol.
140 """
141 quantity = ureg.Quantity(1, ureg[units])
142 if quantity.check("[length]"):
143 return r"\lambda"
144 elif quantity.check("1 / [length]"):
145 return r"\bar\nu"
146 elif quantity.check("[energy]"):
147 return r"\hslash\omega"
148 elif quantity.check("1 / [time]"):
149 return "f"
150 elif quantity.check("[time]"):
151 return r"\tau"
152 elif quantity.check("[fluence]"):
153 return r"\mathcal{F}"
154 elif quantity.check("[temperature]"):
155 return "T"
156 elif ureg[units] in (ureg.deg, ureg.radian):
157 return r"\omega"
158 else:
159 return None
160
161
162 def get_valid_conversions(units, options=blessed_units) -> tuple:
163 return tuple(i for i in options if is_valid_conversion(units, i) and units != i)
164
165
166 def is_valid_conversion(a, b, blessed=True) -> bool:
167 if a is None:
168 return b is None
169 if blessed and a in blessed_units and b in blessed_units:
170 blessed_energy_units = {"nm", "wn", "eV", "meV", "Hz", "THz", "GHz"}
171 if a in blessed_energy_units:
172 return b in blessed_energy_units
173 blessed_delay_units = {"fs", "ps", "ns", "mm_delay"}
174 if a in blessed_delay_units:
175 return b in blessed_delay_units
176 return ureg.Unit(a).dimensionality == ureg.Unit(b).dimensionality
177 try:
178 return ureg.Unit(a).is_compatible_with(b, "spectroscopy")
179 except pint.UndefinedUnitError:
180 return False
181
182
183 def kind(units):
184 """Find the dimensionality of given units.
185
186 Parameters
187 ----------
188 units : string
189 The units of interest
190
191 Returns
192 -------
193 string
194 The kind of the given units. If no match is found, returns None.
195 """
196 if units is None:
197 return None
198 return str(ureg.Unit(units).dimensionality)
199
[end of WrightTools/units.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/WrightTools/units.py b/WrightTools/units.py
--- a/WrightTools/units.py
+++ b/WrightTools/units.py
@@ -115,10 +115,13 @@
val = ureg.Quantity(val, current_unit).to(destination_unit).magnitude
except (pint.errors.DimensionalityError, pint.errors.UndefinedUnitError, AttributeError):
warnings.warn(
- "conversion {0} to {1} not valid: returning input".format(
- current_unit, destination_unit
- )
+ f"conversion {current_unit} to {destination_unit} not valid: returning input"
)
+ except ZeroDivisionError:
+ warnings.warn(
+ f"conversion {current_unit} to {destination_unit} resulted in ZeroDivisionError: returning inf"
+ )
+ return float("inf")
return val
|
{"golden_diff": "diff --git a/WrightTools/units.py b/WrightTools/units.py\n--- a/WrightTools/units.py\n+++ b/WrightTools/units.py\n@@ -115,10 +115,13 @@\n val = ureg.Quantity(val, current_unit).to(destination_unit).magnitude\n except (pint.errors.DimensionalityError, pint.errors.UndefinedUnitError, AttributeError):\n warnings.warn(\n- \"conversion {0} to {1} not valid: returning input\".format(\n- current_unit, destination_unit\n- )\n+ f\"conversion {current_unit} to {destination_unit} not valid: returning input\"\n )\n+ except ZeroDivisionError:\n+ warnings.warn(\n+ f\"conversion {current_unit} to {destination_unit} resulted in ZeroDivisionError: returning inf\"\n+ )\n+ return float(\"inf\")\n return val\n", "issue": "Zero\n>>> wt.units.convert(0, \"wn\", \"nm\")\r\nZeroDivisionError: division by zero\r\n>>> wt.units.convert(0, \"nm\", \"wn\")\r\nZeroDivisionError: division by zero\r\n\r\nShould return inf\n", "before_files": [{"content": "\"\"\"Unit and label handling in WrightTools.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport warnings\n\nimport pint\n\n\n# --- define --------------------------------------------------------------------------------------\n\n# Thise \"blessed\" units are here primarily for backwards compatibility, in particular\n# to enable the behavior of `data.convert` which will convert freely between the energy units\n# but does not go to time (where delay will)\n# Since both of these context can convert to [length] units, they are interconvertible, but we\n# do not want them to automatically do so.\n# This list is (at creation time) purely reflective of historical units supported pre pint\n# There is nothing preventing other units from being used and converted to, only to enable\n# expected behavior\n# 2021-01-29 KFS\nblessed_units = (\n # angle\n \"rad\",\n \"deg\",\n # delay\n \"fs\",\n \"ps\",\n \"ns\",\n \"mm_delay\",\n # energy\n \"nm\",\n \"wn\",\n \"eV\",\n \"meV\",\n \"Hz\",\n \"THz\",\n \"GHz\",\n # optical density\n \"mOD\",\n # position\n \"nm_p\",\n \"um\",\n \"mm\",\n \"cm\",\n \"in\",\n # absolute temperature\n \"K\",\n \"deg_C\",\n \"deg_F\",\n \"deg_R\",\n # time\n \"fs_t\",\n \"ps_t\",\n \"ns_t\",\n \"us_t\",\n \"ns_t\",\n \"s_t\",\n \"m_t\",\n \"h_t\",\n \"d_t\",\n)\n\nureg = pint.UnitRegistry()\nureg.define(\"[fluence] = [energy] / [area]\")\n\nureg.define(\"OD = [] \")\n\nureg.define(\"wavenumber = 1 / cm = cm^{-1} = wn\")\n\n\n# Aliases for backwards compatability\nureg.define(\"@alias s = s_t\")\nureg.define(\"@alias min = m_t\")\nureg.define(\"@alias hour = h_t\")\nureg.define(\"@alias d = d_t\")\n\nureg.define(\"@alias degC = deg_C\")\nureg.define(\"@alias degF = deg_F\")\nureg.define(\"@alias degR = deg_R\")\n\nureg.define(\"@alias m = m_delay\")\n\ndelay = pint.Context(\"delay\", defaults={\"n\": 1, \"num_pass\": 2})\ndelay.add_transformation(\n \"[length]\", \"[time]\", lambda ureg, x, n=1, num_pass=2: num_pass * x / ureg.speed_of_light * n\n)\ndelay.add_transformation(\n \"[time]\", \"[length]\", lambda ureg, x, n=1, num_pass=2: x / num_pass * ureg.speed_of_light / n\n)\nureg.enable_contexts(\"spectroscopy\", delay)\n\n# --- functions -----------------------------------------------------------------------------------\n\n\ndef converter(val, current_unit, destination_unit):\n \"\"\"Convert from one unit to another.\n\n Parameters\n ----------\n val : number\n Number to convert.\n current_unit : string\n Current unit.\n destination_unit : string\n Destination 
unit.\n\n Returns\n -------\n number\n Converted value.\n \"\"\"\n try:\n val = ureg.Quantity(val, current_unit).to(destination_unit).magnitude\n except (pint.errors.DimensionalityError, pint.errors.UndefinedUnitError, AttributeError):\n warnings.warn(\n \"conversion {0} to {1} not valid: returning input\".format(\n current_unit, destination_unit\n )\n )\n return val\n\n\nconvert = converter\n\n\ndef get_symbol(units) -> str:\n \"\"\"Get default symbol type.\n\n Parameters\n ----------\n units_str : string\n Units.\n\n Returns\n -------\n string\n LaTeX formatted symbol.\n \"\"\"\n quantity = ureg.Quantity(1, ureg[units])\n if quantity.check(\"[length]\"):\n return r\"\\lambda\"\n elif quantity.check(\"1 / [length]\"):\n return r\"\\bar\\nu\"\n elif quantity.check(\"[energy]\"):\n return r\"\\hslash\\omega\"\n elif quantity.check(\"1 / [time]\"):\n return \"f\"\n elif quantity.check(\"[time]\"):\n return r\"\\tau\"\n elif quantity.check(\"[fluence]\"):\n return r\"\\mathcal{F}\"\n elif quantity.check(\"[temperature]\"):\n return \"T\"\n elif ureg[units] in (ureg.deg, ureg.radian):\n return r\"\\omega\"\n else:\n return None\n\n\ndef get_valid_conversions(units, options=blessed_units) -> tuple:\n return tuple(i for i in options if is_valid_conversion(units, i) and units != i)\n\n\ndef is_valid_conversion(a, b, blessed=True) -> bool:\n if a is None:\n return b is None\n if blessed and a in blessed_units and b in blessed_units:\n blessed_energy_units = {\"nm\", \"wn\", \"eV\", \"meV\", \"Hz\", \"THz\", \"GHz\"}\n if a in blessed_energy_units:\n return b in blessed_energy_units\n blessed_delay_units = {\"fs\", \"ps\", \"ns\", \"mm_delay\"}\n if a in blessed_delay_units:\n return b in blessed_delay_units\n return ureg.Unit(a).dimensionality == ureg.Unit(b).dimensionality\n try:\n return ureg.Unit(a).is_compatible_with(b, \"spectroscopy\")\n except pint.UndefinedUnitError:\n return False\n\n\ndef kind(units):\n \"\"\"Find the dimensionality of given units.\n\n Parameters\n ----------\n units : string\n The units of interest\n\n Returns\n -------\n string\n The kind of the given units. If no match is found, returns None.\n \"\"\"\n if units is None:\n return None\n return str(ureg.Unit(units).dimensionality)\n", "path": "WrightTools/units.py"}]}
| 2,347 | 194 |
gh_patches_debug_14699
|
rasdani/github-patches
|
git_diff
|
OpenCTI-Platform__connectors-608
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Riskiq Connector throwing errors
## Description
RiskIQ connector is not working as expected with the correct credentials defined.
## Environment
1. OS - Ubuntu
2. OpenCTI version: 5.1.3
## Riskiq Connector Logs:
INFO:root:Listing Threat-Actors with filters null.,
INFO:root:Connector registered with ID: c455a3a4-cc8f-4133-9f8d-4098fa984de8,
INFO:root:Starting ping alive thread,
INFO:riskiq.client:URL: https://api.riskiq.net/pt/v2,
INFO:root:Starting RiskIQ connector...,
INFO:root:Running RiskIQ connector...,
INFO:root:Connector interval sec: 60,
INFO:root:[RiskIQ] loaded state: {},
INFO:root:RiskIQ connector clean run,
INFO:root:Initiate work for c455a3a4-cc8f-4133-9f8d-4098fa984de8,
INFO:root:[RiskIQ] workid opencti-work--2c314a8c-484e-4a68-9b31-bb782b3b22ed initiated,
INFO:root:[RiskIQ] last run: None,
**ERROR:root:Parser must be a string or character stream, not NoneType**
++Config File++
connector-riskiq:
image: opencti/connector-riskiq:5.1.3
environment:
- OPENCTI_URL=http://opencti:8080
- OPENCTI_TOKEN=c9dc7053-6bdf-44ca-9dfd-c0e3ff249eb8
- CONNECTOR_ID=c455a3a4-cc8f-4133-9f8d-4098fa984de8
- CONNECTOR_TYPE=EXTERNAL_IMPORT
- CONNECTOR_NAME=RISKIQ
- CONNECTOR_SCOPE=riskiq
- CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
- CONNECTOR_LOG_LEVEL=info
- RISKIQ_BASE_URL=https://api.riskiq.net/pt/v2
- [email protected]
- RISKIQ_PASSWORD=xxxxxxx
- RISKIQ_INTERVAL_SEC=86400
restart: always
It was working before; after a reboot, the RiskIQ connector started logging the error above: "ERROR:root:Parser must be a string or character stream, not NoneType".
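The traceback comes from `dateutil.parser.parse` receiving `None`, which points at an article from the RiskIQ API without a `publishedDate`. A hypothetical guard around the date parsing (the `parse_article_dates` helper and the sample values are illustrative, not the connector's real code):

```python
# Hypothetical sketch: fall back to createdDate when publishedDate is missing,
# which is what produces "Parser must be a string or character stream, not NoneType".
from dateutil import parser

def parse_article_dates(article):
    created = parser.parse(article["createdDate"])
    published_raw = article.get("publishedDate")
    published = parser.parse(published_raw) if published_raw else created
    return created, published

# Example payload shaped like the importer's `article` dict (values made up).
created, published = parse_article_dates(
    {"createdDate": "2022-03-01T00:00:00Z", "publishedDate": None}
)
```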
Please help to fix the same.
Thanks
</issue>
<code>
[start of external-import/riskiq/src/riskiq/article_importer.py]
1 # -*- coding: utf-8 -*-
2 """OpenCTI RiskIQ's article importer module."""
3 import datetime
4 import itertools
5 from typing import Any, Mapping, Optional
6
7 from dateutil import parser
8 from pycti import OpenCTIConnectorHelper
9 from stix2 import (
10 Bundle,
11 DomainName,
12 EmailAddress,
13 File,
14 Identity,
15 Indicator,
16 IPv4Address,
17 Mutex,
18 Report,
19 TLP_AMBER,
20 TLP_WHITE,
21 URL,
22 X509Certificate,
23 utils,
24 )
25 from stix2.v21 import _Observable
26
27 from .utils import datetime_to_timestamp
28
29
30 class ArticleImporter:
31 """Article importer class."""
32
33 _LATEST_ARTICLE_TIMESTAMP = "latest_article_timestamp"
34
35 def __init__(
36 self, helper: OpenCTIConnectorHelper, article: dict[str, Any], author: Identity
37 ):
38 """Initialization of the article importer."""
39 self.helper = helper
40 self.article = article
41 self.author = author
42 self.work_id: Optional[str] = None
43 # Use custom properties to set the author and the confidence level of the object.
44 self.custom_props = {
45 "x_opencti_created_by_ref": self.author["id"],
46 }
47
48 def _process_indicator(self, indicator: Indicator) -> list[_Observable]:
49 """
50 Process the indicator depending on its type.
51
52 Parameters
53 ----------
54 indicator : Indicator
55 One indicator from an article.
56
57 Returns
58 -------
59 List of Observable
60 A list of Observable depending on the indicator type.
61 """
62 indicator_type = indicator["type"]
63 values = indicator["values"]
64 tlp_marking = TLP_WHITE if indicator["source"] == "public" else TLP_AMBER
65
66 if indicator_type == "hash_md5":
67 return [
68 File(
69 type="file",
70 hashes={"MD5": v},
71 object_marking_refs=tlp_marking,
72 custom_properties=self.custom_props,
73 )
74 for v in values
75 ]
76
77 if indicator_type in ["hash_sha1", "sha1"]:
78 return [
79 File(
80 type="file",
81 hashes={"SHA-1": v},
82 object_marking_refs=tlp_marking,
83 custom_properties=self.custom_props,
84 )
85 for v in values
86 ]
87
88 if indicator_type in ["sha256", "hash_sha256"]:
89 return [
90 File(
91 type="file",
92 hashes={"SHA-256": v},
93 object_marking_refs=tlp_marking,
94 custom_properties=self.custom_props,
95 )
96 for v in values
97 ]
98
99 if indicator_type == "domain":
100 return [
101 DomainName(
102 type="domain-name",
103 value=v,
104 object_marking_refs=tlp_marking,
105 custom_properties=self.custom_props,
106 )
107 for v in values
108 ]
109
110 if indicator_type in ["email", "emails"]:
111 return [
112 EmailAddress(
113 type="email-addr",
114 value=v,
115 object_marking_refs=tlp_marking,
116 custom_properties=self.custom_props,
117 )
118 for v in values
119 ]
120
121 if indicator_type in ["filename", "filepath"]:
122 return [
123 File(
124 type="file",
125 name=v,
126 object_marking_refs=tlp_marking,
127 custom_properties=self.custom_props,
128 )
129 for v in values
130 ]
131
132 if indicator_type == "ip":
133 return [
134 IPv4Address(
135 type="ipv4-addr",
136 value=v,
137 object_marking_refs=tlp_marking,
138 custom_properties=self.custom_props,
139 )
140 for v in values
141 ]
142
143 if indicator_type in ["proces_mutex", "process_mutex", "mutex"]:
144 return [
145 Mutex(
146 type="mutex",
147 name=v,
148 object_marking_refs=tlp_marking,
149 custom_properties=self.custom_props,
150 )
151 for v in values
152 ]
153
154 if indicator_type == "url":
155 return [
156 URL(
157 type="url",
158 value=v,
159 object_marking_refs=tlp_marking,
160 defanged=False,
161 custom_properties=self.custom_props,
162 )
163 for v in values
164 ]
165
166 if indicator_type == "certificate_sha1":
167 return [
168 X509Certificate(
169 type="x509-certificate",
170 hashes={"SHA-1": v},
171 object_marking_refs=tlp_marking,
172 custom_properties=self.custom_props,
173 )
174 for v in values
175 ]
176
177 if indicator_type in [
178 "certificate_issuerorganizationname",
179 "certificate_issuercommonname",
180 ]:
181 return [
182 X509Certificate(
183 type="x509-certificate",
184 issuer=v,
185 object_marking_refs=tlp_marking,
186 custom_properties=self.custom_props,
187 )
188 for v in values
189 ]
190
191 if indicator_type in [
192 "certificate_subjectorganizationname",
193 "certificate_subjectcountry",
194 "certificate_subjectcommonname",
195 ]:
196 return [
197 X509Certificate(
198 type="x509-certificate",
199 subject=v,
200 object_marking_refs=tlp_marking,
201 custom_properties=self.custom_props,
202 )
203 for v in values
204 ]
205
206 if indicator_type in ["certificate_serialnumber", "code_certificate_serial"]:
207 return [
208 X509Certificate(
209 type="x509-certificate",
210 serial_number=v,
211 object_marking_refs=tlp_marking,
212 custom_properties=self.custom_props,
213 )
214 for v in values
215 ]
216
217 self.helper.log_warning(
218 f"[RiskIQ] indicator with key {indicator_type} not supported. (Values: {values})"
219 )
220 return []
221
222 def run(self, work_id: str, state: Mapping[str, Any]) -> Mapping[str, Any]:
223 """Run the importation of the article."""
224 self.work_id = work_id
225 published = parser.parse(self.article["publishedDate"])
226 created = parser.parse(self.article["createdDate"])
227
228 indicators = itertools.chain(
229 *[
230 self._process_indicator(indicator)
231 for indicator in self.article["indicators"]
232 ]
233 )
234
235 indicators = utils.deduplicate(list(indicators))
236 # Return the initial state if we don't have any indicators.
237 if not indicators:
238 self.helper.log_info("No indicator in article, report will not be created.")
239 return state
240
241 self.helper.log_debug(f"Number of indicators: {len(indicators)}")
242
243 # Check if all indicators' TLP marking are `TLP_WHITE`.
244 report_tlp = TLP_WHITE
245 if TLP_AMBER in [i["object_marking_refs"][0] for i in indicators]:
246 report_tlp = TLP_AMBER
247
248 report = Report(
249 type="report",
250 name=self.article.get("title", "RiskIQ Threat Report"),
251 description=self.article["summary"],
252 report_types=["threat-report"],
253 created_by_ref=self.author,
254 created=created,
255 published=published,
256 lang="en",
257 labels=self.article["tags"],
258 object_refs=indicators,
259 object_marking_refs=report_tlp,
260 external_references=[
261 {
262 "source_name": "riskiq",
263 "url": self.article["link"],
264 "external_id": self.article["guid"],
265 }
266 ],
267 allow_custom=True,
268 )
269 self.helper.log_debug(f"[RiskIQ] Report = {report}")
270
271 bundle = Bundle(objects=indicators + [report, self.author], allow_custom=True)
272 self.helper.log_info("[RiskIQ] Sending report STIX2 bundle")
273 self._send_bundle(bundle)
274
275 return self._create_state(created)
276
277 @classmethod
278 def _create_state(
279 cls, latest_datetime: Optional[datetime.datetime]
280 ) -> Mapping[str, Any]:
281 if latest_datetime is None:
282 return {}
283
284 return {cls._LATEST_ARTICLE_TIMESTAMP: datetime_to_timestamp(latest_datetime)}
285
286 def _send_bundle(self, bundle: Bundle) -> None:
287 serialized_bundle = bundle.serialize()
288 self.helper.send_stix2_bundle(serialized_bundle, work_id=self.work_id)
289
[end of external-import/riskiq/src/riskiq/article_importer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/external-import/riskiq/src/riskiq/article_importer.py b/external-import/riskiq/src/riskiq/article_importer.py
--- a/external-import/riskiq/src/riskiq/article_importer.py
+++ b/external-import/riskiq/src/riskiq/article_importer.py
@@ -222,8 +222,14 @@
def run(self, work_id: str, state: Mapping[str, Any]) -> Mapping[str, Any]:
"""Run the importation of the article."""
self.work_id = work_id
- published = parser.parse(self.article["publishedDate"])
created = parser.parse(self.article["createdDate"])
+ # RisIQ API does not always provide the `publishedDate`.
+ # If it does not exist, take the value of the `createdDate` instead.
+ published = (
+ parser.parse(self.article["publishedDate"])
+ if self.article["publishedDate"] is not None
+ else created
+ )
indicators = itertools.chain(
*[
|
{"golden_diff": "diff --git a/external-import/riskiq/src/riskiq/article_importer.py b/external-import/riskiq/src/riskiq/article_importer.py\n--- a/external-import/riskiq/src/riskiq/article_importer.py\n+++ b/external-import/riskiq/src/riskiq/article_importer.py\n@@ -222,8 +222,14 @@\n def run(self, work_id: str, state: Mapping[str, Any]) -> Mapping[str, Any]:\n \"\"\"Run the importation of the article.\"\"\"\n self.work_id = work_id\n- published = parser.parse(self.article[\"publishedDate\"])\n created = parser.parse(self.article[\"createdDate\"])\n+ # RisIQ API does not always provide the `publishedDate`.\n+ # If it does not exist, take the value of the `createdDate` instead.\n+ published = (\n+ parser.parse(self.article[\"publishedDate\"])\n+ if self.article[\"publishedDate\"] is not None\n+ else created\n+ )\n \n indicators = itertools.chain(\n *[\n", "issue": "Riskiq Connector throwing errors\n## Description\r\n\r\nRiskIQ connector is not working as expected with the correct credentials defined.\r\n\r\n## Environment\r\n\r\n1. OS - Ubuntu \r\n2. OpenCTI version: 5.1.3\r\n\r\n## Riskiq Connector Logs:\r\nINFO:root:Listing Threat-Actors with filters null.,\r\nINFO:root:Connector registered with ID: c455a3a4-cc8f-4133-9f8d-4098fa984de8,\r\nINFO:root:Starting ping alive thread,\r\nINFO:riskiq.client:URL: https://api.riskiq.net/pt/v2,\r\nINFO:root:Starting RiskIQ connector...,\r\nINFO:root:Running RiskIQ connector...,\r\nINFO:root:Connector interval sec: 60,\r\nINFO:root:[RiskIQ] loaded state: {},\r\nINFO:root:RiskIQ connector clean run,\r\nINFO:root:Initiate work for c455a3a4-cc8f-4133-9f8d-4098fa984de8,\r\nINFO:root:[RiskIQ] workid opencti-work--2c314a8c-484e-4a68-9b31-bb782b3b22ed initiated,\r\nINFO:root:[RiskIQ] last run: None,\r\n**ERROR:root:Parser must be a string or character stream, not NoneType **\r\n\r\n\r\n\r\n++Config File++\r\n\r\n connector-riskiq:\r\n image: opencti/connector-riskiq:5.1.3\r\n environment:\r\n - OPENCTI_URL=http://opencti:8080\r\n - OPENCTI_TOKEN=c9dc7053-6bdf-44ca-9dfd-c0e3ff249eb8\r\n - CONNECTOR_ID=c455a3a4-cc8f-4133-9f8d-4098fa984de8\r\n - CONNECTOR_TYPE=EXTERNAL_IMPORT\r\n - CONNECTOR_NAME=RISKIQ\r\n - CONNECTOR_SCOPE=riskiq\r\n - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)\r\n - CONNECTOR_LOG_LEVEL=info\r\n - RISKIQ_BASE_URL=https://api.riskiq.net/pt/v2\r\n - [email protected]\r\n - RISKIQ_PASSWORD=xxxxxxx\r\n - RISKIQ_INTERVAL_SEC=86400\r\n restart: always\r\n\r\n\r\nIt was working before, after a reboot the riskiq connector started logging the above error as \"ERROR:root:Parser must be a string or character stream, not NoneType\".\r\n\r\nPlease help to fix the same.\r\n\r\nThanks\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"OpenCTI RiskIQ's article importer module.\"\"\"\nimport datetime\nimport itertools\nfrom typing import Any, Mapping, Optional\n\nfrom dateutil import parser\nfrom pycti import OpenCTIConnectorHelper\nfrom stix2 import (\n Bundle,\n DomainName,\n EmailAddress,\n File,\n Identity,\n Indicator,\n IPv4Address,\n Mutex,\n Report,\n TLP_AMBER,\n TLP_WHITE,\n URL,\n X509Certificate,\n utils,\n)\nfrom stix2.v21 import _Observable\n\nfrom .utils import datetime_to_timestamp\n\n\nclass ArticleImporter:\n \"\"\"Article importer class.\"\"\"\n\n _LATEST_ARTICLE_TIMESTAMP = \"latest_article_timestamp\"\n\n def __init__(\n self, helper: OpenCTIConnectorHelper, article: dict[str, Any], author: Identity\n ):\n \"\"\"Initialization of the article importer.\"\"\"\n self.helper = helper\n 
self.article = article\n self.author = author\n self.work_id: Optional[str] = None\n # Use custom properties to set the author and the confidence level of the object.\n self.custom_props = {\n \"x_opencti_created_by_ref\": self.author[\"id\"],\n }\n\n def _process_indicator(self, indicator: Indicator) -> list[_Observable]:\n \"\"\"\n Process the indicator depending on its type.\n\n Parameters\n ----------\n indicator : Indicator\n One indicator from an article.\n\n Returns\n -------\n List of Observable\n A list of Observable depending on the indicator type.\n \"\"\"\n indicator_type = indicator[\"type\"]\n values = indicator[\"values\"]\n tlp_marking = TLP_WHITE if indicator[\"source\"] == \"public\" else TLP_AMBER\n\n if indicator_type == \"hash_md5\":\n return [\n File(\n type=\"file\",\n hashes={\"MD5\": v},\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type in [\"hash_sha1\", \"sha1\"]:\n return [\n File(\n type=\"file\",\n hashes={\"SHA-1\": v},\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type in [\"sha256\", \"hash_sha256\"]:\n return [\n File(\n type=\"file\",\n hashes={\"SHA-256\": v},\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type == \"domain\":\n return [\n DomainName(\n type=\"domain-name\",\n value=v,\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type in [\"email\", \"emails\"]:\n return [\n EmailAddress(\n type=\"email-addr\",\n value=v,\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type in [\"filename\", \"filepath\"]:\n return [\n File(\n type=\"file\",\n name=v,\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type == \"ip\":\n return [\n IPv4Address(\n type=\"ipv4-addr\",\n value=v,\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type in [\"proces_mutex\", \"process_mutex\", \"mutex\"]:\n return [\n Mutex(\n type=\"mutex\",\n name=v,\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type == \"url\":\n return [\n URL(\n type=\"url\",\n value=v,\n object_marking_refs=tlp_marking,\n defanged=False,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type == \"certificate_sha1\":\n return [\n X509Certificate(\n type=\"x509-certificate\",\n hashes={\"SHA-1\": v},\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type in [\n \"certificate_issuerorganizationname\",\n \"certificate_issuercommonname\",\n ]:\n return [\n X509Certificate(\n type=\"x509-certificate\",\n issuer=v,\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type in [\n \"certificate_subjectorganizationname\",\n \"certificate_subjectcountry\",\n \"certificate_subjectcommonname\",\n ]:\n return [\n X509Certificate(\n type=\"x509-certificate\",\n subject=v,\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n if indicator_type in [\"certificate_serialnumber\", \"code_certificate_serial\"]:\n return [\n X509Certificate(\n type=\"x509-certificate\",\n 
serial_number=v,\n object_marking_refs=tlp_marking,\n custom_properties=self.custom_props,\n )\n for v in values\n ]\n\n self.helper.log_warning(\n f\"[RiskIQ] indicator with key {indicator_type} not supported. (Values: {values})\"\n )\n return []\n\n def run(self, work_id: str, state: Mapping[str, Any]) -> Mapping[str, Any]:\n \"\"\"Run the importation of the article.\"\"\"\n self.work_id = work_id\n published = parser.parse(self.article[\"publishedDate\"])\n created = parser.parse(self.article[\"createdDate\"])\n\n indicators = itertools.chain(\n *[\n self._process_indicator(indicator)\n for indicator in self.article[\"indicators\"]\n ]\n )\n\n indicators = utils.deduplicate(list(indicators))\n # Return the initial state if we don't have any indicators.\n if not indicators:\n self.helper.log_info(\"No indicator in article, report will not be created.\")\n return state\n\n self.helper.log_debug(f\"Number of indicators: {len(indicators)}\")\n\n # Check if all indicators' TLP marking are `TLP_WHITE`.\n report_tlp = TLP_WHITE\n if TLP_AMBER in [i[\"object_marking_refs\"][0] for i in indicators]:\n report_tlp = TLP_AMBER\n\n report = Report(\n type=\"report\",\n name=self.article.get(\"title\", \"RiskIQ Threat Report\"),\n description=self.article[\"summary\"],\n report_types=[\"threat-report\"],\n created_by_ref=self.author,\n created=created,\n published=published,\n lang=\"en\",\n labels=self.article[\"tags\"],\n object_refs=indicators,\n object_marking_refs=report_tlp,\n external_references=[\n {\n \"source_name\": \"riskiq\",\n \"url\": self.article[\"link\"],\n \"external_id\": self.article[\"guid\"],\n }\n ],\n allow_custom=True,\n )\n self.helper.log_debug(f\"[RiskIQ] Report = {report}\")\n\n bundle = Bundle(objects=indicators + [report, self.author], allow_custom=True)\n self.helper.log_info(\"[RiskIQ] Sending report STIX2 bundle\")\n self._send_bundle(bundle)\n\n return self._create_state(created)\n\n @classmethod\n def _create_state(\n cls, latest_datetime: Optional[datetime.datetime]\n ) -> Mapping[str, Any]:\n if latest_datetime is None:\n return {}\n\n return {cls._LATEST_ARTICLE_TIMESTAMP: datetime_to_timestamp(latest_datetime)}\n\n def _send_bundle(self, bundle: Bundle) -> None:\n serialized_bundle = bundle.serialize()\n self.helper.send_stix2_bundle(serialized_bundle, work_id=self.work_id)\n", "path": "external-import/riskiq/src/riskiq/article_importer.py"}]}
| 3,736 | 228 |
gh_patches_debug_32416
|
rasdani/github-patches
|
git_diff
|
linz__geostore-1651
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use latest of each STAC extension version
### Enabler
So that we don't have to manually update the code to use the latest version, we want to automatically use the latest version available in the relevant Git submodule.
We need to check what happens when a file is submitted that references an old version of a STAC schema.
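As a rough sketch of the idea, assuming each vendored extension keeps its versions in `vMAJOR.MINOR.PATCH` directories as the current schema paths suggest (the `latest_extension_version` helper is illustrative only):

```python
# Illustrative sketch: pick the newest vX.Y.Z directory inside a vendored
# STAC extension submodule instead of hard-coding the version.
from distutils.version import StrictVersion
from os import scandir
from re import fullmatch

def latest_extension_version(extension_path: str) -> str:
    versions = [
        entry.name[1:]
        for entry in scandir(extension_path)
        if entry.is_dir() and fullmatch(r"v\d+\.\d+\.\d+", entry.name)
    ]
    return sorted(versions, key=StrictVersion, reverse=True)[0]

# e.g. latest_extension_version("stac-spec") would give "1.0.0" today.
```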
#### Acceptance Criteria
- [ ] Dependabot PRs for any of the STAC submodules run tests with the latest version of all the extensions in that submodule
- [ ] Add a note to the release documentation about notifying users which STAC extension versions are supported
#### Additional context
This avoids manual work like [this PR to use the latest LINZ STAC extensions](https://github.com/linz/geostore/pull/1444).
Caveat: We currently only support one version of each extension. When extensions release breaking changes, this could affect our existing users, and we need to notify them.
#### Tasks
<!-- Tasks needed to complete this enabler -->
- [ ] ...
- [ ] ...
#### Definition of Ready
- [ ] This story is **ready** to work on
- [ ] Negotiable (team can decide how to design and implement)
- [ ] Valuable (from a user perspective)
- [ ] Estimate value applied (agreed by team)
- [ ] Small (so as to fit within an iteration)
- [ ] Testable (in principle, even if there isn't a test for it yet)
- [ ] Environments are ready to meet definition of done
- [ ] Resources required to implement will be ready
- [ ] Everyone understands and agrees with the tasks to complete the story
- [ ] Release value (e.g. Iteration 3) applied
- [ ] Sprint value (e.g. Aug 1 - Aug 15) applied
#### Definition of Done
- [ ] This story is **done**:
- [ ] Acceptance criteria completed
- [ ] Automated tests are passing
- [ ] Code is peer reviewed and pushed to master
- [ ] Deployed successfully to test environment
- [ ] Checked against [CODING guidelines](https://github.com/linz/geostore/blob/master/CODING.md)
- [ ] Relevant new tasks are added to backlog and communicated to the team
- [ ] Important decisions recorded in the issue ticket
- [ ] Readme/Changelog/Diagrams are updated
- [ ] Product Owner has approved acceptance criteria as complete
- [ ] Meets non-functional requirements:
- [ ] Scalability (data): Can scale to 300TB of data and 100,000,000 files and ability to
increase 10% every year
  - [ ] Scalability (users): Can scale to 100 concurrent users
- [ ] Cost: Data can be stored at < 0.5 NZD per GB per year
- [ ] Performance: A large dataset (500 GB and 50,000 files - e.g. Akl aerial imagery) can be
validated, imported and stored within 24 hours
- [ ] Accessibility: Can be used from LINZ networks and the public internet
- [ ] Availability: System available 24 hours a day and 7 days a week, this does not include
maintenance windows < 4 hours and does not include operational support
- [ ] Recoverability: RPO of fully imported datasets < 4 hours, RTO of a single 3 TB dataset <
12 hours
<!-- Please add one or more of these labels: 'spike', 'refactor', 'architecture', 'infrastructure', 'compliance' -->
</issue>
<code>
[start of geostore/check_stac_metadata/stac_validators.py]
1 from functools import cached_property
2 from json import load
3 from os.path import dirname, join
4
5 from jsonschema import Draft7Validator, FormatChecker, RefResolver
6 from jsonschema._utils import URIDict
7 from jsonschema.validators import extend
8
9 from ..stac_format import LINZ_STAC_EXTENSIONS_LOCAL_PATH
10 from ..types import JsonObject
11
12
13 class Schema:
14 def __init__(self, path: str):
15 self.path = path
16
17 @cached_property
18 def as_dict(self) -> JsonObject:
19 with open(join(dirname(__file__), self.path), encoding="utf-8") as file_pointer:
20 result: JsonObject = load(file_pointer)
21 return result
22
23 @cached_property
24 def schema_id(self) -> str:
25 id_: str = self.as_dict["$id"]
26 return id_
27
28 @cached_property
29 def uri(self) -> str:
30 uri_: str = URIDict().normalize(self.schema_id)
31 return uri_
32
33
34 FILE_STAC_SCHEMA_PATH = "file/v2.0.0/schema.json"
35 PROJECTION_STAC_SCHEMA_PATH = "projection/v1.0.0/schema.json"
36 VERSION_STAC_SCHEMA_PATH = "version/v1.0.0/schema.json"
37 FILE_SCHEMA = Schema(FILE_STAC_SCHEMA_PATH)
38
39 STAC_VERSION = "1.0.0"
40 STAC_SPEC_PATH = f"stac-spec/v{STAC_VERSION}"
41 CATALOG_SCHEMA = Schema(f"{STAC_SPEC_PATH}/catalog-spec/json-schema/catalog.json")
42 LINZ_STAC_EXTENSIONS_URL_PATH = "v0.0.14"
43 LINZ_SCHEMA_URL_DIRECTORY = f"{LINZ_STAC_EXTENSIONS_URL_PATH}/linz"
44 LINZ_SCHEMA_URL_PATH = f"{LINZ_SCHEMA_URL_DIRECTORY}/schema.json"
45 LINZ_SCHEMA = Schema(join(LINZ_STAC_EXTENSIONS_LOCAL_PATH, LINZ_SCHEMA_URL_PATH))
46 STAC_ITEM_SPEC_PATH = f"{STAC_SPEC_PATH}/item-spec/json-schema"
47 ITEM_SCHEMA = Schema(f"{STAC_ITEM_SPEC_PATH}/item.json")
48 QUALITY_SCHEMA_PATH = f"{LINZ_STAC_EXTENSIONS_URL_PATH}/quality/schema.json"
49
50 schema_store = {}
51 for schema in [
52 CATALOG_SCHEMA,
53 Schema(f"{STAC_SPEC_PATH}/collection-spec/json-schema/collection.json"),
54 FILE_SCHEMA,
55 Schema("geojson-spec/Feature.json"),
56 Schema("geojson-spec/Geometry.json"),
57 ITEM_SCHEMA,
58 Schema(f"{STAC_ITEM_SPEC_PATH}/basics.json"),
59 Schema(f"{STAC_ITEM_SPEC_PATH}/datetime.json"),
60 Schema(f"{STAC_ITEM_SPEC_PATH}/instrument.json"),
61 Schema(f"{STAC_ITEM_SPEC_PATH}/licensing.json"),
62 Schema(f"{STAC_ITEM_SPEC_PATH}/provider.json"),
63 LINZ_SCHEMA,
64 Schema(PROJECTION_STAC_SCHEMA_PATH),
65 Schema(VERSION_STAC_SCHEMA_PATH),
66 Schema(join(LINZ_STAC_EXTENSIONS_LOCAL_PATH, QUALITY_SCHEMA_PATH)),
67 ]:
68 # Normalize URLs the same way as jsonschema does
69 schema_store[schema.uri] = schema.as_dict
70
71 BaseSTACValidator = extend(Draft7Validator)
72 BaseSTACValidator.format_checker = FormatChecker()
73
74 STACCatalogSchemaValidator = extend(BaseSTACValidator)(
75 resolver=RefResolver.from_schema(CATALOG_SCHEMA.as_dict, store=schema_store),
76 schema=CATALOG_SCHEMA.as_dict,
77 )
78
79 STACCollectionSchemaValidator = extend(BaseSTACValidator)(
80 resolver=RefResolver.from_schema(LINZ_SCHEMA.as_dict, store=schema_store),
81 schema=LINZ_SCHEMA.as_dict,
82 )
83
84 STACItemSchemaValidator = extend(BaseSTACValidator)(
85 resolver=RefResolver.from_schema(LINZ_SCHEMA.as_dict, store=schema_store),
86 schema=LINZ_SCHEMA.as_dict,
87 )
88
[end of geostore/check_stac_metadata/stac_validators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/geostore/check_stac_metadata/stac_validators.py b/geostore/check_stac_metadata/stac_validators.py
--- a/geostore/check_stac_metadata/stac_validators.py
+++ b/geostore/check_stac_metadata/stac_validators.py
@@ -1,6 +1,9 @@
-from functools import cached_property
+from distutils.version import StrictVersion
+from functools import cached_property, lru_cache
from json import load
+from os import scandir
from os.path import dirname, join
+from re import fullmatch
from jsonschema import Draft7Validator, FormatChecker, RefResolver
from jsonschema._utils import URIDict
@@ -31,15 +34,28 @@
return uri_
+@lru_cache
+def get_latest_extension_schema_version(extension_path: str) -> str:
+ directories = scandir(join(dirname(__file__), extension_path))
+ versions = []
+ for directory in directories:
+ if directory.is_dir() and fullmatch(r"v\d+\.\d+\.\d+", directory.name):
+ versions.append(directory.name[1:])
+ return sorted(versions, key=StrictVersion, reverse=True)[0]
+
+
FILE_STAC_SCHEMA_PATH = "file/v2.0.0/schema.json"
PROJECTION_STAC_SCHEMA_PATH = "projection/v1.0.0/schema.json"
VERSION_STAC_SCHEMA_PATH = "version/v1.0.0/schema.json"
FILE_SCHEMA = Schema(FILE_STAC_SCHEMA_PATH)
-STAC_VERSION = "1.0.0"
-STAC_SPEC_PATH = f"stac-spec/v{STAC_VERSION}"
+STAC_SPEC_EXTENSION_PATH = "stac-spec"
+STAC_VERSION = get_latest_extension_schema_version(STAC_SPEC_EXTENSION_PATH)
+STAC_SPEC_PATH = f"{STAC_SPEC_EXTENSION_PATH}/v{STAC_VERSION}"
CATALOG_SCHEMA = Schema(f"{STAC_SPEC_PATH}/catalog-spec/json-schema/catalog.json")
-LINZ_STAC_EXTENSIONS_URL_PATH = "v0.0.14"
+LINZ_STAC_EXTENSIONS_URL_PATH = (
+ f"v{get_latest_extension_schema_version(LINZ_STAC_EXTENSIONS_LOCAL_PATH)}"
+)
LINZ_SCHEMA_URL_DIRECTORY = f"{LINZ_STAC_EXTENSIONS_URL_PATH}/linz"
LINZ_SCHEMA_URL_PATH = f"{LINZ_SCHEMA_URL_DIRECTORY}/schema.json"
LINZ_SCHEMA = Schema(join(LINZ_STAC_EXTENSIONS_LOCAL_PATH, LINZ_SCHEMA_URL_PATH))
|
{"golden_diff": "diff --git a/geostore/check_stac_metadata/stac_validators.py b/geostore/check_stac_metadata/stac_validators.py\n--- a/geostore/check_stac_metadata/stac_validators.py\n+++ b/geostore/check_stac_metadata/stac_validators.py\n@@ -1,6 +1,9 @@\n-from functools import cached_property\n+from distutils.version import StrictVersion\n+from functools import cached_property, lru_cache\n from json import load\n+from os import scandir\n from os.path import dirname, join\n+from re import fullmatch\n \n from jsonschema import Draft7Validator, FormatChecker, RefResolver\n from jsonschema._utils import URIDict\n@@ -31,15 +34,28 @@\n return uri_\n \n \n+@lru_cache\n+def get_latest_extension_schema_version(extension_path: str) -> str:\n+ directories = scandir(join(dirname(__file__), extension_path))\n+ versions = []\n+ for directory in directories:\n+ if directory.is_dir() and fullmatch(r\"v\\d+\\.\\d+\\.\\d+\", directory.name):\n+ versions.append(directory.name[1:])\n+ return sorted(versions, key=StrictVersion, reverse=True)[0]\n+\n+\n FILE_STAC_SCHEMA_PATH = \"file/v2.0.0/schema.json\"\n PROJECTION_STAC_SCHEMA_PATH = \"projection/v1.0.0/schema.json\"\n VERSION_STAC_SCHEMA_PATH = \"version/v1.0.0/schema.json\"\n FILE_SCHEMA = Schema(FILE_STAC_SCHEMA_PATH)\n \n-STAC_VERSION = \"1.0.0\"\n-STAC_SPEC_PATH = f\"stac-spec/v{STAC_VERSION}\"\n+STAC_SPEC_EXTENSION_PATH = \"stac-spec\"\n+STAC_VERSION = get_latest_extension_schema_version(STAC_SPEC_EXTENSION_PATH)\n+STAC_SPEC_PATH = f\"{STAC_SPEC_EXTENSION_PATH}/v{STAC_VERSION}\"\n CATALOG_SCHEMA = Schema(f\"{STAC_SPEC_PATH}/catalog-spec/json-schema/catalog.json\")\n-LINZ_STAC_EXTENSIONS_URL_PATH = \"v0.0.14\"\n+LINZ_STAC_EXTENSIONS_URL_PATH = (\n+ f\"v{get_latest_extension_schema_version(LINZ_STAC_EXTENSIONS_LOCAL_PATH)}\"\n+)\n LINZ_SCHEMA_URL_DIRECTORY = f\"{LINZ_STAC_EXTENSIONS_URL_PATH}/linz\"\n LINZ_SCHEMA_URL_PATH = f\"{LINZ_SCHEMA_URL_DIRECTORY}/schema.json\"\n LINZ_SCHEMA = Schema(join(LINZ_STAC_EXTENSIONS_LOCAL_PATH, LINZ_SCHEMA_URL_PATH))\n", "issue": "Use latest of each STAC extension version\n### Enabler\r\n\r\nSo that we don't have to manually update the code to use the latest version, we want to automatically use the latest version available in the relevant Git submodule.\r\n\r\nNeed to check what happens when a file is submitted that references and old version of a stac schema\r\n\r\n#### Acceptance Criteria\r\n\r\n- [ ] Dependabot PRs for any of the STAC submodules run tests with the latest version of all the extensions in that submodule\r\n- [ ] Add a note to the release documentation about notifying users which STAC extension versions are supported\r\n\r\n#### Additional context\r\n\r\nThis avoids manual work like [this PR to use the latest LINZ STAC extensions](https://github.com/linz/geostore/pull/1444).\r\n\r\nCaveat: We currently only support one version of each extension. 
When extensions release breaking changes this could affect our existing users, and we need to notify them.\r\n\r\n#### Tasks\r\n\r\n<!-- Tasks needed to complete this enabler -->\r\n\r\n- [ ] ...\r\n- [ ] ...\r\n\r\n#### Definition of Ready\r\n\r\n- [ ] This story is **ready** to work on\r\n - [ ] Negotiable (team can decide how to design and implement)\r\n - [ ] Valuable (from a user perspective)\r\n - [ ] Estimate value applied (agreed by team)\r\n - [ ] Small (so as to fit within an iteration)\r\n - [ ] Testable (in principle, even if there isn't a test for it yet)\r\n - [ ] Environments are ready to meet definition of done\r\n - [ ] Resources required to implement will be ready\r\n - [ ] Everyone understands and agrees with the tasks to complete the story\r\n - [ ] Release value (e.g. Iteration 3) applied\r\n - [ ] Sprint value (e.g. Aug 1 - Aug 15) applied\r\n\r\n#### Definition of Done\r\n\r\n- [ ] This story is **done**:\r\n - [ ] Acceptance criteria completed\r\n - [ ] Automated tests are passing\r\n - [ ] Code is peer reviewed and pushed to master\r\n - [ ] Deployed successfully to test environment\r\n - [ ] Checked against [CODING guidelines](https://github.com/linz/geostore/blob/master/CODING.md)\r\n - [ ] Relevant new tasks are added to backlog and communicated to the team\r\n - [ ] Important decisions recorded in the issue ticket\r\n - [ ] Readme/Changelog/Diagrams are updated\r\n - [ ] Product Owner has approved acceptance criteria as complete\r\n - [ ] Meets non-functional requirements:\r\n - [ ] Scalability (data): Can scale to 300TB of data and 100,000,000 files and ability to\r\n increase 10% every year\r\n - [ ] Scability (users): Can scale to 100 concurrent users\r\n - [ ] Cost: Data can be stored at < 0.5 NZD per GB per year\r\n - [ ] Performance: A large dataset (500 GB and 50,000 files - e.g. 
Akl aerial imagery) can be\r\n validated, imported and stored within 24 hours\r\n - [ ] Accessibility: Can be used from LINZ networks and the public internet\r\n - [ ] Availability: System available 24 hours a day and 7 days a week, this does not include\r\n maintenance windows < 4 hours and does not include operational support\r\n - [ ] Recoverability: RPO of fully imported datasets < 4 hours, RTO of a single 3 TB dataset <\r\n 12 hours\r\n\r\n<!-- Please add one or more of these labels: 'spike', 'refactor', 'architecture', 'infrastructure', 'compliance' -->\r\n\n", "before_files": [{"content": "from functools import cached_property\nfrom json import load\nfrom os.path import dirname, join\n\nfrom jsonschema import Draft7Validator, FormatChecker, RefResolver\nfrom jsonschema._utils import URIDict\nfrom jsonschema.validators import extend\n\nfrom ..stac_format import LINZ_STAC_EXTENSIONS_LOCAL_PATH\nfrom ..types import JsonObject\n\n\nclass Schema:\n def __init__(self, path: str):\n self.path = path\n\n @cached_property\n def as_dict(self) -> JsonObject:\n with open(join(dirname(__file__), self.path), encoding=\"utf-8\") as file_pointer:\n result: JsonObject = load(file_pointer)\n return result\n\n @cached_property\n def schema_id(self) -> str:\n id_: str = self.as_dict[\"$id\"]\n return id_\n\n @cached_property\n def uri(self) -> str:\n uri_: str = URIDict().normalize(self.schema_id)\n return uri_\n\n\nFILE_STAC_SCHEMA_PATH = \"file/v2.0.0/schema.json\"\nPROJECTION_STAC_SCHEMA_PATH = \"projection/v1.0.0/schema.json\"\nVERSION_STAC_SCHEMA_PATH = \"version/v1.0.0/schema.json\"\nFILE_SCHEMA = Schema(FILE_STAC_SCHEMA_PATH)\n\nSTAC_VERSION = \"1.0.0\"\nSTAC_SPEC_PATH = f\"stac-spec/v{STAC_VERSION}\"\nCATALOG_SCHEMA = Schema(f\"{STAC_SPEC_PATH}/catalog-spec/json-schema/catalog.json\")\nLINZ_STAC_EXTENSIONS_URL_PATH = \"v0.0.14\"\nLINZ_SCHEMA_URL_DIRECTORY = f\"{LINZ_STAC_EXTENSIONS_URL_PATH}/linz\"\nLINZ_SCHEMA_URL_PATH = f\"{LINZ_SCHEMA_URL_DIRECTORY}/schema.json\"\nLINZ_SCHEMA = Schema(join(LINZ_STAC_EXTENSIONS_LOCAL_PATH, LINZ_SCHEMA_URL_PATH))\nSTAC_ITEM_SPEC_PATH = f\"{STAC_SPEC_PATH}/item-spec/json-schema\"\nITEM_SCHEMA = Schema(f\"{STAC_ITEM_SPEC_PATH}/item.json\")\nQUALITY_SCHEMA_PATH = f\"{LINZ_STAC_EXTENSIONS_URL_PATH}/quality/schema.json\"\n\nschema_store = {}\nfor schema in [\n CATALOG_SCHEMA,\n Schema(f\"{STAC_SPEC_PATH}/collection-spec/json-schema/collection.json\"),\n FILE_SCHEMA,\n Schema(\"geojson-spec/Feature.json\"),\n Schema(\"geojson-spec/Geometry.json\"),\n ITEM_SCHEMA,\n Schema(f\"{STAC_ITEM_SPEC_PATH}/basics.json\"),\n Schema(f\"{STAC_ITEM_SPEC_PATH}/datetime.json\"),\n Schema(f\"{STAC_ITEM_SPEC_PATH}/instrument.json\"),\n Schema(f\"{STAC_ITEM_SPEC_PATH}/licensing.json\"),\n Schema(f\"{STAC_ITEM_SPEC_PATH}/provider.json\"),\n LINZ_SCHEMA,\n Schema(PROJECTION_STAC_SCHEMA_PATH),\n Schema(VERSION_STAC_SCHEMA_PATH),\n Schema(join(LINZ_STAC_EXTENSIONS_LOCAL_PATH, QUALITY_SCHEMA_PATH)),\n]:\n # Normalize URLs the same way as jsonschema does\n schema_store[schema.uri] = schema.as_dict\n\nBaseSTACValidator = extend(Draft7Validator)\nBaseSTACValidator.format_checker = FormatChecker()\n\nSTACCatalogSchemaValidator = extend(BaseSTACValidator)(\n resolver=RefResolver.from_schema(CATALOG_SCHEMA.as_dict, store=schema_store),\n schema=CATALOG_SCHEMA.as_dict,\n)\n\nSTACCollectionSchemaValidator = extend(BaseSTACValidator)(\n resolver=RefResolver.from_schema(LINZ_SCHEMA.as_dict, store=schema_store),\n schema=LINZ_SCHEMA.as_dict,\n)\n\nSTACItemSchemaValidator = 
extend(BaseSTACValidator)(\n resolver=RefResolver.from_schema(LINZ_SCHEMA.as_dict, store=schema_store),\n schema=LINZ_SCHEMA.as_dict,\n)\n", "path": "geostore/check_stac_metadata/stac_validators.py"}]}
| 2,301 | 529 |
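The patch in the record above stops hard-coding the STAC version and instead derives it by scanning the extension directory for `vMAJOR.MINOR.PATCH` subfolders. Below is a minimal standalone sketch of that directory-scanning pattern; the folder layout and the plain tuple sort (the actual patch uses `distutils.version.StrictVersion`) are illustrative assumptions, not the project's code.

```python
import re
from pathlib import Path

# Directory names are assumed to look like v1.2.3, as in the patch's regex.
_VERSION_DIR = re.compile(r"v(\d+)\.(\d+)\.(\d+)$")

def latest_extension_version(extension_dir: str) -> str:
    """Return the newest 'X.Y.Z' found among vX.Y.Z subdirectories."""
    versions = []
    for entry in Path(extension_dir).iterdir():
        match = _VERSION_DIR.fullmatch(entry.name)
        if entry.is_dir() and match:
            # Compare numerically, so v0.0.14 sorts above v0.0.9.
            versions.append(tuple(int(part) for part in match.groups()))
    if not versions:
        raise FileNotFoundError(f"no vX.Y.Z directories under {extension_dir}")
    return ".".join(str(part) for part in max(versions))

# Example: latest_extension_version("stac-spec") -> "1.0.0" if that is the newest folder.
```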
gh_patches_debug_22602
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-9612
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Jiosaavn] Extract more metadata
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
India
### Example URLs
https://www.jiosaavn.com/song/love-me-again/OgFZRER2A3c
### Provide a description that is worded well enough to be understood
Please add the option to include metadata when songs are downloaded froom Jiosaavn. I am able to add metadata when downloading from youtube music. Please do something similar for Jiosaavn
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--verbose', 'https://www.jiosaavn.com/song/love-me-again/OgFZRER2A3c']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [615a84447] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-112207-g8eb094adb2-20230928 (setts), ffprobe N-112207-g8eb094adb2-20230928
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.2.1, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1803 extractors
[JioSaavnSong] Extracting URL: https://www.jiosaavn.com/song/love-me-again/OgFZRER2A3c
[JioSaavnSong] OgFZRER2A3c: Downloading webpage
[JioSaavnSong] OgFZRER2A3c: Downloading format info for 128
[JioSaavnSong] OgFZRER2A3c: Downloading format info for 320
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] OgFZRER2A3c: Downloading 1 format(s): 320
[debug] Invoking http downloader on "https://ac.cf.saavncdn.com/593/23d333a3c1c0f706b6c57629f756f059_320.mp4?Expires=1712173385&Signature=fheCPEBOGuUsngOaUjck3xkTBBIyGE9jHg50kaEmIuzpD5DVzGw~7RDZEUO2ZeCk7UUvxsM1N7svIPh3V7cEfi3BCLjkiYqKwxh044TorkWad-GC-1P4oOcovsfAf0GxwCVQg3syVwza3QQJLWeUrcUS36B~rhqJg5~4AzaK8z~sinByrUCG5~BENyvLNsCGUN5gVnLQH3QN9MaavJ742Vn9Ew7DrddQkQuRD25j84hvBtPQsUuD3VAUX9zg5h1~bZ3~fWdrXJbCMPUy4Wq4b6KZexmMPu7tO8IjpwGXDpTdgB94N9R2UrAqc7S7HghmXEwESbNXNiC-iX-VSBUpCw__&Key-Pair-Id=APKAJB334VX63D3WJ5ZQ"
[debug] File locking is not supported. Proceeding without locking
[download] Destination: Love Me Again [OgFZRER2A3c].mp4
[download] 100% of 9.20MiB in 00:00:01 at 5.19MiB/s
```
</issue>
<code>
[start of yt_dlp/extractor/jiosaavn.py]
1 from .common import InfoExtractor
2 from ..utils import (
3 int_or_none,
4 js_to_json,
5 url_or_none,
6 urlencode_postdata,
7 urljoin,
8 )
9 from ..utils.traversal import traverse_obj
10
11
12 class JioSaavnBaseIE(InfoExtractor):
13 def _extract_initial_data(self, url, audio_id):
14 webpage = self._download_webpage(url, audio_id)
15 return self._search_json(
16 r'window\.__INITIAL_DATA__\s*=', webpage,
17 'init json', audio_id, transform_source=js_to_json)
18
19
20 class JioSaavnSongIE(JioSaavnBaseIE):
21 _VALID_URL = r'https?://(?:www\.)?(?:jiosaavn\.com/song/[^/?#]+/|saavn\.com/s/song/(?:[^/?#]+/){3})(?P<id>[^/?#]+)'
22 _TESTS = [{
23 'url': 'https://www.jiosaavn.com/song/leja-re/OQsEfQFVUXk',
24 'md5': '3b84396d15ed9e083c3106f1fa589c04',
25 'info_dict': {
26 'id': 'OQsEfQFVUXk',
27 'ext': 'mp4',
28 'title': 'Leja Re',
29 'album': 'Leja Re',
30 'thumbnail': 'https://c.saavncdn.com/258/Leja-Re-Hindi-2018-20181124024539-500x500.jpg',
31 'duration': 205,
32 'view_count': int,
33 'release_year': 2018,
34 },
35 }, {
36 'url': 'https://www.saavn.com/s/song/hindi/Saathiya/O-Humdum-Suniyo-Re/KAMiazoCblU',
37 'only_matching': True,
38 }]
39
40 _VALID_BITRATES = ('16', '32', '64', '128', '320')
41
42 def _real_extract(self, url):
43 audio_id = self._match_id(url)
44 extract_bitrates = self._configuration_arg('bitrate', ['128', '320'], ie_key='JioSaavn')
45 if invalid_bitrates := [br for br in extract_bitrates if br not in self._VALID_BITRATES]:
46 raise ValueError(
47 f'Invalid bitrate(s): {", ".join(invalid_bitrates)}. '
48 + f'Valid bitrates are: {", ".join(self._VALID_BITRATES)}')
49
50 song_data = self._extract_initial_data(url, audio_id)['song']['song']
51 formats = []
52 for bitrate in extract_bitrates:
53 media_data = self._download_json(
54 'https://www.jiosaavn.com/api.php', audio_id, f'Downloading format info for {bitrate}',
55 fatal=False, data=urlencode_postdata({
56 '__call': 'song.generateAuthToken',
57 '_format': 'json',
58 'bitrate': bitrate,
59 'url': song_data['encrypted_media_url'],
60 }))
61 if not media_data.get('auth_url'):
62 self.report_warning(f'Unable to extract format info for {bitrate}')
63 continue
64 formats.append({
65 'url': media_data['auth_url'],
66 'ext': media_data.get('type'),
67 'format_id': bitrate,
68 'abr': int(bitrate),
69 'vcodec': 'none',
70 })
71
72 return {
73 'id': audio_id,
74 'formats': formats,
75 **traverse_obj(song_data, {
76 'title': ('title', 'text'),
77 'album': ('album', 'text'),
78 'thumbnail': ('image', 0, {url_or_none}),
79 'duration': ('duration', {int_or_none}),
80 'view_count': ('play_count', {int_or_none}),
81 'release_year': ('year', {int_or_none}),
82 }),
83 }
84
85
86 class JioSaavnAlbumIE(JioSaavnBaseIE):
87 _VALID_URL = r'https?://(?:www\.)?(?:jio)?saavn\.com/album/[^/?#]+/(?P<id>[^/?#]+)'
88 _TESTS = [{
89 'url': 'https://www.jiosaavn.com/album/96/buIOjYZDrNA_',
90 'info_dict': {
91 'id': 'buIOjYZDrNA_',
92 'title': '96',
93 },
94 'playlist_count': 10,
95 }]
96
97 def _real_extract(self, url):
98 album_id = self._match_id(url)
99 album_view = self._extract_initial_data(url, album_id)['albumView']
100
101 return self.playlist_from_matches(
102 traverse_obj(album_view, (
103 'modules', lambda _, x: x['key'] == 'list', 'data', ..., 'title', 'action', {str})),
104 album_id, traverse_obj(album_view, ('album', 'title', 'text', {str})), ie=JioSaavnSongIE,
105 getter=lambda x: urljoin('https://www.jiosaavn.com/', x))
106
[end of yt_dlp/extractor/jiosaavn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yt_dlp/extractor/jiosaavn.py b/yt_dlp/extractor/jiosaavn.py
--- a/yt_dlp/extractor/jiosaavn.py
+++ b/yt_dlp/extractor/jiosaavn.py
@@ -2,6 +2,7 @@
from ..utils import (
int_or_none,
js_to_json,
+ orderedSet,
url_or_none,
urlencode_postdata,
urljoin,
@@ -31,6 +32,7 @@
'duration': 205,
'view_count': int,
'release_year': 2018,
+ 'artists': ['Sandesh Shandilya', 'Dhvani Bhanushali', 'Tanishk Bagchi', 'Rashmi Virag', 'Irshad Kamil'],
},
}, {
'url': 'https://www.saavn.com/s/song/hindi/Saathiya/O-Humdum-Suniyo-Re/KAMiazoCblU',
@@ -79,6 +81,7 @@
'duration': ('duration', {int_or_none}),
'view_count': ('play_count', {int_or_none}),
'release_year': ('year', {int_or_none}),
+ 'artists': ('artists', ..., 'name', {str}, all, {orderedSet}),
}),
}
|
{"golden_diff": "diff --git a/yt_dlp/extractor/jiosaavn.py b/yt_dlp/extractor/jiosaavn.py\n--- a/yt_dlp/extractor/jiosaavn.py\n+++ b/yt_dlp/extractor/jiosaavn.py\n@@ -2,6 +2,7 @@\n from ..utils import (\n int_or_none,\n js_to_json,\n+ orderedSet,\n url_or_none,\n urlencode_postdata,\n urljoin,\n@@ -31,6 +32,7 @@\n 'duration': 205,\n 'view_count': int,\n 'release_year': 2018,\n+ 'artists': ['Sandesh Shandilya', 'Dhvani Bhanushali', 'Tanishk Bagchi', 'Rashmi Virag', 'Irshad Kamil'],\n },\n }, {\n 'url': 'https://www.saavn.com/s/song/hindi/Saathiya/O-Humdum-Suniyo-Re/KAMiazoCblU',\n@@ -79,6 +81,7 @@\n 'duration': ('duration', {int_or_none}),\n 'view_count': ('play_count', {int_or_none}),\n 'release_year': ('year', {int_or_none}),\n+ 'artists': ('artists', ..., 'name', {str}, all, {orderedSet}),\n }),\n }\n", "issue": "[Jiosaavn] Extract more metadata\n### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm requesting a site-specific feature\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nIndia\n\n### Example URLs\n\nhttps://www.jiosaavn.com/song/love-me-again/OgFZRER2A3c\n\n### Provide a description that is worded well enough to be understood\n\nPlease add the option to include metadata when songs are downloaded froom Jiosaavn. I am able to add metadata when downloading from youtube music. 
Please do something similar for Jiosaavn\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['--verbose', 'https://www.jiosaavn.com/song/love-me-again/OgFZRER2A3c']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [615a84447] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg N-112207-g8eb094adb2-20230928 (setts), ffprobe N-112207-g8eb094adb2-20230928\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.2.1, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1803 extractors\r\n[JioSaavnSong] Extracting URL: https://www.jiosaavn.com/song/love-me-again/OgFZRER2A3c\r\n[JioSaavnSong] OgFZRER2A3c: Downloading webpage\r\n[JioSaavnSong] OgFZRER2A3c: Downloading format info for 128\r\n[JioSaavnSong] OgFZRER2A3c: Downloading format info for 320\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] OgFZRER2A3c: Downloading 1 format(s): 320\r\n[debug] Invoking http downloader on \"https://ac.cf.saavncdn.com/593/23d333a3c1c0f706b6c57629f756f059_320.mp4?Expires=1712173385&Signature=fheCPEBOGuUsngOaUjck3xkTBBIyGE9jHg50kaEmIuzpD5DVzGw~7RDZEUO2ZeCk7UUvxsM1N7svIPh3V7cEfi3BCLjkiYqKwxh044TorkWad-GC-1P4oOcovsfAf0GxwCVQg3syVwza3QQJLWeUrcUS36B~rhqJg5~4AzaK8z~sinByrUCG5~BENyvLNsCGUN5gVnLQH3QN9MaavJ742Vn9Ew7DrddQkQuRD25j84hvBtPQsUuD3VAUX9zg5h1~bZ3~fWdrXJbCMPUy4Wq4b6KZexmMPu7tO8IjpwGXDpTdgB94N9R2UrAqc7S7HghmXEwESbNXNiC-iX-VSBUpCw__&Key-Pair-Id=APKAJB334VX63D3WJ5ZQ\"\r\n[debug] File locking is not supported. 
Proceeding without locking\r\n[download] Destination: Love Me Again [OgFZRER2A3c].mp4\r\n[download] 100% of 9.20MiB in 00:00:01 at 5.19MiB/s\n```\n\n", "before_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n js_to_json,\n url_or_none,\n urlencode_postdata,\n urljoin,\n)\nfrom ..utils.traversal import traverse_obj\n\n\nclass JioSaavnBaseIE(InfoExtractor):\n def _extract_initial_data(self, url, audio_id):\n webpage = self._download_webpage(url, audio_id)\n return self._search_json(\n r'window\\.__INITIAL_DATA__\\s*=', webpage,\n 'init json', audio_id, transform_source=js_to_json)\n\n\nclass JioSaavnSongIE(JioSaavnBaseIE):\n _VALID_URL = r'https?://(?:www\\.)?(?:jiosaavn\\.com/song/[^/?#]+/|saavn\\.com/s/song/(?:[^/?#]+/){3})(?P<id>[^/?#]+)'\n _TESTS = [{\n 'url': 'https://www.jiosaavn.com/song/leja-re/OQsEfQFVUXk',\n 'md5': '3b84396d15ed9e083c3106f1fa589c04',\n 'info_dict': {\n 'id': 'OQsEfQFVUXk',\n 'ext': 'mp4',\n 'title': 'Leja Re',\n 'album': 'Leja Re',\n 'thumbnail': 'https://c.saavncdn.com/258/Leja-Re-Hindi-2018-20181124024539-500x500.jpg',\n 'duration': 205,\n 'view_count': int,\n 'release_year': 2018,\n },\n }, {\n 'url': 'https://www.saavn.com/s/song/hindi/Saathiya/O-Humdum-Suniyo-Re/KAMiazoCblU',\n 'only_matching': True,\n }]\n\n _VALID_BITRATES = ('16', '32', '64', '128', '320')\n\n def _real_extract(self, url):\n audio_id = self._match_id(url)\n extract_bitrates = self._configuration_arg('bitrate', ['128', '320'], ie_key='JioSaavn')\n if invalid_bitrates := [br for br in extract_bitrates if br not in self._VALID_BITRATES]:\n raise ValueError(\n f'Invalid bitrate(s): {\", \".join(invalid_bitrates)}. '\n + f'Valid bitrates are: {\", \".join(self._VALID_BITRATES)}')\n\n song_data = self._extract_initial_data(url, audio_id)['song']['song']\n formats = []\n for bitrate in extract_bitrates:\n media_data = self._download_json(\n 'https://www.jiosaavn.com/api.php', audio_id, f'Downloading format info for {bitrate}',\n fatal=False, data=urlencode_postdata({\n '__call': 'song.generateAuthToken',\n '_format': 'json',\n 'bitrate': bitrate,\n 'url': song_data['encrypted_media_url'],\n }))\n if not media_data.get('auth_url'):\n self.report_warning(f'Unable to extract format info for {bitrate}')\n continue\n formats.append({\n 'url': media_data['auth_url'],\n 'ext': media_data.get('type'),\n 'format_id': bitrate,\n 'abr': int(bitrate),\n 'vcodec': 'none',\n })\n\n return {\n 'id': audio_id,\n 'formats': formats,\n **traverse_obj(song_data, {\n 'title': ('title', 'text'),\n 'album': ('album', 'text'),\n 'thumbnail': ('image', 0, {url_or_none}),\n 'duration': ('duration', {int_or_none}),\n 'view_count': ('play_count', {int_or_none}),\n 'release_year': ('year', {int_or_none}),\n }),\n }\n\n\nclass JioSaavnAlbumIE(JioSaavnBaseIE):\n _VALID_URL = r'https?://(?:www\\.)?(?:jio)?saavn\\.com/album/[^/?#]+/(?P<id>[^/?#]+)'\n _TESTS = [{\n 'url': 'https://www.jiosaavn.com/album/96/buIOjYZDrNA_',\n 'info_dict': {\n 'id': 'buIOjYZDrNA_',\n 'title': '96',\n },\n 'playlist_count': 10,\n }]\n\n def _real_extract(self, url):\n album_id = self._match_id(url)\n album_view = self._extract_initial_data(url, album_id)['albumView']\n\n return self.playlist_from_matches(\n traverse_obj(album_view, (\n 'modules', lambda _, x: x['key'] == 'list', 'data', ..., 'title', 'action', {str})),\n album_id, traverse_obj(album_view, ('album', 'title', 'text', {str})), ie=JioSaavnSongIE,\n getter=lambda x: urljoin('https://www.jiosaavn.com/', x))\n", "path": 
"yt_dlp/extractor/jiosaavn.py"}]}
| 3,382 | 299 |
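The added `artists` field in the diff above walks the API response with yt-dlp's `traverse_obj` and de-duplicates names with `orderedSet`. As a rough plain-Python equivalent of what that traversal path does (the response shape, an `artists` list of `{'name': ...}` dicts, is inferred from the diff and its test expectation):

```python
def extract_artist_names(song_data: dict) -> list:
    """Collect artist names in order, dropping duplicates and non-string values."""
    seen = set()
    names = []
    for artist in song_data.get("artists") or []:
        name = artist.get("name") if isinstance(artist, dict) else None
        if isinstance(name, str) and name not in seen:
            seen.add(name)
            names.append(name)
    return names

print(extract_artist_names({"artists": [{"name": "A"}, {"name": "B"}, {"name": "A"}]}))
# ['A', 'B']
```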
gh_patches_debug_14246
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-1554
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
raises_unsupported_algorithm gives very unhelpful errors
When the error tag is wrong you get errors along the lines of `assert <object object at 0xf0000000> is not <object object as 0xb0000000>`. This is not very helpful, it's not even particularly obvious that the error tag is actually what's wrong until you go and read the code.
Should probably generate a useful error message or somehow give the tag objects a more useful `repr` output.
</issue>
<code>
[start of src/cryptography/exceptions.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7
8 class _Reasons(object):
9 BACKEND_MISSING_INTERFACE = object()
10 UNSUPPORTED_HASH = object()
11 UNSUPPORTED_CIPHER = object()
12 UNSUPPORTED_PADDING = object()
13 UNSUPPORTED_MGF = object()
14 UNSUPPORTED_PUBLIC_KEY_ALGORITHM = object()
15 UNSUPPORTED_ELLIPTIC_CURVE = object()
16 UNSUPPORTED_SERIALIZATION = object()
17 UNSUPPORTED_X509 = object()
18
19
20 class UnsupportedAlgorithm(Exception):
21 def __init__(self, message, reason=None):
22 super(UnsupportedAlgorithm, self).__init__(message)
23 self._reason = reason
24
25
26 class AlreadyFinalized(Exception):
27 pass
28
29
30 class AlreadyUpdated(Exception):
31 pass
32
33
34 class NotYetFinalized(Exception):
35 pass
36
37
38 class InvalidTag(Exception):
39 pass
40
41
42 class InvalidSignature(Exception):
43 pass
44
45
46 class InternalError(Exception):
47 pass
48
49
50 class InvalidKey(Exception):
51 pass
52
53
54 class InvalidToken(Exception):
55 pass
56
[end of src/cryptography/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cryptography/exceptions.py b/src/cryptography/exceptions.py
--- a/src/cryptography/exceptions.py
+++ b/src/cryptography/exceptions.py
@@ -4,17 +4,19 @@
from __future__ import absolute_import, division, print_function
-
-class _Reasons(object):
- BACKEND_MISSING_INTERFACE = object()
- UNSUPPORTED_HASH = object()
- UNSUPPORTED_CIPHER = object()
- UNSUPPORTED_PADDING = object()
- UNSUPPORTED_MGF = object()
- UNSUPPORTED_PUBLIC_KEY_ALGORITHM = object()
- UNSUPPORTED_ELLIPTIC_CURVE = object()
- UNSUPPORTED_SERIALIZATION = object()
- UNSUPPORTED_X509 = object()
+from enum import Enum
+
+
+class _Reasons(Enum):
+ BACKEND_MISSING_INTERFACE = 0
+ UNSUPPORTED_HASH = 1
+ UNSUPPORTED_CIPHER = 2
+ UNSUPPORTED_PADDING = 3
+ UNSUPPORTED_MGF = 4
+ UNSUPPORTED_PUBLIC_KEY_ALGORITHM = 5
+ UNSUPPORTED_ELLIPTIC_CURVE = 6
+ UNSUPPORTED_SERIALIZATION = 7
+ UNSUPPORTED_X509 = 8
class UnsupportedAlgorithm(Exception):
|
{"golden_diff": "diff --git a/src/cryptography/exceptions.py b/src/cryptography/exceptions.py\n--- a/src/cryptography/exceptions.py\n+++ b/src/cryptography/exceptions.py\n@@ -4,17 +4,19 @@\n \n from __future__ import absolute_import, division, print_function\n \n-\n-class _Reasons(object):\n- BACKEND_MISSING_INTERFACE = object()\n- UNSUPPORTED_HASH = object()\n- UNSUPPORTED_CIPHER = object()\n- UNSUPPORTED_PADDING = object()\n- UNSUPPORTED_MGF = object()\n- UNSUPPORTED_PUBLIC_KEY_ALGORITHM = object()\n- UNSUPPORTED_ELLIPTIC_CURVE = object()\n- UNSUPPORTED_SERIALIZATION = object()\n- UNSUPPORTED_X509 = object()\n+from enum import Enum\n+\n+\n+class _Reasons(Enum):\n+ BACKEND_MISSING_INTERFACE = 0\n+ UNSUPPORTED_HASH = 1\n+ UNSUPPORTED_CIPHER = 2\n+ UNSUPPORTED_PADDING = 3\n+ UNSUPPORTED_MGF = 4\n+ UNSUPPORTED_PUBLIC_KEY_ALGORITHM = 5\n+ UNSUPPORTED_ELLIPTIC_CURVE = 6\n+ UNSUPPORTED_SERIALIZATION = 7\n+ UNSUPPORTED_X509 = 8\n \n \n class UnsupportedAlgorithm(Exception):\n", "issue": "raises_unsupported_algorithm gives very unhelpful errors\nWhen the error tag is wrong you get errors along the lines of `assert <object object at 0xf0000000> is not <object object as 0xb0000000>`. This is not very helpful, it's not even particularly obvious that the error tag is actually what's wrong until you go and read the code.\n\nShould probably generate a useful error message or somehow give the tag objects a more useful `repr` output.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\n\nclass _Reasons(object):\n BACKEND_MISSING_INTERFACE = object()\n UNSUPPORTED_HASH = object()\n UNSUPPORTED_CIPHER = object()\n UNSUPPORTED_PADDING = object()\n UNSUPPORTED_MGF = object()\n UNSUPPORTED_PUBLIC_KEY_ALGORITHM = object()\n UNSUPPORTED_ELLIPTIC_CURVE = object()\n UNSUPPORTED_SERIALIZATION = object()\n UNSUPPORTED_X509 = object()\n\n\nclass UnsupportedAlgorithm(Exception):\n def __init__(self, message, reason=None):\n super(UnsupportedAlgorithm, self).__init__(message)\n self._reason = reason\n\n\nclass AlreadyFinalized(Exception):\n pass\n\n\nclass AlreadyUpdated(Exception):\n pass\n\n\nclass NotYetFinalized(Exception):\n pass\n\n\nclass InvalidTag(Exception):\n pass\n\n\nclass InvalidSignature(Exception):\n pass\n\n\nclass InternalError(Exception):\n pass\n\n\nclass InvalidKey(Exception):\n pass\n\n\nclass InvalidToken(Exception):\n pass\n", "path": "src/cryptography/exceptions.py"}]}
| 1,014 | 275 |
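The fix above replaces bare `object()` sentinels with an `Enum`, so a mismatched reason now prints as `_Reasons.UNSUPPORTED_HASH` rather than `<object object at 0x...>`. A small self-contained illustration of the difference follows; the two-member enum is trimmed down for the example and is not the library's full list.

```python
from enum import Enum

class _Reasons(Enum):
    UNSUPPORTED_HASH = 1
    UNSUPPORTED_CIPHER = 2

class UnsupportedAlgorithm(Exception):
    def __init__(self, message, reason=None):
        super().__init__(message)
        self._reason = reason

err = UnsupportedAlgorithm("md5 not available", _Reasons.UNSUPPORTED_HASH)
# A failed assertion on the tag now shows a readable member name in the error output.
assert err._reason is _Reasons.UNSUPPORTED_HASH
print(err._reason)  # _Reasons.UNSUPPORTED_HASH, not <object object at 0x...>
```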
gh_patches_debug_3149
|
rasdani/github-patches
|
git_diff
|
huggingface__dataset-viewer-479
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use main instead of master to load the datasets
The main branch in https://github.com/huggingface/datasets is now `main`, not `master` anymore. Note that it's backward compatible, so no need to hurry
</issue>
<code>
[start of services/worker/src/worker/constants.py]
1 from typing import Optional
2
3 DEFAULT_ASSETS_BASE_URL: str = "assets"
4 DEFAULT_ASSETS_DIRECTORY: None = None
5 DEFAULT_DATASETS_REVISION: str = "master"
6 DEFAULT_HF_TOKEN: Optional[str] = None
7 DEFAULT_LOG_LEVEL: str = "INFO"
8 DEFAULT_MAX_JOB_RETRIES: int = 3
9 DEFAULT_MAX_JOBS_PER_DATASET: int = 1
10 DEFAULT_MAX_LOAD_PCT: int = 70
11 DEFAULT_MAX_MEMORY_PCT: int = 80
12 DEFAULT_MAX_SIZE_FALLBACK: int = 100_000_000
13 DEFAULT_MIN_CELL_BYTES: int = 100
14 DEFAULT_MONGO_CACHE_DATABASE: str = "datasets_server_cache"
15 DEFAULT_MONGO_QUEUE_DATABASE: str = "datasets_server_queue"
16 DEFAULT_MONGO_URL: str = "mongodb://localhost:27018"
17 DEFAULT_ROWS_MAX_BYTES: int = 1_000_000
18 DEFAULT_ROWS_MAX_NUMBER: int = 100
19 DEFAULT_ROWS_MIN_NUMBER: int = 10
20 DEFAULT_WORKER_SLEEP_SECONDS: int = 15
21 DEFAULT_WORKER_QUEUE: str = "datasets"
22
[end of services/worker/src/worker/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/services/worker/src/worker/constants.py b/services/worker/src/worker/constants.py
--- a/services/worker/src/worker/constants.py
+++ b/services/worker/src/worker/constants.py
@@ -2,7 +2,7 @@
DEFAULT_ASSETS_BASE_URL: str = "assets"
DEFAULT_ASSETS_DIRECTORY: None = None
-DEFAULT_DATASETS_REVISION: str = "master"
+DEFAULT_DATASETS_REVISION: str = "main"
DEFAULT_HF_TOKEN: Optional[str] = None
DEFAULT_LOG_LEVEL: str = "INFO"
DEFAULT_MAX_JOB_RETRIES: int = 3
|
{"golden_diff": "diff --git a/services/worker/src/worker/constants.py b/services/worker/src/worker/constants.py\n--- a/services/worker/src/worker/constants.py\n+++ b/services/worker/src/worker/constants.py\n@@ -2,7 +2,7 @@\n \n DEFAULT_ASSETS_BASE_URL: str = \"assets\"\n DEFAULT_ASSETS_DIRECTORY: None = None\n-DEFAULT_DATASETS_REVISION: str = \"master\"\n+DEFAULT_DATASETS_REVISION: str = \"main\"\n DEFAULT_HF_TOKEN: Optional[str] = None\n DEFAULT_LOG_LEVEL: str = \"INFO\"\n DEFAULT_MAX_JOB_RETRIES: int = 3\n", "issue": "Use main instead of master to load the datasets\nThe main branch in https://github.com/huggingface/datasets is now `main`, not `master` anymore. Note that it's backward compatible, so no need to hurry\n", "before_files": [{"content": "from typing import Optional\n\nDEFAULT_ASSETS_BASE_URL: str = \"assets\"\nDEFAULT_ASSETS_DIRECTORY: None = None\nDEFAULT_DATASETS_REVISION: str = \"master\"\nDEFAULT_HF_TOKEN: Optional[str] = None\nDEFAULT_LOG_LEVEL: str = \"INFO\"\nDEFAULT_MAX_JOB_RETRIES: int = 3\nDEFAULT_MAX_JOBS_PER_DATASET: int = 1\nDEFAULT_MAX_LOAD_PCT: int = 70\nDEFAULT_MAX_MEMORY_PCT: int = 80\nDEFAULT_MAX_SIZE_FALLBACK: int = 100_000_000\nDEFAULT_MIN_CELL_BYTES: int = 100\nDEFAULT_MONGO_CACHE_DATABASE: str = \"datasets_server_cache\"\nDEFAULT_MONGO_QUEUE_DATABASE: str = \"datasets_server_queue\"\nDEFAULT_MONGO_URL: str = \"mongodb://localhost:27018\"\nDEFAULT_ROWS_MAX_BYTES: int = 1_000_000\nDEFAULT_ROWS_MAX_NUMBER: int = 100\nDEFAULT_ROWS_MIN_NUMBER: int = 10\nDEFAULT_WORKER_SLEEP_SECONDS: int = 15\nDEFAULT_WORKER_QUEUE: str = \"datasets\"\n", "path": "services/worker/src/worker/constants.py"}]}
| 869 | 132 |
gh_patches_debug_24387
|
rasdani/github-patches
|
git_diff
|
napari__napari-4401
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Screenshot test failing on main (due to vispy 0.10?)
## 🐛 Bug
The test `napari/_tests/test_with_screenshot.py:test_z_order_image_points_after_ndisplay` is failing on main:
https://github.com/napari/napari/runs/6069251907?check_suite_focus=true#step:7:294
I suspect that this is due to the VisPy 0.10 release, which happened in the last 24h or so.
</issue>
<code>
[start of napari/_vispy/visuals/volume.py]
1 from vispy.scene.visuals import Volume as BaseVolume
2
3 FUNCTION_DEFINITIONS = """
4 // the tolerance for testing equality of floats with floatEqual and floatNotEqual
5 const float equality_tolerance = 1e-8;
6
7 bool floatNotEqual(float val1, float val2)
8 {
9 // check if val1 and val2 are not equal
10 bool not_equal = abs(val1 - val2) > equality_tolerance;
11
12 return not_equal;
13 }
14
15 bool floatEqual(float val1, float val2)
16 {
17 // check if val1 and val2 are equal
18 bool equal = abs(val1 - val2) < equality_tolerance;
19
20 return equal;
21 }
22
23
24 // the background value for the iso_categorical shader
25 const float categorical_bg_value = 0;
26
27 int detectAdjacentBackground(float val_neg, float val_pos)
28 {
29 // determine if the adjacent voxels along an axis are both background
30 int adjacent_bg = int( floatEqual(val_neg, categorical_bg_value) );
31 adjacent_bg = adjacent_bg * int( floatEqual(val_pos, categorical_bg_value) );
32 return adjacent_bg;
33 }
34
35 vec4 calculateCategoricalColor(vec4 betterColor, vec3 loc, vec3 step)
36 {
37 // Calculate color by incorporating ambient and diffuse lighting
38 vec4 color0 = $sample(u_volumetex, loc);
39 vec4 color1;
40 vec4 color2;
41 float val0 = colorToVal(color0);
42 float val1 = 0;
43 float val2 = 0;
44 int n_bg_borders = 0;
45
46 // View direction
47 vec3 V = normalize(view_ray);
48
49 // calculate normal vector from gradient
50 vec3 N; // normal
51 color1 = $sample( u_volumetex, loc+vec3(-step[0],0.0,0.0) );
52 color2 = $sample( u_volumetex, loc+vec3(step[0],0.0,0.0) );
53 val1 = colorToVal(color1);
54 val2 = colorToVal(color2);
55 N[0] = val1 - val2;
56 n_bg_borders += detectAdjacentBackground(val1, val2);
57
58 color1 = $sample( u_volumetex, loc+vec3(0.0,-step[1],0.0) );
59 color2 = $sample( u_volumetex, loc+vec3(0.0,step[1],0.0) );
60 val1 = colorToVal(color1);
61 val2 = colorToVal(color2);
62 N[1] = val1 - val2;
63 n_bg_borders += detectAdjacentBackground(val1, val2);
64
65 color1 = $sample( u_volumetex, loc+vec3(0.0,0.0,-step[2]) );
66 color2 = $sample( u_volumetex, loc+vec3(0.0,0.0,step[2]) );
67 val1 = colorToVal(color1);
68 val2 = colorToVal(color2);
69 N[2] = val1 - val2;
70 n_bg_borders += detectAdjacentBackground(val1, val2);
71
72 // Normalize and flip normal so it points towards viewer
73 N = normalize(N);
74 float Nselect = float(dot(N,V) > 0.0);
75 N = (2.0*Nselect - 1.0) * N; // == Nselect * N - (1.0-Nselect)*N;
76
77 // Init colors
78 vec4 ambient_color = vec4(0.0, 0.0, 0.0, 0.0);
79 vec4 diffuse_color = vec4(0.0, 0.0, 0.0, 0.0);
80 vec4 final_color;
81
82 // todo: allow multiple light, define lights on viewvox or subscene
83 int nlights = 1;
84 for (int i=0; i<nlights; i++)
85 {
86 // Get light direction (make sure to prevent zero devision)
87 vec3 L = normalize(view_ray); //lightDirs[i];
88 float lightEnabled = float( length(L) > 0.0 );
89 L = normalize(L+(1.0-lightEnabled));
90
91 // Calculate lighting properties
92 float lambertTerm = clamp( dot(N,L), 0.0, 1.0 );
93 if (n_bg_borders > 0) {
94 // to fix dim pixels due to poor normal estimation,
95 // we give a default lambda to pixels surrounded by background
96 lambertTerm = 0.5;
97 }
98
99 // Calculate mask
100 float mask1 = lightEnabled;
101
102 // Calculate colors
103 ambient_color += mask1 * u_ambient; // * gl_LightSource[i].ambient;
104 diffuse_color += mask1 * lambertTerm;
105 }
106
107 // Calculate final color by componing different components
108 final_color = betterColor * ( ambient_color + diffuse_color);
109 final_color.a = betterColor.a;
110
111 // Done
112 return final_color;
113 }
114 """
115
116 ISO_CATEGORICAL_SNIPPETS = dict(
117 before_loop="""
118 vec4 color3 = vec4(0.0); // final color
119 vec3 dstep = 1.5 / u_shape; // step to sample derivative, set to match iso shader
120 gl_FragColor = vec4(0.0);
121 """,
122 in_loop="""
123 // check if value is different from the background value
124 if ( floatNotEqual(val, categorical_bg_value) ) {
125 // Take the last interval in smaller steps
126 vec3 iloc = loc - step;
127 for (int i=0; i<10; i++) {
128 color = $sample(u_volumetex, iloc);
129 if (floatNotEqual(color.g, categorical_bg_value) ) {
130 // when the non-background value is reached
131 // calculate the color (apply lighting effects)
132 color = applyColormap(color.g);
133 color = calculateCategoricalColor(color, iloc, dstep);
134 gl_FragColor = color;
135
136 // set the variables for the depth buffer
137 surface_point = iloc * u_shape;
138 surface_found = true;
139
140 iter = nsteps;
141 break;
142 }
143 iloc += step * 0.1;
144 }
145 }
146 """,
147 after_loop="""
148 if (surface_found == false) {
149 discard;
150 }
151 """,
152 )
153
154 shaders = BaseVolume._shaders.copy()
155 before, after = shaders['fragment'].split('void main()')
156 shaders['fragment'] = before + FUNCTION_DEFINITIONS + 'void main()' + after
157
158 rendering_methods = BaseVolume._rendering_methods.copy()
159 rendering_methods['iso_categorical'] = ISO_CATEGORICAL_SNIPPETS
160
161
162 class Volume(BaseVolume):
163 # add the new rendering method to the snippets dict
164 _shaders = shaders
165 _rendering_methods = rendering_methods
166
[end of napari/_vispy/visuals/volume.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/napari/_vispy/visuals/volume.py b/napari/_vispy/visuals/volume.py
--- a/napari/_vispy/visuals/volume.py
+++ b/napari/_vispy/visuals/volume.py
@@ -118,6 +118,7 @@
vec4 color3 = vec4(0.0); // final color
vec3 dstep = 1.5 / u_shape; // step to sample derivative, set to match iso shader
gl_FragColor = vec4(0.0);
+ bool discard_fragment = true;
""",
in_loop="""
// check if value is different from the background value
@@ -134,8 +135,8 @@
gl_FragColor = color;
// set the variables for the depth buffer
- surface_point = iloc * u_shape;
- surface_found = true;
+ frag_depth_point = iloc * u_shape;
+ discard_fragment = false;
iter = nsteps;
break;
@@ -145,9 +146,8 @@
}
""",
after_loop="""
- if (surface_found == false) {
+ if (discard_fragment)
discard;
- }
""",
)
|
{"golden_diff": "diff --git a/napari/_vispy/visuals/volume.py b/napari/_vispy/visuals/volume.py\n--- a/napari/_vispy/visuals/volume.py\n+++ b/napari/_vispy/visuals/volume.py\n@@ -118,6 +118,7 @@\n vec4 color3 = vec4(0.0); // final color\n vec3 dstep = 1.5 / u_shape; // step to sample derivative, set to match iso shader\n gl_FragColor = vec4(0.0);\n+ bool discard_fragment = true;\n \"\"\",\n in_loop=\"\"\"\n // check if value is different from the background value\n@@ -134,8 +135,8 @@\n gl_FragColor = color;\n \n // set the variables for the depth buffer\n- surface_point = iloc * u_shape;\n- surface_found = true;\n+ frag_depth_point = iloc * u_shape;\n+ discard_fragment = false;\n \n iter = nsteps;\n break;\n@@ -145,9 +146,8 @@\n }\n \"\"\",\n after_loop=\"\"\"\n- if (surface_found == false) {\n+ if (discard_fragment)\n discard;\n- }\n \"\"\",\n )\n", "issue": "Screenshot test failing on main (due to vispy 0.10?)\n## \ud83d\udc1b Bug\r\n\r\nThe test `napari/_tests/test_with_screenshot.py:test_z_order_image_points_after_ndisplay` is failing on main:\r\n\r\nhttps://github.com/napari/napari/runs/6069251907?check_suite_focus=true#step:7:294\r\n\r\nI suspect that this is due to the VisPy 0.10 release, which happened in the last 24h or so.\n", "before_files": [{"content": "from vispy.scene.visuals import Volume as BaseVolume\n\nFUNCTION_DEFINITIONS = \"\"\"\n// the tolerance for testing equality of floats with floatEqual and floatNotEqual\nconst float equality_tolerance = 1e-8;\n\nbool floatNotEqual(float val1, float val2)\n{\n // check if val1 and val2 are not equal\n bool not_equal = abs(val1 - val2) > equality_tolerance;\n\n return not_equal;\n}\n\nbool floatEqual(float val1, float val2)\n{\n // check if val1 and val2 are equal\n bool equal = abs(val1 - val2) < equality_tolerance;\n\n return equal;\n}\n\n\n// the background value for the iso_categorical shader\nconst float categorical_bg_value = 0;\n\nint detectAdjacentBackground(float val_neg, float val_pos)\n{\n // determine if the adjacent voxels along an axis are both background\n int adjacent_bg = int( floatEqual(val_neg, categorical_bg_value) );\n adjacent_bg = adjacent_bg * int( floatEqual(val_pos, categorical_bg_value) );\n return adjacent_bg;\n}\n\nvec4 calculateCategoricalColor(vec4 betterColor, vec3 loc, vec3 step)\n{\n // Calculate color by incorporating ambient and diffuse lighting\n vec4 color0 = $sample(u_volumetex, loc);\n vec4 color1;\n vec4 color2;\n float val0 = colorToVal(color0);\n float val1 = 0;\n float val2 = 0;\n int n_bg_borders = 0;\n\n // View direction\n vec3 V = normalize(view_ray);\n\n // calculate normal vector from gradient\n vec3 N; // normal\n color1 = $sample( u_volumetex, loc+vec3(-step[0],0.0,0.0) );\n color2 = $sample( u_volumetex, loc+vec3(step[0],0.0,0.0) );\n val1 = colorToVal(color1);\n val2 = colorToVal(color2);\n N[0] = val1 - val2;\n n_bg_borders += detectAdjacentBackground(val1, val2);\n\n color1 = $sample( u_volumetex, loc+vec3(0.0,-step[1],0.0) );\n color2 = $sample( u_volumetex, loc+vec3(0.0,step[1],0.0) );\n val1 = colorToVal(color1);\n val2 = colorToVal(color2);\n N[1] = val1 - val2;\n n_bg_borders += detectAdjacentBackground(val1, val2);\n\n color1 = $sample( u_volumetex, loc+vec3(0.0,0.0,-step[2]) );\n color2 = $sample( u_volumetex, loc+vec3(0.0,0.0,step[2]) );\n val1 = colorToVal(color1);\n val2 = colorToVal(color2);\n N[2] = val1 - val2;\n n_bg_borders += detectAdjacentBackground(val1, val2);\n\n // Normalize and flip normal so it points towards viewer\n N = normalize(N);\n float Nselect 
= float(dot(N,V) > 0.0);\n N = (2.0*Nselect - 1.0) * N; // == Nselect * N - (1.0-Nselect)*N;\n\n // Init colors\n vec4 ambient_color = vec4(0.0, 0.0, 0.0, 0.0);\n vec4 diffuse_color = vec4(0.0, 0.0, 0.0, 0.0);\n vec4 final_color;\n\n // todo: allow multiple light, define lights on viewvox or subscene\n int nlights = 1;\n for (int i=0; i<nlights; i++)\n {\n // Get light direction (make sure to prevent zero devision)\n vec3 L = normalize(view_ray); //lightDirs[i];\n float lightEnabled = float( length(L) > 0.0 );\n L = normalize(L+(1.0-lightEnabled));\n\n // Calculate lighting properties\n float lambertTerm = clamp( dot(N,L), 0.0, 1.0 );\n if (n_bg_borders > 0) {\n // to fix dim pixels due to poor normal estimation,\n // we give a default lambda to pixels surrounded by background\n lambertTerm = 0.5;\n }\n\n // Calculate mask\n float mask1 = lightEnabled;\n\n // Calculate colors\n ambient_color += mask1 * u_ambient; // * gl_LightSource[i].ambient;\n diffuse_color += mask1 * lambertTerm;\n }\n\n // Calculate final color by componing different components\n final_color = betterColor * ( ambient_color + diffuse_color);\n final_color.a = betterColor.a;\n\n // Done\n return final_color;\n}\n\"\"\"\n\nISO_CATEGORICAL_SNIPPETS = dict(\n before_loop=\"\"\"\n vec4 color3 = vec4(0.0); // final color\n vec3 dstep = 1.5 / u_shape; // step to sample derivative, set to match iso shader\n gl_FragColor = vec4(0.0);\n \"\"\",\n in_loop=\"\"\"\n // check if value is different from the background value\n if ( floatNotEqual(val, categorical_bg_value) ) {\n // Take the last interval in smaller steps\n vec3 iloc = loc - step;\n for (int i=0; i<10; i++) {\n color = $sample(u_volumetex, iloc);\n if (floatNotEqual(color.g, categorical_bg_value) ) {\n // when the non-background value is reached\n // calculate the color (apply lighting effects)\n color = applyColormap(color.g);\n color = calculateCategoricalColor(color, iloc, dstep);\n gl_FragColor = color;\n\n // set the variables for the depth buffer\n surface_point = iloc * u_shape;\n surface_found = true;\n\n iter = nsteps;\n break;\n }\n iloc += step * 0.1;\n }\n }\n \"\"\",\n after_loop=\"\"\"\n if (surface_found == false) {\n discard;\n }\n \"\"\",\n)\n\nshaders = BaseVolume._shaders.copy()\nbefore, after = shaders['fragment'].split('void main()')\nshaders['fragment'] = before + FUNCTION_DEFINITIONS + 'void main()' + after\n\nrendering_methods = BaseVolume._rendering_methods.copy()\nrendering_methods['iso_categorical'] = ISO_CATEGORICAL_SNIPPETS\n\n\nclass Volume(BaseVolume):\n # add the new rendering method to the snippets dict\n _shaders = shaders\n _rendering_methods = rendering_methods\n", "path": "napari/_vispy/visuals/volume.py"}]}
| 2,573 | 291 |
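The module in this record customizes VisPy's volume visual by splicing extra GLSL helpers into the stock fragment shader and registering an `iso_categorical` rendering method; the actual fix only renames two loop variables to the names the VisPy 0.10 template expects. A minimal sketch of the splice-before-`main()` string pattern used by the quoted module (the stub shader string below is a stand-in, not VisPy's real template):

```python
EXTRA_GLSL = """
bool floatEqual(float a, float b) { return abs(a - b) < 1e-8; }
"""

def inject_before_main(fragment_shader: str, definitions: str) -> str:
    """Insert helper definitions just before the shader's main() entry point."""
    before, sep, after = fragment_shader.partition("void main()")
    if not sep:
        raise ValueError("fragment shader has no 'void main()' to splice around")
    return before + definitions + sep + after

stub_shader = "// uniforms...\nvoid main() { gl_FragColor = vec4(1.0); }\n"
print(inject_before_main(stub_shader, EXTRA_GLSL))
```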
gh_patches_debug_11689
|
rasdani/github-patches
|
git_diff
|
python-poetry__poetry-1140
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Poetry fails to install p4python due to read-only files
<!--
Hi there! Thank you for discovering and submitting an issue.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **OS version and name**: Windows 10
- **Poetry version**: poetry 0.12.2
- **Link of a [pyproject.toml Gist](https://gist.github.com/epage/5f28e3b1e5eeb9a30697363e369a5fde)
- **Link of a [backtrace Gist](https://gist.github.com/epage/2584ad981ff5d9f175d55212b0192987)
## Issue
In digging into the problem, it seems that p4python's files are all marked read-only, causing windows to error when trying to delete them via `shutil.rmtree` which is invoked by poetry's custom temp directory handling.
</issue>
<code>
[start of poetry/utils/helpers.py]
1 import os
2 import re
3 import shutil
4 import stat
5 import tempfile
6
7 from contextlib import contextmanager
8 from typing import List
9 from typing import Optional
10
11 from poetry.config.config import Config
12 from poetry.utils._compat import Path
13 from poetry.version import Version
14
15
16 try:
17 from collections.abc import Mapping
18 except ImportError:
19 from collections import Mapping
20
21
22 _canonicalize_regex = re.compile("[-_]+")
23
24
25 def canonicalize_name(name): # type: (str) -> str
26 return _canonicalize_regex.sub("-", name).lower()
27
28
29 def module_name(name): # type: (str) -> str
30 return canonicalize_name(name).replace(".", "_").replace("-", "_")
31
32
33 def normalize_version(version): # type: (str) -> str
34 return str(Version(version))
35
36
37 @contextmanager
38 def temporary_directory(*args, **kwargs):
39 try:
40 from tempfile import TemporaryDirectory
41
42 with TemporaryDirectory(*args, **kwargs) as name:
43 yield name
44 except ImportError:
45 name = tempfile.mkdtemp(*args, **kwargs)
46
47 yield name
48
49 shutil.rmtree(name)
50
51
52 def parse_requires(requires): # type: (str) -> List[str]
53 lines = requires.split("\n")
54
55 requires_dist = []
56 in_section = False
57 current_marker = None
58 for line in lines:
59 line = line.strip()
60 if not line:
61 if in_section:
62 in_section = False
63
64 continue
65
66 if line.startswith("["):
67 # extras or conditional dependencies
68 marker = line.lstrip("[").rstrip("]")
69 if ":" not in marker:
70 extra, marker = marker, None
71 else:
72 extra, marker = marker.split(":")
73
74 if extra:
75 if marker:
76 marker = '{} and extra == "{}"'.format(marker, extra)
77 else:
78 marker = 'extra == "{}"'.format(extra)
79
80 if marker:
81 current_marker = marker
82
83 continue
84
85 if current_marker:
86 line = "{} ; {}".format(line, current_marker)
87
88 requires_dist.append(line)
89
90 return requires_dist
91
92
93 def get_cert(config, repository_name): # type: (Config, str) -> Optional[Path]
94 cert = config.get("certificates.{}.cert".format(repository_name))
95 if cert:
96 return Path(cert)
97 else:
98 return None
99
100
101 def get_client_cert(config, repository_name): # type: (Config, str) -> Optional[Path]
102 client_cert = config.get("certificates.{}.client-cert".format(repository_name))
103 if client_cert:
104 return Path(client_cert)
105 else:
106 return None
107
108
109 def _on_rm_error(func, path, exc_info):
110 if not os.path.exists(path):
111 return
112
113 os.chmod(path, stat.S_IWRITE)
114 func(path)
115
116
117 def safe_rmtree(path):
118 if Path(path).is_symlink():
119 return os.unlink(str(path))
120
121 shutil.rmtree(path, onerror=_on_rm_error)
122
123
124 def merge_dicts(d1, d2):
125 for k, v in d2.items():
126 if k in d1 and isinstance(d1[k], dict) and isinstance(d2[k], Mapping):
127 merge_dicts(d1[k], d2[k])
128 else:
129 d1[k] = d2[k]
130
[end of poetry/utils/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/poetry/utils/helpers.py b/poetry/utils/helpers.py
--- a/poetry/utils/helpers.py
+++ b/poetry/utils/helpers.py
@@ -34,19 +34,18 @@
return str(Version(version))
+def _del_ro(action, name, exc):
+ os.chmod(name, stat.S_IWRITE)
+ os.remove(name)
+
+
@contextmanager
def temporary_directory(*args, **kwargs):
- try:
- from tempfile import TemporaryDirectory
-
- with TemporaryDirectory(*args, **kwargs) as name:
- yield name
- except ImportError:
- name = tempfile.mkdtemp(*args, **kwargs)
+ name = tempfile.mkdtemp(*args, **kwargs)
- yield name
+ yield name
- shutil.rmtree(name)
+ shutil.rmtree(name, onerror=_del_ro)
def parse_requires(requires): # type: (str) -> List[str]
|
{"golden_diff": "diff --git a/poetry/utils/helpers.py b/poetry/utils/helpers.py\n--- a/poetry/utils/helpers.py\n+++ b/poetry/utils/helpers.py\n@@ -34,19 +34,18 @@\n return str(Version(version))\n \n \n+def _del_ro(action, name, exc):\n+ os.chmod(name, stat.S_IWRITE)\n+ os.remove(name)\n+\n+\n @contextmanager\n def temporary_directory(*args, **kwargs):\n- try:\n- from tempfile import TemporaryDirectory\n-\n- with TemporaryDirectory(*args, **kwargs) as name:\n- yield name\n- except ImportError:\n- name = tempfile.mkdtemp(*args, **kwargs)\n+ name = tempfile.mkdtemp(*args, **kwargs)\n \n- yield name\n+ yield name\n \n- shutil.rmtree(name)\n+ shutil.rmtree(name, onerror=_del_ro)\n \n \n def parse_requires(requires): # type: (str) -> List[str]\n", "issue": "Poetry fails to install p4python due to read-only files\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **OS version and name**: Windows 10\r\n- **Poetry version**: poetry 0.12.2\r\n- **Link of a [pyproject.toml Gist](https://gist.github.com/epage/5f28e3b1e5eeb9a30697363e369a5fde)\r\n- **Link of a [backtrace Gist](https://gist.github.com/epage/2584ad981ff5d9f175d55212b0192987)\r\n\r\n## Issue\r\n\r\nIn digging into the problem, it seems that p4python's files are all marked read-only, causing windows to error when trying to delete them via `shutil.rmtree` which is invoked by poetry's custom temp directory handling.\n", "before_files": [{"content": "import os\nimport re\nimport shutil\nimport stat\nimport tempfile\n\nfrom contextlib import contextmanager\nfrom typing import List\nfrom typing import Optional\n\nfrom poetry.config.config import Config\nfrom poetry.utils._compat import Path\nfrom poetry.version import Version\n\n\ntry:\n from collections.abc import Mapping\nexcept ImportError:\n from collections import Mapping\n\n\n_canonicalize_regex = re.compile(\"[-_]+\")\n\n\ndef canonicalize_name(name): # type: (str) -> str\n return _canonicalize_regex.sub(\"-\", name).lower()\n\n\ndef module_name(name): # type: (str) -> str\n return canonicalize_name(name).replace(\".\", \"_\").replace(\"-\", \"_\")\n\n\ndef normalize_version(version): # type: (str) -> str\n return str(Version(version))\n\n\n@contextmanager\ndef temporary_directory(*args, **kwargs):\n try:\n from tempfile import TemporaryDirectory\n\n with TemporaryDirectory(*args, **kwargs) as name:\n yield name\n except ImportError:\n name = tempfile.mkdtemp(*args, **kwargs)\n\n yield name\n\n shutil.rmtree(name)\n\n\ndef parse_requires(requires): # type: (str) -> List[str]\n lines = requires.split(\"\\n\")\n\n requires_dist = []\n in_section = False\n current_marker = None\n for line in lines:\n line = line.strip()\n if not line:\n if in_section:\n in_section = False\n\n continue\n\n if 
line.startswith(\"[\"):\n # extras or conditional dependencies\n marker = line.lstrip(\"[\").rstrip(\"]\")\n if \":\" not in marker:\n extra, marker = marker, None\n else:\n extra, marker = marker.split(\":\")\n\n if extra:\n if marker:\n marker = '{} and extra == \"{}\"'.format(marker, extra)\n else:\n marker = 'extra == \"{}\"'.format(extra)\n\n if marker:\n current_marker = marker\n\n continue\n\n if current_marker:\n line = \"{} ; {}\".format(line, current_marker)\n\n requires_dist.append(line)\n\n return requires_dist\n\n\ndef get_cert(config, repository_name): # type: (Config, str) -> Optional[Path]\n cert = config.get(\"certificates.{}.cert\".format(repository_name))\n if cert:\n return Path(cert)\n else:\n return None\n\n\ndef get_client_cert(config, repository_name): # type: (Config, str) -> Optional[Path]\n client_cert = config.get(\"certificates.{}.client-cert\".format(repository_name))\n if client_cert:\n return Path(client_cert)\n else:\n return None\n\n\ndef _on_rm_error(func, path, exc_info):\n if not os.path.exists(path):\n return\n\n os.chmod(path, stat.S_IWRITE)\n func(path)\n\n\ndef safe_rmtree(path):\n if Path(path).is_symlink():\n return os.unlink(str(path))\n\n shutil.rmtree(path, onerror=_on_rm_error)\n\n\ndef merge_dicts(d1, d2):\n for k, v in d2.items():\n if k in d1 and isinstance(d1[k], dict) and isinstance(d2[k], Mapping):\n merge_dicts(d1[k], d2[k])\n else:\n d1[k] = d2[k]\n", "path": "poetry/utils/helpers.py"}]}
| 1,930 | 214 |
gh_patches_debug_6859
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-7069
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
session could auto-no-op any callbacks invoked after the session is destroyed
Right now, if you do something like this:
```
def update_all_sessions(server_context):
for session_context in server_context.sessions:
yield session_context.with_locked_document(update_document)
```
One of the sessions could expire and be destroyed before your code gets to it.
So you need to write a check for that:
```
def update_all_sessions(server_context):
for session_context in server_context.sessions:
if not session_context.destroyed:
yield session_context.with_locked_document(update_document)
```
I think it would be better if `with_locked_document` did this automatically (just became a no-op on destroyed sessions). This could be done in session.py, `_needs_document_lock_wrapper` perhaps.
</issue>
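A minimal sketch of the guard suggested above, assuming an early return from the wrapper before any locking happens is sufficient; the lock and pending-writes bookkeeping from `session.py` is elided, and `gen.maybe_future` stands in for bokeh's `yield_for_all_futures`:

```python
import logging

from tornado import gen

log = logging.getLogger(__name__)


def _needs_document_lock(func):
    """Sketch: make locked-document callbacks silent no-ops once the
    owning session has been destroyed."""
    @gen.coroutine
    def wrapper(self, *args, **kwargs):
        if self.destroyed:
            # The session is gone; skip locking and resolve with None.
            log.debug("Ignoring locked callback on already-destroyed session.")
            raise gen.Return(None)
        self.block_expiration()
        try:
            result = yield gen.maybe_future(func(self, *args, **kwargs))
            raise gen.Return(result)
        finally:
            self.unblock_expiration()
    return wrapper
```

With such a guard in place, the explicit `if not session_context.destroyed` check in user callbacks becomes optional rather than required.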
<code>
[start of bokeh/server/session.py]
1 ''' Provides the ``ServerSession`` class.
2
3 '''
4 from __future__ import absolute_import
5
6 import logging
7 log = logging.getLogger(__name__)
8
9 import time
10
11 from tornado import gen, locks
12
13 from ..util.tornado import yield_for_all_futures
14
15 from .callbacks import _DocumentCallbackGroup
16
17 def current_time():
18 '''Return the time in milliseconds since the epoch as a floating
19 point number.
20 '''
21 try:
22 # python >=3.3 only
23 return time.monotonic() * 1000
24 except:
25 # if your python is old, don't set your clock backward!
26 return time.time() * 1000
27
28 def _needs_document_lock(func):
29 '''Decorator that adds the necessary locking and post-processing
30 to manipulate the session's document. Expects to decorate a
31 method on ServerSession and transforms it into a coroutine
32 if it wasn't already.
33 '''
34 @gen.coroutine
35 def _needs_document_lock_wrapper(self, *args, **kwargs):
36 # while we wait for and hold the lock, prevent the session
37 # from being discarded. This avoids potential weirdness
38 # with the session vanishing in the middle of some async
39 # task.
40 self.block_expiration()
41 try:
42 with (yield self._lock.acquire()):
43 if self._pending_writes is not None:
44 raise RuntimeError("internal class invariant violated: _pending_writes " + \
45 "should be None if lock is not held")
46 self._pending_writes = []
47 try:
48 result = yield yield_for_all_futures(func(self, *args, **kwargs))
49 finally:
50 # we want to be very sure we reset this or we'll
51 # keep hitting the RuntimeError above as soon as
52 # any callback goes wrong
53 pending_writes = self._pending_writes
54 self._pending_writes = None
55 for p in pending_writes:
56 yield p
57 raise gen.Return(result)
58 finally:
59 self.unblock_expiration()
60 return _needs_document_lock_wrapper
61
62 class ServerSession(object):
63 ''' Hosts an application "instance" (an instantiated Document) for one or more connections.
64
65 '''
66
67 def __init__(self, session_id, document, io_loop=None):
68 if session_id is None:
69 raise ValueError("Sessions must have an id")
70 if document is None:
71 raise ValueError("Sessions must have a document")
72 self._id = session_id
73 self._document = document
74 self._loop = io_loop
75 self._subscribed_connections = set()
76 self._last_unsubscribe_time = current_time()
77 self._lock = locks.Lock()
78 self._current_patch_connection = None
79 self._document.on_change_dispatch_to(self)
80 self._callbacks = _DocumentCallbackGroup(io_loop)
81 self._pending_writes = None
82 self._destroyed = False
83 self._expiration_requested = False
84 self._expiration_blocked_count = 0
85
86 wrapped_callbacks = self._wrap_session_callbacks(self._document.session_callbacks)
87 self._callbacks.add_session_callbacks(wrapped_callbacks)
88
89 @property
90 def document(self):
91 return self._document
92
93 @property
94 def id(self):
95 return self._id
96
97 @property
98 def destroyed(self):
99 return self._destroyed
100
101 @property
102 def expiration_requested(self):
103 return self._expiration_requested
104
105 @property
106 def expiration_blocked(self):
107 return self._expiration_blocked_count > 0
108
109 @property
110 def expiration_blocked_count(self):
111 return self._expiration_blocked_count
112
113 def destroy(self):
114 self._destroyed = True
115 self._document.delete_modules()
116 self._document.remove_on_change(self)
117 self._callbacks.remove_all_callbacks()
118
119 def request_expiration(self):
120 """ Used in test suite for now. Forces immediate expiration if no connections."""
121 self._expiration_requested = True
122
123 def block_expiration(self):
124 self._expiration_blocked_count += 1
125
126 def unblock_expiration(self):
127 if self._expiration_blocked_count <= 0:
128 raise RuntimeError("mismatched block_expiration / unblock_expiration")
129 self._expiration_blocked_count -= 1
130
131 def subscribe(self, connection):
132 """This should only be called by ServerConnection.subscribe_session or our book-keeping will be broken"""
133 self._subscribed_connections.add(connection)
134
135 def unsubscribe(self, connection):
136 """This should only be called by ServerConnection.unsubscribe_session or our book-keeping will be broken"""
137 self._subscribed_connections.discard(connection)
138 self._last_unsubscribe_time = current_time()
139
140 @property
141 def connection_count(self):
142 return len(self._subscribed_connections)
143
144 @property
145 def milliseconds_since_last_unsubscribe(self):
146 return current_time() - self._last_unsubscribe_time
147
148 @_needs_document_lock
149 def with_document_locked(self, func, *args, **kwargs):
150 ''' Asynchronously locks the document and runs the function with it locked.'''
151 return func(*args, **kwargs)
152
153 def _wrap_document_callback(self, callback):
154 if getattr(callback, "nolock", False):
155 return callback
156 def wrapped_callback(*args, **kwargs):
157 return self.with_document_locked(callback, *args, **kwargs)
158 return wrapped_callback
159
160 def _wrap_session_callback(self, callback):
161 wrapped = self._wrap_document_callback(callback.callback)
162 return callback._copy_with_changed_callback(wrapped)
163
164 def _wrap_session_callbacks(self, callbacks):
165 wrapped = []
166 for cb in callbacks:
167 wrapped.append(self._wrap_session_callback(cb))
168 return wrapped
169
170 def _document_patched(self, event):
171 may_suppress = event.setter is self
172
173 if self._pending_writes is None:
174 raise RuntimeError("_pending_writes should be non-None when we have a document lock, and we should have the lock when the document changes")
175
176 # TODO (havocp): our "change sync" protocol is flawed because if both
177 # sides change the same attribute at the same time, they will each end
178 # up with the state of the other and their final states will differ.
179 for connection in self._subscribed_connections:
180 if may_suppress and connection is self._current_patch_connection:
181 log.trace("Not sending notification back to client %r for a change it requested", connection)
182 else:
183 self._pending_writes.append(connection.send_patch_document(event))
184
185 @_needs_document_lock
186 def _handle_pull(self, message, connection):
187 log.debug("Sending pull-doc-reply from session %r", self.id)
188 return connection.protocol.create('PULL-DOC-REPLY', message.header['msgid'], self.document)
189
190 def _session_callback_added(self, event):
191 wrapped = self._wrap_session_callback(event.callback)
192 self._callbacks.add_session_callback(wrapped)
193
194 def _session_callback_removed(self, event):
195 self._callbacks.remove_session_callback(event.callback)
196
197 @classmethod
198 def pull(cls, message, connection):
199 ''' Handle a PULL-DOC, return a Future with work to be scheduled. '''
200 return connection.session._handle_pull(message, connection)
201
202 @_needs_document_lock
203 def _handle_push(self, message, connection):
204 log.debug("pushing doc to session %r", self.id)
205 message.push_to_document(self.document)
206 return connection.ok(message)
207
208 @classmethod
209 def push(cls, message, connection):
210 ''' Handle a PUSH-DOC, return a Future with work to be scheduled. '''
211 return connection.session._handle_push(message, connection)
212
213 @_needs_document_lock
214 def _handle_patch(self, message, connection):
215 self._current_patch_connection = connection
216 try:
217 message.apply_to_document(self.document, self)
218 finally:
219 self._current_patch_connection = None
220
221 return connection.ok(message)
222
223 @_needs_document_lock
224 def _handle_event(self, message, connection):
225 message.notify_event(self.document)
226 return connection.ok(message)
227
228 @classmethod
229 def event(cls, message, connection):
230 return connection.session._handle_event(message, connection)
231
232
233 @classmethod
234 def patch(cls, message, connection):
235 ''' Handle a PATCH-DOC, return a Future with work to be scheduled. '''
236 return connection.session._handle_patch(message, connection)
237
[end of bokeh/server/session.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bokeh/server/session.py b/bokeh/server/session.py
--- a/bokeh/server/session.py
+++ b/bokeh/server/session.py
@@ -37,6 +37,9 @@
# from being discarded. This avoids potential weirdness
# with the session vanishing in the middle of some async
# task.
+ if self.destroyed:
+ log.debug("Ignoring locked callback on already-destroyed session.")
+ raise gen.Return(None)
self.block_expiration()
try:
with (yield self._lock.acquire()):
|
{"golden_diff": "diff --git a/bokeh/server/session.py b/bokeh/server/session.py\n--- a/bokeh/server/session.py\n+++ b/bokeh/server/session.py\n@@ -37,6 +37,9 @@\n # from being discarded. This avoids potential weirdness\n # with the session vanishing in the middle of some async\n # task.\n+ if self.destroyed:\n+ log.debug(\"Ignoring locked callback on already-destroyed session.\")\n+ raise gen.Return(None)\n self.block_expiration()\n try:\n with (yield self._lock.acquire()):\n", "issue": "session could auto-no-op any callbacks invoked after the session is destroyed\nRight now, if you do something like this:\n\n```\ndef update_all_sessions(server_context):\n for session_context in server_context.sessions:\n yield session_context.with_locked_document(update_document)\n```\n\nOne of the sessions could expire and be destroyed before your code gets to it.\n\nSo you need to write a check for that:\n\n```\ndef update_all_sessions(server_context):\n for session_context in server_context.sessions:\n if not session_context.destroyed:\n yield session_context.with_locked_document(update_document)\n```\n\nI think it would be better if `with_locked_document` did this automatically (just became a no-op on destroyed sessions). This could be done in session.py, `_needs_document_lock_wrapper` perhaps.\n\n", "before_files": [{"content": "''' Provides the ``ServerSession`` class.\n\n'''\nfrom __future__ import absolute_import\n\nimport logging\nlog = logging.getLogger(__name__)\n\nimport time\n\nfrom tornado import gen, locks\n\nfrom ..util.tornado import yield_for_all_futures\n\nfrom .callbacks import _DocumentCallbackGroup\n\ndef current_time():\n '''Return the time in milliseconds since the epoch as a floating\n point number.\n '''\n try:\n # python >=3.3 only\n return time.monotonic() * 1000\n except:\n # if your python is old, don't set your clock backward!\n return time.time() * 1000\n\ndef _needs_document_lock(func):\n '''Decorator that adds the necessary locking and post-processing\n to manipulate the session's document. Expects to decorate a\n method on ServerSession and transforms it into a coroutine\n if it wasn't already.\n '''\n @gen.coroutine\n def _needs_document_lock_wrapper(self, *args, **kwargs):\n # while we wait for and hold the lock, prevent the session\n # from being discarded. 
This avoids potential weirdness\n # with the session vanishing in the middle of some async\n # task.\n self.block_expiration()\n try:\n with (yield self._lock.acquire()):\n if self._pending_writes is not None:\n raise RuntimeError(\"internal class invariant violated: _pending_writes \" + \\\n \"should be None if lock is not held\")\n self._pending_writes = []\n try:\n result = yield yield_for_all_futures(func(self, *args, **kwargs))\n finally:\n # we want to be very sure we reset this or we'll\n # keep hitting the RuntimeError above as soon as\n # any callback goes wrong\n pending_writes = self._pending_writes\n self._pending_writes = None\n for p in pending_writes:\n yield p\n raise gen.Return(result)\n finally:\n self.unblock_expiration()\n return _needs_document_lock_wrapper\n\nclass ServerSession(object):\n ''' Hosts an application \"instance\" (an instantiated Document) for one or more connections.\n\n '''\n\n def __init__(self, session_id, document, io_loop=None):\n if session_id is None:\n raise ValueError(\"Sessions must have an id\")\n if document is None:\n raise ValueError(\"Sessions must have a document\")\n self._id = session_id\n self._document = document\n self._loop = io_loop\n self._subscribed_connections = set()\n self._last_unsubscribe_time = current_time()\n self._lock = locks.Lock()\n self._current_patch_connection = None\n self._document.on_change_dispatch_to(self)\n self._callbacks = _DocumentCallbackGroup(io_loop)\n self._pending_writes = None\n self._destroyed = False\n self._expiration_requested = False\n self._expiration_blocked_count = 0\n\n wrapped_callbacks = self._wrap_session_callbacks(self._document.session_callbacks)\n self._callbacks.add_session_callbacks(wrapped_callbacks)\n\n @property\n def document(self):\n return self._document\n\n @property\n def id(self):\n return self._id\n\n @property\n def destroyed(self):\n return self._destroyed\n\n @property\n def expiration_requested(self):\n return self._expiration_requested\n\n @property\n def expiration_blocked(self):\n return self._expiration_blocked_count > 0\n\n @property\n def expiration_blocked_count(self):\n return self._expiration_blocked_count\n\n def destroy(self):\n self._destroyed = True\n self._document.delete_modules()\n self._document.remove_on_change(self)\n self._callbacks.remove_all_callbacks()\n\n def request_expiration(self):\n \"\"\" Used in test suite for now. 
Forces immediate expiration if no connections.\"\"\"\n self._expiration_requested = True\n\n def block_expiration(self):\n self._expiration_blocked_count += 1\n\n def unblock_expiration(self):\n if self._expiration_blocked_count <= 0:\n raise RuntimeError(\"mismatched block_expiration / unblock_expiration\")\n self._expiration_blocked_count -= 1\n\n def subscribe(self, connection):\n \"\"\"This should only be called by ServerConnection.subscribe_session or our book-keeping will be broken\"\"\"\n self._subscribed_connections.add(connection)\n\n def unsubscribe(self, connection):\n \"\"\"This should only be called by ServerConnection.unsubscribe_session or our book-keeping will be broken\"\"\"\n self._subscribed_connections.discard(connection)\n self._last_unsubscribe_time = current_time()\n\n @property\n def connection_count(self):\n return len(self._subscribed_connections)\n\n @property\n def milliseconds_since_last_unsubscribe(self):\n return current_time() - self._last_unsubscribe_time\n\n @_needs_document_lock\n def with_document_locked(self, func, *args, **kwargs):\n ''' Asynchronously locks the document and runs the function with it locked.'''\n return func(*args, **kwargs)\n\n def _wrap_document_callback(self, callback):\n if getattr(callback, \"nolock\", False):\n return callback\n def wrapped_callback(*args, **kwargs):\n return self.with_document_locked(callback, *args, **kwargs)\n return wrapped_callback\n\n def _wrap_session_callback(self, callback):\n wrapped = self._wrap_document_callback(callback.callback)\n return callback._copy_with_changed_callback(wrapped)\n\n def _wrap_session_callbacks(self, callbacks):\n wrapped = []\n for cb in callbacks:\n wrapped.append(self._wrap_session_callback(cb))\n return wrapped\n\n def _document_patched(self, event):\n may_suppress = event.setter is self\n\n if self._pending_writes is None:\n raise RuntimeError(\"_pending_writes should be non-None when we have a document lock, and we should have the lock when the document changes\")\n\n # TODO (havocp): our \"change sync\" protocol is flawed because if both\n # sides change the same attribute at the same time, they will each end\n # up with the state of the other and their final states will differ.\n for connection in self._subscribed_connections:\n if may_suppress and connection is self._current_patch_connection:\n log.trace(\"Not sending notification back to client %r for a change it requested\", connection)\n else:\n self._pending_writes.append(connection.send_patch_document(event))\n\n @_needs_document_lock\n def _handle_pull(self, message, connection):\n log.debug(\"Sending pull-doc-reply from session %r\", self.id)\n return connection.protocol.create('PULL-DOC-REPLY', message.header['msgid'], self.document)\n\n def _session_callback_added(self, event):\n wrapped = self._wrap_session_callback(event.callback)\n self._callbacks.add_session_callback(wrapped)\n\n def _session_callback_removed(self, event):\n self._callbacks.remove_session_callback(event.callback)\n\n @classmethod\n def pull(cls, message, connection):\n ''' Handle a PULL-DOC, return a Future with work to be scheduled. '''\n return connection.session._handle_pull(message, connection)\n\n @_needs_document_lock\n def _handle_push(self, message, connection):\n log.debug(\"pushing doc to session %r\", self.id)\n message.push_to_document(self.document)\n return connection.ok(message)\n\n @classmethod\n def push(cls, message, connection):\n ''' Handle a PUSH-DOC, return a Future with work to be scheduled. 
'''\n return connection.session._handle_push(message, connection)\n\n @_needs_document_lock\n def _handle_patch(self, message, connection):\n self._current_patch_connection = connection\n try:\n message.apply_to_document(self.document, self)\n finally:\n self._current_patch_connection = None\n\n return connection.ok(message)\n\n @_needs_document_lock\n def _handle_event(self, message, connection):\n message.notify_event(self.document)\n return connection.ok(message)\n\n @classmethod\n def event(cls, message, connection):\n return connection.session._handle_event(message, connection)\n\n\n @classmethod\n def patch(cls, message, connection):\n ''' Handle a PATCH-DOC, return a Future with work to be scheduled. '''\n return connection.session._handle_patch(message, connection)\n", "path": "bokeh/server/session.py"}]}
| 3,091 | 125 |
gh_patches_debug_23207
|
rasdani/github-patches
|
git_diff
|
getsentry__snuba-1794
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Snuba cleanup for sentry onpremise
### Environment
Sentry self-hosted 21.3.0 (based on docker-compose from here https://github.com/getsentry/onpremise/blob/21.3.0/docker-compose.yml)
### Steps to Reproduce
1) Setup all containers and up snuba-cleanup container
2) Check logs for snuba-cleanup: Every 5 minutes in log - `Dropped 0 partitions on None`
It looks like variable CLICKHOUSE_HOST is ignored here
https://github.com/getsentry/snuba/blob/41d7fe76aaf8a594e8f6e84015607dcde3f67ad4/snuba/cli/cleanup.py#L13
After manually running the command in the container - `snuba cleanup --clickhouse-host CLICKHOUSE_HOST_HERE --dry-run True`
I got `Dropped 0 partitions on CLICKHOUSE_HOST_HERE`
### Expected Result
Pass variable https://github.com/getsentry/onpremise/blob/bdd2686021cfea07507bc07d2756ac34a775c680/docker-compose.yml#L44 into cleanup command
### Actual Result
variable is `None` instead of clickhouse host
I'm not sure whether this is a bug or not.
</issue>
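A minimal sketch of the reporting fix implied above, reusing the identifiers from `snuba/cli/cleanup.py` shown below; the assumption is that logging the resolved cluster, rather than echoing the `--clickhouse-host` option (which stays `None` on the single-node path), gives the operator the host information they expect:

```python
# Sketch: resolve the cluster once, use it for the connection and the log line.
cluster = storage.get_cluster()
database = cluster.get_database()

if clickhouse_host and clickhouse_port:
    connection = ClickhousePool(
        clickhouse_host, clickhouse_port, clickhouse_user, clickhouse_password, database,
    )
elif not cluster.is_single_node():
    raise click.ClickException("Provide ClickHouse host and port for cleanup")
else:
    connection = cluster.get_query_connection(ClickhouseClientSettings.CLEANUP)

num_dropped = run_cleanup(connection, storage, database, dry_run=dry_run)
# Report the cluster instead of the possibly-None CLI option.
logger.info("Dropped %s partitions on %s" % (num_dropped, cluster))
```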
<code>
[start of snuba/cli/cleanup.py]
1 from typing import Optional
2
3 import click
4
5 from snuba.clusters.cluster import ClickhouseClientSettings
6 from snuba.datasets.storages import StorageKey
7 from snuba.datasets.storages.factory import get_writable_storage
8 from snuba.environment import setup_logging
9
10
11 @click.command()
12 @click.option(
13 "--clickhouse-host", help="Clickhouse server to write to.",
14 )
15 @click.option(
16 "--clickhouse-port", type=int, help="Clickhouse native port to write to.",
17 )
18 @click.option(
19 "--dry-run",
20 type=bool,
21 default=True,
22 help="If true, only print which partitions would be dropped.",
23 )
24 @click.option(
25 "--storage",
26 "storage_name",
27 default="events",
28 type=click.Choice(["events", "errors", "transactions"]),
29 help="The storage to target",
30 )
31 @click.option("--log-level", help="Logging level to use.")
32 def cleanup(
33 *,
34 clickhouse_host: Optional[str],
35 clickhouse_port: Optional[int],
36 dry_run: bool,
37 storage_name: str,
38 log_level: Optional[str] = None,
39 ) -> None:
40 """
41 Deletes stale partitions for ClickHouse tables
42 """
43
44 setup_logging(log_level)
45
46 from snuba.cleanup import run_cleanup, logger
47 from snuba.clickhouse.native import ClickhousePool
48
49 storage = get_writable_storage(StorageKey(storage_name))
50
51 (clickhouse_user, clickhouse_password,) = storage.get_cluster().get_credentials()
52
53 database = storage.get_cluster().get_database()
54
55 if clickhouse_host and clickhouse_port:
56 connection = ClickhousePool(
57 clickhouse_host,
58 clickhouse_port,
59 clickhouse_user,
60 clickhouse_password,
61 database,
62 )
63 elif not storage.get_cluster().is_single_node():
64 raise click.ClickException("Provide ClickHouse host and port for cleanup")
65 else:
66 connection = storage.get_cluster().get_query_connection(
67 ClickhouseClientSettings.CLEANUP
68 )
69
70 num_dropped = run_cleanup(connection, storage, database, dry_run=dry_run)
71 logger.info("Dropped %s partitions on %s" % (num_dropped, clickhouse_host))
72
[end of snuba/cli/cleanup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/snuba/cli/cleanup.py b/snuba/cli/cleanup.py
--- a/snuba/cli/cleanup.py
+++ b/snuba/cli/cleanup.py
@@ -50,7 +50,8 @@
(clickhouse_user, clickhouse_password,) = storage.get_cluster().get_credentials()
- database = storage.get_cluster().get_database()
+ cluster = storage.get_cluster()
+ database = cluster.get_database()
if clickhouse_host and clickhouse_port:
connection = ClickhousePool(
@@ -60,12 +61,12 @@
clickhouse_password,
database,
)
- elif not storage.get_cluster().is_single_node():
+ elif not cluster.is_single_node():
raise click.ClickException("Provide ClickHouse host and port for cleanup")
else:
- connection = storage.get_cluster().get_query_connection(
+ connection = cluster.get_query_connection(
ClickhouseClientSettings.CLEANUP
)
num_dropped = run_cleanup(connection, storage, database, dry_run=dry_run)
- logger.info("Dropped %s partitions on %s" % (num_dropped, clickhouse_host))
+ logger.info("Dropped %s partitions on %s" % (num_dropped, cluster))
|
{"golden_diff": "diff --git a/snuba/cli/cleanup.py b/snuba/cli/cleanup.py\n--- a/snuba/cli/cleanup.py\n+++ b/snuba/cli/cleanup.py\n@@ -50,7 +50,8 @@\n \n (clickhouse_user, clickhouse_password,) = storage.get_cluster().get_credentials()\n \n- database = storage.get_cluster().get_database()\n+ cluster = storage.get_cluster()\n+ database = cluster.get_database()\n \n if clickhouse_host and clickhouse_port:\n connection = ClickhousePool(\n@@ -60,12 +61,12 @@\n clickhouse_password,\n database,\n )\n- elif not storage.get_cluster().is_single_node():\n+ elif not cluster.is_single_node():\n raise click.ClickException(\"Provide ClickHouse host and port for cleanup\")\n else:\n- connection = storage.get_cluster().get_query_connection(\n+ connection = cluster.get_query_connection(\n ClickhouseClientSettings.CLEANUP\n )\n \n num_dropped = run_cleanup(connection, storage, database, dry_run=dry_run)\n- logger.info(\"Dropped %s partitions on %s\" % (num_dropped, clickhouse_host))\n+ logger.info(\"Dropped %s partitions on %s\" % (num_dropped, cluster))\n", "issue": "Snuba cleanup for sentry onpremise\n### Environment\r\n\r\nSentry self-hosted 21.3.0 (based on docker-compose from here https://github.com/getsentry/onpremise/blob/21.3.0/docker-compose.yml)\r\n\r\n### Steps to Reproduce\r\n\r\n1) Setup all containers and up snuba-cleanup container\r\n2) Check logs for snuba-cleanup: Every 5 minutes in log - `Dropped 0 partitions on None`\r\nIt looks like variable CLICKHOUSE_HOST is ignored here\r\nhttps://github.com/getsentry/snuba/blob/41d7fe76aaf8a594e8f6e84015607dcde3f67ad4/snuba/cli/cleanup.py#L13\r\nAfter manual run command in container - `snuba cleanup --clickhouse-host CLICKHOUSE_HOST_HERE --dry-run True`\r\ni got `Dropped 0 partitions on CLICKHOUSE_HOST_HERE`\r\n\r\n### Expected Result\r\n\r\nPass variable https://github.com/getsentry/onpremise/blob/bdd2686021cfea07507bc07d2756ac34a775c680/docker-compose.yml#L44 into cleanup command\r\n\r\n### Actual Result\r\n\r\nvariable is `None` instead of clickhouse host\r\n\r\nI'am not sure, bug this or not.\n", "before_files": [{"content": "from typing import Optional\n\nimport click\n\nfrom snuba.clusters.cluster import ClickhouseClientSettings\nfrom snuba.datasets.storages import StorageKey\nfrom snuba.datasets.storages.factory import get_writable_storage\nfrom snuba.environment import setup_logging\n\n\[email protected]()\[email protected](\n \"--clickhouse-host\", help=\"Clickhouse server to write to.\",\n)\[email protected](\n \"--clickhouse-port\", type=int, help=\"Clickhouse native port to write to.\",\n)\[email protected](\n \"--dry-run\",\n type=bool,\n default=True,\n help=\"If true, only print which partitions would be dropped.\",\n)\[email protected](\n \"--storage\",\n \"storage_name\",\n default=\"events\",\n type=click.Choice([\"events\", \"errors\", \"transactions\"]),\n help=\"The storage to target\",\n)\[email protected](\"--log-level\", help=\"Logging level to use.\")\ndef cleanup(\n *,\n clickhouse_host: Optional[str],\n clickhouse_port: Optional[int],\n dry_run: bool,\n storage_name: str,\n log_level: Optional[str] = None,\n) -> None:\n \"\"\"\n Deletes stale partitions for ClickHouse tables\n \"\"\"\n\n setup_logging(log_level)\n\n from snuba.cleanup import run_cleanup, logger\n from snuba.clickhouse.native import ClickhousePool\n\n storage = get_writable_storage(StorageKey(storage_name))\n\n (clickhouse_user, clickhouse_password,) = storage.get_cluster().get_credentials()\n\n database = storage.get_cluster().get_database()\n\n if 
clickhouse_host and clickhouse_port:\n connection = ClickhousePool(\n clickhouse_host,\n clickhouse_port,\n clickhouse_user,\n clickhouse_password,\n database,\n )\n elif not storage.get_cluster().is_single_node():\n raise click.ClickException(\"Provide ClickHouse host and port for cleanup\")\n else:\n connection = storage.get_cluster().get_query_connection(\n ClickhouseClientSettings.CLEANUP\n )\n\n num_dropped = run_cleanup(connection, storage, database, dry_run=dry_run)\n logger.info(\"Dropped %s partitions on %s\" % (num_dropped, clickhouse_host))\n", "path": "snuba/cli/cleanup.py"}]}
| 1,431 | 274 |
gh_patches_debug_36708
|
rasdani/github-patches
|
git_diff
|
ros__ros_comm-1695
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Topic Statistics reports 0 for topics under 1Hz
When the `/enable_statistics` parameter is `true`, subscribers publish to the `/statistics` topic. For a topic with an expected publishing frequency of 0.2 Hz, this feature always reports `msg.mean_period` as 0, no matter how much time is given.
</issue>
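A minimal sketch of the window-adjustment fix this report points at, reusing the identifiers from `statistics.py` shown below and assuming `pub_frequency` is meant to be a rate in Hz, so a period has to be derived from it wherever a duration is actually needed:

```python
# Sketch: derive the reporting period from the rate, so slow topics
# (e.g. 0.2 Hz) keep collecting samples instead of reporting zeros.
pub_period = 1.0 / self.pub_frequency.to_sec()

# Too many samples in the window: report twice as often, but only if the
# shorter period would still respect the minimum window length.
if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements \
        and pub_period / 2 >= subscriber_statistics_logger.min_window:
    self.pub_frequency *= 2

# Too few samples: report half as often, capped by the maximum window length.
if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements \
        and pub_period * 2 <= subscriber_statistics_logger.max_window:
    self.pub_frequency /= 2

# Publish statistics once a full period has elapsed since the last report.
if self.last_pub_time + rospy.Duration(1.0 / self.pub_frequency.to_sec()) < arrival_time:
    self.last_pub_time = arrival_time
    self.sendStatistics(subscriber_statistics_logger)
```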
<code>
[start of clients/rospy/src/rospy/impl/statistics.py]
1 # Software License Agreement (BSD License)
2 #
3 # Copyright (c) 2013-2014 Dariush Forouher
4 # All rights reserved.
5 #
6 # Based on code adapted from diagnostics_updater by Blaise Gassend
7 #
8 # Redistribution and use in source and binary forms, with or without
9 # modification, are permitted provided that the following conditions
10 # are met:
11 #
12 # * Redistributions of source code must retain the above copyright
13 # notice, this list of conditions and the following disclaimer.
14 # * Redistributions in binary form must reproduce the above
15 # copyright notice, this list of conditions and the following
16 # disclaimer in the documentation and/or other materials provided
17 # with the distribution.
18 # * Neither the name of Willow Garage, Inc. nor the names of its
19 # contributors may be used to endorse or promote products derived
20 # from this software without specific prior written permission.
21 #
22 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
23 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
24 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
25 # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
26 # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
27 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
28 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
29 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
30 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
31 # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
32 # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
33 # POSSIBILITY OF SUCH DAMAGE.
34
35 from math import sqrt
36 import logging
37 import sys
38
39 from rosgraph_msgs.msg import TopicStatistics
40 import rospy
41
42 _logger = logging.getLogger('rospy.impl.statistics')
43
44
45 class SubscriberStatisticsLogger():
46 """
47 Class that monitors each subscriber.
48
49 this class basically just keeps a collection of ConnectionStatisticsLogger.
50 """
51
52 @classmethod
53 def is_enabled(cls):
54 # disable statistics if node can't talk to parameter server
55 # which is the case in unit tests
56 try:
57 return rospy.get_param("/enable_statistics", False)
58 except Exception:
59 return False
60
61 def __init__(self, subscriber):
62 self.subscriber_name = subscriber.name
63 self.connections = dict()
64 self.read_parameters()
65
66 def read_parameters(self):
67 """
68 Fetch window parameters from parameter server
69 """
70
71 # Range of window length, in seconds
72 self.min_elements = rospy.get_param("/statistics_window_min_elements", 10)
73 self.max_elements = rospy.get_param("/statistics_window_max_elements", 100)
74
75 # Range of acceptable messages in window.
76 # Window size will be adjusted if number of observed is
77 # outside this range.
78 self.max_window = rospy.get_param("/statistics_window_max_size", 64)
79 self.min_window = rospy.get_param("/statistics_window_min_size", 4)
80
81 def callback(self, msg, publisher, stat_bytes):
82 """
83 This method is called for every message that has been received.
84
85 @param msg: The message received.
86 @param publisher: The name of the publisher node that sent the msg
87 @param stat_bytes: A counter, how many bytes have been moved across
88 this connection since it exists.
89
90 This method just looks up the ConnectionStatisticsLogger for the specific connection
91 between publisher and subscriber and delegates to statistics logging to that
92 instance.
93 """
94
95 # /clock is special, as it is subscribed very early
96 # also exclude /statistics to reduce noise.
97 if self.subscriber_name == "/clock" or self.subscriber_name == "/statistics":
98 return
99
100 try:
101 # create ConnectionStatisticsLogger for new connections
102 logger = self.connections.get(publisher)
103 if logger is None:
104 logger = ConnectionStatisticsLogger(self.subscriber_name, rospy.get_name(), publisher)
105 self.connections[publisher] = logger
106
107 # delegate stuff to that instance
108 logger.callback(self, msg, stat_bytes)
109 except Exception as e:
110 rospy.logerr("Unexpected error during statistics measurement: %s", str(e))
111
112 def shutdown(self):
113 for logger in self.connections.values():
114 logger.shutdown()
115 self.connections.clear()
116
117
118 class ConnectionStatisticsLogger():
119 """
120 Class that monitors lots of stuff for each connection.
121
122 is created whenever a subscriber is created.
123 is destroyed whenever its parent subscriber is destroyed.
124 its lifecycle is therefore bound to its parent subscriber.
125 """
126
127 def __init__(self, topic, subscriber, publisher):
128 """
129 Constructor.
130
131 @param topic: Name of the topic
132 @param subscriber: Name of the subscriber
133 @param publisher: Name of the publisher
134
135 These three should uniquely identify the connection.
136 """
137
138 self.topic = topic
139 self.subscriber = subscriber
140 self.publisher = publisher
141
142 self.pub = rospy.Publisher("/statistics", TopicStatistics, queue_size=10)
143
144 # reset window
145 self.last_pub_time = rospy.Time(0)
146 self.pub_frequency = rospy.Duration(1.0)
147
148 # timestamp age
149 self.age_list_ = []
150
151 # period calculations
152 self.arrival_time_list_ = []
153
154 self.last_seq_ = 0
155 self.dropped_msgs_ = 0
156 self.window_start = rospy.Time.now()
157
158 # temporary variables
159 self.stat_bytes_last_ = 0
160 self.stat_bytes_window_ = 0
161
162 def sendStatistics(self, subscriber_statistics_logger):
163 """
164 Send out statistics. Aggregate collected stats information.
165
166 Currently done blocking. Might be moved to own thread later. But at the moment
167 any computation done here should be rather quick.
168 """
169 curtime = rospy.Time.now()
170
171 msg = TopicStatistics()
172 msg.topic = self.topic
173 msg.node_sub = self.subscriber
174 msg.node_pub = self.publisher
175
176 msg.window_start = self.window_start
177 msg.window_stop = curtime
178
179 # Calculate bytes since last message
180 msg.traffic = self.stat_bytes_window_ - self.stat_bytes_last_
181
182 msg.delivered_msgs = len(self.arrival_time_list_)
183 msg.dropped_msgs = self.dropped_msgs_
184
185 # we can only calculate message age if the messages did contain Header fields.
186 if len(self.age_list_) > 0:
187 msg.stamp_age_mean = rospy.Duration(sum(self.age_list_, rospy.Duration(0)).to_sec() / len(self.age_list_))
188 variance = sum((rospy.Duration((msg.stamp_age_mean - value).to_sec() ** 2) for value in self.age_list_), rospy.Duration(0)) / len(self.age_list_)
189 msg.stamp_age_stddev = rospy.Duration(sqrt(variance.to_sec()))
190 msg.stamp_age_max = max(self.age_list_)
191 else:
192 msg.stamp_age_mean = rospy.Duration(0)
193 msg.stamp_age_stddev = rospy.Duration(0)
194 msg.stamp_age_max = rospy.Duration(0)
195
196 # computer period/frequency. we need at least two messages within the window to do this.
197 if len(self.arrival_time_list_) > 1:
198 periods = [j - i for i, j in zip(self.arrival_time_list_[:-1], self.arrival_time_list_[1:])]
199 msg.period_mean = rospy.Duration(sum(periods, rospy.Duration(0)).to_sec() / len(periods))
200 variance = sum((rospy.Duration((msg.period_mean - value).to_sec() ** 2) for value in periods), rospy.Duration(0)) / len(periods)
201 msg.period_stddev = rospy.Duration(sqrt(variance.to_sec()))
202 msg.period_max = max(periods)
203 else:
204 msg.period_mean = rospy.Duration(0)
205 msg.period_stddev = rospy.Duration(0)
206 msg.period_max = rospy.Duration(0)
207
208 self.pub.publish(msg)
209
210 # adjust window, if message count is not appropriate.
211 if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements and self.pub_frequency.to_sec() * 2 <= subscriber_statistics_logger.max_window:
212 self.pub_frequency *= 2
213 if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements and self.pub_frequency.to_sec() / 2 >= subscriber_statistics_logger.min_window:
214 self.pub_frequency /= 2
215
216 # clear collected stats, start new window.
217 self.age_list_ = []
218 self.arrival_time_list_ = []
219 self.dropped_msgs_ = 0
220
221 self.window_start = curtime
222
223 self.stat_bytes_last_ = self.stat_bytes_window_
224
225 def callback(self, subscriber_statistics_logger, msg, stat_bytes):
226 """
227 This method is called for every message, that is received on this
228 subscriber.
229
230 this callback will keep some statistics and publish the results
231 periodically on a topic. the publishing should probably be done
232 asynchronically in another thread.
233
234 @param msg: The message, that has been received. The message has usually
235 been already deserialized. However this is not always the case. (AnyMsg)
236 @param stat_bytes: A counter, how many bytes have been moved across
237 this connection since it exists.
238
239 Any computing-heavy stuff should be done somewhere else, as this
240 callback has to return before the message is delivered to the user.
241 """
242
243 arrival_time = rospy.Time.now()
244
245 self.arrival_time_list_.append(arrival_time)
246
247 self.stat_bytes_window_ = stat_bytes
248
249 # rospy has the feature to subscribe a topic with AnyMsg which aren't deserialized.
250 # Those subscribers won't have a header. But as these subscribers are rather rare
251 # ("rostopic hz" is the only one I know of), I'm gonna ignore them.
252 if msg._has_header:
253 self.age_list_.append(arrival_time - msg.header.stamp)
254
255 if self.last_seq_ + 1 != msg.header.seq:
256 self.dropped_msgs_ = self.dropped_msgs_ + 1
257 self.last_seq_ = msg.header.seq
258
259 # send out statistics with a certain frequency
260 if self.last_pub_time + self.pub_frequency < arrival_time:
261 self.last_pub_time = arrival_time
262 self.sendStatistics(subscriber_statistics_logger)
263
264 def shutdown(self):
265 self.pub.unregister()
266
[end of clients/rospy/src/rospy/impl/statistics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/clients/rospy/src/rospy/impl/statistics.py b/clients/rospy/src/rospy/impl/statistics.py
--- a/clients/rospy/src/rospy/impl/statistics.py
+++ b/clients/rospy/src/rospy/impl/statistics.py
@@ -68,15 +68,15 @@
Fetch window parameters from parameter server
"""
- # Range of window length, in seconds
- self.min_elements = rospy.get_param("/statistics_window_min_elements", 10)
- self.max_elements = rospy.get_param("/statistics_window_max_elements", 100)
-
# Range of acceptable messages in window.
# Window size will be adjusted if number of observed is
# outside this range.
- self.max_window = rospy.get_param("/statistics_window_max_size", 64)
+ self.min_elements = rospy.get_param("/statistics_window_min_elements", 10)
+ self.max_elements = rospy.get_param("/statistics_window_max_elements", 100)
+
+ # Range of window length, in seconds
self.min_window = rospy.get_param("/statistics_window_min_size", 4)
+ self.max_window = rospy.get_param("/statistics_window_max_size", 64)
def callback(self, msg, publisher, stat_bytes):
"""
@@ -208,9 +208,10 @@
self.pub.publish(msg)
# adjust window, if message count is not appropriate.
- if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements and self.pub_frequency.to_sec() * 2 <= subscriber_statistics_logger.max_window:
+ pub_period = 1.0 / self.pub_frequency.to_sec()
+ if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements and pub_period / 2 >= subscriber_statistics_logger.min_window:
self.pub_frequency *= 2
- if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements and self.pub_frequency.to_sec() / 2 >= subscriber_statistics_logger.min_window:
+ if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements and pub_period * 2 <= subscriber_statistics_logger.max_window:
self.pub_frequency /= 2
# clear collected stats, start new window.
@@ -257,7 +258,7 @@
self.last_seq_ = msg.header.seq
# send out statistics with a certain frequency
- if self.last_pub_time + self.pub_frequency < arrival_time:
+ if self.last_pub_time + rospy.Duration(1.0 / self.pub_frequency.to_sec()) < arrival_time:
self.last_pub_time = arrival_time
self.sendStatistics(subscriber_statistics_logger)
|
{"golden_diff": "diff --git a/clients/rospy/src/rospy/impl/statistics.py b/clients/rospy/src/rospy/impl/statistics.py\n--- a/clients/rospy/src/rospy/impl/statistics.py\n+++ b/clients/rospy/src/rospy/impl/statistics.py\n@@ -68,15 +68,15 @@\n Fetch window parameters from parameter server\n \"\"\"\n \n- # Range of window length, in seconds\n- self.min_elements = rospy.get_param(\"/statistics_window_min_elements\", 10)\n- self.max_elements = rospy.get_param(\"/statistics_window_max_elements\", 100)\n-\n # Range of acceptable messages in window.\n # Window size will be adjusted if number of observed is\n # outside this range.\n- self.max_window = rospy.get_param(\"/statistics_window_max_size\", 64)\n+ self.min_elements = rospy.get_param(\"/statistics_window_min_elements\", 10)\n+ self.max_elements = rospy.get_param(\"/statistics_window_max_elements\", 100)\n+\n+ # Range of window length, in seconds\n self.min_window = rospy.get_param(\"/statistics_window_min_size\", 4)\n+ self.max_window = rospy.get_param(\"/statistics_window_max_size\", 64)\n \n def callback(self, msg, publisher, stat_bytes):\n \"\"\"\n@@ -208,9 +208,10 @@\n self.pub.publish(msg)\n \n # adjust window, if message count is not appropriate.\n- if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements and self.pub_frequency.to_sec() * 2 <= subscriber_statistics_logger.max_window:\n+ pub_period = 1.0 / self.pub_frequency.to_sec()\n+ if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements and pub_period / 2 >= subscriber_statistics_logger.min_window:\n self.pub_frequency *= 2\n- if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements and self.pub_frequency.to_sec() / 2 >= subscriber_statistics_logger.min_window:\n+ if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements and pub_period * 2 <= subscriber_statistics_logger.max_window:\n self.pub_frequency /= 2\n \n # clear collected stats, start new window.\n@@ -257,7 +258,7 @@\n self.last_seq_ = msg.header.seq\n \n # send out statistics with a certain frequency\n- if self.last_pub_time + self.pub_frequency < arrival_time:\n+ if self.last_pub_time + rospy.Duration(1.0 / self.pub_frequency.to_sec()) < arrival_time:\n self.last_pub_time = arrival_time\n self.sendStatistics(subscriber_statistics_logger)\n", "issue": "Topic Statistics reports 0 for topics under 1Hz\nWhen `/enable_statistics` parameter is `true`, Subscribers publish to the `/statistics` topic. For a topic that has an expected 0.2Hz publishing frequency, this feature always reports `msg.mean_period` as 0, no matter how much time is given.\n", "before_files": [{"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2013-2014 Dariush Forouher\n# All rights reserved.\n#\n# Based on code adapted from diagnostics_updater by Blaise Gassend\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. 
nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\nfrom math import sqrt\nimport logging\nimport sys\n\nfrom rosgraph_msgs.msg import TopicStatistics\nimport rospy\n\n_logger = logging.getLogger('rospy.impl.statistics')\n\n\nclass SubscriberStatisticsLogger():\n \"\"\"\n Class that monitors each subscriber.\n\n this class basically just keeps a collection of ConnectionStatisticsLogger.\n \"\"\"\n\n @classmethod\n def is_enabled(cls):\n # disable statistics if node can't talk to parameter server\n # which is the case in unit tests\n try:\n return rospy.get_param(\"/enable_statistics\", False)\n except Exception:\n return False\n\n def __init__(self, subscriber):\n self.subscriber_name = subscriber.name\n self.connections = dict()\n self.read_parameters()\n\n def read_parameters(self):\n \"\"\"\n Fetch window parameters from parameter server\n \"\"\"\n\n # Range of window length, in seconds\n self.min_elements = rospy.get_param(\"/statistics_window_min_elements\", 10)\n self.max_elements = rospy.get_param(\"/statistics_window_max_elements\", 100)\n\n # Range of acceptable messages in window.\n # Window size will be adjusted if number of observed is\n # outside this range.\n self.max_window = rospy.get_param(\"/statistics_window_max_size\", 64)\n self.min_window = rospy.get_param(\"/statistics_window_min_size\", 4)\n\n def callback(self, msg, publisher, stat_bytes):\n \"\"\"\n This method is called for every message that has been received.\n\n @param msg: The message received.\n @param publisher: The name of the publisher node that sent the msg\n @param stat_bytes: A counter, how many bytes have been moved across\n this connection since it exists.\n\n This method just looks up the ConnectionStatisticsLogger for the specific connection\n between publisher and subscriber and delegates to statistics logging to that\n instance.\n \"\"\"\n\n # /clock is special, as it is subscribed very early\n # also exclude /statistics to reduce noise.\n if self.subscriber_name == \"/clock\" or self.subscriber_name == \"/statistics\":\n return\n\n try:\n # create ConnectionStatisticsLogger for new connections\n logger = self.connections.get(publisher)\n if logger is None:\n logger = ConnectionStatisticsLogger(self.subscriber_name, rospy.get_name(), publisher)\n self.connections[publisher] = logger\n\n # delegate stuff to that instance\n logger.callback(self, msg, stat_bytes)\n except Exception as e:\n rospy.logerr(\"Unexpected error during statistics measurement: %s\", str(e))\n\n def shutdown(self):\n for logger in self.connections.values():\n logger.shutdown()\n self.connections.clear()\n\n\nclass 
ConnectionStatisticsLogger():\n \"\"\"\n Class that monitors lots of stuff for each connection.\n\n is created whenever a subscriber is created.\n is destroyed whenever its parent subscriber is destroyed.\n its lifecycle is therefore bound to its parent subscriber.\n \"\"\"\n\n def __init__(self, topic, subscriber, publisher):\n \"\"\"\n Constructor.\n\n @param topic: Name of the topic\n @param subscriber: Name of the subscriber\n @param publisher: Name of the publisher\n\n These three should uniquely identify the connection.\n \"\"\"\n\n self.topic = topic\n self.subscriber = subscriber\n self.publisher = publisher\n\n self.pub = rospy.Publisher(\"/statistics\", TopicStatistics, queue_size=10)\n\n # reset window\n self.last_pub_time = rospy.Time(0)\n self.pub_frequency = rospy.Duration(1.0)\n\n # timestamp age\n self.age_list_ = []\n\n # period calculations\n self.arrival_time_list_ = []\n\n self.last_seq_ = 0\n self.dropped_msgs_ = 0\n self.window_start = rospy.Time.now()\n\n # temporary variables\n self.stat_bytes_last_ = 0\n self.stat_bytes_window_ = 0\n\n def sendStatistics(self, subscriber_statistics_logger):\n \"\"\"\n Send out statistics. Aggregate collected stats information.\n\n Currently done blocking. Might be moved to own thread later. But at the moment\n any computation done here should be rather quick.\n \"\"\"\n curtime = rospy.Time.now()\n\n msg = TopicStatistics()\n msg.topic = self.topic\n msg.node_sub = self.subscriber\n msg.node_pub = self.publisher\n\n msg.window_start = self.window_start\n msg.window_stop = curtime\n\n # Calculate bytes since last message\n msg.traffic = self.stat_bytes_window_ - self.stat_bytes_last_\n\n msg.delivered_msgs = len(self.arrival_time_list_)\n msg.dropped_msgs = self.dropped_msgs_\n\n # we can only calculate message age if the messages did contain Header fields.\n if len(self.age_list_) > 0:\n msg.stamp_age_mean = rospy.Duration(sum(self.age_list_, rospy.Duration(0)).to_sec() / len(self.age_list_))\n variance = sum((rospy.Duration((msg.stamp_age_mean - value).to_sec() ** 2) for value in self.age_list_), rospy.Duration(0)) / len(self.age_list_)\n msg.stamp_age_stddev = rospy.Duration(sqrt(variance.to_sec()))\n msg.stamp_age_max = max(self.age_list_)\n else:\n msg.stamp_age_mean = rospy.Duration(0)\n msg.stamp_age_stddev = rospy.Duration(0)\n msg.stamp_age_max = rospy.Duration(0)\n\n # computer period/frequency. 
we need at least two messages within the window to do this.\n if len(self.arrival_time_list_) > 1:\n periods = [j - i for i, j in zip(self.arrival_time_list_[:-1], self.arrival_time_list_[1:])]\n msg.period_mean = rospy.Duration(sum(periods, rospy.Duration(0)).to_sec() / len(periods))\n variance = sum((rospy.Duration((msg.period_mean - value).to_sec() ** 2) for value in periods), rospy.Duration(0)) / len(periods)\n msg.period_stddev = rospy.Duration(sqrt(variance.to_sec()))\n msg.period_max = max(periods)\n else:\n msg.period_mean = rospy.Duration(0)\n msg.period_stddev = rospy.Duration(0)\n msg.period_max = rospy.Duration(0)\n\n self.pub.publish(msg)\n\n # adjust window, if message count is not appropriate.\n if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements and self.pub_frequency.to_sec() * 2 <= subscriber_statistics_logger.max_window:\n self.pub_frequency *= 2\n if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements and self.pub_frequency.to_sec() / 2 >= subscriber_statistics_logger.min_window:\n self.pub_frequency /= 2\n\n # clear collected stats, start new window.\n self.age_list_ = []\n self.arrival_time_list_ = []\n self.dropped_msgs_ = 0\n\n self.window_start = curtime\n\n self.stat_bytes_last_ = self.stat_bytes_window_\n\n def callback(self, subscriber_statistics_logger, msg, stat_bytes):\n \"\"\"\n This method is called for every message, that is received on this\n subscriber.\n\n this callback will keep some statistics and publish the results\n periodically on a topic. the publishing should probably be done\n asynchronically in another thread.\n\n @param msg: The message, that has been received. The message has usually\n been already deserialized. However this is not always the case. (AnyMsg)\n @param stat_bytes: A counter, how many bytes have been moved across\n this connection since it exists.\n\n Any computing-heavy stuff should be done somewhere else, as this\n callback has to return before the message is delivered to the user.\n \"\"\"\n\n arrival_time = rospy.Time.now()\n\n self.arrival_time_list_.append(arrival_time)\n\n self.stat_bytes_window_ = stat_bytes\n\n # rospy has the feature to subscribe a topic with AnyMsg which aren't deserialized.\n # Those subscribers won't have a header. But as these subscribers are rather rare\n # (\"rostopic hz\" is the only one I know of), I'm gonna ignore them.\n if msg._has_header:\n self.age_list_.append(arrival_time - msg.header.stamp)\n\n if self.last_seq_ + 1 != msg.header.seq:\n self.dropped_msgs_ = self.dropped_msgs_ + 1\n self.last_seq_ = msg.header.seq\n\n # send out statistics with a certain frequency\n if self.last_pub_time + self.pub_frequency < arrival_time:\n self.last_pub_time = arrival_time\n self.sendStatistics(subscriber_statistics_logger)\n\n def shutdown(self):\n self.pub.unregister()\n", "path": "clients/rospy/src/rospy/impl/statistics.py"}]}
| 3,533 | 585 |
gh_patches_debug_11354
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-2523
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nikola generates invalid html5 when markdown footnote extension is used
The [default output type](http://pythonhosted.org/Markdown/reference.html#markdown) of the python markdown library is xhtml1. The 4 templates that ship with Nikola have <!DOCTYPE html> which defines them as html5, so I'm assuming that we're intending to generate html5.
When the footnote markdown extension is used, it generates invalid html5 according to the w3c validator.
`<a class="footnote-ref" href="..." rev="footnote">...</a>`
(rev="footnote" is valid html4, but not html5)
The markdown library indicates that this is invalid html5 (https://github.com/waylan/Python-Markdown/blob/master/markdown/extensions/footnotes.py#L149) so we can trigger the correct behaviour by setting the output_format.
Given that the markdown library does not make much use of the output_format variable, I don't think this is likely to materially change the output for many people at all (see https://github.com/waylan/Python-Markdown/search?utf8=%E2%9C%93&q=output_format).
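For illustration only, a minimal sketch of the behaviour described above. It assumes the third-party `markdown` package with its bundled `footnotes` extension; the exact `output_format` values accepted and the footnote markup emitted vary between Markdown releases, so the printed checks are expectations based on this issue rather than guarantees.

```python
# Hypothetical check, not part of Nikola: compare footnote markup emitted by
# the python "markdown" library for the legacy xhtml1 format vs. html5.
import markdown

TEXT = "Some text.[^1]\n\n[^1]: A footnote."

xhtml1_out = markdown.markdown(TEXT, extensions=["footnotes"], output_format="xhtml1")
html5_out = markdown.markdown(TEXT, extensions=["footnotes"], output_format="html5")

# Expected per this issue: the xhtml1 output carries rev="footnote", which is
# invalid HTML5, while the html5 output drops that attribute.
print('rev="footnote"' in xhtml1_out)
print('rev="footnote"' in html5_out)
```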
</issue>
<code>
[start of nikola/plugins/compile/markdown/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2016 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Implementation of compile_html based on markdown."""
28
29 from __future__ import unicode_literals
30
31 import io
32 import os
33
34 try:
35 from markdown import markdown
36 except ImportError:
37 markdown = None # NOQA
38 nikola_extension = None
39 gist_extension = None
40 podcast_extension = None
41
42 from nikola.plugin_categories import PageCompiler
43 from nikola.utils import makedirs, req_missing, write_metadata
44
45
46 class CompileMarkdown(PageCompiler):
47 """Compile Markdown into HTML."""
48
49 name = "markdown"
50 friendly_name = "Markdown"
51 demote_headers = True
52 extensions = []
53 site = None
54
55 def set_site(self, site):
56 """Set Nikola site."""
57 super(CompileMarkdown, self).set_site(site)
58 self.config_dependencies = []
59 for plugin_info in self.get_compiler_extensions():
60 self.config_dependencies.append(plugin_info.name)
61 self.extensions.append(plugin_info.plugin_object)
62 plugin_info.plugin_object.short_help = plugin_info.description
63
64 self.config_dependencies.append(str(sorted(site.config.get("MARKDOWN_EXTENSIONS"))))
65
66 def compile_html(self, source, dest, is_two_file=True):
67 """Compile source file into HTML and save as dest."""
68 if markdown is None:
69 req_missing(['markdown'], 'build this site (compile Markdown)')
70 makedirs(os.path.dirname(dest))
71 self.extensions += self.site.config.get("MARKDOWN_EXTENSIONS")
72 try:
73 post = self.site.post_per_input_file[source]
74 except KeyError:
75 post = None
76 with io.open(dest, "w+", encoding="utf8") as out_file:
77 with io.open(source, "r", encoding="utf8") as in_file:
78 data = in_file.read()
79 if not is_two_file:
80 _, data = self.split_metadata(data)
81 output = markdown(data, self.extensions)
82 output, shortcode_deps = self.site.apply_shortcodes(output, filename=source, with_dependencies=True, extra_context=dict(post=post))
83 out_file.write(output)
84 if post is None:
85 if shortcode_deps:
86 self.logger.error(
87 "Cannot save dependencies for post {0} due to unregistered source file name",
88 source)
89 else:
90 post._depfile[dest] += shortcode_deps
91
92 def create_post(self, path, **kw):
93 """Create a new post."""
94 content = kw.pop('content', None)
95 onefile = kw.pop('onefile', False)
96 # is_page is not used by create_post as of now.
97 kw.pop('is_page', False)
98
99 metadata = {}
100 metadata.update(self.default_metadata)
101 metadata.update(kw)
102 makedirs(os.path.dirname(path))
103 if not content.endswith('\n'):
104 content += '\n'
105 with io.open(path, "w+", encoding="utf8") as fd:
106 if onefile:
107 fd.write('<!-- \n')
108 fd.write(write_metadata(metadata))
109 fd.write('-->\n\n')
110 fd.write(content)
111
[end of nikola/plugins/compile/markdown/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nikola/plugins/compile/markdown/__init__.py b/nikola/plugins/compile/markdown/__init__.py
--- a/nikola/plugins/compile/markdown/__init__.py
+++ b/nikola/plugins/compile/markdown/__init__.py
@@ -78,7 +78,7 @@
data = in_file.read()
if not is_two_file:
_, data = self.split_metadata(data)
- output = markdown(data, self.extensions)
+ output = markdown(data, self.extensions, output_format="html5")
output, shortcode_deps = self.site.apply_shortcodes(output, filename=source, with_dependencies=True, extra_context=dict(post=post))
out_file.write(output)
if post is None:
|
{"golden_diff": "diff --git a/nikola/plugins/compile/markdown/__init__.py b/nikola/plugins/compile/markdown/__init__.py\n--- a/nikola/plugins/compile/markdown/__init__.py\n+++ b/nikola/plugins/compile/markdown/__init__.py\n@@ -78,7 +78,7 @@\n data = in_file.read()\n if not is_two_file:\n _, data = self.split_metadata(data)\n- output = markdown(data, self.extensions)\n+ output = markdown(data, self.extensions, output_format=\"html5\")\n output, shortcode_deps = self.site.apply_shortcodes(output, filename=source, with_dependencies=True, extra_context=dict(post=post))\n out_file.write(output)\n if post is None:\n", "issue": "Nikola generates invalid html5 when markdown footnote extension is used\nThe [default output type](http://pythonhosted.org/Markdown/reference.html#markdown) of the python markdown library is xhtml1. The 4 templates that ship with Nikola have <!DOCTYPE html> which defines them as html5, so I'm assuming that we're intending to generate html5.\n\nWhen the footnote markdown extension is used, it generates invalid html5 according to the w3c validator.\n\n`<a class=\"footnote-ref\" href=\"...\" rev=\"footnote\">...</a>`\n\n(rev=\"footnote\" is valid html4, but not html5)\n\nThe markdown library indicates that this is invalid html5 (https://github.com/waylan/Python-Markdown/blob/master/markdown/extensions/footnotes.py#L149) so we can trigger the correct behaviour by setting the output_format.\n\nGiven the markdown library does not make much use of the output_format variable, I don't think this is likely to materially change the output for many people at all - https://github.com/waylan/Python-Markdown/search?utf8=%E2%9C%93&q=output_format)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2016 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Implementation of compile_html based on markdown.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport io\nimport os\n\ntry:\n from markdown import markdown\nexcept ImportError:\n markdown = None # NOQA\n nikola_extension = None\n gist_extension = None\n podcast_extension = None\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import makedirs, req_missing, write_metadata\n\n\nclass CompileMarkdown(PageCompiler):\n \"\"\"Compile Markdown into HTML.\"\"\"\n\n name = \"markdown\"\n friendly_name = \"Markdown\"\n demote_headers = True\n extensions = []\n site = None\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n super(CompileMarkdown, self).set_site(site)\n self.config_dependencies = []\n for plugin_info in self.get_compiler_extensions():\n self.config_dependencies.append(plugin_info.name)\n self.extensions.append(plugin_info.plugin_object)\n plugin_info.plugin_object.short_help = plugin_info.description\n\n self.config_dependencies.append(str(sorted(site.config.get(\"MARKDOWN_EXTENSIONS\"))))\n\n def compile_html(self, source, dest, is_two_file=True):\n \"\"\"Compile source file into HTML and save as dest.\"\"\"\n if markdown is None:\n req_missing(['markdown'], 'build this site (compile Markdown)')\n makedirs(os.path.dirname(dest))\n self.extensions += self.site.config.get(\"MARKDOWN_EXTENSIONS\")\n try:\n post = self.site.post_per_input_file[source]\n except KeyError:\n post = None\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n data = in_file.read()\n if not is_two_file:\n _, data = self.split_metadata(data)\n output = markdown(data, self.extensions)\n output, shortcode_deps = self.site.apply_shortcodes(output, filename=source, with_dependencies=True, extra_context=dict(post=post))\n out_file.write(output)\n if post is None:\n if shortcode_deps:\n self.logger.error(\n \"Cannot save dependencies for post {0} due to unregistered source file name\",\n source)\n else:\n post._depfile[dest] += shortcode_deps\n\n def create_post(self, path, **kw):\n \"\"\"Create a new post.\"\"\"\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not used by create_post as of now.\n kw.pop('is_page', False)\n\n metadata = {}\n metadata.update(self.default_metadata)\n metadata.update(kw)\n makedirs(os.path.dirname(path))\n if not content.endswith('\\n'):\n content += '\\n'\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if onefile:\n fd.write('<!-- \\n')\n fd.write(write_metadata(metadata))\n fd.write('-->\\n\\n')\n fd.write(content)\n", "path": "nikola/plugins/compile/markdown/__init__.py"}]}
| 1,901 | 160 |
gh_patches_debug_12528
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-281
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Better HTTP response code handling in Eligibility API
In the Eligibility API `Client`, we currently attempt to parse the response as if it were a JWT, regardless of response code:
From https://github.com/cal-itp/benefits/blob/dev/benefits/eligibility/api.py#L145
```python
try:
r = requests.get(self.verifier.api_url, headers=self._auth_headers(token))
except requests.ConnectionError:
raise ApiError("Connection to verification server failed")
except requests.Timeout:
raise ApiError("Connection to verification server timed out")
except requests.TooManyRedirects:
raise ApiError("Too many redirects to verification server")
except requests.HTTPError as e:
raise ApiError(e)
return self._tokenize_response(r)
```
Since input errors on the form are returned as JWTs, the same as success payloads, this code worked fine for 200 and 400 responses. But if the API outright rejects the call with a 403, the above code attempts to parse _that_ response as a JWT, throwing an unhandled exception.
Let's guard the `return self._tokenize_response(r)` to ensure we are only trying to tokenize the expected 200 and 400 responses; other codes should raise an `ApiError`.
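A minimal, self-contained sketch of such a guard; the helper name `tokenize_if_expected` and the fake response objects are illustrative assumptions, not part of the benefits codebase, which keeps this logic inside `Client._request`.

```python
from types import SimpleNamespace


class ApiError(Exception):
    """Stand-in for the ApiError raised by benefits.eligibility.api."""


def tokenize_if_expected(response, tokenize, expected_status_codes=(200, 400)):
    """Only hand expected responses to the JWT parser; anything else is an API error."""
    if response.status_code in expected_status_codes:
        return tokenize(response)
    raise ApiError(f"Unexpected eligibility verification response: {response.status_code}")


# 200/400 bodies are tokenized; a 403 surfaces as a friendly ApiError instead
# of an unhandled JWT-parsing exception.
print(tokenize_if_expected(SimpleNamespace(status_code=200), lambda r: "token"))
try:
    tokenize_if_expected(SimpleNamespace(status_code=403), lambda r: "token")
except ApiError as exc:
    print(exc)
```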
</issue>
<code>
[start of benefits/eligibility/api.py]
1 """
2 The eligibility application: Eligibility Verification API implementation.
3 """
4 import datetime
5 import json
6 import logging
7 import uuid
8
9 from jwcrypto import common as jwcrypto, jwe, jws, jwt
10 import requests
11
12 from benefits.settings import ALLOWED_HOSTS
13
14
15 logger = logging.getLogger(__name__)
16
17
18 class ApiError(Exception):
19 """Error calling the Eligibility Verification API."""
20
21 pass
22
23
24 class TokenError(Exception):
25 """Error with API request/response token."""
26
27 pass
28
29
30 class RequestToken:
31 """Eligibility Verification API request token."""
32
33 def __init__(self, agency, verifier, sub, name):
34 logger.info("Initialize new request token")
35
36 # send the eligibility type names
37 types = list(map(lambda t: t.name, agency.types_to_verify()))
38
39 # craft the main token payload
40 payload = dict(
41 jti=str(uuid.uuid4()),
42 iss=ALLOWED_HOSTS[0],
43 iat=int(datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).timestamp()),
44 agency=agency.agency_id,
45 eligibility=types,
46 sub=sub,
47 name=name,
48 )
49
50 logger.debug("Sign token payload with agency's private key")
51 header = {"typ": "JWS", "alg": agency.jws_signing_alg}
52 signed_token = jwt.JWT(header=header, claims=payload)
53 signed_token.make_signed_token(agency.private_jwk)
54 signed_payload = signed_token.serialize()
55
56 logger.debug("Encrypt signed token payload with verifier's public key")
57 header = {"typ": "JWE", "alg": verifier.jwe_encryption_alg, "enc": verifier.jwe_cek_enc}
58 encrypted_token = jwt.JWT(header=header, claims=signed_payload)
59 encrypted_token.make_encrypted_token(verifier.public_jwk)
60
61 logger.info("Signed and encrypted request token initialized")
62 self._jwe = encrypted_token
63
64 def __repr__(self):
65 return str(self)
66
67 def __str__(self):
68 return self._jwe.serialize()
69
70
71 class ResponseToken:
72 """Eligibility Verification API response token."""
73
74 def __init__(self, response, agency, verifier):
75 logger.info("Read encrypted token from response")
76
77 try:
78 encrypted_signed_token = response.text
79 if not encrypted_signed_token:
80 raise ValueError()
81 # strip extra spaces and wrapping quote chars
82 encrypted_signed_token = encrypted_signed_token.strip("'\n\"")
83 except ValueError:
84 raise TokenError("Invalid response format")
85
86 logger.debug("Decrypt response token using agency's private key")
87 allowed_algs = [verifier.jwe_encryption_alg, verifier.jwe_cek_enc]
88 decrypted_token = jwe.JWE(algs=allowed_algs)
89 try:
90 decrypted_token.deserialize(encrypted_signed_token, key=agency.private_jwk)
91 except jwe.InvalidJWEData:
92 raise TokenError("Invalid JWE token")
93 except jwe.InvalidJWEOperation:
94 raise TokenError("JWE token decryption failed")
95
96 decrypted_payload = str(decrypted_token.payload, "utf-8")
97
98 logger.debug("Verify decrypted response token's signature using verifier's public key")
99 signed_token = jws.JWS()
100 try:
101 signed_token.deserialize(decrypted_payload, key=verifier.public_jwk, alg=agency.jws_signing_alg)
102 except jws.InvalidJWSObject:
103 raise TokenError("Invalid JWS token")
104 except jws.InvalidJWSSignature:
105 raise TokenError("JWS token signature verification failed")
106
107 logger.info("Response token decrypted and signature verified")
108
109 payload = json.loads(str(signed_token.payload, "utf-8"))
110 self.eligibility = list(payload.get("eligibility", []))
111 self.error = payload.get("error", None)
112
113
114 class Client:
115 """Eligibility Verification API HTTP client."""
116
117 def __init__(self, agency):
118 logger.debug(f"Initialize client for agency: {agency.short_name}")
119 self.agency = agency
120 self.verifier = agency.eligibility_verifier
121
122 def _tokenize_request(self, sub, name):
123 """Create a request token."""
124 return RequestToken(self.agency, self.verifier, sub, name)
125
126 def _tokenize_response(self, response):
127 """Parse a response token."""
128 return ResponseToken(response, self.agency, self.verifier)
129
130 def _auth_headers(self, token):
131 """Create headers for the request with the token and verifier API keys"""
132 headers = dict(Authorization=f"Bearer {token}")
133 headers[self.verifier.api_auth_header] = self.verifier.api_auth_key
134 return headers
135
136 def _request(self, sub, name):
137 """Make an API request for eligibility verification."""
138 logger.debug("Start new eligibility verification request")
139
140 try:
141 token = self._tokenize_request(sub, name)
142 except jwcrypto.JWException:
143 raise TokenError("Failed to tokenize form values")
144
145 try:
146 logger.debug(f"GET request to {self.verifier.api_url}")
147 r = requests.get(self.verifier.api_url, headers=self._auth_headers(token))
148 except requests.ConnectionError:
149 raise ApiError("Connection to verification server failed")
150 except requests.Timeout:
151 raise ApiError("Connection to verification server timed out")
152 except requests.TooManyRedirects:
153 raise ApiError("Too many redirects to verification server")
154 except requests.HTTPError as e:
155 raise ApiError(e)
156
157 logger.debug("Process eligiblity verification response")
158 return self._tokenize_response(r)
159
160 def verify(self, sub, name):
161 """Check eligibility for the subject and name."""
162 return self._request(sub, name)
163
[end of benefits/eligibility/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/benefits/eligibility/api.py b/benefits/eligibility/api.py
--- a/benefits/eligibility/api.py
+++ b/benefits/eligibility/api.py
@@ -154,8 +154,13 @@
except requests.HTTPError as e:
raise ApiError(e)
- logger.debug("Process eligiblity verification response")
- return self._tokenize_response(r)
+ expected_status_codes = {200, 400}
+ if r.status_code in expected_status_codes:
+ logger.debug("Process eligiblity verification response")
+ return self._tokenize_response(r)
+ else:
+ logger.warning(f"Unexpected eligibility verification response status code: {r.status_code}")
+ raise ApiError("Unexpected eligibility verification response")
def verify(self, sub, name):
"""Check eligibility for the subject and name."""
|
{"golden_diff": "diff --git a/benefits/eligibility/api.py b/benefits/eligibility/api.py\n--- a/benefits/eligibility/api.py\n+++ b/benefits/eligibility/api.py\n@@ -154,8 +154,13 @@\n except requests.HTTPError as e:\n raise ApiError(e)\n \n- logger.debug(\"Process eligiblity verification response\")\n- return self._tokenize_response(r)\n+ expected_status_codes = {200, 400}\n+ if r.status_code in expected_status_codes:\n+ logger.debug(\"Process eligiblity verification response\")\n+ return self._tokenize_response(r)\n+ else:\n+ logger.warning(f\"Unexpected eligibility verification response status code: {r.status_code}\")\n+ raise ApiError(\"Unexpected eligibility verification response\")\n \n def verify(self, sub, name):\n \"\"\"Check eligibility for the subject and name.\"\"\"\n", "issue": "Better HTTP response code handling in Eligibility API\nIn the Eligibility API `Client`, we currently attempt to parse the response as if it were a JWT, regardless of response code:\r\n\r\nFrom https://github.com/cal-itp/benefits/blob/dev/benefits/eligibility/api.py#L145\r\n\r\n```python\r\ntry:\r\n r = requests.get(self.verifier.api_url, headers=self._auth_headers(token))\r\nexcept requests.ConnectionError:\r\n raise ApiError(\"Connection to verification server failed\")\r\nexcept requests.Timeout:\r\n raise ApiError(\"Connection to verification server timed out\")\r\nexcept requests.TooManyRedirects:\r\n raise ApiError(\"Too many redirects to verification server\")\r\nexcept requests.HTTPError as e:\r\n raise ApiError(e)\r\n\r\nreturn self._tokenize_response(r)\r\n```\r\n\r\nSince input errors on the form are returned as JWTs, the same as success payloads, this code worked fine for 200 and 400 responses. But if the API outright rejects the call with a 403, the above code attempts to parse _that_ response as a JWT, throwing an unhandled exception.\r\n\r\nLet's guard the `return self._tokenize_response(r)` to ensure we are only trying to tokenize the expected 200 and 400 responses; other codes should raise an `ApiError`.\n", "before_files": [{"content": "\"\"\"\nThe eligibility application: Eligibility Verification API implementation.\n\"\"\"\nimport datetime\nimport json\nimport logging\nimport uuid\n\nfrom jwcrypto import common as jwcrypto, jwe, jws, jwt\nimport requests\n\nfrom benefits.settings import ALLOWED_HOSTS\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApiError(Exception):\n \"\"\"Error calling the Eligibility Verification API.\"\"\"\n\n pass\n\n\nclass TokenError(Exception):\n \"\"\"Error with API request/response token.\"\"\"\n\n pass\n\n\nclass RequestToken:\n \"\"\"Eligibility Verification API request token.\"\"\"\n\n def __init__(self, agency, verifier, sub, name):\n logger.info(\"Initialize new request token\")\n\n # send the eligibility type names\n types = list(map(lambda t: t.name, agency.types_to_verify()))\n\n # craft the main token payload\n payload = dict(\n jti=str(uuid.uuid4()),\n iss=ALLOWED_HOSTS[0],\n iat=int(datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).timestamp()),\n agency=agency.agency_id,\n eligibility=types,\n sub=sub,\n name=name,\n )\n\n logger.debug(\"Sign token payload with agency's private key\")\n header = {\"typ\": \"JWS\", \"alg\": agency.jws_signing_alg}\n signed_token = jwt.JWT(header=header, claims=payload)\n signed_token.make_signed_token(agency.private_jwk)\n signed_payload = signed_token.serialize()\n\n logger.debug(\"Encrypt signed token payload with verifier's public key\")\n header = {\"typ\": \"JWE\", \"alg\": 
verifier.jwe_encryption_alg, \"enc\": verifier.jwe_cek_enc}\n encrypted_token = jwt.JWT(header=header, claims=signed_payload)\n encrypted_token.make_encrypted_token(verifier.public_jwk)\n\n logger.info(\"Signed and encrypted request token initialized\")\n self._jwe = encrypted_token\n\n def __repr__(self):\n return str(self)\n\n def __str__(self):\n return self._jwe.serialize()\n\n\nclass ResponseToken:\n \"\"\"Eligibility Verification API response token.\"\"\"\n\n def __init__(self, response, agency, verifier):\n logger.info(\"Read encrypted token from response\")\n\n try:\n encrypted_signed_token = response.text\n if not encrypted_signed_token:\n raise ValueError()\n # strip extra spaces and wrapping quote chars\n encrypted_signed_token = encrypted_signed_token.strip(\"'\\n\\\"\")\n except ValueError:\n raise TokenError(\"Invalid response format\")\n\n logger.debug(\"Decrypt response token using agency's private key\")\n allowed_algs = [verifier.jwe_encryption_alg, verifier.jwe_cek_enc]\n decrypted_token = jwe.JWE(algs=allowed_algs)\n try:\n decrypted_token.deserialize(encrypted_signed_token, key=agency.private_jwk)\n except jwe.InvalidJWEData:\n raise TokenError(\"Invalid JWE token\")\n except jwe.InvalidJWEOperation:\n raise TokenError(\"JWE token decryption failed\")\n\n decrypted_payload = str(decrypted_token.payload, \"utf-8\")\n\n logger.debug(\"Verify decrypted response token's signature using verifier's public key\")\n signed_token = jws.JWS()\n try:\n signed_token.deserialize(decrypted_payload, key=verifier.public_jwk, alg=agency.jws_signing_alg)\n except jws.InvalidJWSObject:\n raise TokenError(\"Invalid JWS token\")\n except jws.InvalidJWSSignature:\n raise TokenError(\"JWS token signature verification failed\")\n\n logger.info(\"Response token decrypted and signature verified\")\n\n payload = json.loads(str(signed_token.payload, \"utf-8\"))\n self.eligibility = list(payload.get(\"eligibility\", []))\n self.error = payload.get(\"error\", None)\n\n\nclass Client:\n \"\"\"Eligibility Verification API HTTP client.\"\"\"\n\n def __init__(self, agency):\n logger.debug(f\"Initialize client for agency: {agency.short_name}\")\n self.agency = agency\n self.verifier = agency.eligibility_verifier\n\n def _tokenize_request(self, sub, name):\n \"\"\"Create a request token.\"\"\"\n return RequestToken(self.agency, self.verifier, sub, name)\n\n def _tokenize_response(self, response):\n \"\"\"Parse a response token.\"\"\"\n return ResponseToken(response, self.agency, self.verifier)\n\n def _auth_headers(self, token):\n \"\"\"Create headers for the request with the token and verifier API keys\"\"\"\n headers = dict(Authorization=f\"Bearer {token}\")\n headers[self.verifier.api_auth_header] = self.verifier.api_auth_key\n return headers\n\n def _request(self, sub, name):\n \"\"\"Make an API request for eligibility verification.\"\"\"\n logger.debug(\"Start new eligibility verification request\")\n\n try:\n token = self._tokenize_request(sub, name)\n except jwcrypto.JWException:\n raise TokenError(\"Failed to tokenize form values\")\n\n try:\n logger.debug(f\"GET request to {self.verifier.api_url}\")\n r = requests.get(self.verifier.api_url, headers=self._auth_headers(token))\n except requests.ConnectionError:\n raise ApiError(\"Connection to verification server failed\")\n except requests.Timeout:\n raise ApiError(\"Connection to verification server timed out\")\n except requests.TooManyRedirects:\n raise ApiError(\"Too many redirects to verification server\")\n except requests.HTTPError as e:\n 
raise ApiError(e)\n\n logger.debug(\"Process eligiblity verification response\")\n return self._tokenize_response(r)\n\n def verify(self, sub, name):\n \"\"\"Check eligibility for the subject and name.\"\"\"\n return self._request(sub, name)\n", "path": "benefits/eligibility/api.py"}]}
| 2,414 | 200 |
gh_patches_debug_10981
|
rasdani/github-patches
|
git_diff
|
sbi-dev__sbi-442
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Training SNRE_A on GPU fails
Hi! Using SNRE_A with `device="gpu"` currently fails :-(
The error is as follows:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
The origin of the issue is in `SNRE_A._loss` which instantiates `labels` without moving it to the device. Adding
```
labels = labels.to(self._device)
```
below line 126 of `sbi/inference/snre/snre_a.py` fixes the issue.
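A minimal sketch of the mismatch and the fix, assuming a PyTorch install; it falls back to the CPU when CUDA is unavailable so the snippet runs either way.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch_size = 4

# Stand-in for the classifier output used in SNRE_A._loss.
likelihood = torch.sigmoid(torch.randn(2 * batch_size, device=device))

# Building the labels without an explicit device leaves them on the CPU and
# triggers the RuntimeError above when likelihood lives on the GPU; passing
# device= keeps both tensors together.
labels = torch.ones(2 * batch_size, device=device)  # two atoms
labels[1::2] = 0.0

loss = torch.nn.BCELoss()(likelihood, labels)
print(loss.item())
```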
</issue>
<code>
[start of sbi/inference/snre/snre_a.py]
1 from typing import Any, Callable, Dict, Optional, Union
2
3 import torch
4 from torch import Tensor, nn, ones
5
6 from sbi.inference.posteriors.base_posterior import NeuralPosterior
7 from sbi.inference.snre.snre_base import RatioEstimator
8 from sbi.types import TensorboardSummaryWriter
9 from sbi.utils import del_entries
10
11
12 class SNRE_A(RatioEstimator):
13 def __init__(
14 self,
15 prior,
16 classifier: Union[str, Callable] = "resnet",
17 device: str = "cpu",
18 logging_level: Union[int, str] = "warning",
19 summary_writer: Optional[TensorboardSummaryWriter] = None,
20 show_progress_bars: bool = True,
21 **unused_args
22 ):
23 r"""AALR[1], here known as SNRE_A.
24
25 [1] _Likelihood-free MCMC with Amortized Approximate Likelihood Ratios_, Hermans
26 et al., ICML 2020, https://arxiv.org/abs/1903.04057
27
28 Args:
29 prior: A probability distribution that expresses prior knowledge about the
30 parameters, e.g. which ranges are meaningful for them. Any
31 object with `.log_prob()`and `.sample()` (for example, a PyTorch
32 distribution) can be used.
33 classifier: Classifier trained to approximate likelihood ratios. If it is
34 a string, use a pre-configured network of the provided type (one of
35 linear, mlp, resnet). Alternatively, a function that builds a custom
36 neural network can be provided. The function will be called with the
37 first batch of simulations (theta, x), which can thus be used for shape
38 inference and potentially for z-scoring. It needs to return a PyTorch
39 `nn.Module` implementing the classifier.
40 device: torch device on which to compute, e.g. gpu, cpu.
41 logging_level: Minimum severity of messages to log. One of the strings
42 INFO, WARNING, DEBUG, ERROR and CRITICAL.
43 summary_writer: A tensorboard `SummaryWriter` to control, among others, log
44 file location (default is `<current working directory>/logs`.)
45 show_progress_bars: Whether to show a progressbar during simulation and
46 sampling.
47 unused_args: Absorbs additional arguments. No entries will be used. If it
48 is not empty, we warn. In future versions, when the new interface of
49 0.14.0 is more mature, we will remove this argument.
50 """
51
52 kwargs = del_entries(locals(), entries=("self", "__class__", "unused_args"))
53 super().__init__(**kwargs, **unused_args)
54
55 def train(
56 self,
57 training_batch_size: int = 50,
58 learning_rate: float = 5e-4,
59 validation_fraction: float = 0.1,
60 stop_after_epochs: int = 20,
61 max_num_epochs: Optional[int] = None,
62 clip_max_norm: Optional[float] = 5.0,
63 exclude_invalid_x: bool = True,
64 resume_training: bool = False,
65 discard_prior_samples: bool = False,
66 retrain_from_scratch_each_round: bool = False,
67 show_train_summary: bool = False,
68 ) -> NeuralPosterior:
69 r"""
70 Return classifier that approximates the ratio $p(\theta,x)/p(\theta)p(x)$.
71
72 Args:
73 training_batch_size: Training batch size.
74 learning_rate: Learning rate for Adam optimizer.
75 validation_fraction: The fraction of data to use for validation.
76 stop_after_epochs: The number of epochs to wait for improvement on the
77 validation set before terminating training.
78 max_num_epochs: Maximum number of epochs to run. If reached, we stop
79 training even when the validation loss is still decreasing. If None, we
80 train until validation loss increases (see also `stop_after_epochs`).
81 clip_max_norm: Value at which to clip the total gradient norm in order to
82 prevent exploding gradients. Use None for no clipping.
83 exclude_invalid_x: Whether to exclude simulation outputs `x=NaN` or `x=±∞`
84 during training. Expect errors, silent or explicit, when `False`.
85 resume_training: Can be used in case training time is limited, e.g. on a
86 cluster. If `True`, the split between train and validation set, the
87 optimizer, the number of epochs, and the best validation log-prob will
88 be restored from the last time `.train()` was called.
89 discard_prior_samples: Whether to discard samples simulated in round 1, i.e.
90 from the prior. Training may be sped up by ignoring such less targeted
91 samples.
92 retrain_from_scratch_each_round: Whether to retrain the conditional density
93 estimator for the posterior from scratch each round.
94 show_train_summary: Whether to print the number of epochs and validation
95 loss and leakage after the training.
96
97 Returns:
98 Classifier that approximates the ratio $p(\theta,x)/p(\theta)p(x)$.
99 """
100
101 # AALR is defined for `num_atoms=2`.
102 # Proxy to `super().__call__` to ensure right parameter.
103 kwargs = del_entries(locals(), entries=("self", "__class__"))
104 return super().train(**kwargs, num_atoms=2)
105
106 def _loss(self, theta: Tensor, x: Tensor, num_atoms: int) -> Tensor:
107 """
108 Returns the binary cross-entropy loss for the trained classifier.
109
110 The classifier takes as input a $(\theta,x)$ pair. It is trained to predict 1
111 if the pair was sampled from the joint $p(\theta,x)$, and to predict 0 if the
112 pair was sampled from the marginals $p(\theta)p(x)$.
113 """
114
115 assert theta.shape[0] == x.shape[0], "Batch sizes for theta and x must match."
116 batch_size = theta.shape[0]
117
118 logits = self._classifier_logits(theta, x, num_atoms)
119 likelihood = torch.sigmoid(logits).squeeze()
120
121 # Alternating pairs where there is one sampled from the joint and one
122 # sampled from the marginals. The first element is sampled from the
123 # joint p(theta, x) and is labelled 1. The second element is sampled
124 # from the marginals p(theta)p(x) and is labelled 0. And so on.
125 labels = ones(2 * batch_size) # two atoms
126 labels[1::2] = 0.0
127
128 # Binary cross entropy to learn the likelihood (AALR-specific)
129 return nn.BCELoss()(likelihood, labels)
130
[end of sbi/inference/snre/snre_a.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sbi/inference/snre/snre_a.py b/sbi/inference/snre/snre_a.py
--- a/sbi/inference/snre/snre_a.py
+++ b/sbi/inference/snre/snre_a.py
@@ -122,7 +122,7 @@
# sampled from the marginals. The first element is sampled from the
# joint p(theta, x) and is labelled 1. The second element is sampled
# from the marginals p(theta)p(x) and is labelled 0. And so on.
- labels = ones(2 * batch_size) # two atoms
+ labels = ones(2 * batch_size, device=self._device) # two atoms
labels[1::2] = 0.0
# Binary cross entropy to learn the likelihood (AALR-specific)
|
{"golden_diff": "diff --git a/sbi/inference/snre/snre_a.py b/sbi/inference/snre/snre_a.py\n--- a/sbi/inference/snre/snre_a.py\n+++ b/sbi/inference/snre/snre_a.py\n@@ -122,7 +122,7 @@\n # sampled from the marginals. The first element is sampled from the\n # joint p(theta, x) and is labelled 1. The second element is sampled\n # from the marginals p(theta)p(x) and is labelled 0. And so on.\n- labels = ones(2 * batch_size) # two atoms\n+ labels = ones(2 * batch_size, device=self._device) # two atoms\n labels[1::2] = 0.0\n \n # Binary cross entropy to learn the likelihood (AALR-specific)\n", "issue": "Training SNRE_A on GPU fails\nHi! Using SNRE_A with `device=\"gpu\"` currently fails :-(\r\n\r\nThe error is as follows:\r\n```\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!\r\n```\r\nThe origin of the issue is in `SNRE_A._loss` which instantiates `labels` without moving it to the device. Adding \r\n```\r\nlabels = labels.to(self._device)\r\n```\r\nbelow line 126 of `sbi/inference/snre/snre_a.py` fixes the issue.\n", "before_files": [{"content": "from typing import Any, Callable, Dict, Optional, Union\n\nimport torch\nfrom torch import Tensor, nn, ones\n\nfrom sbi.inference.posteriors.base_posterior import NeuralPosterior\nfrom sbi.inference.snre.snre_base import RatioEstimator\nfrom sbi.types import TensorboardSummaryWriter\nfrom sbi.utils import del_entries\n\n\nclass SNRE_A(RatioEstimator):\n def __init__(\n self,\n prior,\n classifier: Union[str, Callable] = \"resnet\",\n device: str = \"cpu\",\n logging_level: Union[int, str] = \"warning\",\n summary_writer: Optional[TensorboardSummaryWriter] = None,\n show_progress_bars: bool = True,\n **unused_args\n ):\n r\"\"\"AALR[1], here known as SNRE_A.\n\n [1] _Likelihood-free MCMC with Amortized Approximate Likelihood Ratios_, Hermans\n et al., ICML 2020, https://arxiv.org/abs/1903.04057\n\n Args:\n prior: A probability distribution that expresses prior knowledge about the\n parameters, e.g. which ranges are meaningful for them. Any\n object with `.log_prob()`and `.sample()` (for example, a PyTorch\n distribution) can be used.\n classifier: Classifier trained to approximate likelihood ratios. If it is\n a string, use a pre-configured network of the provided type (one of\n linear, mlp, resnet). Alternatively, a function that builds a custom\n neural network can be provided. The function will be called with the\n first batch of simulations (theta, x), which can thus be used for shape\n inference and potentially for z-scoring. It needs to return a PyTorch\n `nn.Module` implementing the classifier.\n device: torch device on which to compute, e.g. gpu, cpu.\n logging_level: Minimum severity of messages to log. One of the strings\n INFO, WARNING, DEBUG, ERROR and CRITICAL.\n summary_writer: A tensorboard `SummaryWriter` to control, among others, log\n file location (default is `<current working directory>/logs`.)\n show_progress_bars: Whether to show a progressbar during simulation and\n sampling.\n unused_args: Absorbs additional arguments. No entries will be used. If it\n is not empty, we warn. 
In future versions, when the new interface of\n 0.14.0 is more mature, we will remove this argument.\n \"\"\"\n\n kwargs = del_entries(locals(), entries=(\"self\", \"__class__\", \"unused_args\"))\n super().__init__(**kwargs, **unused_args)\n\n def train(\n self,\n training_batch_size: int = 50,\n learning_rate: float = 5e-4,\n validation_fraction: float = 0.1,\n stop_after_epochs: int = 20,\n max_num_epochs: Optional[int] = None,\n clip_max_norm: Optional[float] = 5.0,\n exclude_invalid_x: bool = True,\n resume_training: bool = False,\n discard_prior_samples: bool = False,\n retrain_from_scratch_each_round: bool = False,\n show_train_summary: bool = False,\n ) -> NeuralPosterior:\n r\"\"\"\n Return classifier that approximates the ratio $p(\\theta,x)/p(\\theta)p(x)$.\n\n Args:\n training_batch_size: Training batch size.\n learning_rate: Learning rate for Adam optimizer.\n validation_fraction: The fraction of data to use for validation.\n stop_after_epochs: The number of epochs to wait for improvement on the\n validation set before terminating training.\n max_num_epochs: Maximum number of epochs to run. If reached, we stop\n training even when the validation loss is still decreasing. If None, we\n train until validation loss increases (see also `stop_after_epochs`).\n clip_max_norm: Value at which to clip the total gradient norm in order to\n prevent exploding gradients. Use None for no clipping.\n exclude_invalid_x: Whether to exclude simulation outputs `x=NaN` or `x=\u00b1\u221e`\n during training. Expect errors, silent or explicit, when `False`.\n resume_training: Can be used in case training time is limited, e.g. on a\n cluster. If `True`, the split between train and validation set, the\n optimizer, the number of epochs, and the best validation log-prob will\n be restored from the last time `.train()` was called.\n discard_prior_samples: Whether to discard samples simulated in round 1, i.e.\n from the prior. Training may be sped up by ignoring such less targeted\n samples.\n retrain_from_scratch_each_round: Whether to retrain the conditional density\n estimator for the posterior from scratch each round.\n show_train_summary: Whether to print the number of epochs and validation\n loss and leakage after the training.\n\n Returns:\n Classifier that approximates the ratio $p(\\theta,x)/p(\\theta)p(x)$.\n \"\"\"\n\n # AALR is defined for `num_atoms=2`.\n # Proxy to `super().__call__` to ensure right parameter.\n kwargs = del_entries(locals(), entries=(\"self\", \"__class__\"))\n return super().train(**kwargs, num_atoms=2)\n\n def _loss(self, theta: Tensor, x: Tensor, num_atoms: int) -> Tensor:\n \"\"\"\n Returns the binary cross-entropy loss for the trained classifier.\n\n The classifier takes as input a $(\\theta,x)$ pair. It is trained to predict 1\n if the pair was sampled from the joint $p(\\theta,x)$, and to predict 0 if the\n pair was sampled from the marginals $p(\\theta)p(x)$.\n \"\"\"\n\n assert theta.shape[0] == x.shape[0], \"Batch sizes for theta and x must match.\"\n batch_size = theta.shape[0]\n\n logits = self._classifier_logits(theta, x, num_atoms)\n likelihood = torch.sigmoid(logits).squeeze()\n\n # Alternating pairs where there is one sampled from the joint and one\n # sampled from the marginals. The first element is sampled from the\n # joint p(theta, x) and is labelled 1. The second element is sampled\n # from the marginals p(theta)p(x) and is labelled 0. 
And so on.\n labels = ones(2 * batch_size) # two atoms\n labels[1::2] = 0.0\n\n # Binary cross entropy to learn the likelihood (AALR-specific)\n return nn.BCELoss()(likelihood, labels)\n", "path": "sbi/inference/snre/snre_a.py"}]}
| 2,402 | 188 |
gh_patches_debug_10501
|
rasdani/github-patches
|
git_diff
|
pypa__virtualenv-1964
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'NoneType' object has no attribute 'group' with virtualenv 20.0.32 on CygWin
**Issue**
We are also testing on CygWin (using appveyor), and since this morning, tox fails creating a virtualenv with an AttributeError. Unfortunately, tox does not display the entire traceback, but just the exception.
Since virtualenv 20.0.32 was released just 4h ago, I suspect that to be the culprit.
From https://ci.appveyor.com/project/andy-maier/pywbem/builds/35526352/job/l3k6a2vb39bweqsw#L936:
```
if "%UNIX_PATH%"=="C:\cygwin64\bin" ( bash -c "which tox && tox -vv -e %TOX_ENV% && echo appveyor.yml: tox rc=$?" )
/usr/bin/tox
using tox.ini: /cygdrive/c/projects/pywbem/tox.ini (pid 1822)
using tox-3.20.0 from /usr/lib/python3.8/site-packages/tox/__init__.py (pid 1822)
skipping sdist step
cygwin64_py38 uses /usr/bin/python3.8.exe
cygwin64_py38 start: getenv /cygdrive/c/projects/pywbem/.tox/cygwin64_py38
cygwin64_py38 cannot reuse: no previous config /cygdrive/c/projects/pywbem/.tox/cygwin64_py38/.tox-config1
cygwin64_py38 create: /cygdrive/c/projects/pywbem/.tox/cygwin64_py38
setting PATH=/cygdrive/c/projects/pywbem/.tox/cygwin64_py38/bin:/usr/bin:/cygdrive/c/Windows/system32:/cygdrive/c/Windows:/cygdrive/c/ProgramData/chocolatey/bin
[1825] /cygdrive/c/projects/pywbem/.tox$ /usr/bin/python3.8.exe -m virtualenv --no-download --python /usr/bin/python3.8.exe cygwin64_py38
AttributeError: 'NoneType' object has no attribute 'group'
ERROR: invocation failed (exit code 1)
ERROR: InvocationError for command /usr/bin/python3.8.exe -m virtualenv --no-download --python /usr/bin/python3.8.exe cygwin64_py38 (exited with code 1)
cygwin64_py38 finish: getenv /cygdrive/c/projects/pywbem/.tox/cygwin64_py38 after 4.23 seconds
```
I am setting up a direct invocation of virtualenv in that environment, in order to get the full traceback, and will post that here.
**Environment**
Provide at least:
- OS: CygWin64
- ``pip list`` of the host python where ``virtualenv`` is installed:
```console
See next comment, below
```
**Output of the virtual environment creation**
Make sure to run the creation with `-vvv --with-traceback`:
```console
See next comment, below
```
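Based on the `replacements()` implementation quoted below, a small sketch of the suspected failure mode (the example paths are illustrative assumptions): the drive-letter regex matches Windows-style destinations but returns `None` for POSIX-style Cygwin paths, so the later `match.group(1)` call is what raises the AttributeError unless it is guarded.

```python
import re

# Same pattern as ViaTemplateActivator.replacements() below.
pattern = re.compile(r"^([A-Za-z]):(.*)")

for dest in ("C:/projects/env", "/cygdrive/c/projects/pywbem/.tox/cygwin64_py38"):
    match = pattern.match(dest)
    if match:  # guard: match is None for POSIX-style paths
        virtual_env = "/" + match.group(1).lower() + match.group(2)
    else:
        virtual_env = dest
    print(virtual_env)
```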
</issue>
<code>
[start of src/virtualenv/activation/via_template.py]
1 from __future__ import absolute_import, unicode_literals
2
3 import os
4 import re
5 import sys
6 import sysconfig
7 from abc import ABCMeta, abstractmethod
8
9 from six import add_metaclass
10
11 from virtualenv.util.six import ensure_text
12
13 from .activator import Activator
14
15 if sys.version_info >= (3, 7):
16 from importlib.resources import read_binary
17 else:
18 from importlib_resources import read_binary
19
20
21 @add_metaclass(ABCMeta)
22 class ViaTemplateActivator(Activator):
23 @abstractmethod
24 def templates(self):
25 raise NotImplementedError
26
27 def generate(self, creator):
28 dest_folder = creator.bin_dir
29 replacements = self.replacements(creator, dest_folder)
30 generated = self._generate(replacements, self.templates(), dest_folder, creator)
31 if self.flag_prompt is not None:
32 creator.pyenv_cfg["prompt"] = self.flag_prompt
33 return generated
34
35 def replacements(self, creator, dest_folder):
36 current_platform = sysconfig.get_platform()
37 platforms = ["mingw", "cygwin", "msys"]
38 if any(platform in current_platform for platform in platforms):
39 pattern = re.compile("^([A-Za-z]):(.*)")
40 match = pattern.match(str(creator.dest))
41 virtual_env = "/" + match.group(1).lower() + match.group(2)
42 else:
43 virtual_env = str(creator.dest)
44 return {
45 "__VIRTUAL_PROMPT__": "" if self.flag_prompt is None else self.flag_prompt,
46 "__VIRTUAL_ENV__": ensure_text(virtual_env),
47 "__VIRTUAL_NAME__": creator.env_name,
48 "__BIN_NAME__": ensure_text(str(creator.bin_dir.relative_to(creator.dest))),
49 "__PATH_SEP__": ensure_text(os.pathsep),
50 }
51
52 def _generate(self, replacements, templates, to_folder, creator):
53 generated = []
54 for template in templates:
55 text = self.instantiate_template(replacements, template, creator)
56 dest = to_folder / self.as_name(template)
57 # use write_bytes to avoid platform specific line normalization (\n -> \r\n)
58 dest.write_bytes(text.encode("utf-8"))
59 generated.append(dest)
60 return generated
61
62 def as_name(self, template):
63 return template.name
64
65 def instantiate_template(self, replacements, template, creator):
66 # read content as binary to avoid platform specific line normalization (\n -> \r\n)
67 binary = read_binary(self.__module__, str(template))
68 text = binary.decode("utf-8", errors="strict")
69 for key, value in replacements.items():
70 value = self._repr_unicode(creator, value)
71 text = text.replace(key, value)
72 return text
73
74 @staticmethod
75 def _repr_unicode(creator, value):
76 # by default we just let it be unicode
77 return value
78
[end of src/virtualenv/activation/via_template.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/virtualenv/activation/via_template.py b/src/virtualenv/activation/via_template.py
--- a/src/virtualenv/activation/via_template.py
+++ b/src/virtualenv/activation/via_template.py
@@ -38,7 +38,10 @@
if any(platform in current_platform for platform in platforms):
pattern = re.compile("^([A-Za-z]):(.*)")
match = pattern.match(str(creator.dest))
- virtual_env = "/" + match.group(1).lower() + match.group(2)
+ if match:
+ virtual_env = "/" + match.group(1).lower() + match.group(2)
+ else:
+ virtual_env = str(creator.dest)
else:
virtual_env = str(creator.dest)
return {
|
{"golden_diff": "diff --git a/src/virtualenv/activation/via_template.py b/src/virtualenv/activation/via_template.py\n--- a/src/virtualenv/activation/via_template.py\n+++ b/src/virtualenv/activation/via_template.py\n@@ -38,7 +38,10 @@\n if any(platform in current_platform for platform in platforms):\n pattern = re.compile(\"^([A-Za-z]):(.*)\")\n match = pattern.match(str(creator.dest))\n- virtual_env = \"/\" + match.group(1).lower() + match.group(2)\n+ if match:\n+ virtual_env = \"/\" + match.group(1).lower() + match.group(2)\n+ else:\n+ virtual_env = str(creator.dest)\n else:\n virtual_env = str(creator.dest)\n return {\n", "issue": "AttributeError: 'NoneType' object has no attribute 'group' with virtualenv 20.0.32 on CygWin\n**Issue**\r\nWe are also testing on CygWin (using appveyor), and since this morning, tox fails creating a virtualenv with an AttributeError. Unfortunately, tox does not display the entire traceback, but just the exception.\r\nSince virtualenv 20.0.32 was released just 4h ago, I suspect that to be the culprit.\r\n\r\nFrom https://ci.appveyor.com/project/andy-maier/pywbem/builds/35526352/job/l3k6a2vb39bweqsw#L936:\r\n```\r\nif \"%UNIX_PATH%\"==\"C:\\cygwin64\\bin\" ( bash -c \"which tox && tox -vv -e %TOX_ENV% && echo appveyor.yml: tox rc=$?\" )\r\n/usr/bin/tox\r\nusing tox.ini: /cygdrive/c/projects/pywbem/tox.ini (pid 1822)\r\nusing tox-3.20.0 from /usr/lib/python3.8/site-packages/tox/__init__.py (pid 1822)\r\nskipping sdist step\r\ncygwin64_py38 uses /usr/bin/python3.8.exe\r\ncygwin64_py38 start: getenv /cygdrive/c/projects/pywbem/.tox/cygwin64_py38\r\ncygwin64_py38 cannot reuse: no previous config /cygdrive/c/projects/pywbem/.tox/cygwin64_py38/.tox-config1\r\ncygwin64_py38 create: /cygdrive/c/projects/pywbem/.tox/cygwin64_py38\r\nsetting PATH=/cygdrive/c/projects/pywbem/.tox/cygwin64_py38/bin:/usr/bin:/cygdrive/c/Windows/system32:/cygdrive/c/Windows:/cygdrive/c/ProgramData/chocolatey/bin\r\n[1825] /cygdrive/c/projects/pywbem/.tox$ /usr/bin/python3.8.exe -m virtualenv --no-download --python /usr/bin/python3.8.exe cygwin64_py38\r\nAttributeError: 'NoneType' object has no attribute 'group'\r\nERROR: invocation failed (exit code 1)\r\nERROR: InvocationError for command /usr/bin/python3.8.exe -m virtualenv --no-download --python /usr/bin/python3.8.exe cygwin64_py38 (exited with code 1)\r\ncygwin64_py38 finish: getenv /cygdrive/c/projects/pywbem/.tox/cygwin64_py38 after 4.23 seconds\r\n```\r\n\r\nI am setting up a direct invocation of virtualenv in that environment, in order to get the full traceback, and will post that here.\r\n\r\n**Environment**\r\n\r\nProvide at least:\r\n- OS: CygWin64\r\n- ``pip list`` of the host python where ``virtualenv`` is installed:\r\n ```console\r\n See next comment, below\r\n ```\r\n\r\n**Output of the virtual environment creation**\r\n\r\nMake sure to run the creation with `-vvv --with-traceback`:\r\n\r\n```console\r\nSee next comment, below\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport os\nimport re\nimport sys\nimport sysconfig\nfrom abc import ABCMeta, abstractmethod\n\nfrom six import add_metaclass\n\nfrom virtualenv.util.six import ensure_text\n\nfrom .activator import Activator\n\nif sys.version_info >= (3, 7):\n from importlib.resources import read_binary\nelse:\n from importlib_resources import read_binary\n\n\n@add_metaclass(ABCMeta)\nclass ViaTemplateActivator(Activator):\n @abstractmethod\n def templates(self):\n raise NotImplementedError\n\n def generate(self, 
creator):\n dest_folder = creator.bin_dir\n replacements = self.replacements(creator, dest_folder)\n generated = self._generate(replacements, self.templates(), dest_folder, creator)\n if self.flag_prompt is not None:\n creator.pyenv_cfg[\"prompt\"] = self.flag_prompt\n return generated\n\n def replacements(self, creator, dest_folder):\n current_platform = sysconfig.get_platform()\n platforms = [\"mingw\", \"cygwin\", \"msys\"]\n if any(platform in current_platform for platform in platforms):\n pattern = re.compile(\"^([A-Za-z]):(.*)\")\n match = pattern.match(str(creator.dest))\n virtual_env = \"/\" + match.group(1).lower() + match.group(2)\n else:\n virtual_env = str(creator.dest)\n return {\n \"__VIRTUAL_PROMPT__\": \"\" if self.flag_prompt is None else self.flag_prompt,\n \"__VIRTUAL_ENV__\": ensure_text(virtual_env),\n \"__VIRTUAL_NAME__\": creator.env_name,\n \"__BIN_NAME__\": ensure_text(str(creator.bin_dir.relative_to(creator.dest))),\n \"__PATH_SEP__\": ensure_text(os.pathsep),\n }\n\n def _generate(self, replacements, templates, to_folder, creator):\n generated = []\n for template in templates:\n text = self.instantiate_template(replacements, template, creator)\n dest = to_folder / self.as_name(template)\n # use write_bytes to avoid platform specific line normalization (\\n -> \\r\\n)\n dest.write_bytes(text.encode(\"utf-8\"))\n generated.append(dest)\n return generated\n\n def as_name(self, template):\n return template.name\n\n def instantiate_template(self, replacements, template, creator):\n # read content as binary to avoid platform specific line normalization (\\n -> \\r\\n)\n binary = read_binary(self.__module__, str(template))\n text = binary.decode(\"utf-8\", errors=\"strict\")\n for key, value in replacements.items():\n value = self._repr_unicode(creator, value)\n text = text.replace(key, value)\n return text\n\n @staticmethod\n def _repr_unicode(creator, value):\n # by default we just let it be unicode\n return value\n", "path": "src/virtualenv/activation/via_template.py"}]}
| 2,015 | 175 |
gh_patches_debug_2631
|
rasdani/github-patches
|
git_diff
|
fidals__shopelectro-693
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tests_selenium.py:976: Resurrect test `test_cart_page_open`
The puzzle `473-5159ab9c` from #473 has to be resolved:
https://github.com/fidals/shopelectro/blob/f7dc2793dc5c7eddb2e68a68368337d77ba3139e/shopelectro/tests/tests_selenium.py#L976-L976
The puzzle was created by duker33 on 08-Aug-18.
Estimate: 15 minutes,
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).
</issue>
<code>
[start of shopelectro/models.py]
1 import random
2 import string
3 import typing
4 from uuid import uuid4
5
6 from django.conf import settings
7 from django.db import models
8 from django.urls import reverse
9 from django.utils.translation import ugettext_lazy as _
10 import mptt
11
12 from catalog import models as catalog_models
13 from ecommerce import models as ecommerce_models
14 from pages import models as pages_models
15
16
17 def randomize_slug(slug: str) -> str:
18 slug_hash = ''.join(
19 random.choices(string.ascii_lowercase, k=settings.SLUG_HASH_SIZE)
20 )
21 return f'{slug}_{slug_hash}'
22
23
24 class SECategoryQuerySet(catalog_models.CategoryQuerySet):
25 def get_categories_tree_with_pictures(self) -> 'SECategoryQuerySet':
26 categories_with_pictures = (
27 self
28 .filter(products__page__images__isnull=False)
29 .distinct()
30 )
31
32 return categories_with_pictures.get_ancestors(include_self=True)
33
34
35 class SECategoryManager(
36 catalog_models.CategoryManager.from_queryset(SECategoryQuerySet)
37 ):
38 pass
39
40
41 class Category(catalog_models.AbstractCategory, pages_models.SyncPageMixin):
42
43 objects = SECategoryManager()
44 uuid = models.UUIDField(default=uuid4, editable=False)
45
46 @classmethod
47 def get_default_parent(cls):
48 return pages_models.CustomPage.objects.filter(slug='catalog').first()
49
50 @property
51 def image(self):
52 products = self.products.all()
53 return products[0].image if products else None
54
55 def get_absolute_url(self):
56 return reverse('category', args=(self.page.slug,))
57
58
59 class Product(catalog_models.AbstractProduct, pages_models.SyncPageMixin):
60
61 # That's why we are needed to explicitly add objects manager here
62 # because of Django special managers behaviour.
63 # Se se#480 for details.
64 objects = catalog_models.ProductManager()
65
66 category = models.ForeignKey(
67 Category,
68 on_delete=models.CASCADE,
69 null=True,
70 related_name='products',
71 verbose_name=_('category'),
72 )
73
74 tags = models.ManyToManyField(
75 'Tag',
76 related_name='products',
77 blank=True,
78 verbose_name=_('tags'),
79 )
80
81 vendor_code = models.SmallIntegerField(verbose_name=_('vendor_code'))
82 uuid = models.UUIDField(default=uuid4, editable=False)
83 purchase_price = models.FloatField(
84 default=0, verbose_name=_('purchase_price'))
85 wholesale_small = models.FloatField(
86 default=0, verbose_name=_('wholesale_small'))
87 wholesale_medium = models.FloatField(
88 default=0, verbose_name=_('wholesale_medium'))
89 wholesale_large = models.FloatField(
90 default=0, verbose_name=_('wholesale_large'))
91
92 def get_absolute_url(self):
93 return reverse('product', args=(self.vendor_code,))
94
95 @property
96 def average_rate(self):
97 """Return rounded to first decimal averaged rating."""
98 rating = self.product_feedbacks.aggregate(
99 avg=models.Avg('rating')).get('avg', 0)
100 return round(rating, 1)
101
102 @property
103 def feedback_count(self):
104 return self.product_feedbacks.count()
105
106 @property
107 def feedback(self):
108 return self.product_feedbacks.all().order_by('-date')
109
110 def get_params(self):
111 return Tag.objects.filter_by_products([self]).get_group_tags_pairs()
112
113 def get_brand_name(self) -> str:
114 brand: typing.Optional['Tag'] = Tag.objects.get_brands([self]).get(self)
115 return brand.name if brand else ''
116
117
118 class ProductFeedback(models.Model):
119 product = models.ForeignKey(
120 Product, on_delete=models.CASCADE, null=True,
121 related_name='product_feedbacks'
122 )
123
124 date = models.DateTimeField(
125 auto_now=True, db_index=True, verbose_name=_('date'))
126 name = models.CharField(
127 max_length=255, db_index=True, verbose_name=_('name'))
128 rating = models.PositiveSmallIntegerField(
129 default=1, db_index=True, verbose_name=_('rating'))
130 dignities = models.TextField(
131 default='', blank=True, verbose_name=_('dignities'))
132 limitations = models.TextField(
133 default='', blank=True, verbose_name=_('limitations'))
134 general = models.TextField(
135 default='', blank=True, verbose_name=_('limitations'))
136
137
138 def _default_payment():
139 """Default payment option is first element of first tuple in options."""
140 assert settings.PAYMENT_OPTIONS[0][0], 'No payment options!'
141 return settings.PAYMENT_OPTIONS[0][0]
142
143
144 class Order(ecommerce_models.Order):
145 address = models.TextField(blank=True, default='')
146 payment_type = models.CharField(
147 max_length=255,
148 choices=settings.PAYMENT_OPTIONS,
149 default=_default_payment()
150 )
151 comment = models.TextField(blank=True, default='')
152 # total price - total purchase price
153 revenue = models.FloatField(default=0, null=True, verbose_name=_('revenue'))
154
155 @property
156 def payment_type_name(self):
157 """Return name for an order's payment option."""
158 return next(
159 name for option, name in settings.PAYMENT_OPTIONS
160 if self.payment_type == option
161 )
162
163 def set_positions(self, cart):
164 """
165 Save cart's state into Order instance.
166
167 @todo #589:60m Create Cart model.
168 See details here: https://github.com/fidals/shopelectro/pull/590#discussion_r222544672
169 """
170 self.revenue = cart.total_revenue()
171 self.save()
172 for id_, position in cart:
173 self.positions.create(
174 order=self,
175 product_id=id_,
176 vendor_code=position['vendor_code'],
177 name=position['name'],
178 price=position['price'],
179 quantity=position['quantity'],
180 )
181 return self
182
183
184 class CategoryPage(pages_models.ModelPage):
185 """Create proxy model for Admin."""
186
187 class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)
188 proxy = True
189
190 # noinspection PyTypeChecker
191 objects = pages_models.ModelPage.create_model_page_managers(Category)
192
193
194 class ProductPage(pages_models.ModelPage):
195 """Create proxy model for Admin."""
196
197 class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)
198 proxy = True
199
200 # noinspection PyTypeChecker
201 objects = (
202 pages_models.ModelPage
203 .create_model_page_managers(Product)
204 )
205
206
207 class TagGroup(catalog_models.TagGroup):
208 pass
209
210
211 class TagQuerySet(catalog_models.TagQuerySet):
212 pass
213
214
215 class Tag(catalog_models.Tag):
216 group = models.ForeignKey(
217 TagGroup, on_delete=models.CASCADE, null=True, related_name='tags',
218 )
219
[end of shopelectro/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/shopelectro/models.py b/shopelectro/models.py
--- a/shopelectro/models.py
+++ b/shopelectro/models.py
@@ -7,7 +7,6 @@
from django.db import models
from django.urls import reverse
from django.utils.translation import ugettext_lazy as _
-import mptt
from catalog import models as catalog_models
from ecommerce import models as ecommerce_models
|
{"golden_diff": "diff --git a/shopelectro/models.py b/shopelectro/models.py\n--- a/shopelectro/models.py\n+++ b/shopelectro/models.py\n@@ -7,7 +7,6 @@\n from django.db import models\n from django.urls import reverse\n from django.utils.translation import ugettext_lazy as _\n-import mptt\n \n from catalog import models as catalog_models\n from ecommerce import models as ecommerce_models\n", "issue": "tests_selenium.py:976: Resurrect test `test_cart_page_open`\nThe puzzle `473-5159ab9c` from #473 has to be resolved:\n\nhttps://github.com/fidals/shopelectro/blob/f7dc2793dc5c7eddb2e68a68368337d77ba3139e/shopelectro/tests/tests_selenium.py#L976-L976\n\nThe puzzle was created by duker33 on 08-Aug-18. \n\nEstimate: 15 minutes, \n\nIf you have any technical questions, don't ask me, submit new tickets instead. The task will be \"done\" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).\n", "before_files": [{"content": "import random\nimport string\nimport typing\nfrom uuid import uuid4\n\nfrom django.conf import settings\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\nimport mptt\n\nfrom catalog import models as catalog_models\nfrom ecommerce import models as ecommerce_models\nfrom pages import models as pages_models\n\n\ndef randomize_slug(slug: str) -> str:\n slug_hash = ''.join(\n random.choices(string.ascii_lowercase, k=settings.SLUG_HASH_SIZE)\n )\n return f'{slug}_{slug_hash}'\n\n\nclass SECategoryQuerySet(catalog_models.CategoryQuerySet):\n def get_categories_tree_with_pictures(self) -> 'SECategoryQuerySet':\n categories_with_pictures = (\n self\n .filter(products__page__images__isnull=False)\n .distinct()\n )\n\n return categories_with_pictures.get_ancestors(include_self=True)\n\n\nclass SECategoryManager(\n catalog_models.CategoryManager.from_queryset(SECategoryQuerySet)\n):\n pass\n\n\nclass Category(catalog_models.AbstractCategory, pages_models.SyncPageMixin):\n\n objects = SECategoryManager()\n uuid = models.UUIDField(default=uuid4, editable=False)\n\n @classmethod\n def get_default_parent(cls):\n return pages_models.CustomPage.objects.filter(slug='catalog').first()\n\n @property\n def image(self):\n products = self.products.all()\n return products[0].image if products else None\n\n def get_absolute_url(self):\n return reverse('category', args=(self.page.slug,))\n\n\nclass Product(catalog_models.AbstractProduct, pages_models.SyncPageMixin):\n\n # That's why we are needed to explicitly add objects manager here\n # because of Django special managers behaviour.\n # Se se#480 for details.\n objects = catalog_models.ProductManager()\n\n category = models.ForeignKey(\n Category,\n on_delete=models.CASCADE,\n null=True,\n related_name='products',\n verbose_name=_('category'),\n )\n\n tags = models.ManyToManyField(\n 'Tag',\n related_name='products',\n blank=True,\n verbose_name=_('tags'),\n )\n\n vendor_code = models.SmallIntegerField(verbose_name=_('vendor_code'))\n uuid = models.UUIDField(default=uuid4, editable=False)\n purchase_price = models.FloatField(\n default=0, verbose_name=_('purchase_price'))\n wholesale_small = models.FloatField(\n default=0, verbose_name=_('wholesale_small'))\n wholesale_medium = models.FloatField(\n default=0, verbose_name=_('wholesale_medium'))\n wholesale_large = models.FloatField(\n default=0, 
verbose_name=_('wholesale_large'))\n\n def get_absolute_url(self):\n return reverse('product', args=(self.vendor_code,))\n\n @property\n def average_rate(self):\n \"\"\"Return rounded to first decimal averaged rating.\"\"\"\n rating = self.product_feedbacks.aggregate(\n avg=models.Avg('rating')).get('avg', 0)\n return round(rating, 1)\n\n @property\n def feedback_count(self):\n return self.product_feedbacks.count()\n\n @property\n def feedback(self):\n return self.product_feedbacks.all().order_by('-date')\n\n def get_params(self):\n return Tag.objects.filter_by_products([self]).get_group_tags_pairs()\n\n def get_brand_name(self) -> str:\n brand: typing.Optional['Tag'] = Tag.objects.get_brands([self]).get(self)\n return brand.name if brand else ''\n\n\nclass ProductFeedback(models.Model):\n product = models.ForeignKey(\n Product, on_delete=models.CASCADE, null=True,\n related_name='product_feedbacks'\n )\n\n date = models.DateTimeField(\n auto_now=True, db_index=True, verbose_name=_('date'))\n name = models.CharField(\n max_length=255, db_index=True, verbose_name=_('name'))\n rating = models.PositiveSmallIntegerField(\n default=1, db_index=True, verbose_name=_('rating'))\n dignities = models.TextField(\n default='', blank=True, verbose_name=_('dignities'))\n limitations = models.TextField(\n default='', blank=True, verbose_name=_('limitations'))\n general = models.TextField(\n default='', blank=True, verbose_name=_('limitations'))\n\n\ndef _default_payment():\n \"\"\"Default payment option is first element of first tuple in options.\"\"\"\n assert settings.PAYMENT_OPTIONS[0][0], 'No payment options!'\n return settings.PAYMENT_OPTIONS[0][0]\n\n\nclass Order(ecommerce_models.Order):\n address = models.TextField(blank=True, default='')\n payment_type = models.CharField(\n max_length=255,\n choices=settings.PAYMENT_OPTIONS,\n default=_default_payment()\n )\n comment = models.TextField(blank=True, default='')\n # total price - total purchase price\n revenue = models.FloatField(default=0, null=True, verbose_name=_('revenue'))\n\n @property\n def payment_type_name(self):\n \"\"\"Return name for an order's payment option.\"\"\"\n return next(\n name for option, name in settings.PAYMENT_OPTIONS\n if self.payment_type == option\n )\n\n def set_positions(self, cart):\n \"\"\"\n Save cart's state into Order instance.\n\n @todo #589:60m Create Cart model.\n See details here: https://github.com/fidals/shopelectro/pull/590#discussion_r222544672\n \"\"\"\n self.revenue = cart.total_revenue()\n self.save()\n for id_, position in cart:\n self.positions.create(\n order=self,\n product_id=id_,\n vendor_code=position['vendor_code'],\n name=position['name'],\n price=position['price'],\n quantity=position['quantity'],\n )\n return self\n\n\nclass CategoryPage(pages_models.ModelPage):\n \"\"\"Create proxy model for Admin.\"\"\"\n\n class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)\n proxy = True\n\n # noinspection PyTypeChecker\n objects = pages_models.ModelPage.create_model_page_managers(Category)\n\n\nclass ProductPage(pages_models.ModelPage):\n \"\"\"Create proxy model for Admin.\"\"\"\n\n class Meta(pages_models.ModelPage.Meta): # Ignore PycodestyleBear (E303)\n proxy = True\n\n # noinspection PyTypeChecker\n objects = (\n pages_models.ModelPage\n .create_model_page_managers(Product)\n )\n\n\nclass TagGroup(catalog_models.TagGroup):\n pass\n\n\nclass TagQuerySet(catalog_models.TagQuerySet):\n pass\n\n\nclass Tag(catalog_models.Tag):\n group = models.ForeignKey(\n TagGroup, 
on_delete=models.CASCADE, null=True, related_name='tags',\n )\n", "path": "shopelectro/models.py"}]}
| 2,766 | 89 |
gh_patches_debug_22280 | rasdani/github-patches | git_diff | airctic__icevision-878 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix download_mmdet_configs
No need to download the zip file if it exists. This will solve the issue encountered in the Kaggle offline installation.
</issue>
<code>
[start of icevision/models/mmdet/download_configs.py]
1 __all__ = ["download_mmdet_configs"]
2
3 from icevision.imports import *
4 from icevision.utils import *
5
6 VERSION = "v2.10.0"
7 BASE_URL = "https://codeload.github.com/airctic/mmdetection_configs/zip/refs/tags"
8
9
10 def download_mmdet_configs() -> Path:
11 save_dir = get_root_dir() / f"mmdetection_configs"
12 save_dir.mkdir(parents=True, exist_ok=True)
13
14 download_path = save_dir / f"{VERSION}.zip"
15 if not download_path.exists():
16 logger.info("Downloading mmdet configs")
17
18 download_and_extract(f"{BASE_URL}/{VERSION}", download_path)
19
20 return save_dir / f"mmdetection_configs-{VERSION[1:]}/configs"
21
[end of icevision/models/mmdet/download_configs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/icevision/models/mmdet/download_configs.py b/icevision/models/mmdet/download_configs.py
--- a/icevision/models/mmdet/download_configs.py
+++ b/icevision/models/mmdet/download_configs.py
@@ -9,12 +9,26 @@
def download_mmdet_configs() -> Path:
save_dir = get_root_dir() / f"mmdetection_configs"
- save_dir.mkdir(parents=True, exist_ok=True)
+ mmdet_config_path = save_dir / f"mmdetection_configs-{VERSION[1:]}/configs"
download_path = save_dir / f"{VERSION}.zip"
- if not download_path.exists():
- logger.info("Downloading mmdet configs")
- download_and_extract(f"{BASE_URL}/{VERSION}", download_path)
+ if mmdet_config_path.exists():
+ logger.info(
+ f"The mmdet config folder already exists. No need to downloaded it. Path : {mmdet_config_path}"
+ )
+ elif download_path.exists():
+ # The zip file was downloaded by not extracted yet
+ # Extract zip file
+ logger.info(f"Extracting the {VERSION}.zip file.")
+ save_dir = Path(download_path).parent
+ shutil.unpack_archive(filename=str(download_path), extract_dir=str(save_dir))
+ else:
+ save_dir.mkdir(parents=True, exist_ok=True)
- return save_dir / f"mmdetection_configs-{VERSION[1:]}/configs"
+ download_path = save_dir / f"{VERSION}.zip"
+ if not download_path.exists():
+ logger.info("Downloading mmdet configs")
+ download_and_extract(f"{BASE_URL}/{VERSION}", download_path)
+
+ return mmdet_config_path
|
{"golden_diff": "diff --git a/icevision/models/mmdet/download_configs.py b/icevision/models/mmdet/download_configs.py\n--- a/icevision/models/mmdet/download_configs.py\n+++ b/icevision/models/mmdet/download_configs.py\n@@ -9,12 +9,26 @@\n \n def download_mmdet_configs() -> Path:\n save_dir = get_root_dir() / f\"mmdetection_configs\"\n- save_dir.mkdir(parents=True, exist_ok=True)\n \n+ mmdet_config_path = save_dir / f\"mmdetection_configs-{VERSION[1:]}/configs\"\n download_path = save_dir / f\"{VERSION}.zip\"\n- if not download_path.exists():\n- logger.info(\"Downloading mmdet configs\")\n \n- download_and_extract(f\"{BASE_URL}/{VERSION}\", download_path)\n+ if mmdet_config_path.exists():\n+ logger.info(\n+ f\"The mmdet config folder already exists. No need to downloaded it. Path : {mmdet_config_path}\"\n+ )\n+ elif download_path.exists():\n+ # The zip file was downloaded by not extracted yet\n+ # Extract zip file\n+ logger.info(f\"Extracting the {VERSION}.zip file.\")\n+ save_dir = Path(download_path).parent\n+ shutil.unpack_archive(filename=str(download_path), extract_dir=str(save_dir))\n+ else:\n+ save_dir.mkdir(parents=True, exist_ok=True)\n \n- return save_dir / f\"mmdetection_configs-{VERSION[1:]}/configs\"\n+ download_path = save_dir / f\"{VERSION}.zip\"\n+ if not download_path.exists():\n+ logger.info(\"Downloading mmdet configs\")\n+ download_and_extract(f\"{BASE_URL}/{VERSION}\", download_path)\n+\n+ return mmdet_config_path\n", "issue": "Fix download_mmdet_configs\nNo need to download the zip file if it exists. This will solve the issue encountered in the Kaggle offline installation.\r\n\n", "before_files": [{"content": "__all__ = [\"download_mmdet_configs\"]\n\nfrom icevision.imports import *\nfrom icevision.utils import *\n\nVERSION = \"v2.10.0\"\nBASE_URL = \"https://codeload.github.com/airctic/mmdetection_configs/zip/refs/tags\"\n\n\ndef download_mmdet_configs() -> Path:\n save_dir = get_root_dir() / f\"mmdetection_configs\"\n save_dir.mkdir(parents=True, exist_ok=True)\n\n download_path = save_dir / f\"{VERSION}.zip\"\n if not download_path.exists():\n logger.info(\"Downloading mmdet configs\")\n\n download_and_extract(f\"{BASE_URL}/{VERSION}\", download_path)\n\n return save_dir / f\"mmdetection_configs-{VERSION[1:]}/configs\"\n", "path": "icevision/models/mmdet/download_configs.py"}]}
| 776 | 388 |